22 results for passing

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and by merging between smaller units. During merger events, shocks driven by the gravity of the dark matter propagate through the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), a finding of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized diffuse emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultrarelativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence these mergers drive. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, set by the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by existing models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For these reasons we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
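The cut-off argument above can be made concrete with a back-of-the-envelope numerical sketch. The script below balances a systematic acceleration rate γ/τ_acc against synchrotron plus inverse Compton losses to obtain the maximum Lorentz factor, and converts it into a break frequency. The field strength, redshift and acceleration time used here are illustrative assumptions, not values taken from the thesis.

```python
import math

# Back-of-the-envelope estimate of the re-acceleration cut-off.
# Balance: gamma / tau_acc = b * gamma^2  =>  gamma_max = 1 / (b * tau_acc),
# with b the synchrotron + inverse Compton loss coefficient (cgs units).
# All input values below are illustrative assumptions.

SIGMA_T = 6.652e-25        # Thomson cross section [cm^2]
ME_C2 = 8.187e-7           # electron rest energy [erg]
C = 2.998e10               # speed of light [cm/s]

def gamma_max(B_uG, z, tau_acc_yr):
    """Maximum Lorentz factor from the acceleration/loss balance."""
    B = B_uG * 1e-6                          # magnetic field [G]
    U_B = B ** 2 / (8.0 * math.pi)           # magnetic energy density [erg/cm^3]
    U_cmb = 4.2e-13 * (1.0 + z) ** 4         # CMB photon energy density [erg/cm^3]
    b = (4.0 / 3.0) * SIGMA_T * C * (U_B + U_cmb) / ME_C2  # loss coefficient [1/s]
    tau_acc = tau_acc_yr * 3.156e7           # acceleration time [s]
    return 1.0 / (b * tau_acc)

def break_frequency(B_uG, z, tau_acc_yr):
    """Characteristic synchrotron frequency of the cut-off electrons [Hz]."""
    g = gamma_max(B_uG, z, tau_acc_yr)
    return 4.2e6 * g ** 2 * (B_uG * 1e-6)    # nu_c ~ 4.2e6 * gamma^2 * B[G] Hz

g = gamma_max(1.0, 0.0, 1e8)        # 1 uG field, z = 0, 10^8 yr acceleration time
nu = break_frequency(1.0, 0.0, 1e8)
print(f"gamma_max ~ {g:.2e}, break frequency ~ {nu / 1e9:.2f} GHz")
```

With these illustrative numbers the cut-off falls around γ of a few 10^4 and ν of order 1 GHz, which is why halos observed at 1.4 GHz require efficient acceleration, while low-frequency surveys probe less efficient and presumably more common events.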
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The processes of stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms magnetic field strength averaged over the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
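As a minimal illustration of the PS machinery mentioned above, the snippet below evaluates the Press & Schechter first-crossing distribution f(ν) = √(2/π) exp(−ν²/2), with peak height ν = δ_c/σ(M), and checks numerically that it integrates to unity, i.e. that the formalism assigns all mass to halos. This is only the textbook Gaussian kernel of the approach, not the thesis's merger-tree code.

```python
import math

# Press & Schechter first-crossing distribution in the peak-height
# variable nu = delta_c / sigma(M).  Integrating f(nu) over all nu
# gives 1: every mass element is assigned to some halo.
# (Textbook kernel only; merger trees require the conditional version.)

def ps_first_crossing(nu):
    return math.sqrt(2.0 / math.pi) * math.exp(-0.5 * nu * nu)

def mass_fraction(nu_min, nu_max, n=100000):
    """Trapezoidal integral of f(nu): mass fraction in halos with
    nu_min <= nu <= nu_max (rarer, more massive halos sit at larger nu)."""
    h = (nu_max - nu_min) / n
    total = 0.5 * (ps_first_crossing(nu_min) + ps_first_crossing(nu_max))
    total += sum(ps_first_crossing(nu_min + i * h) for i in range(1, n))
    return total * h

print(f"total mass fraction  = {mass_fraction(0.0, 10.0):.4f}")
print(f"fraction with nu > 2 = {mass_fraction(2.0, 10.0):.4f}")
```

Even at this level one sees why the massive, high-ν clusters singled out as Radio Halo hosts are rare objects in the synthetic population.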
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ~ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows one to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ~ 0.2–0.4) and discuss the possibility of testing our model expectations against the number counts of Radio Halos at z ~ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass contained within the Radio Halo region, MH. In particular, this latter “geometrical” MH–RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR–RH, PR–MH, PR–T, PR–LX, . . . ) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately implies that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.

Abstract:

Motion control is a sub-field of automation, in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc., profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, we show the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles, in networks adopting an event-triggered communication system. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the solution based on phase-locked loops proposed for the basic master-slave case is extended to cope with the other configurations.
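As a toy illustration of the phase-locked-loop idea, the sketch below shows a slave clock tracking master timestamps carried by event messages, using a proportional-integral correction. The drift value and loop gains are arbitrary illustrative choices; the controllers developed in the thesis are of course more elaborate.

```python
# Toy software PLL: a slave re-synchronizes its local time estimate to a
# master clock using only the master timestamps carried by event-triggered
# messages.  Gains and drift are illustrative assumptions.

def run_pll(master_times, drift=1e-4, kp=0.5, ki=0.1):
    """Return the absolute phase error observed at each received event."""
    est = 0.0      # slave's estimate of master time
    rate = 1.0     # multiplicative rate correction applied to local time
    prev = 0.0
    errors = []
    for t in master_times:
        local_dt = (t - prev) * (1.0 + drift)  # local clock runs slightly fast
        est += rate * local_dt                 # advance corrected local clock
        err = t - est                          # phase error at this event
        est += kp * err                        # proportional phase correction
        rate += ki * err                       # integral correction absorbs drift
        errors.append(abs(err))
        prev = t
    return errors

errs = run_pll([float(k) for k in range(1, 201)])  # one event per master second
print(f"first error {errs[0]:.2e}, final error {errs[-1]:.2e}")
```

The integral term lets the loop absorb a constant rate offset, so the phase error decays towards zero even though master and slave never share a global clock; aperiodic, event-triggered arrivals change the effective loop dynamics, which is part of what makes the multi-master and multi-slave cases harder.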

Abstract:

This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we took the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization that is independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
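The long-range dependence discussed above can be made tangible through the autocovariance of fractional Gaussian noise, γ(k) = ½(|k+1|^(2H) − 2|k|^(2H) + |k−1|^(2H)), which decays as the power law H(2H−1)k^(2H−2) for H > 1/2, so that its sum over lags diverges, while for H = 1/2 (ordinary Brownian increments) it vanishes for every lag k ≥ 1. A minimal numerical check:

```python
# Autocovariance of unit-variance fractional Gaussian noise:
#   gamma(k) = 0.5 * ((k+1)^(2H) - 2 k^(2H) + (k-1)^(2H))
# For H > 1/2 the tail behaves like H(2H-1) k^(2H-2): long-range dependence.
# For H = 1/2 the increments are uncorrelated (white noise).

def fgn_autocov(k, H):
    if k == 0:
        return 1.0
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + (k - 1) ** (2 * H))

H = 0.7
tail = fgn_autocov(100, H)
asymptote = H * (2 * H - 1) * 100 ** (2 * H - 2)
partial_sums = [sum(fgn_autocov(k, H) for k in range(1, n + 1))
                for n in (10, 100, 1000)]

print(f"gamma(100) = {tail:.5f} vs power-law asymptote {asymptote:.5f}")
print(f"partial sums of gamma(k): {partial_sums}")        # keep growing: LRD
print(f"H = 0.5 case, gamma(1) = {fgn_autocov(1, 0.5)}")  # exactly 0
```

The diverging partial sums are precisely the signature of long memory that the fractional-calculus machinery of this thesis is built to model.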

Abstract:

Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows and tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to build two different types of test application: a completely parallelizable one, and a plasma simulation that is not completely parallelizable. Using this last application we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
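For readers unfamiliar with the model, a Linda-style tuple space boils down to three operations: out (write a tuple), rd (non-destructive read by pattern) and in (destructive read). The toy in-memory version below is our own illustrative sketch, not the Globus-based service described above, which additionally provides spatial indexing, distribution and XML serialization.

```python
# Minimal Linda-style tuple space (in-memory, single-process sketch).
# 'None' in a template acts as a wildcard matching any field.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Write a tuple into the space."""
        self._tuples.append(tuple(tup))

    def _match(self, tup, template):
        return len(tup) == len(template) and all(
            t is None or t == f for t, f in zip(template, tup))

    def rd(self, template):
        """Non-destructive read: first matching tuple, or None."""
        return next((t for t in self._tuples if self._match(t, template)), None)

    def in_(self, template):
        """Destructive read: remove and return the first match, or None."""
        for i, t in enumerate(self._tuples):
            if self._match(t, template):
                return self._tuples.pop(i)
        return None

ts = TupleSpace()
ts.out(("task", 1, "pending"))
ts.out(("task", 2, "pending"))
print(ts.rd(("task", None, "pending")))   # peek at a pending task
print(ts.in_(("task", 1, None)))          # claim task 1
print(ts.rd(("task", 1, None)))           # task 1 is gone -> None
```

Note that each lookup here is a linear scan, O(n) in the number of tuples; that is exactly the cost the spatial-database indexing techniques mentioned above are meant to reduce.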

Abstract:

This study aims at analysing Brian O'Nolan's literary production in the light of a reconsideration of the role played by his two most famous pseudonyms, Flann O'Brien and Myles na Gopaleen, behind which he was active both as a novelist and as a journalist. We tried to establish a new kind of relationship between them and their empirical author, following recent cultural and scientific surveys in the fields of Humour Studies, Psychology and Sociology: taking as a starting point the appreciation of the comic attitude in nature and in cultural history, we progressed through a short history of laughter and derision, followed by an overview of humour theories. After having established such a frame, we considered an integration of scientific studies in the field of laughter and humour as a basis for our study scheme, in order to come to a definition of the comic author as a recognised, powerful and authoritative social figure who acts as a critic of conventions. The history of laughter and the comic we briefly summarized, based on the account given by the French scholar Georges Minois in his work (Minois 2004), has been taken into account in the view that the humorous attitude is one of man's characteristic traits, always present and witnessed throughout the ages, though subject in most cases to repression by culturally and politically conservative power. This sort of Super-Ego notwithstanding, or perhaps because of it, the comic impulse proved irreducible precisely in its influence on current cultural debates. Drawing mainly on Robert R. Provine's (Provine 2001), Fabio Ceccarelli's (Ceccarelli 1988), Arthur Koestler's (Koestler 1975) and Peter L.
Berger's (Berger 1995) scientific essays on the actual occurrence of laughter and smiling in complex social situations, we underlined the considerable evidence for how the use of the comic, humour and wit (in a Freudian sense) can best be understood as a common mental process designed for the improvement of knowledge, in which we traced a close relation to the play-element the Dutch historian Huizinga highlighted in his famous essay, Homo Ludens (Huizinga 1955). We considered the comic and humour/wit as different sides of the same coin, and showed how the demonstrations scientists have provided on this particular subject are not conclusive, given that the mental processes still cannot be irrefutably shown to be separated as regards gradations in comic expression and reception: in fact, different outputs in expression might lead back to one and the same production process, following the general ‘Economy Rule’ of evolution; man is the only animal who lies, meaning by this that a feeling is not necessarily associated one-to-one with one and the same outward display, so human expressions are not validating proofs of feelings. Considering societies, we found that in nature they are all organized in more or less the same way, that is, in élites who govern over a community which, in turn, recognizes them as legitimate delegates for that task; we inferred from this the epistemological possibility of the existence of an additional ruling figure alongside the political and religious ones: this figure being the comic, the person in charge of expressing true feelings towards given subjects of contention.
Every community has one, and his very peculiar status is validated by the fact that his place is within the community, living in it and speaking to it, while at the same time he stands outside it, in the sense that his action focuses mainly on shedding light on ideas and objects placed outside the boundaries of social convention: taboos, fears, sacred objects and finally culture itself are the favourite targets of the comic person's arrows. This is the reason for the word a(rche)typical as applied to the comic figure in society: atypical in a sense, because unconventional and disrespectful of traditions, critical and never at ease with the unblinkered observance of canons; archetypical, because the “village fool”, buffoon, jester, or anyone in any kind of society who plays such roles, is an archetype in the Jungian sense, i.e. a personification of an irreducible side of human nature that everybody instinctively knows: the beginner of a tradition, the perfect type, what is most conventional of all and therefore the exact opposite of the atypical. There is, we think, an intrinsic necessity for such figures in societies, just as for politicians and priests, who should play an elitist role in order to guide and rule not for their own benefit but for the good of the community. We are not naïve and we do know that the actual holders of power always tend to keep it indefinitely: the ‘social comic’ as a role of power has nonetheless the distinctive feature of being the only office whose tension is not towards stability. It carries within itself the rewarding permission of contradiction, for the very reason we set out before: the comic must cast an eye both inside and outside society, and his vision may perforce not be consistent; this is compensated by the popularity it earns him amongst readers and audiences. Finally, the difference between governors, priests and comic figures is the seriousness of the first two (fundamentally monologic) and the merry contradiction of the third (essentially dialogic).
MPs, mayors, bishops and pastors should always console, comfort and soothe the popular mood in respect of public convention; the comic has the opposite task of provoking, urging and irritating, accomplishing at the same time a sort of control over the soothing powers of society, the keepers of righteousness. In this view, the comic person assumes paramount importance in counterbalancing the administration of power, whether in the form of acting in public places or of written pieces circulated for private reading. At this point our Irish writer Brian O'Nolan (1911-1966) comes into question: the real name that stood behind the more famous masks of Flann O'Brien, novelist, author of At Swim-Two-Birds (1939), The Hard Life (1961), The Dalkey Archive (1964) and, posthumously, The Third Policeman (1967); and of Myles na Gopaleen, journalist, keeper for more than 25 years of the Cruiskeen Lawn column in The Irish Times (1940-1966), and author of the famous book-parody in Irish, An Béal Bocht (1941), later translated into English as The Poor Mouth (1973). Brian O'Nolan, a professional senior civil servant of the Republic, has never seen his authorship recognized in literary studies, since all of them have concentrated on his alter egos Flann, Myles and the few others he used for minor contributions. As far as we are aware, this is the first study to place the real name in the title, thereby acknowledging in him a unity of intent that no one had before. And this choice of title is not a mere mark of distinction for its own sake, but also a wilful sign of how his opus should now be reconsidered. In effect, the aim of this study is exactly that of demonstrating how the empirical author Brian O'Nolan was the real Deus in machina, the master of puppets who skilfully directed all of his identities in planned directions, so as to fulfil completely the role of the comic figure we described before.
Flann O'Brien and Myles na Gopaleen were personae, not persons, but the impression one gets from the critical studies on them is the exact opposite. Literary consideration, which came only after O'Nolan's death, began with Anne Clissmann's work, Flann O'Brien: A Critical Introduction to His Writings (Clissmann 1975), while the most recent book is Keith Donohue's The Irish Anatomist: A Study of Flann O'Brien (Donohue 2002), passing through M. Keith Booker's Flann O'Brien, Bakhtin and Menippean Satire (Booker 1995), Keith Hopper's Flann O'Brien: A Portrait of the Artist as a Young Post-Modernist (Hopper 1995) and Monique Gallagher's Flann O'Brien, Myles et les autres (Gallagher 1998). There have also been a couple of biographies, which incidentally try to explain critical points of his literary production, while many critical studies do the same from the opposite side, trying to ground critical points of view in the author's restless life and habits. At this stage, we attempted to merge into O'Nolan's corpus the journalistic articles he wrote: more than 4,200, for roughly two million words, over the column's 26-year run. To justify this, we appealed to several considerations about the figure O'Nolan used as a writer: Myles na Gopaleen (later simplified to na Gopaleen), who was the equivalent of the street artist or storyteller, speaking to his imaginary public and trying to involve it in his stories, quarrels and debates of all kinds. First of all, he relied much on language for the reactions he would obtain, playing on, and with, words so as to ironically unmask untrue relationships between words and things. Secondly, he pushed to the limit the convention of addressing spectators and listeners usually employed in live performance, stretching its role in written discourse to achieve a greater effect of reader involvement. Lastly, he profited much from what we labelled his “specific weight”, i.e.
the potential influence in society given by his recognised authority in certain matters, a position from which he could launch deeper attacks on conventional beliefs, thus complying with the duty of the comic we hypothesised before: that of criticising society even at the risk of losing the benefits the post guarantees. That seemingly masochistic tendency has its rationale. Every representative enjoys many privileges on the assumption that he, or she, has great responsibilities in administration. The higher those responsibilities are, the higher the reward, but also the severer the punishment for misdeeds done while in charge. But we all know that not everybody accepts the rules: many try to use their power for personal benefit and do not want to undergo the law's penalties. The comic, showing in this case more civic sense than others, helped in this by his lack of access to the use of public force, finds in the role of the scapegoat the right accomplishment of his task, accepting punishment when his breaking of conventions is too stark to be forgiven. As Ceccarelli demonstrated, the role of the object of laughter (comic, ridicule) has its very own positive side: there is freedom of expression for the person and, at the same time, integration in society, even though at low levels. Thus the banishment of a ‘social’ comic can never amount to total extirpation from society, revealing how the scope of the comic lies on an entirely fictional layer, bearing no relation to facts, nor real consequences in terms of physical harm. Myles na Gopaleen, mastering these three characteristics we postulated to the highest degree, can be considered an author worth noting; and the oeuvre he wrote, the whole collection of Cruiskeen Lawn articles, is rightfully a novel, because it respects the canons of the form, especially as regards the authorial figure and his relationship with the readers.
In addition, his work can be studied even if we cannot conduct our research on the whole of it, a procedure justified exactly by the resemblance to the real figure of the storyteller: its ‘chapters’ (the daily articles) had a format that even the distracted reader could follow, even one who had not read each and every article before. So we can critically consider a good part of them, as collected in the seven volumes published so far, with the addition of some others outside the collections, because completeness in this case is no guarantee of better precision in the assessment; on the contrary, examination of the totality of the articles might lead us to consider him as a person and not a persona. Having clarified these points, we proceeded further in considering tout court the works of Brian O'Nolan as the works of a single author, rather than complicating the references with many names which are none other than well-wrought sides of the same personality. By taking O'Nolan as the correct object of our research, the empirical author of the works of the personae Flann O'Brien and Myles na Gopaleen, a clearer literary landscape emerges: the comic author Brian O'Nolan, self-conscious of his paramount role in society as both a guide and a scourge, in a word as an a(rche)typical, intentionally chose to differentiate his personalities so as to create different perspectives in different fields of knowledge, using, in addition, different means of communication: novels and journalism. We finally compared the newly assessed author Brian O'Nolan with other great Irish comic writers in English, such as James Joyce (the one everybody names as the master in the field), Samuel Beckett and Jonathan Swift.
This comparison showed once more how O'Nolan is in no way inferior to these authors who, though greatly celebrated by critics, nonetheless failed to achieve the great public recognition O'Nolan received as Myles, granted by the daily audience he reached and influenced with his Cruiskeen Lawn column. For this reason, we believe him to be representative of the comic figure's function as a social regulator and as a builder of solidarity, such as the one Raymond Williams spoke of in his work (Williams 1982), with the aim of building a ‘culture in common’. There is no way for a ‘culture in common’ to be acquired if we do not accept the fact that even the most functional society rests on conventions, and in a world more and more ‘connected’ we need someone to help everybody negotiate with different cultures and persons. The comic gives us a worldly perspective which is at the same time comfortable and distressing, but in the end not as harmful as the one furnished by politicians could be: he lets us peep into parallel worlds without moving too far from our armchair and, as a consequence, is the one who does his best for the improvement of our understanding of things.

Resumo:

Composite porcelain enamels are inorganic coatings for metallic components based on a special ceramic-vitreous matrix in which specific additives are randomly dispersed. The ceramic-vitreous matrix is made from a mixture of various raw materials and elements; in particular, it is based on a borosilicate glass with additions of metal oxides(1) of titanium, zinc, tin, zirconium, aluminium, etc. These additions are often used to improve and enhance important properties such as corrosion(2) and wear resistance, mechanical strength and fracture toughness, as well as aesthetic functions. The coating process, called enamelling, depends on the nature of the surface, but also on the kind of porcelain enamel used. For metal sheet coatings, two industrial processes are currently in use: one based on a wet porcelain enamel and the other on a dry-silicone porcelain enamel. During the firing process, performed at about 870 °C in the case of a steel substrate, the enamel raw material melts and interacts with the metal substrate, enabling the formation of a continuously varying structure. The interface domain between the substrate and the external layer is a complex material system in which the ceramic-vitreous and metal constituents are mixed. In particular, four main regions can be identified: (i) the pure metal region, (ii) the region where the metal constituents dominate over the ceramic-vitreous components, (iii) the region where the ceramic-vitreous constituents dominate over the metal ones, and (iv) the region composed of the pure ceramic-vitreous material. The presence of metallic dendrites, which link the substrate and the external layer by passing through the interphase region, should also be noted. 
Each region of the final composite structure plays a specific role: the metal substrate has mainly a structural function; the interphase region and the embedded dendrites guarantee the adhesion of the external vitreous layer to the substrate; and the external vitreous layer is characterized by high tribological, corrosion and thermal-shock resistance. Such a material, due to its internal composition, functionalization and architecture, can be considered a functionally graded composite material. Knowledge of the mechanical, tribological and chemical behavior of such composites is not well established and research is still in progress; in particular, mechanical performance data for the composite coating are not yet available. In the present work the residual stresses, the Young's modulus and the First Crack Failure of the composite porcelain enamel coating are studied. Due to the difference between the thermal properties of the porcelain enamel composite and those of steel, enamelled steel sheets carry residual stresses: a compressive residual stress acts on the coating and a tensile residual stress acts on the steel sheet. The residual stresses have been estimated by measuring the curvature of rectangular one-side-coated specimens. The Young's modulus and the First Crack Failure (FCF) of the coating have been estimated by four-point bending tests (3-7) monitored by means of the Acoustic Emission (AE) technique (5,6). In particular, the AE information has been used to identify, during the bending tests, the displacement domain over which no coating failure occurs (Free Failure Zone, FFZ). In the FFZ domain, the Young's modulus has been estimated according to ASTM D6272-02. The FCF has been calculated as the ratio between the displacement at the first crack of the coating and the coating thickness on the cracked side. 
The mechanical performances of the tested coated specimens have also been related to the respective microstructure and surface characteristics, and discussed by means of double-entry charts.
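The curvature measurement described above is commonly converted into a coating stress with Stoney's formula; the abstract does not name the formula it uses, so the following is only a sketch of the standard approach, with purely illustrative values for an enamelled steel sheet.

```python
# Hedged sketch: coating residual stress from the curvature of a one-side-coated
# strip, via Stoney's formula (assumed; the thesis does not specify its model).
# Sign convention and thin-coating validity (t_c << t_s) are not enforced here.

def stoney_stress(E_s, nu_s, t_s, t_c, radius):
    """Coating residual stress [Pa].
    E_s: substrate Young's modulus [Pa], nu_s: substrate Poisson ratio,
    t_s / t_c: substrate / coating thickness [m], radius: curvature radius [m]."""
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_c * radius)

# Illustrative (not measured) numbers: 0.8 mm steel, 0.2 mm enamel, R = 2 m.
stress = stoney_stress(E_s=210e9, nu_s=0.30, t_s=0.8e-3, t_c=0.2e-3, radius=2.0)
print(f"coating residual stress: {stress / 1e6:.1f} MPa")  # about 80 MPa here
```

A measured curvature radius would replace the illustrative value; the thesis additionally distinguishes the compressive (coating) and tensile (substrate) components, which this one-line estimate does not.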

Resumo:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T., Berkeley and others, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the shift of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis, based on simulation, of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel-computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. 
Furthermore, we propose a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow, based on modified publicly available tools, that can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop;
• a comprehensive simulation-based comparison of different network-interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to the timing-closure issue in the design of synchronous Networks on Chip: our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution that simplifies the design of NoCs while also increasing their performance and reducing their power and area consumption: we propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
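The Spidergon topology analysed above connects N nodes (N even) in a ring, with clockwise, counterclockwise and "across" links to the diametrically opposite node. A minimal sketch of the textbook shortest-path ("across-first") routing decision on this topology follows; it is a reconstruction from the public Spidergon literature, not the thesis's own algorithm (whose name is not given in the abstract).

```python
# Hedged sketch of across-first routing on an n-node Spidergon (n even).
# Each node i has neighbours (i+1) % n, (i-1) % n and (i + n//2) % n.

def next_hop(src, dst, n):
    """Neighbour of `src` on a shortest path to `dst`."""
    assert n % 2 == 0 and src != dst
    delta = (dst - src) % n
    if delta <= n // 4:            # destination is close clockwise
        return (src + 1) % n
    if delta >= n - n // 4:        # destination is close counterclockwise
        return (src - 1) % n
    return (src + n // 2) % n      # otherwise take the across link first

def route(src, dst, n):
    """Full hop-by-hop path from src to dst."""
    path = [src]
    while path[-1] != dst:
        path.append(next_hop(path[-1], dst, n))
    return path

print(route(0, 7, 16))  # → [0, 8, 7]: across first, then one ring hop
```

After the single across hop the remaining ring distance is at most n/4, so the path length is bounded by about n/4 + 1 hops, which is the property that makes the topology attractive for low-diameter SoC interconnects.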

Resumo:

«In altri termini mi sfuggiva e ancora oggi mi sfugge gran parte del significato dell’evoluzione del tempo; come se il tempo fosse una materia che osservo dall’esterno. Questa mancanza di evoluzione è fonte di alcune mie sventure ma anche mi appartiene con gioia.» Aldo Rossi, Autobiografia scientifica. The temporal dimension underpinning the draft of Autobiografia scientifica by Aldo Rossi may be referred to what Lucien Lévy-Bruhl, the well-known French anthropologist, defines as “primitive mentality” and “prelogical” conscience: the book of life has lost its page numbers, even its punctuation. For Lévy-Bruhl, but certainly for Rossi, life or its summing up becomes a continuous account of ellipses, gaps, repetitions that may be read from left to right or vice versa, from head to foot or vice versa, without distinction. Rossi’s autobiographical writing seems to accept and support the confusion with which memories have been collected, recording them in the order memory gives them in mental distillation or simply in the chronological order in which they happened. For Rossi, the confusion reflects the melting of memory elements into a composite image which is the result of a fusion. He is aware that the same sap pervades all the memories he is going to put in order: each of them has a common denominator. Differences have diminished, almost faded; the quick glance prevails over the distinction of each episode. Rossi’s writing is beyond the categories dependent on time: past and present, before and now. For Rossi, only repetition – the repetition the text will make possible an indefinite number of times – gives peculiarity to the event. As Gilles Deleuze knows, “things” may only last as “singleness”: the more frequent the repetition, the more singular the memory phenomenon that recurs, because only what is singular magnifies itself and happens endlessly forever. 
Rossi understands that, “raising the first time to the nth power forever”, repetition becomes glorification . His may be an autobiography that, celebrating originality, enhances the memory event through repetition; in this it greatly differs from biographical reproduction, in which each repetition is but a weaker echo, a duller copy, endowed with smaller and smaller power in comparison with the original. Paradoxically, for Deleuze repetition asserts the originality and singularity of what is repeated. Rossi seems to share the thought expressed by Kierkegaard in the essay Repetition: «The hope is a graceful maiden slipping through your fingers; the memory, an elderly woman, indeed pretty, but never satisfactory when needed; the repetition, a loved friend you are never tired of, as it is only the new that makes you bored. The old never bores you, and its presence makes you happy [...] life is but a repetition [...] here is the beauty of life» . Rossi knows well that repetition hints at the lasting stability of cosmic time. Kierkegaard goes on: «The world exists, and it exists as a repetition» . Rossi devotes himself, on purpose and in all conscience, to collecting, inventorying and «reviewing life», his own life, according to a recovery not from the past but of the past: a search work, the «recherche du temps perdu», as Proust entitled his masterpiece on memory. If the past time is not to be wasted, one must give it presence. «Memoria e specifico come caratteristiche per riconoscere se stesso e ciò che è estraneo mi sembravano le più chiare condizioni e spiegazioni della realtà. Non esiste uno specifico senza memoria, e una memoria che non provenga da un momento specifico; e solo questa unione permette la conoscenza della propria individualità e del contrario (self e non-self)» . Rossi wants to understand himself, his own character; it is really his own character that requires to be understood, to increase its own introspective ability and intelligence. 
«Può sembrare strano che Planck e Dante associno la loro ricerca scientifica e autobiografica con la morte; una morte che è in qualche modo continuazione di energia. In realtà, in ogni artista o tecnico, il principio della continuazione dell’energia si mescola con la ricerca della felicità e della morte» . The eschatological incipit of Rossi’s autobiography refers to Freud’s thought, in the exact circularity of Dante’s framework and in the equally exact circularity of the statement of the principle of conservation of energy: it was in fact Freud who connected repetition to death. For Freud, the desire for repetition is an instinct rooted in biology. The primary aim of such an instinct would be to restore a previous condition, so that the repeated history represents a part of the past (even if concealed) and, relieving the removal, reduces anguish and tension. So, Freud asks himself, what is the most remote state to which the instinct, through repetition, wants to go back? It is a pre-vital, inorganic condition of pure entropy, a not-to-be condition in which no tension exists; in other words, Death. With the theme of death, Rossi introduces the theme of circularity, which further on refers to the sense of continuity in transformation or, conversely, of transformation in continuity. «[...] la descrizione e il rilievo delle forme antiche permettevano una continuità altrimenti irripetibile, permettevano anche una trasformazione, una volta che la vita fosse fermata in forme precise» . Rossi’s attitude seems to hint at the reflection on time and, in a broad sense, at the thought on life and things expressed by T.S. Eliot in Four Quartets: «Time present and time past / Are both perhaps present in time future, / And time future is contained in time past. / If all time is eternally present / All time is unredeemable. / What might have been is an abstraction / Remaining a perpetual possibility / Only in a world of speculation. 
/ What might have been and what has been / Point to one end, which is always present. [...]» . Aldo Rossi’s autobiographical story coincides with the description of “things” and the description of himself through things, in exact parallel with craft or art. He seems to make all things made by man coincide with the personal or artistic story, with the consequent immediate necessity of formulating a new interpretation: the flow of things has never met a total stop; all that exists nowadays is but a repetition or a variant of something existing some time ago, and so on, without any interruption, back to the early dawn of human life. Nevertheless, Rossi must operate specific subdivisions inside the continuous connection in time – of his time – even if limited by a present beginning and end of his own existence. This artist, as a “historian” of himself and his own life – as an auto-biographer – enjoys the privilege of being able to decide if and how to make the cut at a certain point rather than at another, without being compelled to justify his choice. In this sense, his story is a very ductile and flexible matter: a good storyteller can choose any moment to start a certain sequence of events. Yet Rossi is aware that, beyond mere narration, there is the problem of identifying in history – his own personal story – those flakings where a clean cut enables the separation of events of a different nature. In order to do so, he has not only to make an inventory of his own “things”, but also to appeal to the authority of the Divina Commedia, begun by Dante when he was 30. «A trent’anni si deve compiere o iniziare qualcosa di definitivo e fare i conti con la propria formazione» . For Rossi, the poet exercises his authority not only in the text, but also in his will to set out on a mystical journey and to hand it down through an exact descriptive will. 
Rossi turns not only to the authority of poetry but also evokes the authority of science with Max Planck and his Scientific Autobiography, published in Italian translation by Einaudi in 1956. Concerning Planck, Rossi takes up a seemingly secondary element of his account, where the German physicist «[...] risale alle scoperte della fisica moderna ritrovando l’impressione che gli fece l’enunciazione del principio di conservazione dell’energia; [...]» . It is again the act of describing that links Rossi to Planck; it is the description of a circularity, that of the conservation of energy, which endorses Rossi’s autobiographical discourse in search of both happiness and death. Rossi seems to agree perfectly with the thought of Planck at the opening of his own autobiography: «The decision to devote myself to science was a direct consequence of a discovery which has never ceased to arouse my enthusiasm since my early youth: the laws of human thought coincide with the ones governing the sequences of the impressions we receive from the world surrounding us, so that mere logic can enable us to penetrate into the latter’s mechanism. It is essential that the outer world is something independent of man, something absolute. The search for the laws dealing with this absolute seems to me the highest scientific aim in life» . For Rossi the survey of his own life represents a way to change events into experiences, to concentrate the emotions and group them into meaningful plots: «It seems, as one becomes older, / That the past has another pattern, and ceases to be a mere sequence [...]» Eliot wrote in Four Quartets, which are a meditation on time, old age and memory . And he goes on: «We had the experience but missed the meaning, / And approach to the meaning restores the experience / In a different form, beyond any meaning [...]» . 
Rossi restores in his autobiography – but not only there – the most ancient sense of memory, aware that for at least fifteen centuries the Latin word memoria was used to denote the activity of bringing images back to mind: the psychology of memory, which starts with Aristotle (De Anima), used to consider such a faculty wholly essential to the mind. Keith Basso writes: «The thought materializes in the form of “images”» . Rossi knows well – as Aristotle said – that if you do not have a collection of mental images to remember – imagination – there is no thought at all. According to this psychological tradition, what today we conventionally call “memory” is but a way of imagining created by time. Rossi, consciously entering this stream of thought, which passes through the Renaissance ars memoriae down to us, gives great importance to the word and assumes it as a real place, much more than a recollection, even more than a production and an emotional elaboration of images.

Resumo:

The object of the research is the examination and evaluation of the limits placed on private autonomy by the prohibition of abuse of a dominant position, as laid down, in competition law, by Art. 3 of Law no. 287 of 10 October 1990, itself modelled on Art. 82 of the EC Treaty. As a preliminary step, it seemed appropriate to survey the interests protected by competition law, in order to identify the class of persons entitled to rely on the set of civil-law remedies – admittedly meagre and in need of interpretive integration – provided by Art. 33 of Law no. 287/1990. It thus emerged that modern competition law, based on a model of workable competition, cannot be regarded as resting on corporatist grounds protecting competing undertakings alone, since it directly affects – and endows with legal relevance – the subjective positions of all those operating on the market, regardless of formal qualifications. In this light, the fundamental features of the abuse of a dominant position were examined, as they have taken shape in the application practice of national as well as Community bodies. Indeed, an important aspect characterising the Italian rules on abuse of a dominant position, and on competition in general, distinguishing them from the rules of other legal systems close to ours, is the link of dependence on Community law, laid down by Art. 1(4) of Law no. 287/1990, which also has peculiar repercussions on the civil-law application of the institution. The research then turned to the general prohibition of abuse of rights, in order to assess its possible relationships with the institution under examination. 
In this regard, an attempt was made to identify, as far as possible, the essential traits of the abuse-of-rights doctrine in relation to the exercise of private autonomy in contractual matters, with particular reference to the evolution of scholarly thought and to the most recent case law on the subject, which has emphasised the role of good faith understood in an objective sense. Particularly interesting appeared the possibility of extending the boundaries of the abuse-of-rights doctrine so as to encompass the exercise of individual prerogatives other than subjective rights. Such an extension could have interesting repercussions for the protection of weaker parties in business relationships, meaning both relationships between undertakings in an equal or asymmetric position and relationships between undertakings and consumers. Consideration was also given to remedies against abusive conduct, in the light of modern contributions on the exceptio doli generalis, on compensatory protection and on contractual invalidity, with which it is appropriate to engage if one intends to fill – as seems appropriate – the gaps in civil-law protection against abuse of a dominant position. Given its evident contiguity with the institution under examination, the prohibition of abuse of economic dependence was then examined, albeit briefly; it emerges as a hybrid figure, halfway between contract law and competition law. Although inserted in a statute regulating industrial subcontracting (Art. 9, Law no. 192 of 18 June 1998), this provision has attracted wide scholarly interest, with several voices in favour of recognising the general scope of the prohibition as a principle of contractual justice applying to all relationships between undertakings. 
In an attempt to verify this assumption, the rationale underlying Art. 9 of Law no. 192/1998 was investigated, also in view of its relationship with the prohibition of abuse of a dominant position. On this point the legislature specifically intervened with Law no. 57 of 5 March 2001, recognising the competence of the Italian Competition Authority to act, even of its own motion, on abuses of economic dependence with competitive relevance. Two statutory categories of abuse of economic dependence can thus be envisaged: one with effects confined to the individual inter-business relationship, governed by civil law, and one with negative effects on the market, subject also – but not only – to antitrust rules; drawing a clear line between the two fields is in any case not easy. Brief remarks were also devoted to remedies against abuses of economic dependence, which raise issues not unlike those arising for the prohibition of abuse of a dominant position. On these foundations, the research proceeded with a survey of the civil-law remedies available against abuses of a dominant position. First, the remedy of damages was considered, starting from the identification of the source of the abuser's liability and critically assessing the various hypotheses put forward in legal scholarship, also with reference to recent developments concerning duties of protection. The admissibility of a unitary view of the wrongs in question was also examined, as multi-offensive wrongs independent of the formal status of the injured party, be it a competing undertaking, a distributor or an intermediary – or rather, in general, a complementary undertaking – or a consumer. 
The identification of the rules applicable to damages actions seems in any case to depend largely on the answer to the preliminary question of the nature – tortious, pre-contractual or contractual – of the liability arising from breach of the prohibition. While no universal solutions appear available, the following issues seemed worthy of closer examination: as to standing, the problem of the passing-on of the harm; as to causation, the criterion to be used for its assessment, the admissibility of presumptive evidence and the effect of administrative sanctioning decisions; as to the subjective element, the possibility of applying Art. 2600 of the Civil Code by analogy and the aspects linked to fault for breach of rules of conduct; as to recoverable damages, the criteria for assessing and proving the harm; finally, as to limitation, the possibility of classifying antitrust harm as “long-latent” harm, with the related consequences for identifying the dies a quo of the limitation period. Secondly, the fate of contracts concluded in breach of the prohibition of abuse of a dominant position was examined. In particular, the question was raised whether – in the absence of statutory indications – the “virtual” nullity of such contracts can be envisaged, also in the light of the recent confirmation by the Supreme Court of the distinction between rules of conduct and rules of validity of the contract. The possibility of classifying this nullity as a “protective” nullity was also examined – and assessed negatively – together with a brief survey of the main aspects relating to standing to sue, the court's power to raise the issue of its own motion, and the extent of the invalidity. 
Some considerations were then devoted to the well-known question of the fate of contracts concluded “downstream” of abusive conduct, for which declarations of nullity do not seem easy to envisage, whereas recourse to compensatory protection appears possible – and indeed preferable. Finally, the availability, against abuses of a dominant position, of actions other than nullity and damages – the only ones expressly contemplated by Art. 33(2) of Law no. 287/1990 – was not neglected. In particular, attention focused on the possibility of imposing on the dominant undertaking an obligation to contract on fair and non-discriminatory terms. The importance of the topic is attested not only by the divergence of the (admittedly few) court decisions, but also by the extensive scholarly debate that has long been developing, which still touches on salient aspects of the law of obligations and of the protection afforded by the legal system to the freedom of economic initiative.

Resumo:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step ---beta-reduction--- involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short in the case of higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We then observe that foundational studies for higher-order process calculi must be carried out directly on them, exploiting their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. 
We develop the basic theory of this core calculus and rely on it to study the expressive power of constructs universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
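The basic computational step of such a core calculus is process-passing communication followed by term instantiation: an output a&lt;P&gt; meets an input a(x).Q and reduces to Q{P/x}. The following is a toy reconstruction of that single step (a minimal HOcore-like syntax of my own choosing, not the dissertation's actual calculus); communication is only detected at the top level of a binary parallel composition, and structural congruence is omitted.

```python
# Hedged sketch: one communication step of a toy higher-order calculus.
# Syntax: 0 | x | a<P> (send process P on a) | a(x).P (receive on a) | P|Q.
from dataclasses import dataclass

class Proc: pass

@dataclass(frozen=True)
class Nil(Proc): pass                     # the inert process 0

@dataclass(frozen=True)
class Var(Proc):                          # process variable x
    name: str

@dataclass(frozen=True)
class Send(Proc):                         # a<P>
    chan: str
    payload: Proc

@dataclass(frozen=True)
class Recv(Proc):                         # a(x).P
    chan: str
    var: str
    body: Proc

@dataclass(frozen=True)
class Par(Proc):                          # P | Q
    left: Proc
    right: Proc

def subst(p, x, q):
    """p{q/x}: replace free occurrences of variable x in p by process q."""
    if isinstance(p, Var):
        return q if p.name == x else p
    if isinstance(p, Send):
        return Send(p.chan, subst(p.payload, x, q))
    if isinstance(p, Recv):               # x is shadowed under a binder for x
        return p if p.var == x else Recv(p.chan, p.var, subst(p.body, x, q))
    if isinstance(p, Par):
        return Par(subst(p.left, x, q), subst(p.right, x, q))
    return p                              # Nil

def step(p):
    """One reduction: a<P> | a(x).Q  -->  Q{P/x}. Returns None if stuck."""
    if isinstance(p, Par):
        l, r = p.left, p.right
        if isinstance(l, Send) and isinstance(r, Recv) and l.chan == r.chan:
            return subst(r.body, r.var, l.payload)
        if isinstance(l, Recv) and isinstance(r, Send) and l.chan == r.chan:
            return subst(l.body, l.var, r.payload)
    return None

# a<b<0>> | a(x).x : receive a whole process on a, then run it.
print(step(Par(Send("a", Send("b", Nil())), Recv("a", "x", Var("x")))))
```

Note how the payload is a term, not a name: the receiver obtains the whole process b&lt;0&gt; and may run it, which is exactly the lambda-calculus-like instantiation the abstract contrasts with the pi-calculus's name-passing.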

Resumo:

Sperm cells need hexoses as a substrate for their function, both for the maintenance of membrane homeostasis and for the movement of the tail. These cells have a peculiar metabolism that has not yet been fully understood, but it is clear that they obtain energy from hexoses through glycolysis and/or oxidative phosphorylation. Spermatozoa come into contact with different external environments, beginning with the testicular and epididymal fluid, passing to the seminal plasma and finally to the female genital tract fluids; in addition, with the spread of reproductive biotechnologies, sperm cells are diluted and stored in various media containing different energetic substrates. To utilize these energy sources, sperm cells, like other eukaryotic cells, possess a well-developed protein system, mainly represented by the GLUT family proteins. These transporters have a membrane-spanning α-helix structure and work as carriers that permit a fast, gradient-dependent passage of sugar molecules through the lipid bilayer of the sperm membrane. Many GLUTs have been studied in human, bull and rat spermatozoa; the presence of some GLUTs has also been demonstrated in boar and dog spermatozoa. The aims of the present study were: - to determine the presence of GLUTs 1, 2, 3, 4 and 5 in boar, horse, dog and donkey spermatozoa and to describe their localization; - to study possible changes in GLUT location after capacitation and acrosome reaction in boar, stallion and dog spermatozoa; - to determine possible changes in GLUT localization after capacitation induced by insulin and IGF stimulation in boar spermatozoa; - to evaluate changes in GLUT localization after flow-cytometric sex sorting in boar sperm cells. 
The presence and localization of GLUTs 1, 2, 3 and 5 have been demonstrated in boar, stallion, dog and donkey spermatozoa by western blotting and immunofluorescence analysis; a relocation of GLUTs after capacitation has been observed only in dog sperm cells, while no changes have been observed in the other species examined. As for the boar, stimulation of capacitation with insulin and IGF did not cause any change in GLUT localization, nor did the flow-cytometric sorting procedure. In conclusion, this study confirms the presence of GLUTs 1, 2, 3 and 5 in boar, dog, stallion and donkey spermatozoa, while GLUT 4 seems to be absent, in agreement with other studies. Only in dog sperm cells do capacitating conditions induce a change in GLUT distribution, although the physiological role of these changes remains to be investigated further.

Resumo:

Generic programming is likely to become a new challenge for a critical mass of developers. It is therefore crucial to refine the support for generic programming in mainstream Object-Oriented languages — both at the design and at the implementation level — as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to contribute towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation for Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that would allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. 
In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach — first pioneered by the so-called EGO translator — is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java–Prolog integration. Integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog's declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards makes it possible to define a precise mapping between Object-Oriented and declarative features. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
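The idea of modelling Prolog terms as a generic class hierarchy can be sketched as follows. This is a hypothetical illustration, not the actual PatJ API: the class and method names are invented, but they show how the generic signature lets the Java type-checker track which Java value a term maps to.

```java
// Hypothetical sketch of a PatJ-style term hierarchy (illustrative names):
// the type parameter records the Java value a Prolog term converts to.
abstract class Term<J> {
    abstract J toJava(); // the Java value this term maps to
}

final class Atom extends Term<String> {
    final String name;
    Atom(String name) { this.name = name; }
    String toJava() { return name; }
}

final class Int extends Term<Integer> {
    final int value;
    Int(int value) { this.value = value; }
    Integer toJava() { return value; }
}

// A compound term whose argument types are carried in the generic signature;
// wildcards (e.g. Term<?>) then express partial knowledge about a term.
final class Couple<A extends Term<?>, B extends Term<?>> extends Term<Object[]> {
    final A first; final B second;
    Couple(A first, B second) { this.first = first; this.second = second; }
    Object[] toJava() { return new Object[]{ first.toJava(), second.toJava() }; }
}

class PatJDemo {
    // Compile-time checking of the paradigm mapping: eval(new Atom("x"))
    // is rejected by javac, because Atom is a Term<String>, not Term<Integer>.
    static int eval(Term<Integer> t) { return t.toJava(); }
}
```

With such an encoding, the mismatch checks that library-based integration approaches defer to run time become ordinary Java type errors.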


Resumo:

My PhD research work focuses on the Electrochemically Generated Luminescence (ECL) investigation of several different homogeneous and heterogeneous systems. ECL is a redox-induced emission, a process whereby species generated at electrodes undergo a high-energy electron transfer reaction to form excited states that emit light. Since its first application, the ECL technique has become a very powerful analytical tool and has been widely used in biosensor transduction. ECL presents intrinsically low noise and high sensitivity; moreover, the electrochemical generation of the excited state avoids the scattering associated with an excitation light source: for all these characteristics, it is the technique of choice for ultrasensitive immunoassay detection. The majority of ECL systems involve species in solution, where the emission occurs in the diffusion layer near the electrode surface. Over the past few years, however, intense research has focused on ECL generated from species constrained on the electrode surface. The aim of my work is to study the behavior of ECL-generating molecular systems upon the progressive increase of their spatial constraints, that is, passing from isolated species in solution, to fluorophores embedded within a polymeric film and, finally, to patterned surfaces bearing “one-dimensional” emitting spots. In order to describe these trends, I use different “dimensions” to indicate the different classes of compounds. My thesis was mostly developed in the electrochemistry group of Bologna under the supervision of Prof. Francesco Paolucci and Dr. Massimo Marcaccio. With their help, and thanks to their long experience in the fields of molecular and supramolecular ECL and in surface investigations using scanning probe microscopy techniques, I was able to obtain the results described herein. Moreover, during my research work I established a new collaboration with the Nanobiotechnology group of Prof.
Robert Forster (Dublin City University), where I spent a research period. Prof. Forster has broad experience in the biomedical field; in particular, his research focuses on film-surface biosensors based on ECL transduction. This thesis can be divided into three sections, described as follows: (i) in the first section, homogeneous molecular and supramolecular ECL-active systems, either organic or inorganic species (i.e., corannulene, dendrimers and an iridium metal complex), are described. The driving force for this kind of study includes the search for new luminophores that display, on the one hand, higher ECL efficiencies and, on the other, simple mechanisms for modulating the intensity and energy of their emission, in view of their effective use in bioconjugation applications. (ii) in the second section, the investigation of some heterogeneous ECL systems is reported. Redox polymers comprising inorganic luminophores are described. In this context, a new conducting platform based on carbon nanotubes was developed, aimed at accomplishing both the binding of a biological molecule and its electronic wiring to the electrode. This is an essential step for the application of ECL in the field of biosensors. (iii) in the third section, different patterns were produced on the electrode surface using Scanning Electrochemical Microscopy. I developed a new method for locally functionalizing an inert surface and reacting this surface with a luminescent probe. In this way, I successfully obtained a locally ECL-active platform for multi-array applications.
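The "high-energy electron transfer" step mentioned above obeys an energetic criterion that is standard in the ECL literature (it is not stated in the abstract, but underpins the search for efficient luminophores): radical-ion annihilation between an oxidized donor and a reduced acceptor can directly populate the emitting singlet state only if

```latex
% Energy-sufficiency criterion for annihilation ECL (standard formulation;
% the 0.16 eV term is the usual entropic correction):
-\Delta H^{0} \;=\; E^{0}\!\left(\mathrm{D}^{\bullet +}/\mathrm{D}\right)
  \;-\; E^{0}\!\left(\mathrm{A}/\mathrm{A}^{\bullet -}\right)
  \;-\; 0.16\ \mathrm{eV} \;\geq\; E_{\mathrm{s}}
```

where \(E_{\mathrm{s}}\) is the singlet excited-state energy; energy-deficient systems can still emit via the triplet route.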


Resumo:

DNA topology is an important modifier of DNA functions. Torsional stress is generated when right-handed DNA is either over- or underwound, producing structural deformations which drive, or are driven by, processes such as replication, transcription, recombination and repair. DNA topoisomerases are molecular machines that regulate the topological state of DNA in the cell. These enzymes accomplish this task either by passing one strand of the DNA through a break in the opposing strand or by passing a region of the duplex from the same or a different molecule through a double-stranded cut generated in the DNA. Because of their ability to cut one or two strands of DNA, they are also targets of some of the most successful anticancer drugs used in standard combination therapies of human cancers. An effective anticancer drug is Camptothecin (CPT), which specifically targets DNA topoisomerase 1 (TOP 1). The research project of the present thesis focuses on the role of human TOP 1 during transcription and on the transcriptional consequences of TOP 1 inhibition by CPT in human cell lines. Previous findings demonstrate that TOP 1 inhibition by CPT perturbs RNA polymerase II (RNAP II) density at promoters and along transcribed genes, suggesting an involvement of TOP 1 in RNAP II promoter-proximal pausing. Within the transcription cycle, promoter pausing is a fundamental step whose importance as a means of coupling elongation to RNA maturation has been well established. By measuring nascent RNA transcripts bound to chromatin, we demonstrated that TOP 1 inhibition by CPT can enhance RNAP II escape from the promoter-proximal pausing site of the human Hypoxia Inducible Factor 1 (HIF-1) and c-MYC genes in a dose-dependent manner. This effect is dependent on Cdk7/Cdk9 activities, since it can be reversed by the kinase inhibitor DRB.
Since CPT affects RNAP II by promoting the hyperphosphorylation of its Rpb1 subunit, these findings suggest that TOP 1 inhibition by CPT may increase the activity of Cdks, which in turn phosphorylate the Rpb1 subunit of RNAP II, enhancing its escape from pausing. Interestingly, the transcriptional consequences of CPT-induced topological stress are wider than expected. CPT increased co-transcriptional splicing of exons 1 and 2 and markedly affected alternative splicing at exon 11. Surprisingly, despite its well-established transcription-inhibitory activity, CPT can trigger the production of a novel long RNA (5’aHIF-1) antisense to the human HIF-1 mRNA, as well as a known antisense RNA at the 3’ end of the gene, while decreasing mRNA levels. These effects require TOP 1 and are independent of CPT-induced DNA damage. Thus, when the supercoiling imbalance promoted by CPT occurs at a promoter, it may trigger deregulation of RNAP II pausing, increased chromatin accessibility and activation/derepression of antisense transcripts in a Cdk-dependent manner. A changed balance of antisense transcripts and mRNAs may regulate the activity of HIF-1 and contribute to the control of tumor progression. After focusing our TOP 1 investigations at the single-gene level, we extended the study to the whole genome by developing the “Topo-Seq” approach, which generates a genome-wide map of TOP 1 activity sites in human cells. The preliminary data revealed that TOP 1 preferentially localizes at intragenic regions, and in particular at the 5’ and 3’ ends of genes. Surprisingly, upon TOP 1 downregulation, which impairs protein expression by 80%, TOP 1 molecules are mostly localized around the 3’ ends of genes, suggesting that TOP 1 activity is essential at these regions and can be compensated for at 5’ ends.
The developed procedure is a pioneering tool for the detection of TOP 1 cleavage sites across the genome and can open the way to further investigations of the enzyme's roles in different nuclear processes.
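The topological bookkeeping behind the two strand-passage mechanisms described at the beginning of this abstract is conventionally expressed through the linking number decomposition:

```latex
% Linking number of closed circular DNA splits into twist and writhe;
% supercoiling is measured as the deviation from the relaxed value Lk_0.
Lk \;=\; Tw \;+\; Wr , \qquad \Delta Lk \;=\; Lk - Lk_{0}
```

Negative \(\Delta Lk\) corresponds to underwound (negatively supercoiled) DNA, positive \(\Delta Lk\) to overwound DNA; passage of one strand through a single-strand break changes \(Lk\) by \(\pm 1\) per passage event, while passage of a duplex through a double-stranded cut changes it by \(\pm 2\).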


Resumo:

Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis focuses on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is driven by a strong interaction between the magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitored data and inferring a consistent conceptual model. The analyzed observables are ground displacement, gravity changes, electrical conductivity, the amount, composition and temperature of the gases emitted at the surface, and the extent of the degassing area. Results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response and radial pattern of these signals depend not only on the evolution of fluid circulation; a major role is also played by the assumed rock properties. Numerical simulations highlight the differences that arise from assuming different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles with low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed, aimed at understanding the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when changes in air temperature and barometric pressure are applied at the ground surface.
Hydrothermal circulation, however, is not exclusive to volcanic systems. Hot fluids are involved in several problems of practical interest, such as geothermal engineering, nuclear waste propagation in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations, which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increase resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives dense water up into the conduits, but does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density stratified. If the warm and salty fluid does not cool while passing through the conduit, an oscillatory solution is then possible. Parameter studies delineate the steady-state (static) and oscillatory regimes.
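The static-equilibrium outcome described above can be illustrated with a back-of-the-envelope balance (a minimal sketch, not the TOUGH2 model; the densities and overpressure below are illustrative): in a conduit connecting an over-pressured brine reservoir to a fresh-water aquifer, the extra weight of the denser brine column that rises into the conduit can absorb the reservoir overpressure.

```java
// Hydrostatic balance sketch for brine rising into a conduit filled with
// lighter fresh water (illustrative, one-dimensional, incompressible).
class BrineColumn {
    static final double G = 9.81; // gravitational acceleration, m/s^2

    // Height of brine that must rise so the added weight of the denser
    // column balances the reservoir overpressure:
    //   dP = (rhoBrine - rhoFresh) * g * h  =>  h = dP / ((rhoB - rhoF) * g)
    static double equilibriumRise(double overpressurePa,
                                  double rhoBrine, double rhoFresh) {
        return overpressurePa / ((rhoBrine - rhoFresh) * G);
    }

    // If the required rise fits inside the conduit, a new static equilibrium
    // exists; otherwise brine reaches the aquifer and flow must persist.
    static boolean reachesStaticEquilibrium(double overpressurePa,
                                            double rhoBrine, double rhoFresh,
                                            double conduitLengthM) {
        return equilibriumRise(overpressurePa, rhoBrine, rhoFresh)
                <= conduitLengthM;
    }
}
```

For example, with brine at 1100 kg/m3, fresh water at 1000 kg/m3 and an overpressure of about 0.1 MPa, the brine interface rises roughly 100 m and then stops, which is the static outcome the simulations identify for density-stratified initial conditions.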