13 results for self-consistent-field
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Subduction zones are the most favorable settings for generating tsunamigenic earthquakes, since friction between the oceanic and continental plates produces strong seismicity there. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great tsunami-generating earthquakes. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the consequent tsunami and hence to mitigating the risk along the coasts. The typical way to gather information about the source process is to invert the available geophysical data. Tsunami data are particularly useful because they constrain the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data cannot constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault has been inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrains the slip distribution on the fault much better than separate inversions of the single datasets. We then study the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determine the slip distribution and the mean rupture velocity along the causative fault.
The largest patch of slip was concentrated on the deepest part of the fault, which is the likely reason for the small tsunami waves that followed the earthquake, underscoring the crucial role that rupture depth plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS, and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important because it may play a significant role in tsunami generation; in particular, for slow earthquakes a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment can generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems – particularly in the near field – for the improvement of current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
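The joint inversion described in this abstract can be sketched, in a heavily simplified linear form, as a weighted least-squares problem in which the datasets are stacked. Everything below (Green's functions, noise levels, "true" slip values) is a synthetic stand-in; a real study would build the Green's functions by forward modelling and would typically add smoothing and positivity constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Green's functions: rows = observations, cols = subfaults.
n_sub = 6
G_tsunami = rng.normal(size=(40, n_sub))
G_geodetic = rng.normal(size=(20, n_sub))

true_slip = np.array([0.2, 1.5, 3.0, 2.0, 0.5, 0.1])  # metres (toy values)
d_tsunami = G_tsunami @ true_slip + rng.normal(scale=0.05, size=40)
d_geodetic = G_geodetic @ true_slip + rng.normal(scale=0.05, size=20)

# Joint inversion: stack the datasets, weighting each by its noise level,
# and solve the weighted least-squares problem min ||W (G m - d)||^2.
w_t, w_g = 1.0 / 0.05, 1.0 / 0.05
G = np.vstack([w_t * G_tsunami, w_g * G_geodetic])
d = np.concatenate([w_t * d_tsunami, w_g * d_geodetic])
slip, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The stacked system illustrates why the joint inversion is better constrained: subfaults that are poorly sampled by one dataset can still be resolved by the other.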
Abstract:
This thesis analyzes, theoretically and computationally, the phenomenon of partial ionization of substitutional dopants in silicon carbide at thermal equilibrium. The analysis is based on the solution of the charge neutrality equation and takes the following phenomena into account: several energy levels in the bandgap; Fermi-Dirac statistics for free carriers; screening effects on the dopant ionization energies; and the formation of impurity bands. A self-consistent model and a corresponding simulation software have been developed. A preliminary comparison of our calculations with existing experimental results is carried out.
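As an illustration of the kind of calculation involved, the following minimal sketch solves the charge neutrality equation for a single donor level by bisection on the Fermi level. It uses the Boltzmann approximation and toy parameters loosely inspired by SiC; the thesis model is richer (Fermi-Dirac statistics, multiple levels, screening, impurity bands), so this is only a schematic analogue.

```python
import math

# Toy parameters (assumed, roughly SiC-like; not the thesis calibration).
kT = 0.0259          # eV, room temperature
Nc = 1.7e19          # cm^-3, effective conduction-band density of states
Nd = 1.0e18          # cm^-3, donor concentration
Ed = 0.1             # eV, donor ionization energy below Ec
g = 2                # donor degeneracy factor

def n_free(ef):
    # Free-electron density, Boltzmann approximation (ef measured from Ec).
    return Nc * math.exp(ef / kT)

def nd_ionized(ef):
    # Ionized-donor density for a single level at Ec - Ed.
    return Nd / (1.0 + g * math.exp((ef + Ed) / kT))

# Charge neutrality n = Nd+ (holes/acceptors neglected), solved by
# bisection: n_free grows with ef while nd_ionized decreases.
lo, hi = -1.0, 0.0   # eV relative to Ec
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if n_free(mid) < nd_ionized(mid):
        lo = mid
    else:
        hi = mid
ef = 0.5 * (lo + hi)
ionization_fraction = nd_ionized(ef) / Nd
```

With these toy numbers only a fraction of the donors is ionized at room temperature, which is the essence of the partial-ionization phenomenon the thesis studies.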
Abstract:
Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has only recently been isolated from graphite. This material shows very attractive physical properties, such as superior carrier mobility, current-carrying capability, and thermal conductivity. In consideration of this, graphene has been the subject of intense investigation as a promising candidate for nanometer-scale devices in electronic applications. In this work, graphene nanoribbons (GNRs) – narrow strips of graphene in which a band gap is induced by the quantum confinement of carriers in the transverse direction – have been studied. As experimental GNR-FETs are still far from ideal, mainly due to their large width and edge roughness, an accurate description of the physical phenomena occurring in these devices is required to obtain valuable predictions about the performance of these novel structures. A code has been developed for this purpose and used to investigate the performance of 1 to 15-nm-wide GNR-FETs. Given the importance of an accurate description of quantum effects in the operation of graphene devices, a full-quantum transport model has been adopted: the electron dynamics is described by a tight-binding (TB) Hamiltonian and transport is solved within the formalism of the non-equilibrium Green's functions (NEGF). Both ballistic and dissipative transport are considered; the electron-phonon interaction is included within the self-consistent Born approximation. In consideration of their different energy band gaps, narrow GNRs are expected to be suitable for logic applications, while wider ones could be promising candidates as channel material for radio-frequency applications.
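A minimal sketch of the TB + NEGF machinery can be given for a 1D tight-binding chain rather than a full GNR Hamiltonian. It uses the standard analytic surface Green's function of a semi-infinite chain as the lead self-energy and the Caroli formula for the transmission; this is an illustration of the formalism, not the thesis code.

```python
import numpy as np

t = 1.0          # hopping energy (arbitrary units)
N = 5            # number of device sites
# Device Hamiltonian: zero on-site energy, nearest-neighbour hopping t.
H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)

def surface_g(E):
    # Analytic surface Green's function of a semi-infinite 1D chain;
    # the square-root branch is chosen to give a retarded function.
    E = E + 1e-9j
    sq = np.sqrt(E * E - 4 * t * t)
    g = (E - sq) / (2 * t * t)
    if g.imag > 0:
        g = (E + sq) / (2 * t * t)
    return g

def transmission(E):
    sigma = t * t * surface_g(E)                 # lead self-energy
    Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)  # lead broadening matrices
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    # Caroli formula: T(E) = Tr[Gamma_L G Gamma_R G^dagger]
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real
```

For this ideal chain the transmission is 1 inside the band |E| < 2t (the ballistic result) and vanishes outside it; disorder or phonon self-energies would modify G and reduce T.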
Abstract:
The purpose of this research is to provide empirical evidence on the determinants of the economic use of patented inventions, in order to contribute to the literature on technology and innovation management. The work consists of three main parts, each of which constitutes a self-contained research paper. The first paper uses a meta-analytic approach to review and synthesize the existing body of empirical research on the determinants of technology licensing. The second paper investigates the factors affecting the choice among the following alternative economic uses of patented inventions: pure internal use, pure licensing, and mixed use. Finally, the third paper explores the least studied option for the economic use of patented inventions, namely the sale of patent rights. The data used to empirically test the hypotheses come from a large-scale survey of European Patent inventors resident in 21 European countries, Japan, and the US. The findings provided in this dissertation contribute to a better understanding of the economic use of patented inventions by expanding the limits of previous research along several dimensions.
Abstract:
Theory and numerical modeling are fundamental tools for understanding, optimizing, and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistic, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied, and, for a range of parameters of interest for laser-plasma acceleration, the dependence of the self-injection threshold on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
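A basic quantity behind the plasma-wave excitation discussed above is the linear plasma wavelength, which sets the characteristic scale of the wake. The short sketch below evaluates it from the electron plasma frequency; the example density is an assumption typical of LPA experiments, not a value from this thesis.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s

def plasma_wavelength(n_e):
    """Linear plasma wavelength lambda_p = 2*pi*c / omega_p for electron
    density n_e in m^-3, with omega_p = sqrt(n_e e^2 / (eps0 m_e))."""
    omega_p = math.sqrt(n_e * e * e / (eps0 * m_e))
    return 2 * math.pi * c / omega_p

# Example: n_e = 1e18 cm^-3 (1e24 m^-3) gives lambda_p of a few tens of um,
# so the "short-pulse" condition means pulses of a comparable length.
lam = plasma_wavelength(1e24)
```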
Abstract:
This thesis focuses on the Helicon Plasma Thruster (HPT) as a candidate for generating thrust for small satellites and CubeSats. Two main topics are addressed: the development of a Global Model (GM) and of a 3D self-consistent numerical tool. The GM is suitable for preliminary analysis of HPTs operating with noble gases such as argon, neon, krypton, and xenon, and with alternative propellants such as air and iodine. A lumping methodology is developed to reduce the computational cost of modelling the excited species in the plasma chemistry. The 3D self-consistent numerical tool can treat discharges with a generic 3D geometry and model the actual plasma-antenna coupling. It consists of two main modules, an EM module and a FLUID module, which run iteratively until a steady-state solution is reached. A third module is available for solving the plume, either with a simplified semi-analytical approach, with a PIC code, or directly by integration of the fluid equations. Results obtained from both numerical tools are benchmarked against experimental measurements of HPTs or Helicon reactors: the GM shows very good qualitative agreement with the experimental trends, while the 3D numerical strategy shows excellent agreement between the predicted physical trends and the measured data.
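The iterative EM-FLUID coupling can be sketched as a damped fixed-point loop: each module consumes the other's output until the change per iteration falls below a tolerance. The two "modules" below are deliberately trivial stand-ins (hypothetical closed forms, not physical models); only the loop structure mirrors the strategy described above.

```python
# Schematic self-consistent loop between two solvers (illustrative only:
# the real EM and FLUID modules solve Maxwell's and fluid equations).

def em_module(density):
    # Toy stand-in: absorbed power grows with density but saturates.
    return 100.0 * density / (1.0 + density)

def fluid_module(power):
    # Toy stand-in: equilibrium density sustained by a given power.
    return 0.05 * power

density, relax, tol = 1.0, 0.5, 1e-10
for iteration in range(1000):
    new_density = fluid_module(em_module(density))
    if abs(new_density - density) < tol:
        break
    # Under-relaxation damps oscillations between the two modules.
    density = (1 - relax) * density + relax * new_density
```

Under-relaxation (relax < 1) is a common way to stabilise such two-solver loops when a plain fixed-point iteration oscillates or diverges.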
Abstract:
Organic printed electronics has attracted ever-growing interest over the last decades because of impressive breakthroughs in the chemical design of π-conjugated materials and their processing. This has an impact on novel applications, such as flexible large-area displays, low-cost printable circuits, plastic solar cells, and lab-on-a-chip devices. The organic field-effect transistor (OFET) relies on a thin film of organic semiconductor that bridges source and drain electrodes. Since its discovery in the 1980s, intensive research activities have been deployed to control the chemico-physical properties of these electronic devices and, consequently, their charge injection and transport. Self-assembled monolayers (SAMs) are a versatile tool for tuning the properties of metallic, semiconducting, and insulating surfaces. Within this context, OFETs represent reliable instruments for measuring the electrical properties of SAMs in a Metal/SAM/OS junction. Our experimental approach, named Charge Injection Organic-Gauge (CIOG), uses an OTFT in a charge-injection-controlled regime. The CIOG sensitivity has been extensively demonstrated on different homologous self-assembling molecules that differ in either chain length or anchor/terminal group. One of the latest applications of organic electronics is so-called “bio-electronics”, which uses electronic devices to serve the interests of medical science, such as biosensors and biotransducers. Accordingly, the second part of this thesis deals with the realization of an electronic transducer based on an Organic Field-Effect Transistor operating in aqueous media. Here, the conventional bottom-gate/bottom-contact configuration is replaced by a top-gate architecture in which the electrolyte ensures electrical contact between the top gold electrode and the semiconductor layer. This configuration is named Electrolyte-Gated Field-Effect Transistor (EGOFET).
The functionalization of the top electrode is the sensing core of the device, allowing the detection of dopamine as well as of protein biomarkers at ultra-low concentrations.
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM, and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that, during a merger, a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The processes of stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
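The Press & Schechter mass function underlying this kind of synthetic cluster population has a simple closed form. The sketch below evaluates it with a toy power-law variance σ(M); all numerical values are illustrative placeholders, not the cosmological calibration used in the thesis.

```python
import math

# Schematic Press-Schechter mass function dn/dM with a toy sigma(M).
delta_c = 1.686           # critical linear overdensity for collapse
rho_m = 4.0e10            # comoving matter density, M_sun / Mpc^3 (toy)
alpha = 0.25              # slope of the toy sigma(M) power law
M8, sigma8 = 2.0e14, 0.9  # normalisation mass and amplitude (toy)

def sigma(M):
    # Toy power-law rms mass fluctuation on scale M.
    return sigma8 * (M / M8) ** (-alpha)

def ps_mass_function(M):
    """dn/dM: comoving number density of halos per unit mass,
    sqrt(2/pi) (rho_m / M^2) nu |dln sigma/dln M| exp(-nu^2/2),
    with nu = delta_c / sigma(M)."""
    nu = delta_c / sigma(M)
    dlnsigma_dlnM = -alpha        # exact for the power-law sigma(M)
    return (math.sqrt(2.0 / math.pi) * (rho_m / M**2) * nu
            * abs(dlnsigma_dlnM) * math.exp(-0.5 * nu * nu))
```

The exponential cut-off at high mass is what makes massive clusters, and hence giant Radio Halos, rare objects in such models.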
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH–RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus, in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR–RH, PR–MH, PR–T, PR–LX, . . . ) are now well understood in the context of the re-acceleration model. In addition, we find that, observationally, the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio-emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
Abstract:
This study aims at analysing Brian O'Nolan's literary production in the light of a reconsideration of the role played by his two most famous pseudonyms, Flann O'Brien and Myles na Gopaleen, behind which he was active both as a novelist and as a journalist. We tried to establish a new kind of relationship between them and their empirical author, following recent cultural and scientific surveys in the fields of Humour Studies, Psychology, and Sociology: taking as a starting point the appreciation of the comic attitude in nature and in cultural history, we proceeded through a short history of laughter and derision, followed by an overview of humour theories. Having established such a frame, we considered an integration of scientific studies in the field of laughter and humour as the basis for our study scheme, in order to arrive at a definition of the comic author as a recognised, powerful and authoritative social figure who acts as a critic of conventions. The history of laughter and the comic we briefly summarized, based on that related by the French scholar Georges Minois in his work (Minois 2004), has been taken into account in the view that the humorous attitude is one of man's characteristic traits, always present and witnessed throughout the ages, though subject in most cases to repression by culturally and politically conservative power. This sort of Super-Ego notwithstanding, or perhaps because of it, the comic impulse proved irreducible precisely in its influence on current cultural debates. Basing ourselves mainly on Robert R. Provine's (Provine 2001), Fabio Ceccarelli's (Ceccarelli 1988), Arthur Koestler's (Koestler 1975) and Peter L.
Berger's (Berger 1995) scientific essays on the actual occurrence of laughter and smiling in complex social situations, we underlined the abundant evidence of how the use of the comic, humour and wit (in a Freudian sense) can best be understood if seen as a common mental process designed for the improvement of knowledge, in which we traced a strict relation with the play-element that the Dutch historian Huizinga highlighted in his famous essay, Homo Ludens (Huizinga 1955). We considered the comic and humour/wit as different sides of the same coin, and showed how the demonstrations scientists have provided on this particular subject are not conclusive, given that the mental processes cannot yet be irrefutably shown to be separated as regards gradations in comic expression and reception: in fact, different outputs in expression might lead back to one and the same production process, following the general “Economy Rule” of evolution; man is the only animal who lies, meaning by this that one feeling is not necessarily associated one-to-one with one and the same outward display, so human expressions are not validating proofs of feelings. Considering societies, we found that in nature they are all organized in more or less the same way, that is, in élites who govern over a community which, in turn, recognizes them as legitimate delegates for that task; we inferred from this the epistemological possibility of the existence of an additional ruling figure alongside the political and religious ones: this figure being the comic, who is the person in charge of expressing true feelings towards given subjects of contention.
Every community owns one, and his very peculiar status is validated by the fact that his place is within the community, living in it and speaking to it, while at the same time he is outside it, in the sense that his action focuses mainly on shedding light on ideas and objects placed outside the boundaries of social convention: taboos, fears, sacred objects and, finally, culture are the favourite targets of the comic person's arrows. This is the reason for the word a(rche)typical as applied to the comic figure in society: atypical in a sense, because unconventional and disrespectful of traditions, critical and never at ease with the unblinkered respect of canons; archetypical, because the “village fool”, buffoon, jester, or anyone in any kind of society who plays such roles, is an archetype in the Jungian sense, i.e. a personification of an irreducible side of human nature that everybody instinctively knows: the beginner of a tradition, the perfect type, what is most conventional of all and therefore the exact opposite of the atypical. There is an intrinsic necessity, we think, for such figures in societies, just like politicians and priests, who should play an elitist role in order to guide and rule not for their own benefit but for the good of the community. We are not naïve and know that the actual holders of power always tend to keep it indefinitely: the “social comic” as a role of power has nonetheless the distinctive feature of being the only office whose tension is not towards stability. It carries within itself the rewarding permission of contradiction: for the very reason we set out before, the comic must cast an eye both inside and outside society, and his vision may perforce not be consistent; this, then, is satisfactory for the popularity it earns him amongst readers and audiences. Finally, the difference between governors, priests and comic figures is the seriousness of the first two (fundamentally monologic) and the merry contradiction of the third (essentially dialogic).
MPs, mayors, bishops and pastors should always console, comfort and soothe the popular mood in respect of public convention; the comic has the opposite task of provoking, urging and irritating, accomplishing at the same time a sort of control over the soothing powers of society, the keepers of righteousness. In this view, the comic person assumes paramount importance in counterbalancing the administration of power, whether by acting in public places or in written pieces circulated for private reading. At this point our Irish writer Brian O'Nolan (1911-1966) comes into question: the real name that stood behind the more famous masks of Flann O'Brien, novelist, author of At Swim-Two-Birds (1939), The Hard Life (1961), The Dalkey Archive (1964) and, posthumously, The Third Policeman (1967); and of Myles na Gopaleen, journalist, keeper for more than 25 years of the Cruiskeen Lawn column in The Irish Times (1940-1966), and author of the famous book-parody in Irish, An Béal Bocht (1941), later translated into English as The Poor Mouth (1973). Brian O'Nolan, a professional senior civil servant of the Republic, has never seen his authorship recognized in literary studies, since all of them have concentrated on his alter egos Flann, Myles, and some others he used for minor contributions. As far as we are concerned, we think this is the first study to place the real name in the title, thereby acknowledging in him a unity of intent that no one did before. And this choice of title is not a mere mark of distinction for its own sake, but also a wilful sign of how his opus should now be reconsidered. In effect, the aim of this study is precisely to demonstrate how the empirical author Brian O'Nolan was the real deus in machina, the master of puppets who skilfully directed all of his identities in planned directions, so as to completely fulfil the role of the comic figure we described before.
Flann O'Brien and Myles na Gopaleen were personae, not persons, but the impression one gets from the critical studies on them is the exact opposite. Literary consideration, which came only after O'Nolan's death, began with Anne Clissmann's work, Flann O'Brien: A Critical Introduction to His Writings (Clissmann 1975), while the most recent book is Keith Donohue's The Irish Anatomist: A Study of Flann O'Brien (Donohue 2002), passing through M. Keith Booker's Flann O'Brien, Bakhtin and Menippean Satire (Booker 1995), Keith Hopper's Flann O'Brien: A Portrait of the Artist as a Young Post-Modernist (Hopper 1995) and Monique Gallagher's Flann O'Brien, Myles et les autres (Gallagher 1998). There have also been a couple of biographies, which incidentally try somehow to explain critical points of his literary production, while many critical studies do the same from the opposite side, trying to ground critical points of view in the author's restless life and habits. At this stage, we attempted to merge into O'Nolan's corpus the journalistic articles he wrote, more than 4,200, amounting to roughly two million words over the column's 26-year run. To justify this, we appealed to several considerations about the figure O'Nolan used as a writer: Myles na Gopaleen (later simplified to na Gopaleen), who was the equivalent of the street artist or storyteller, speaking to his imaginary public and trying to involve it in his stories, quarrels and debates of all kinds. First of all, he relied much on language for the reactions he would obtain, playing on, and with, words so as to ironically unmask untrue relationships between words and things. Secondly, he pushed to the limit the convention of addressing spectators and listeners usually employed in live performance, stretching its role in written discourse to achieve a greater effect of reader involvement. Lastly, he profited much from what we labelled his “specific weight”, i.e.
the potential influence in society conferred by his recognised authority in certain matters, a position from which he could launch deeper attacks on conventional beliefs, thus complying with the duty of the comic we hypothesised before: that of criticising society even at the risk of losing the benefits the post guarantees. That seemingly masochistic tendency has its rationale. Every representative enjoys many privileges on the assumption that he, or she, bears great responsibilities in administration. The higher those responsibilities, the higher the reward, but also the severer the punishment for misdeeds committed while in charge. Yet we all know that not everybody accepts the rules: many try to use their power for personal benefit and do not want to undergo the law's penalties. The comic, showing in this case more civic sense than others, and helped very much by his lack of access to the use of public force, finds in the role of the scapegoat the right accomplishment of his task, accepting punishment when his breaking of the conventions is too stark to be forgiven. As Ceccarelli demonstrated, the role of the object of laughter (comic, ridicule) has its very own positive side: there is freedom of expression for the person and, at the same time, integration in society, even though at low levels. Hence the banishment of a 'social' comic can never amount to total extirpation from society, revealing how the scope of the comic lies on an entirely fictional layer, bearing no relation to facts, nor real consequences in terms of physical harm. Myles na Gopaleen, mastering to the highest degree these three characteristics we postulated, can be considered an author worth noting; and the oeuvre he wrote, the whole collection of Cruiskeen Lawn articles, is rightfully a novel because it respects the canons of the form, especially regarding the authorial figure and his relationship with the readers.
In addition, his work can be studied even if we cannot conduct our research on the whole of it, a procedure justified exactly by the resemblance to the real figure of the storyteller: its 'chapters', the daily articles, had a format that even the distracted reader could follow, even one who had not read each and every preceding article. So we can critically consider a good part of them, as collected in the seven volumes published so far, with the addition of some others outside the collections, because completeness in this case is no guarantee of greater precision in the assessment; on the contrary, examination of the totality of the articles might lead us to consider him as a person and not a persona. Having clarified these points, we proceeded to consider tout court the works of Brian O'Nolan as the works of a single author, rather than complicating the references with many names which are none other than well-wrought sides of the same personality. By taking O'Nolan, the empirical author behind the personae Flann O'Brien and Myles na Gopaleen, as the proper object of our research, a clearer literary landscape emerges: the comic author Brian O'Nolan, self-conscious of his paramount role in society as both a guide and a scourge, in a word as an a(rche)typical comic figure, intentionally chose to differentiate his personalities so as to create different perspectives in different fields of knowledge, using, in addition, different means of communication: novels and journalism. We finally compared the newly assessed author Brian O'Nolan with other great Irish comic writers in English, such as James Joyce (the one everybody names as the master in the field), Samuel Beckett, and Jonathan Swift.
This comparison showed once more how O'Nolan is in no way inferior to these authors who, greatly celebrated by critics, nonetheless failed to achieve the broad public recognition O'Nolan received as Myles, granted by the daily audience he reached and influenced with his Cruiskeen Lawn column. For this reason, we believe him to be representative of the comic figure's function as a social regulator and as a builder of solidarity, such as the one Raymond Williams spoke of in his work (Williams 1982), with the aim of building a 'culture in common' in mind. There is no way for a 'culture in common' to be achieved if we do not accept the fact that even the most functional society rests on conventions, and in a world more and more 'connected' we need someone to help everybody negotiate with different cultures and persons. The comic gives us a worldly perspective which is at the same time comfortable and distressing, but in the end not as harmful as the one furnished by politicians could be: he lets us peep into parallel worlds without moving too far from our armchairs and, as a consequence, is the one who does his best to improve our understanding of things.
Abstract:
I Max Bill is an intense giornata of a big fresco. An analysis of the main social, artistic and cultural events of the twentieth century is needed in order to trace his career through his masterpieces and architectures. Some of the faces of this hypothetical mural painting are, among others, Le Corbusier, Walter Gropius, Ernesto Nathan Rogers, Kandinskij, Klee, Mondrian, Vantongerloo and Ignazio Silone, while the backcloth is given by the artistic avant-gardes, the Bauhaus, the International Exhibitions, the CIAM, the war, the reconstruction, the Milan Triennali, the Venice Biennali and the School of Ulm. An architect, even though better known as a painter, sculptor, designer and graphic artist, Max Bill attends the Bauhaus as a student in the years 1927-1929, and from this experience derives the main features of a rational, objective, constructive and non-figurative art. His research is devoted to giving his art a scientific methodology: each work proceeds from the analysis of a problem to the logical, and always verifiable, solution of that problem. By means of compositional elements (such as rhythm, seriality, theme and variation, harmony and dissonance), he faces, with consistent results, themes apparently very distant from each other, such as the project for the H.f.G. or the design for a font. Mathematics is a constant frame of reference, as a field of certainties, order and objectivity: 'for Bill mathematics are never confined to a simple function: they represent a climate of spiritual certainties, and also the theme of non attempted in its purest state, objectivity of the sign and of the geometrical place, and at the same time restlessness of the infinity: Limited and Unlimited'. In almost sixty years of activity, experiencing all artistic fields, Max Bill works, projects, designs, holds conferences and exhibitions in Europe, Asia and the Americas, confronting himself with the most influential personalities of the twentieth century.
In such a vast scenery, the need to limit the field of investigation combined with the necessity of addressing and analysing the unpublished and original aspects of Bill's relations with Italy. The original contribution of the present research regards this particular 'geographic delimitation'; in particular, beyond the deep cultural exchanges between Bill and a series of Milanese architects, above all Rogers, two main projects have been addressed: the realtà nuova at the Milan Triennale in 1947, and the Contemporary Art Museum in Florence in 1980. It is important to note that these projects have not been previously investigated, and the former never even appears in the sources. These works, together with the better-known ones, such as the projects for the VI and IX Triennale and the Swiss pavilion for the Biennale, add important details to the reference frame of the relations between Zurich and Milan. Most of the occasions for exchange took place between the Thirties and the Fifties, years during which Bill underwent a significant period of artistic growth. He meets the Swiss progressive architects and the Paris artists of the Abstraction-Création movement, enters the CIAM, collaborates with Le Corbusier on the third volume of his Complete Works, and in Milan works on, and engages with, the events of post-war reconstruction. In these years Bill defines his own working methodology, attaining artistic maturity in his work. The present research investigates this time period, with some necessary exceptions. II The official Max Bill bibliography is naturally wide, including popularizing works along with others more devoted to analytical investigation, mainly written in German and often translated into French and English (Max Bill himself published his works in three languages). Few works have been published in Italian and, excluding the catalogue of the Parma exhibition of 1977, they cannot be considered comprehensive.
Many publications are exhibition catalogues, some of which include essays written by Max Bill himself, while others carry Bill's comments in an educational-pedagogical vein, accompanying the observer towards a full understanding of the compositional processes of his art works. Bill also left a great amount of theoretical speculation to encourage a critical reading of his works, in the form of books edited or written by him and of essays published in 'Werk', the magazine of the Swiss Werkbund, and in other international reviews, among them Domus and Casabella. These three reviews have been important tools of analysis, since they contain traces of some of Max Bill's architectural works. The architectural aspect is less investigated than the plastic and pictorial ones in all the main reference manuals on the subject: Benevolo, Tafuri and Dal Co, Frampton and Allenspach consider Max Bill as an artist proceeding in his work from the Bauhaus to the Ulm experience. A first cataloguing of his works was published in 2004 in the monographic issue of the Spanish magazine 2G, together with critical essays by Karin Gimmi, Stanislaus von Moos, Arthur Rüegg and Hans Frei, and in 'Konkrete Architektur?', again by Hans Frei. To these should be added the monographic essay on the Atelier Haus building by Arthur Rüegg from 1997, and the DPA 17 issue of the Catalonia Polytechnic with contributions by Carlos Martì, Bruno Reichlin and Ton Salvadò, the latter publication concentrating on a few of Bill's themes and architectures. A renewed urge to study Max Bill's works in depth was marked in 2008 by the centenary of his birth and by a recent rediscovery of Bill as initiator of the 'minimalist' tradition in Swiss architecture. Bill's heirs are both very active in promoting exhibitions, research and publishing. Jakob Bill, Max Bill's son and a painter himself, recently published a work on Bill's experience at the Bauhaus, and earlier on had published an in-depth study of the 'Endless Ribbons' sculptures.
Angela Thomas Schmid, Bill's wife and an art historian, published at the end of 2008 the first volume of a biography of Max Bill and, together with the film-maker Eric Schmid, produced a documentary film which was also presented at the last Locarno Film Festival. Both the biography and the documentary concentrate on Max Bill's political involvement, from antifascism and the 1968 protest movements to Bill's experiences as a Zurich Municipality councilman and as a member of the Swiss Confederation Parliament. In the present research, the bibliography also includes direct sources, such as interviews and original materials in the form of correspondence and graphic works, together with related essays, kept in the max+binia+jakob bill stiftung archive in Zurich. III The results of the present research are organized into four main chapters, each of them subdivided into four parts. The first chapter concentrates on the research field and on the reasons, tools and methodologies employed, whereas the second consists of a short biographical note organized by topics, introducing the subject of the research. The third chapter, which includes unpublished events, traces the historical and cultural frame with particular reference to the relations between Max Bill and the Italian scene, especially Milan and the architects Rogers and Baldessari around the Fifties, searching out the themes and keys for interpreting Bill's architectures and investigating the critical debate in the reviews and the plastic survey through sculpture. The fourth and last chapter examines four main architectures chosen on a geographical basis, all devoted to exhibition spaces, investigating Max Bill's compositional process in relation to the pictorial field. Painting has surely been easier and faster to investigate and verify than the building field.
A doctoral thesis defended in Lausanne in 1977, investigating Max Bill's plastic and pictorial works, provided a series of devices which were corrected and adapted to define the interpretation grid for the compositional structures of Bill's main architectures. Four different tools are employed in the investigation of each work: a context analysis related to the results of chapter three; a specific theoretical essay by Max Bill briefly explaining his main theses, even though not directly linked to the very work of art considered; the interpretation grid for the compositional themes derived from a related pictorial work; and the architectural drawing and digital three-dimensional model. The double analysis of the architectural and pictorial fields serves to underline the relation among the different elements of the compositional process; the two fields, however, cannot be compared and they remain, in Max Bill's works as in the present research, interdependent though self-sufficient. IV An important aspect of Max Bill's production is self-referentiality: talking of Max Bill, also through Max Bill, as a need for coherence rather than a limitation of method. Ernesto Nathan Rogers described Bill as the last humanist, and his horizon is the known world; but, like the 'Concrete Art' of which he is one of the main representatives, his production justifies itself: Max Bill not only found a method, but autonomously rewrote the 'rules of the game', derived timeless theoretical principles and verified them through a rich and interdisciplinary artistic production. The most recurrent words in the present research are synthesis, unity, space and logic. These terms are part of Max Bill's vocabulary and can be referred to his works. Similarly, the graphic settings and analytical schemes in this text referring to or commenting on Bill's architectural projects were drawn up keeping in mind the concise precision of his architectural design.
As has been written of Mies van der Rohe, Max Bill took art to 'zero degree', reaching in this way a high complexity. His works are a synthesis of art: they conceptually encompass all previous and, considering their developments, most contemporary pictures. Contents and message are generally explicitly declared in the title or in Bill's essays on his artistic works and architectural projects: the beneficiary is invited to go through and rebuild the process of synthesis generating the shape. In the course of an interview, the Milan artist Getulio Alviani told how he would not write more than a page for an essay on Josef Albers: everything was already evident 'on the surface' and any additional sentence would be redundant. Two years after that interview, these pages attempt to decompose and single out the elements and processes connected with some of Max Bill's works which, by their very origin, already contain all possible explanations and interpretations. Formal reduction in favour of the maximization of content is, perhaps, Max Bill's main lesson.
Abstract:
The aim of this PhD thesis was to study different liquid crystal (LC) systems at a microscopic level, in order to determine their physical properties, resorting to two distinct methodologies: one involving computer simulations, the other spectroscopic techniques, in particular electron spin resonance (ESR) spectroscopy. By means of the computer simulation approach we tried to demonstrate the effectiveness of this tool for calculating anisotropic static properties of an LC material, as well as for predicting its behaviour and features. This required the development and adoption of suitable molecular models based on convenient intermolecular potentials reflecting the essential molecular features of the investigated system. In particular, concerning the simulation approach, we set up models for discotic liquid crystal dimers and studied, by means of Monte Carlo simulations, their phase behaviour and self-assembling properties with respect to the simple monomer case. Each discotic dimer is described by two oblate Gay-Berne ellipsoids connected by a flexible spacer, modelled as a harmonic "spring" of three different lengths. In particular, we investigated the effects of dimerization on the transition temperatures, as well as on the characteristics of the molecular aggregation displayed and on the relative orientational order. Moving to the experimental results, among the many experimental techniques typically employed to evaluate the distinctive features of LC systems, ESR has proved to be a powerful tool for the microscopic-scale investigation of the properties, structure, order and dynamics of these materials.
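The ingredients of such a Metropolis Monte Carlo treatment of a harmonically linked dimer can be sketched in a few lines. This is a minimal illustrative toy, not the thesis's actual code: the spring constant K, rest length R0, temperature and move size are assumed values, and the Gay-Berne site-site interactions between the oblate ellipsoids are omitted for brevity.

```python
import math
import random

K = 10.0   # spring constant, reduced units (assumed)
R0 = 1.0   # equilibrium spacer length (assumed)
T = 1.0    # reduced temperature (assumed)

def spacer_energy(r):
    """Harmonic spacer energy U(r) = 0.5 * K * (r - R0)**2."""
    return 0.5 * K * (r - R0) ** 2

def metropolis_step(r, max_move=0.1):
    """One Metropolis trial move on the spacer length r."""
    r_new = r + random.uniform(-max_move, max_move)
    if r_new <= 0:
        return r  # reject unphysical spacer lengths
    dE = spacer_energy(r_new) - spacer_energy(r)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return r_new  # accept the move
    return r          # reject the move

random.seed(0)
r = R0
samples = []
for step in range(20000):
    r = metropolis_step(r)
    if step > 5000:           # discard equilibration sweeps
        samples.append(r)

mean_r = sum(samples) / len(samples)
print(round(mean_r, 2))  # fluctuates around R0
```

In the full model each Monte Carlo sweep would also attempt translations and rotations of the two Gay-Berne discs, with the spacer term simply added to the anisotropic pair energy in the same acceptance test.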
We have taken advantage of the high sensitivity of the ESR spin-probe technique to investigate increasingly complex LC systems, ranging from devices constituted by a polymer matrix in which LC molecules are confined in the shape of nanodroplets, to biaxial liquid crystalline elastomers, and to dimers whose monomeric units or lateral groups are constituted by rod-like mesogens (11BCB). Reflection-mode holographic-polymer dispersed liquid crystals (H-PDLCs) are devices in which LCs are confined into nanosized (50-300 nm) droplets, arranged in layers which alternate with polymer layers, forming a diffraction grating. We have determined the configuration of the LC local director and derived a model of the nanodroplet organization inside the layers. Resorting also to additional information on the nanodroplet size and shape distribution provided by SEM images of the H-PDLC cross-section, the observed director configuration has been modelled as a two-dimensional distribution of elongated nanodroplets whose long axis is, on average, parallel to the layers and whose internal director configuration is a uniaxial quasi-monodomain aligned along the nanodroplet long axis. The results suggest that the molecular organization is dictated mainly by the confinement, explaining, at least in part, the significantly higher switching voltages and the faster turn-off times observed in H-PDLCs compared to standard PDLC devices. Liquid crystal elastomers consist of cross-linked polymers in which the mesogens constitute the monomers of the main chain or the laterally attached side groups. They bring together three important aspects: orientational order in amorphous soft materials, responsive molecular shape and quenched topological constraints.
In biaxial nematic liquid crystalline elastomers (BLCEs), two orthogonal directions, rather than the single one of normal uniaxial nematics, can be controlled, greatly enhancing their potential value for applications as novel actuators. Two versions of side-chain BLCEs were characterized: side-on and end-on. Many tests were carried out on both types of LCE, the main features detected being the lack of a significant dynamical behaviour, together with a strong permanent alignment along the principal director, and the confirmation of the transition temperatures already determined by DSC measurements. The end-on sample shows a less hindered rotation of the side-group mesogenic units and a greater freedom of alignment to the magnetic field, as already shown by previous NMR studies. Biaxial nematic ESR static spectra were also computed on the basis of biaxial configurations generated by Molecular Dynamics, to be compared with the experimentally determined ones, as a means of establishing a possible relation between biaxiality and the spectral features. This provides a concrete example of the advantages of combining the computer simulation and spectroscopic approaches. Finally, the dimer α,ω-bis(4'-cyanobiphenyl-4-yl)undecane (11BCB), synthesized in the "quest" for the biaxial nematic phase, has been analysed. Its importance lies in the significance of dimers as building blocks in the development of new materials for innovative technological applications, such as faster-switching displays exploiting the easier aligning ability of the secondary director in biaxial phases. A preliminary series of tests was performed, revealing the population of mesogenic molecules to be divided into two groups: one of elongated, straightened conformers sharing a common director, and one of bent molecules displaying no order, being equally distributed in the three dimensions.
Employing this model, the calculated values show a consistent trend, confirming at the same time the transition temperatures indicated by the DSC measurements, together with rotational diffusion tensor values that closely follow those of the constituent monomer 5CB.
Abstract:
Molecular self-assembly takes advantage of supramolecular non-covalent interactions (ionic, hydrophobic, van der Waals, hydrogen and coordination bonds) for the construction of organized and tunable systems. In this field, lipophilic guanosines can represent powerful building blocks thanks to their aggregation properties in organic solvents, which can be controlled by the addition or removal of cations. For example, the potassium ion can template the formation of stacked G-quartet structures, while in its absence ribbon-like G aggregates are generated in solution. In this thesis we explored the possibility of using guanosines as scaffolds to direct the construction of ordered, self-assembled architectures, one of the main goals of the bottom-up approach in nanotechnology. In Chapter III we describe Langmuir-Blodgett films obtained from guanosines and other lipophilic nucleosides, revealing the "special" behavior of guanine in comparison with the other nucleobases. In Chapter IV we report the synthesis of several thiophene-functionalized guanosines and the studies towards their possible use in organic electronics: the pre-programmed organization of terthiophene residues in ribbon aggregates could allow charge conduction through π-π stacked oligothiophene functionalities. The construction and the behavior of some simple electronic nanodevices based on these organized thiophene-guanosine hybrids have been explored.
Abstract:
Magnetic Resonance Imaging (MRI) is the in vivo technique most commonly employed to characterize changes in brain structures. The conventional MRI-derived morphological indices are able to capture only partial aspects of brain structural complexity. Fractal geometry and its most popular index, the fractal dimension (FD), can characterize self-similar structures, including grey matter (GM) and white matter (WM). Previous literature shows the need for a definition of the so-called fractal scaling window, the range of spatial scales within which each structure manifests self-similarity. This justifies the existence of fractal properties and confirms Mandelbrot's assertion that "fractals are not a panacea; they are not everywhere". In this work, we propose a new approach to automatically determine the fractal scaling window, computing two new fractal descriptors, i.e., the minimal and maximal fractal scales (mfs and Mfs). Our method was implemented in a software package, validated on phantoms and applied to large datasets of structural MR images. We demonstrated that the FD is a useful marker of the morphological complexity changes that occur during brain development and aging and, using ultra-high magnetic field (7T) examinations, we showed that the cerebral GM has fractal properties also below the spatial scale of 1 mm. We applied our methodology to two neurological diseases. We observed a reduction of brain structural complexity in SCA2 patients and, using a machine learning approach, proved that the cerebral WM FD is a consistent feature in predicting cognitive decline in patients with small vessel disease and mild cognitive impairment. Finally, we showed that the FD of the WM skeletons derived from diffusion MRI provides information complementary to that obtained from the FD of the general WM structure in T1-weighted images.
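The FD estimate underlying this kind of analysis can be sketched as a box-counting fit restricted to a chosen scaling window. The snippet below is a simplified 2D illustration under assumed parameters, not the validated pipeline described above: the function names are hypothetical, the example uses a toy binary mask rather than segmented GM/WM, and the actual method determines the window limits (mfs, Mfs) automatically rather than taking fixed scales.

```python
import numpy as np

def box_count(mask, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    n = mask.shape[0] // box
    trimmed = mask[:n * box, :n * box]
    blocks = trimmed.reshape(n, box, n, box).any(axis=(1, 3))
    return int(blocks.sum())

def fractal_dimension(mask, scales):
    """FD as the slope of log N(s) vs log(1/s) over the scaling window."""
    counts = [box_count(mask, s) for s in scales]
    logs = np.log(1.0 / np.array(scales, dtype=float))
    slope, _ = np.polyfit(logs, np.log(counts), 1)
    return slope

# Toy example: a filled square is not fractal, so its estimated
# dimension should approach the topological value 2.
mask = np.zeros((256, 256), dtype=bool)
mask[32:224, 32:224] = True
fd = fractal_dimension(mask, scales=[2, 4, 8, 16, 32])
print(round(fd, 2))  # → 2.0
```

A genuinely fractal structure would instead yield a non-integer slope, and only within its scaling window: outside [mfs, Mfs] the log-log plot bends away from a straight line, which is exactly what the automatic window selection is meant to detect.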
In conclusion, the fractal descriptors of structural brain complexity are candidate biomarkers for detecting subtle morphological changes during development and aging and in neurological diseases.