19 results for flat and curved layer slicing
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
A major weakness of composite materials is that low-velocity impact, introduced accidentally during manufacture, operation or maintenance of the aircraft, may result in delaminations between the plies. The first part of this study therefore focuses on the mechanics of curved laminates under impact. To this aim, the effect of preloading on the impact response of curved composite laminates is considered. By applying the preload, both the through-thickness stress and the curvature of the laminates increase. The results show that all impact parameters vary significantly. To understand the respective contributions of preloading and pre-stress to these results, a further test is designed. The interesting phenomenon is that preloading can decrease the damaged area when the curvature of both specimens is the same. Finally, the effect of curvature type, concave and convex, is investigated under impact loading. In the second part, a new composition of nanofibrous mats is developed to improve the efficiency of curved laminates under impact loading. First, fracture tests are conducted to assess the effect of Nylon 6,6, PCL, and their mixture on mode I and mode II fracture toughness. For this purpose, nanofibers are electrospun and interleaved at the mid-plane of the composite laminates used for the mode I and mode II tests. The results show that Nylon 6,6 performs better than PCL in mode II, while the effect of PCL on mode I fracture toughness is greater. By mixing these nanofibers, the shortcomings of the individual nanofibers are compensated, so the Nylon 6,6/PCL nanofibers increase both mode I and mode II fracture toughness. These nanofibers are then interleaved between all plies of the laminate to investigate their effect on the damaged area. The results show that PCL decreases the damaged area by about 25%, and Nylon 6,6 and the mixed nanofibers by about 50%.
Abstract:
In this thesis work I analyze higher spin field theories from a first quantized perspective, finding in particular new equations describing complex higher spin fields on Kaehler manifolds. They are studied by means of worldline path integrals and canonical quantization, in the framework of supersymmetric spinning particle theories, in order to investigate their quantum properties in both flat and curved backgrounds. For instance, by quantizing a spinning particle with one complex extended supersymmetry, I describe quantum massless (p,0)-forms and find a worldline representation for their effective action on a Kaehler background, as well as exact duality relations. Interesting results are also found in the definition of the functional integral for the so-called O(N) spinning particles, which will allow the study of real higher spins on curved spaces. In the second part, I study Weyl invariant field theories by using a particular mathematical framework known as tractor calculus, which allows manifest Weyl covariance to be maintained at each step.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are obtained through more accurate calculation of the optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
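As an aside, the power-balance approach can be illustrated by a minimal steady-state sketch (assuming lumped material properties, not the thesis model itself), in which the absorbed beam power is equated to the enthalpy flux of the removed material:

\[
A\,P = \rho\, v\, w\, d \left[ c_p\,(T_v - T_0) + L_m + L_v \right]
\quad\Longrightarrow\quad
d = \frac{A\,P}{\rho\, v\, w \left[ c_p\,(T_v - T_0) + L_m + L_v \right]},
\]

where A is the absorptivity, P the beam power, v the traverse speed, w the kerf width, d the incision depth, ρ the density, c_p the specific heat, T_v the vaporisation temperature and L_m, L_v the latent heats of melting and vaporisation.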
Abstract:
The research for this PhD project consisted in the application of the RF analysis technique to different data-sets of teleseismic events recorded at temporary and permanent stations located in three distinct study regions: the Colli Albani area, the Northern Apennines and the Southern Apennines. We found velocity models to interpret the structures in these regions, which possess very different geologic and tectonic characteristics and therefore offer interesting case studies. In the Colli Albani, some of the features evidenced in the RFs are shared by all the analyzed stations: the Moho is almost flat and is located at about 23 km depth, and the presence of a relatively shallow limestone layer is a stable feature; conversely, other features vary from station to station, indicating local complexities. Three seismic stations, close to the central part of the former volcanic edifice, display relevant anisotropic signatures with symmetry axes consistent with the emplacement of the magmatic chamber. Two further anisotropic layers are present at greater depth, in the lower crust and the upper mantle, respectively, with symmetry-axis directions related to the evolution of the volcanic complex. In the Northern Apennines we defined the isotropic structure of the area, finding the depths of the Tyrrhenian (almost 25 km and flat) and Adriatic (40 km and dipping underneath the Apennine crests) Mohos. We determined a zone in which the two Mohos overlap, and identified an anisotropic body in between, involved in the subduction and going down with the Adriatic Moho. We interpreted the downgoing anisotropic layer as generated by post-subduction delamination of the top-slab layer, probably made of metamorphosed crustal rocks caught in the subduction channel and buoyantly rising toward the surface. In the Southern Apennines, we found the Moho depth for 16 seismic stations, and highlighted the presence of an anisotropic layer underneath each station, at about 15-20 km depth below the whole study area. The Moho displays a dome-like geometry, as it is shallow (29 km) in the central part of the study area, whereas it deepens peripherally (down to 45 km); the symmetry axes of the anisotropic layer, interpreted as a layer separating the upper and the lower crust, show a Moho-related pattern, indicated by the foliation of the layer, which is parallel to the Moho trend. Moreover, due to the exceptional seismic event that occurred on April 6th near the town of L'Aquila, we determined the Vs model for two stations located next to the epicenter. An extremely high-velocity body is found underneath the AQU station at 4-10 km depth, reaching Vs of about 4 km/s, while this body is lacking underneath the FAGN station. We compared the presence of this body with other recent works and found an anti-correlation between the high-Vs body, the maximum slip patches and the earthquake distribution. The nature of this body remains speculative: such high velocities are consistent with deep crust or upper mantle, but it can be interpreted as a high-strength barrier, of which the high Vs is a typical signature.
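As an aside, the core of the RF technique can be sketched with a minimal water-level frequency-domain deconvolution (an illustrative sketch using hypothetical synthetic traces; the actual processing chain of the thesis is not reproduced here):

import numpy as np

def receiver_function(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
    # Water-level deconvolution of the radial by the vertical component.
    n = len(radial)
    R = np.fft.rfft(radial, n)
    Z = np.fft.rfft(vertical, n)
    den = (Z * np.conj(Z)).real                       # |Z(w)|^2
    den = np.maximum(den, water_level * den.max())    # water-level stabilisation
    freq = np.fft.rfftfreq(n, dt)
    gauss = np.exp(-(2.0 * np.pi * freq) ** 2 / (4.0 * gauss_a ** 2))  # low-pass
    return np.fft.irfft(R * np.conj(Z) / den * gauss, n)

# Hypothetical synthetic traces: direct P pulse on Z, delayed Ps conversion on R
dt = 0.05
t = np.arange(0.0, 60.0, dt)
vertical = np.exp(-((t - 10.0) / 0.5) ** 2)
radial = 0.3 * np.exp(-((t - 14.0) / 0.5) ** 2)
rf = receiver_function(radial, vertical, dt)          # pulse near 4 s delay

The delay of the deconvolved pulse carries the converter-depth information exploited in the thesis.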
Abstract:
The treatment of Cerebral Palsy (CP) is considered the "core problem" for the whole field of pediatric rehabilitation. The reason why this pathology has such a primary role can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the "spastic" child is, historically, the clinical field in which the majority of the therapeutic methods and techniques (physiotherapy, orthotic, pharmacologic, orthopedic-surgical, neurosurgical) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update, aligned with the language of the World Health Organization's International Classification of Functioning, Disability and Health, of the original proposal by Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a "fixed" condition, which can however be modified both functionally and structurally through the child's spontaneous evolution and the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example, from aspiration of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. In addition to these causes, traumas and malformations have to be included. The lesion, whether focal or diffuse in the nervous system, impairs the whole functioning of the Central Nervous System (CNS). As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable "evolutionary" character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that clinically CP is not only a direct expression of structural impairment, that is of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore it is only possible to establish general relations between lesion site, nature and size, and palsy and recovery processes. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, on the other hand, children with very similar motor behaviors can have completely different lesion histories. A very clear example of this is represented by hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current definition of CP, as internationally accepted, is provided.
The definition is then examined in terms of its space dimension and then of its time dimension, highlighting where it is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of development of palsy, i.e. as the product of the relationship that the individual nonetheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no common classification system of CP has so far been universally accepted. Moreover, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard and common protocols able to effectively diagnose the palsy, and consequently to establish specific treatments and prognoses, is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of CP children is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable method to validate care protocols, establish the efficacy of classification systems and assess the validity of definitions. Since the '80s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became a gold standard for motion analysis. They have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the '70s and now out of date. Furthermore, these protocols are not consistent with each other either in the biomechanical models or in the data collection procedures adopted.
In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential both to be used outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in CP children, considering their relevant gait-cycle variability. The second section of this thesis is focused on GA. In particular, it is firstly aimed at examining the differences among the five most representative GA protocols in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented with the aim of performing gait analysis on CP children by means of IMMSs. The protocol, named 'Outwalk', contains original and innovative solutions aimed at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, against a reference protocol and an optoelectronic system. In order to obtain a more accurate and precise comparison of the systems and protocols, ad hoc methods were designed and an original formulation of the coefficient of multiple correlation statistical parameter was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments for addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. The orthotic approach is conservative, being reversible, and widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. The ankle-foot orthosis (AFO), with a rigid ankle, is primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsal joint during the second rocker, hence hampering the natural leaning progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those CP children exhibiting a flat, valgus-pronated foot.
The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to AFO is also reported, resulting from a three-month experimental campaign on diplegic CP children aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment and a classification scheme of the clinical aspects of perceptual disorders are provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000:533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
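As an aside to the validation methodology of the second section, the classical coefficient of multiple correlation (CMC) for a set of time-normalised gait waveforms can be sketched as follows (a minimal Kadaba-style formulation with hypothetical data; the original formulation developed in the thesis is not reproduced here):

import numpy as np

def cmc(waveforms):
    # waveforms: array of shape (G, T), e.g. the same joint angle measured with
    # different protocols/systems over one time-normalised gait cycle.
    Y = np.asarray(waveforms, dtype=float)
    G, T = Y.shape
    frame_mean = Y.mean(axis=0)                              # mean curve across waveforms
    grand_mean = Y.mean()                                    # overall mean
    within = ((Y - frame_mean) ** 2).sum() / (T * (G - 1))   # inter-waveform scatter
    total = ((Y - grand_mean) ** 2).sum() / (G * T - 1)      # total variance
    return np.sqrt(1.0 - within / total)

# Hypothetical usage: two similar knee-flexion curves over 101 frames
t = np.linspace(0.0, 1.0, 101)
curve_a = 60.0 * np.sin(np.pi * t) ** 2
curve_b = curve_a + np.random.normal(0.0, 1.0, size=t.size)
print(cmc(np.vstack([curve_a, curve_b])))                    # close to 1 for similar curves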
Abstract:
The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L'Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust, by implementing seismic tomographic data, and on the influence of topography, by implementing Digital Elevation Model (DEM) layers in the FEMs. Results for the L'Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model. A large number of DEM layers covering East China is used to achieve complete coverage of the FE model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and topographic models. Thus, the frequently adopted flat models are inadequate to represent the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
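As an aside, the linear inversion step can be sketched as a damped least-squares problem (a minimal illustration with a random toy system and simple zeroth-order Tikhonov damping; the FEM-derived Green functions and constraints used in the thesis are not reproduced here):

import numpy as np

def invert_slip(G, d, damping=0.1):
    # Solve d = G s in the least-squares sense with damping: [G; k*I] s = [d; 0].
    # G: (n_data, n_patches) Green function matrix, d: (n_data,) surface displacements.
    n_patches = G.shape[1]
    A = np.vstack([G, damping * np.eye(n_patches)])
    b = np.concatenate([d, np.zeros(n_patches)])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip

# Hypothetical toy system: 10 observations, 4 fault patches
rng = np.random.default_rng(0)
G = rng.normal(size=(10, 4))
true_slip = np.array([0.0, 0.5, 1.2, 0.3])
d = G @ true_slip + rng.normal(scale=0.01, size=10)
print(invert_slip(G, d, damping=0.05))

In practice, positivity constraints (e.g. non-negative least squares) and smoothing between adjacent patches are commonly added to stabilise the slip distribution.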
Abstract:
III-nitride materials are very promising for high-speed electronic/optical applications but still suffer in performance due to problems during high-quality epitaxial growth, the evolution of dislocations and defects, and the limited understanding of the fundamental physics of the materials and of device processing. This thesis mainly focuses on GaN-based heterostructures, with the aim of understanding metal-semiconductor interface properties, the influence of the 2DE(H)G on electrical and optical properties, and deep level states in GaN, InAlN and InGaN materials. Detailed electrical characterizations have been carried out on Schottky diodes on GaN and InAl(Ga)N/GaN heterostructures in order to understand the metal-semiconductor interface related properties in these materials. I have observed the occurrence of Schottky barrier inhomogeneity and the role of dislocations in leakage and in creating electrically active defect states within the energy gap of the materials. The deep level transient spectroscopy method has been employed on GaN, InAlN and InGaN materials, and several defect levels related to majority and minority carriers have been observed. In fact, some defects in the ternary layers and in the GaN layer have been found to share common characteristics, which indicates that those defect levels have a similar origin, most probably Ga/N vacancies in the GaN/heterostructures. The role of structural defects and roughness in enhancing the reverse leakage current and suppressing the mobility in InAlN/AlN/GaN-based high electron mobility transistor (HEMT) structures has been extensively investigated; these are identified as key issues for GaN technology. Optical spectroscopy methods have been employed to assess material quality and sub-band and defect-related transitions, and the results have been compared with the electrical characterizations. The observation of 2DEG sub-band related absorption/emission in optical spectra has been identified and proposed for the first time in nitride-based polar heterostructures, and is well supported by simulation results. In addition, metal-semiconductor-metal (MSM) InAl(Ga)N/GaN-based photodetector structures have been fabricated and proposed for achieving highly efficient optoelectronic devices in the future.
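For reference, the trap parameters extracted by deep level transient spectroscopy follow from the standard thermal emission-rate relation (quoted here as a textbook sketch, not the specific analysis of the thesis):

\[
e_n(T) = \sigma_n \,\langle v_{th}\rangle\, N_c \,\exp\!\left(-\frac{E_c - E_t}{k_B T}\right),
\qquad \langle v_{th}\rangle\, N_c \propto T^2 ,
\]

so that a plot of ln(e_n/T^2) versus 1/T yields the activation energy E_c - E_t from the slope and the apparent capture cross section σ_n from the intercept.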
Abstract:
This thesis is focused on the study of techniques that allow reliable transmission of multimedia content in streaming and broadcasting applications, targeting in particular video content. The design of efficient error-control mechanisms, to enhance the reliability of video transmission systems, has been addressed by considering cross-layer and multi-layer/multi-dimensional channel coding techniques to cope with bit errors as well as packet erasures. Mechanisms for unequal time interleaving have been designed as a viable solution to reduce the impact of errors and erasures by acting on the time diversity of the data flow, thus enhancing robustness against correlated channel impairments. In order to account for the nature of the factors which affect the physical-layer channel in the evaluation of FEC scheme performance, an ad-hoc error-event model has been devised. In addition, the impact of error correction/protection techniques on the quality perceived by the consumers of video services, and techniques for objective/subjective quality evaluation, have been studied. The applicability and value of the proposed techniques have been tested by considering practical constraints and requirements of real system implementations.
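To illustrate the time-diversity idea behind (unequal) time interleaving, a minimal row-column block interleaver is sketched below (a hypothetical helper, not the scheme designed in the thesis): symbols are written row-wise and read column-wise, so a burst of consecutive channel erasures is spread across distant positions of the original stream and becomes easier for the FEC to recover.

import numpy as np

def block_interleave(symbols, rows, cols):
    # Write row-wise, read column-wise.
    assert len(symbols) == rows * cols
    return np.asarray(symbols).reshape(rows, cols).T.reshape(-1)

def block_deinterleave(symbols, rows, cols):
    # Inverse permutation of block_interleave.
    return np.asarray(symbols).reshape(cols, rows).T.reshape(-1)

data = np.arange(12)
tx = block_interleave(data, rows=3, cols=4)
tx[4:7] = -1                                   # a burst of 3 consecutive erasures
rx = block_deinterleave(tx, rows=3, cols=4)
print(rx)                                      # the erasures are now spread apart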
Abstract:
Dynamical models of galaxies are a powerful tool to study and understand several astrophysical problems related to galaxy formation and evolution. This thesis is focussed on a particular type of dynamical models, widely used in the literature, based on the solution of the Jeans equations. By means of a numerical Jeans solver code, developed on purpose and able to build state-of-the-art axisymmetric galaxy models, two of the main currently investigated issues in the field of early-type galaxies (ETGs) are addressed. The first topic concerns the hot, X-ray emitting gaseous coronae that surround ETGs. The main goal is to explain why flat and rotating galaxies generally exhibit haloes with lower gas temperatures and luminosities with respect to rounder, velocity-dispersion-supported systems. The second astrophysical problem addressed concerns instead the stellar initial mass function (IMF) of ETGs. Nowadays, this is a very controversial issue due to a growing number of works on ETGs, based on different and independent techniques, that show evidence of a systematic variation of the IMF normalization as a function of galaxy velocity dispersion or mass. These studies are changing the previous opinion that the IMF of ETGs was the same as that of spiral galaxies, and hence universal throughout the whole large family of galaxies.
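For context, the axisymmetric Jeans equations on which these models are built can be written, under the common assumption of a velocity ellipsoid aligned with the cylindrical coordinates (R, z), as (a standard form quoted as a sketch; the thesis code handles more general anisotropy and rotational support):

\[
\frac{\partial\,(\nu\,\overline{v_z^2})}{\partial z} = -\,\nu\,\frac{\partial \Phi}{\partial z},
\qquad
\frac{\partial\,(\nu\,\overline{v_R^2})}{\partial R}
+ \nu\,\frac{\overline{v_R^2} - \overline{v_\varphi^2}}{R}
= -\,\nu\,\frac{\partial \Phi}{\partial R},
\]

where ν is the stellar density and Φ the total gravitational potential; their solution yields the velocity moments from which observables such as the projected velocity dispersion field are predicted.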
Abstract:
Organic molecular semiconductors are the subject of intense research for their crucial role as key components of new-generation low-cost, flexible, and large-area electronic devices such as displays, thin-film transistors, solar cells, sensors and logic circuits. In particular, small molecular thienoimide (TI) based materials are emerging as novel multifunctional materials combining good processability with ambipolar or n-type charge transport and electroluminescence in the solid state, thus enabling the fabrication of integrated devices like organic field-effect transistors (OFETs) and organic light-emitting transistors (OLETs). Given this peculiar combination of characteristics, they also constitute ideal substrates for fundamental studies on structure-property relationships in multifunctional molecular systems. In this scenario, this thesis work is focused on the synthesis of new thienoimide-based materials with tunable optical, packing, morphology, charge transport and electroluminescence properties through fine molecular tailoring, thus optimizing their performance in devices as well as investigating and enabling new applications. Their structure-property relationships have been investigated; in particular, the effects of changes in the π-conjugated core (heterocycles, length) and in the alkyl end chains (shape, length) have been studied, obtaining materials with enhanced electron transport capability and electroluminescence suitable for the realization of OFETs and single-layer OLETs. Moreover, control over the polymorphic behaviour characterizing thienoimide materials has been achieved by synthetic and post-synthetic methodologies, developing multifunctional materials from a single polymorphic compound. Finally, with the aim of synthesizing highly pure materials, simplifying the purification steps and avoiding organometallic residues, procedures based on direct arylation reactions replacing conventional cross-couplings have been investigated and applied to different classes of molecules, bearing thienoimidic cores or ends, as well as thiophene and anthracene derivatives, validating this approach as a clean alternative for the synthesis of several molecular materials.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their respective representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
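A minimal sketch of the iterative nearest-centroid idea described above is given below (illustrative only: it assumes scikit-learn TF-IDF features, cosine similarity and a fixed number of adaptation iterations, not the exact algorithm or parameters of the thesis):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

def cross_domain_centroids(src_texts, src_labels, tgt_texts, n_iter=5):
    # Build category profiles (centroids) on the labeled source domain, then
    # iteratively re-center them on the unlabeled target documents.
    vec = TfidfVectorizer(sublinear_tf=True, stop_words="english")
    X_src = normalize(vec.fit_transform(src_texts))
    X_tgt = normalize(vec.transform(tgt_texts))
    src_labels = np.asarray(src_labels)
    cats = sorted(set(src_labels))
    centroids = normalize(np.vstack(
        [np.asarray(X_src[src_labels == c].mean(axis=0)).ravel() for c in cats]))
    for _ in range(n_iter):
        # Assign each target document to its nearest (cosine) centroid, then
        # re-estimate the centroids from the target documents themselves.
        pred = np.asarray(X_tgt @ centroids.T).argmax(axis=1)
        centroids = normalize(np.vstack(
            [np.asarray(X_tgt[pred == i].mean(axis=0)).ravel()
             for i in range(len(cats))]))
    pred = np.asarray(X_tgt @ centroids.T).argmax(axis=1)
    return [cats[i] for i in pred]

A robust implementation would also guard against categories that receive no target documents during an iteration.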
Abstract:
The main object of this thesis is the analysis and the quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin fields (HS). In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. These (non)linear sigma models describe, upon quantization, the dynamics of particles with spin N/2. Then we have carefully analyzed the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After the gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed by using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces the (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces. Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to Hyper-Kähler and Quaternionic-Kähler (QK) backgrounds.
Abstract:
During the last decade, advances in the field of sensor design and improved base materials have pushed the radiation hardness of the current silicon detector technology to impressive performance. It should allow operation of the tracking systems of the Large Hadron Collider (LHC) experiments at nominal luminosity (10^34 cm^-2 s^-1) for about 10 years. Beyond this, however, the current silicon detectors are unable to cope with such an environment. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is now being studied. In this work, the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed. Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10^16 cm^-2. It is well known that the degradation of the detectors with irradiation, independently of the structure used for their realization, is caused by lattice defects, such as the creation of point-like defects, dopant deactivation and dead-layer formation, and that a crucial aspect for the understanding of the defect kinetics at a microscopic level is the correct identification of the crystal defects in terms of their electrical activity. In order to clarify the defect kinetics, a thermal transient spectroscopy (DLTS and PICTS) analysis of different samples irradiated at increasing fluences was carried out. The defect evolution was correlated with the transport properties of the irradiated detector, always comparing with the un-irradiated one. The charge collection efficiency degradation of Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of the neutron fluence.
Abstract:
The motivation for the work presented in this thesis is to retrieve profile information for the atmospheric trace constituents nitrogen dioxide (NO2) and ozone (O3) in the lower troposphere from remote sensing measurements. The remote sensing technique used, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that represents a significant advance on the well-established DOAS, especially as regards the study of tropospheric trace constituents. NO2 is an important trace gas in the lower troposphere because it is involved in the production of tropospheric ozone; ozone and nitrogen dioxide are key factors in determining air quality, with consequences, for example, on human health and the growth of vegetation. To understand the NO2 and ozone chemistry in more detail, not only the concentrations at ground level but also the vertical distribution must be known. In fact, the budget of nitrogen oxides and ozone in the atmosphere is determined both by local emissions and by non-local chemical and dynamical processes (i.e. diffusion and transport at various scales) that greatly impact their vertical and temporal distribution: thus a tool to resolve the vertical profile information is very important. Useful measurement techniques for atmospheric trace species should fulfill at least two main requirements. First, they must be sufficiently sensitive to detect the species under consideration at their ambient concentration levels. Second, they must be specific, which means that the results of the measurement of a particular species must be neither positively nor negatively influenced by any other trace species simultaneously present in the probed volume of air. Air monitoring by spectroscopic techniques has proven to be a very useful way to fulfill these requirements, together with a number of other important properties. During the last decades, many such instruments have been developed which are based on the absorption properties of the constituents in various regions of the electromagnetic spectrum, ranging from the far infrared to the ultraviolet. Among them, Differential Optical Absorption Spectroscopy (DOAS) has played an important role. DOAS is an established remote sensing technique for probing atmospheric trace gases, which identifies and quantifies the trace gases in the atmosphere by taking advantage of their molecular absorption structures in the near-UV and visible wavelengths of the electromagnetic spectrum (from 0.25 μm to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace gas in terms of its integrated concentration over the atmospheric path from the sun to the receiver (the so-called slant column density). The receiver can be located on the ground, as well as on board an aircraft or a satellite platform. Passive DOAS therefore has a flexible measurement configuration that allows multiple applications. The ability to properly interpret passive DOAS measurements of atmospheric constituents depends crucially on how well the optical path of the light collected by the system is understood. This is because the final product of DOAS is the concentration of a particular species integrated along the path that radiation covers in the atmosphere. This path is not known a priori and can only be evaluated by Radiative Transfer Models (RTMs).
These models are used to calculate the so-called vertical column density of a given trace gas, which is obtained by dividing the measured slant column density by the so-called air mass factor, which quantifies the enhancement of the light path length within the absorber layers. In the case of the standard DOAS set-up, in which radiation is collected along the vertical direction (zenith-sky DOAS), calculations of the air mass factor have been made using "simple" single-scattering radiative transfer models. This configuration has its highest sensitivity in the stratosphere, in particular during twilight. This is the result of the large enhancement in stratospheric light path at dawn and dusk combined with a relatively short tropospheric path. In order to increase the sensitivity of the instrument to tropospheric signals, measurements with the telescope pointing towards the horizon (off-axis DOAS) have to be performed. In these circumstances, the light path in the lower layers can become very long and necessitates the use of radiative transfer models including multiple scattering, the full treatment of atmospheric sphericity, and refraction. In this thesis, a recent development of the well-established DOAS technique is described, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS). MAX-DOAS consists of the simultaneous use of several off-axis directions near the horizon: using this configuration, not only is the sensitivity to tropospheric trace gases greatly improved, but vertical profile information can also be retrieved by combining the simultaneous off-axis measurements with sophisticated RTM calculations and inversion techniques. In particular, there is a need for an RTM capable of dealing with all the processes intervening along the light path, supporting all the DOAS geometries used, and treating multiple scattering events with the varying phase functions involved. To achieve these multiple goals, a statistical approach based on the Monte Carlo technique should be used. A Monte Carlo RTM generates an ensemble of random photon paths between the light source and the detector, and uses these paths to reconstruct a remote sensing measurement. Within the present study, the Monte Carlo radiative transfer model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation) has been developed and used to correctly interpret the slant column densities obtained from MAX-DOAS measurements. In order to derive the vertical concentration profile of a trace gas from its slant column measurement, the AMF is only one part of the quantitative retrieval process. One indispensable requirement is a robust approach to invert the measurements and obtain the unknown concentrations, the air mass factors being known. For this purpose, in the present thesis, we have used the Chahine relaxation method. Ground-based Multiple AXis DOAS, combined with appropriate radiative transfer models and inversion techniques, is a promising tool for atmospheric studies in the lower troposphere and boundary layer, including the retrieval of profile information with a good degree of vertical resolution. This thesis has presented an application of this powerful and comprehensive tool to the study of a preserved natural Mediterranean area (the Castel Porziano Estate, located 20 km south-west of Rome) where pollution is transported from remote sources.
Application of this tool in densely populated or industrial areas is beginning to look particularly fruitful and represents an important subject for future studies.
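For reference, the two relations at the core of the retrieval described above can be summarised schematically as follows (the PROMSAR-based air mass factors and the exact Chahine iteration used in the thesis are more elaborate):

\[
\mathrm{VCD} = \frac{\mathrm{SCD}}{\mathrm{AMF}},
\qquad
x_j^{(n+1)} = x_j^{(n)}\;\frac{y_j^{\mathrm{meas}}}{y_j^{\mathrm{calc},(n)}} ,
\]

i.e. the slant column density (SCD) measured along each viewing direction is converted to a vertical column density (VCD) through the air mass factor (AMF) computed with the RTM, while the Chahine relaxation iteratively rescales the concentration x_j of the layer to which direction j is most sensitive by the ratio of the measured to the modelled slant columns y_j.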
Abstract:
During the previous 10 years, global R&D expenditure in the pharmaceuticals and biotechnology sector has steadily increased, without a corresponding increase in the output of new medicines. To address this situation, the biopharmaceutical industry's greatest need is to predict failures at the earliest possible stage of the drug development process. A major key to reducing failures in drug screening is the development and use of preclinical models that are more predictive of efficacy and safety in clinical trials. Further, relevant animal models are needed to allow wider testing of novel hypotheses. Key to this is developing, refining, and validating complex animal models that directly link therapeutic targets to the phenotype of disease, allowing earlier prediction of human response to medicines and identification of safety biomarkers. Moreover, well-designed animal studies are essential to bridge the gap between tests in cell cultures and people. The zebrafish is emerging, complementary to other models, as a powerful system for cancer studies and drug discovery. We aim to investigate this research area by designing a new preclinical cancer model based on the in vivo imaging of zebrafish embryogenesis. Technological advances in imaging have made it feasible to acquire nondestructive in vivo images of fluorescently labeled structures, such as cell nuclei and membranes, throughout early zebrafish embryogenesis. This in vivo image-based investigation provides measurements for a large number of features and events at the cellular level, including nuclei movements, cell counting, and mitosis detection, thereby enabling the estimation of more significant parameters such as proliferation rate, which is highly relevant for investigating anticancer drug effects. In this work, we designed a standardized procedure for assessing drug activity at the cellular level in live zebrafish embryos. The procedure includes methodologies and tools that combine imaging and fully automated measurements of the embryonic cell proliferation rate. We achieved proliferation rate estimation through the automatic classification and density measurement of epithelial enveloping layer and deep layer cells. Automatic embryonic cell classification provides the basis to measure the variability of relevant parameters, such as cell density, in different classes of cells and is aimed at the estimation of the efficacy and selectivity of anticancer drugs. Through these methodologies we were able to evaluate and measure in vivo the therapeutic potential and overall toxicity of the Dbait and Irinotecan anticancer molecules. Results achieved on these anticancer molecules are presented and discussed; furthermore, extensive accuracy measurements are provided to investigate the robustness of the proposed procedure. Altogether, these observations indicate that the zebrafish embryo can be a useful and cost-effective alternative to some mammalian models for the preclinical testing of anticancer drugs and might also provide, in the near future, opportunities to accelerate the process of drug discovery.