13 results for tree-dimensional analytical solution

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Abstract:

This thesis starts by showing the main characteristics and application fields of AlGaN/GaN HEMT technology, focusing on reliability aspects essentially due to the presence of low-frequency dispersive phenomena which limit the microwave performance of these devices in several ways. Based on an equivalent-voltage approach, a new low-frequency device model is presented in which the dynamic nonlinearity of the trapping effect is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are very important for power-amplifier design, such as power-added efficiency, dissipated power and internal device temperature. An innovative and low-cost measurement setup for the characterization of the device under low-frequency, large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low-frequency model through suitable procedures explained in detail. The thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal resistance extraction. The new contribution of the proposed approach concerns the nonlinear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on the temperature is quite relevant. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak-temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
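For orientation, a minimal sketch of the kind of boundary-value problem such an analytical thermal solver addresses (a standard steady-state separation-of-variables form for a rectangular layer with adiabatic lateral walls; the thesis' actual formulation is not reproduced here): within each layer the temperature satisfies Laplace's equation, and a Fourier series matches the heat-source footprint and the interface conditions,

    \nabla^2 T = 0, \qquad
    T(x,y,z) = \sum_{m,n} \left[ A_{mn}\cosh(\gamma_{mn} z) + B_{mn}\sinh(\gamma_{mn} z) \right] \cos\frac{m\pi x}{a}\cos\frac{n\pi y}{b},
    \qquad \gamma_{mn} = \sqrt{(m\pi/a)^2 + (n\pi/b)^2},

with the coefficients A_{mn}, B_{mn} fixed layer by layer through continuity of temperature and heat flux at the interfaces.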

Relevance:

100.00%

Abstract:

Three published papers are summarized in this thesis. Different aspects of the semiclassical theory of gravity are discussed. In chapter 1 we find a new perturbative (yet analytical) solution to the unsolved problem of the metric junction between two Friedmann-Robertson-Walker metrics using Israel's formalism. The case of an expanding radiation core inside an expanding or collapsing dust exterior is treated. This model can be useful in "landscape" cosmology in string theory or for treating new gravastar configurations. In chapter 2 we investigate the possible use of the Kodama vector field as a substitute for the Killing vector field. In particular, we find the response function of an Unruh detector following an (accelerated) Kodama trajectory. The detector has finite extension and backreaction is considered. In chapter 3 we study the possible creation of microscopic black holes at the LHC in the brane-world model. It is found that the black hole tidal charge has a fundamental role in preventing the formation of the horizon.
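For reference, Israel's junction conditions in their standard form (up to sign conventions; the thesis' perturbative matching is not reproduced here) relate the jump of the extrinsic curvature K_{ab} across a thin shell with surface stress-energy S_{ab} to the induced metric h_{ab}:

    [K_{ab}] - h_{ab}\,[K] = -8\pi G\, S_{ab},

where [X] denotes the difference of X evaluated on the two sides of the junction surface.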

Relevance:

40.00%

Abstract:

My PhD research work focused on the Electrochemically Generated Luminescence (ECL) investigation of several different homogeneous and heterogeneous systems. ECL is a redox-induced emission, a process whereby species generated at electrodes undergo a high-energy electron-transfer reaction to form excited states that emit light. Since its first application, the ECL technique has become a very powerful analytical tool and has been widely used in biosensor transduction. ECL presents intrinsically low noise and high sensitivity; moreover, the electrochemical generation of the excited state avoids the scattering associated with an excitation light source: for all these characteristics, it is an elective technique for ultrasensitive immunoassay detection. The majority of ECL systems involve species in solution, where the emission occurs in the diffusion layer near the electrode surface. However, over the past few years, intense research has focused on ECL generated from species constrained on the electrode surface. The aim of my work is to study the behavior of ECL-generating molecular systems upon the progressive increase of their spatial constraints, that is, passing from isolated species in solution, to fluorophores embedded within a polymeric film and, finally, to patterned surfaces bearing "one-dimensional" emitting spots. In order to describe these trends, I use different "dimensions" to indicate the different classes of compounds. My thesis was mostly developed in the electrochemistry group of Bologna under the supervision of Prof. Francesco Paolucci and Dr. Massimo Marcaccio. With their help, and also thanks to their long experience in the molecular and supramolecular ECL fields and in surface investigations using scanning probe microscopy techniques, I was able to obtain the results described herein. Moreover, during my research work I established a new collaboration with the Nanobiotechnology group of Prof. Robert Forster (Dublin City University), where I spent a research period. Prof. Forster has broad experience in the biomedical field; in particular, he focuses his research on film-surface biosensors based on ECL transduction. This thesis can be divided into three sections, described as follows: (i) in the first section, homogeneous molecular and supramolecular ECL-active systems, either organic or inorganic species (e.g., corannulene, dendrimers and an iridium metal complex), are described. The driving force for this kind of study includes the search for new luminophores that display, on the one hand, higher ECL efficiencies and, on the other, simple mechanisms for modulating the intensity and energy of their emission in view of their effective use in bioconjugation applications. (ii) In the second section, the investigation of some heterogeneous ECL systems is reported. Redox polymers comprising inorganic luminophores are described. In this context, a new conducting platform, based on carbon nanotubes, was developed with the aim of accomplishing both the binding of a biological molecule and its electronic wiring to the electrode. This is an essential step for the ECL application in the field of biosensors. (iii) In the third section, different patterns were produced on the electrode surface using Scanning Electrochemical Microscopy. I developed a new method for locally functionalizing an inert surface and reacting this surface with a luminescent probe. In this way, I successfully obtained a locally ECL-active platform for multi-array applications.
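For context, the generic annihilation pathway by which ECL produces an emitting excited state (a textbook scheme for a generic luminophore A, not tied to the specific systems studied here):

    A - e^- \to A^{\bullet+}                      (oxidation at the electrode)
    A + e^- \to A^{\bullet-}                      (reduction at the electrode)
    A^{\bullet+} + A^{\bullet-} \to A^* + A       (high-energy electron transfer)
    A^* \to A + h\nu                              (light emission)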

Relevance:

30.00%

Abstract:

The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be subordinate to the planting system rather than to light intensity. At the whole-plant level, an increase in irradiance does not always determine a productivity improvement. One of the reasons can be the plant's intrinsic inefficiency in using energy. Generally, in full light only 5-10% of the total incoming energy is allocated to net photosynthesis. Therefore, preserving or improving this efficiency becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition forced plants to evolve a complex and multilevel machinery able to dissipate the energy excess by quenching it as heat (Non-Photochemical Quenching), by moving electrons (water-water cycle, cyclic transport around PSI, glutathione-ascorbate cycle and photorespiration) and by scavenging the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease of photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the share of energy dissipated through the alternative pathways, along with ROS production and photoinhibition risks. Even this wide photo-protective apparatus, however, is not always able to cope with the excess incoming energy, and photodamage occurs. Any event that increases the photon pressure and/or decreases the efficiency of the described photo-protective mechanisms (e.g. thermal stress, water and nutritional deficiency) can intensify photoinhibition. In nature, only a small amount of damaged photosystems is usually found, because of an effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energy expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve the knowledge about the several strategies adopted for managing the incoming energy and about the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue and universal technique for functional PSII determination was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, and monocot and dicot plants. In the second unit, using a "singular" experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of photon-pressure variation on energy management was considered on the single leaf. In the third section, the quenching-analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. Afterwards, it was applied in the field, where the influence of a moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as the fuel for life is intrinsically dangerous for a plant, given the constant, high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage the incoming light so as to derive the maximum possible benefit while minimizing the risks. In the first instance, the new method proposed for functional PSII determination, based on P700 redox kinetics, seems to be a valid, non-intrusive, universal and field-applicable technique, also because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy-quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "Asymmetric" training system is a good way to study the relations among light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Light in excess, rather than improving photosynthesis, may intensify water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but PSII damage: in the most light-exposed plants, about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1), and the light excess is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance, NPQ seems to be lumen-pH limited, because the incoming photon pressure is not enough to generate the lumen pH required for the full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with the light excess by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is more and more activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen in order to support VDE activation when light is limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance intensify the photoinhibition risks. In peach, a moderate water and light reduction does not determine a decrease in net carboxylation; rather, by diminishing the incoming light and the environmental evapo-transpiration demand, stomatal conductance decreases, improving water-use efficiency. Therefore, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light.
Even in this experiment, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH-limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems have to be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance that deserves further study is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on peach light, water and photosynthate management.
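For reference, the standard chlorophyll-fluorescence quantities on which such quenching analyses are built (textbook definitions; the actual partitioning of Kornyeyev and Hendrickson, 2007, is more detailed than this):

    F_v/F_m = (F_m - F_0)/F_m           (maximum PSII quantum yield, dark-adapted)
    \Phi_{PSII} = (F_m' - F_s)/F_m'     (effective PSII yield in the light)
    NPQ = (F_m - F_m')/F_m'             (non-photochemical quenching)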

Relevance:

30.00%

Abstract:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical words, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution and that I interpret as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and I prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset-pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
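A hedged sketch of the Tikhonov idea in its generic linear form (a textbook statement; the thesis applies the scheme inside the posterior construction rather than to a point estimate): for an equation y = Kx with K a compact operator, the naive inverse is unbounded, while the regularized solution

    x_\alpha = (\alpha I + K^* K)^{-1} K^* y

depends continuously on y and converges to the true x as \alpha \to 0 under suitable source conditions on x.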

Relevance:

30.00%

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
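As a hedged illustration of the counting idea behind such kernels, a crude rooted-subtree count is sketched below (class and function names are invented for this example; the thesis' kernels are more refined and more efficient than this):

    # Hypothetical minimal tree-kernel sketch: counts identical rooted subtrees
    # shared by two trees (a crude relative of the subtree kernel; illustrative
    # only, not the kernels proposed in the thesis).

    from collections import Counter

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def subtree_signatures(node, acc):
        # Serialize each rooted subtree into a canonical, hashable signature.
        sig = (node.label, tuple(subtree_signatures(c, acc) for c in node.children))
        acc[sig] += 1
        return sig

    def subtree_kernel(t1, t2):
        c1, c2 = Counter(), Counter()
        subtree_signatures(t1, c1)
        subtree_signatures(t2, c2)
        # Inner product in the implicit feature space of rooted subtrees.
        return sum(c1[s] * c2[s] for s in c1.keys() & c2.keys())

    # Usage: the kernel counts the identical rooted subtrees the trees share.
    a = Node("S", [Node("A"), Node("B", [Node("A")])])
    b = Node("S", [Node("A"), Node("B", [Node("A")])])
    print(subtree_kernel(a, b))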

Relevance:

30.00%

Abstract:

I. Max Bill is an intense giornata of a big fresco. An analysis of the main social, artistic and cultural events of the twentieth century is needed in order to trace his career through his masterpieces and architectures. Some of the faces of this hypothetical mural painting are, among others, Le Corbusier, Walter Gropius, Ernesto Nathan Rogers, Kandinskij, Klee, Mondrian, Vantongerloo and Ignazio Silone, while the backcloth is given by the artistic avant-gardes, the Bauhaus, the International Exhibitions, the CIAM, the war events, the reconstruction, the Milan Triennali, the Venice Biennali and the School of Ulm. An architect, even though better known as a painter, sculptor, designer and graphic artist, Max Bill attended the Bauhaus as a student in the years 1927-1929, and from this experience derived the main features of a rational, objective, constructive and non-figurative art. His research was devoted to giving his art a scientific methodology: each work proceeds from the analysis of a problem to the logical, and always verifiable, solution of that same problem. By means of composition elements (such as rhythm, seriality, theme and its variation, harmony and dissonance), he faced, with consistent results, themes apparently very distant from each other, such as the project for the H.f.G. or the design of a font. Mathematics is a constant frame of reference, as a field of certainties, order and objectivity: 'for Bill mathematics is never confined to a simple function: it represents a climate of spiritual certainties, and also the theme of the non-attempted in its purest state, objectivity of the sign and of the geometrical place, and at the same time restlessness of the infinite: Limited and Unlimited'. In almost sixty years of activity, working in all artistic fields, Max Bill worked, planned, designed and held conferences and exhibitions in Europe, Asia and the Americas, confronting himself with the most influential personalities of the twentieth century. In such a vast scenery, the need to limit the field of investigation was combined with the necessity of addressing and analysing the unpublished and original aspects of Bill's relations with Italy. The original contribution of the present research regards this particular 'geographic delimitation'; in particular, beyond the deep cultural exchanges between Bill and a series of Milanese architects, above all Rogers, two main projects have been addressed: the realtà nuova at the Milan Triennale in 1947, and the Contemporary Art Museum in Florence in 1980. It is important to note that these projects have not been previously investigated, and the former never even appears in the sources. These works, together with the better-known ones, such as the projects for the VI and IX Triennale and the Swiss pavilion for the Biennale, add important details to the frame of the relations which took place between Zurich and Milan. Most of the occasions for exchange took place between the Thirties and the Fifties, years during which Bill underwent a significant period of artistic growth. He met the Swiss progressive architects and the Paris artists of the Abstraction-Création movement, entered the CIAM, collaborated with Le Corbusier on the third volume of his Complete Works, and in Milan worked and confronted himself with the events related to post-war reconstruction. In these years Bill defined his own working methodology, attaining artistic maturity in his work. The present research investigates this time period, with some necessary exceptions.
II. The official Max Bill bibliography is naturally wide, including works of dissemination along with others more devoted to analytical investigation, mainly written in German and often translated into French and English (Max Bill himself published his works in three languages). Few works have been published in Italian and, excluding the catalogue of the Parma exhibition of 1977, they cannot be considered comprehensive. Many publications are exhibition catalogues, some of which include essays written by Max Bill himself, while others carry Bill's comments in an educational-pedagogical vein, accompanying the observer towards a full understanding of the composition processes of his art works. Bill also left a great amount of theoretical speculation to encourage a critical reading of his works, in the form of books edited or written by him, and essays published in 'Werk', the magazine of the Swiss Werkbund, and in other international reviews, among which Domus and Casabella. These three reviews have been important tools of analysis, since they include traces of some of Max Bill's architectural works. The architectural aspect is less investigated than the plastic and pictorial ones in all the main reference manuals on the subject: Benevolo, Tafuri and Dal Co, Frampton and Allenspach consider Max Bill as an artist whose work proceeds from the Bauhaus to the Ulm experience. A first survey of his works was published in 2004 in the monographic issue of the Spanish magazine 2G, together with critical essays by Karin Gimmi, Stanislaus von Moos, Arthur Rüegg and Hans Frei, and in 'Konkrete Architektur?', again by Hans Frei. Also relevant are the monographic essay on the Atelier Haus building by Arthur Rüegg from 1997, and the DPA 17 issue of the Catalonia Polytechnic, with contributions by Carlos Martì, Bruno Reichlin and Ton Salvadò, the latter publication concentrating on a few of Bill's themes and architectures. A new impulse to study Max Bill's works in depth came in 2008 from the centenary of his birth and from a recent rediscovery of Bill as initiator of the 'minimalist' tradition in Swiss architecture. Bill's heirs are both very active in promoting exhibitions, researching and publishing. Jakob Bill, Max Bill's son and a painter himself, recently published a work on Bill's experience at the Bauhaus, and earlier had published an in-depth study on the 'Endless Ribbons' sculptures. Angela Thomas Schmid, Bill's wife and an art historian, published at the end of 2008 the first volume of a biography of Max Bill and, together with the film maker Eric Schmid, produced a documentary film which was also presented at the last Locarno Film Festival. Both the biography and the documentary concentrate on Max Bill's political involvement, from antifascism and the 1968 protest movements to Bill's experiences as a Zurich Municipality councilman and as a member of the Swiss Confederation Parliament. In the present research, the bibliography also includes direct sources, such as interviews and original materials in the form of correspondence and graphic works, together with related essays, kept in the max+binia+jakob bill stiftung archive in Zurich. III. The results of the present research are organized into four main chapters, each of them subdivided into four parts. The first chapter concentrates on the research field, and on the reasons, tools and methodologies employed, whereas the second one consists of a short biographical note organized by topics, introducing the subject of the research.
The third chapter, which includes unpublished events, traces the historical and cultural frame, with particular reference to the relations between Max Bill and the Italian scene, especially Milan and the architects Rogers and Baldessari around the Fifties, searching for the themes and keys of interpretation of Bill's architectures and investigating the critical debate in the reviews and the plastic survey through sculpture. The fourth and last chapter examines four main architectures, chosen on a geographical basis and all devoted to exhibition spaces, investigating Max Bill's composition process in relation to the pictorial field. Paintings have surely been easier and faster to investigate and verify than buildings. A doctoral thesis defended in Lausanne in 1977, investigating Max Bill's plastic and pictorial works, provided a series of devices which were corrected and adapted for the definition of the interpretation grid for the composition structures of Bill's main architectures. Four different tools are employed in the investigation of each work: a context analysis related to the results of chapter three; a specific theoretical essay by Max Bill briefly explaining his main theses, even if not directly linked to the very work of art considered; the interpretation grid for the composition themes derived from a related pictorial work; and the architectural drawing and digital three-dimensional model. The double analysis of the architectural and pictorial fields is functional to underlining the relation among the different elements of the composition process; the two fields, however, cannot be compared and they remain, in Max Bill's works as in the present research, interdependent though self-sufficient. IV. An important aspect of Max Bill's production is self-referentiality: talking about Max Bill, also through Max Bill, as a need for coherence rather than a limitation of method. Ernesto Nathan Rogers described Bill as the last humanist, and his horizon is the known world; but, like the 'Concrete Art' of which he is one of the main representatives, his production justifies itself: Max Bill not only found a method, but autonomously re-wrote the 'rules of the game', derived timeless theoretical principles and verified them through a rich and interdisciplinary artistic production. The most recurrent words in the present research are synthesis, unity, space and logic. These terms are part of Max Bill's vocabulary and can be referred to his works. Similarly, the graphic settings and analytical schemes in this research text referring to, or commenting on, Bill's architectural projects were drawn up keeping in mind the concise precision of his architectural design. As for Mies van der Rohe, it has been written that Max Bill took art to 'degree zero', reaching in this way a high complexity. His works are a synthesis of art: they conceptually encompass all previous and, considering their developments, most contemporary pictures. Contents and message are generally explicitly declared in the title or in Bill's essays on his artistic works and architectural projects: the beneficiary is invited to go through and re-build the process of synthesis generating the shape. In the course of the interview with the Milanese artist Getulio Alviani, he told how he would not write more than a page for an essay on Josef Albers: everything was already evident 'on the surface' and any additional sentence would be redundant.
Two years after that interview, these pages attempt to decompose and single out the elements and processes connected with some of Max Bill's works which, by their very origin, already contain all possible explanations and interpretations. Formal reduction in favour of the maximization of content is, perhaps, Max Bill's main lesson.

Relevance:

30.00%

Abstract:

Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution, and not only that. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. In the literature, many NMR studies have been reported on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to several problems, due especially to the complexity of an NMR spectrum. In fact, the latter is composed of different signals belonging to different molecules, but it is also true that the same molecule can be represented by different signals, generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their wholeness and, to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and the aims can be divided into three main groups: • data description (explorative modelling of the structure of any generic n-dimensional data matrix, e.g. PCA); • regression and prediction (PLS); • classification and prediction of class membership for new samples (LDA, PLS-DA and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs in different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful to point out the metabolic consequences of a specific modification on foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation, due to different instrumental set-ups, manual spectra processing and sample-preparation artefacts; A2) application of multivariate chemometric models in data analysis.
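As a hedged illustration of the explorative step mentioned above (generic PCA on a spectra matrix; a sketch only, not the thesis' actual pipeline, and the data and dimensions here are invented):

    # Hypothetical sketch: explorative PCA on a matrix of NMR spectra
    # (rows = samples, columns = chemical-shift buckets). Illustrative only.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 500))               # 40 samples x 500 spectral variables

    X_scaled = StandardScaler().fit_transform(X)  # column-wise autoscaling
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_scaled)          # sample scores on PC1/PC2

    print(pca.explained_variance_ratio_)          # variance captured per component

In a real study, clustering of the sample scores in the PC1/PC2 plane is what suggests class structure before any supervised model (PLS-DA, ECVA) is attempted.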

Relevance:

30.00%

Abstract:

Piezoelectrics present an interactive electromechanical behaviour that, especially in recent years, has generated much interest, since it renders these materials suitable for use in a variety of electronic and industrial applications such as sensors, actuators, transducers and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high concentrations of stress, particularly in the proximity of defects or inhomogeneities such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is crucial in order to improve their performance and avoid unexpected failures. Therefore, a considerable number of research works have addressed this topic in the last decades. Most of the theoretical studies on this subject find their analytical background in the complex-variable formulation of plane anisotropic elasticity. This theoretical approach has its main origins in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of the stresses and of the elastic and electric displacements are obtained as functions of complex potentials through an analytical formulation which is the application to the piezoelectric static case of an approach introduced for orthotropic materials to solve elastodynamic problems. This method can be considered an alternative to other formalisms currently used, like Stroh's formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field. After that, a similarity transformation is induced to reach three independent Cauchy-Riemann systems, so justifying the introduction of the complex-variable notation. Closed-form expressions of the near-tip stress and displacement fields are therefore obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable ("exact") crack model. This thesis takes all three models into consideration, comparing the results obtained and analysing the effects of the choice of boundary condition on the solution. The influence of load biaxiality and of the application of a remote electric field has been studied, pointing out that both can affect the stress fields and the angle of initial crack extension to various extents, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed. The work is organized as follows: Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane-elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of stresses and elastic and electric displacements, given as functions of complex potentials. The solution is obtained in closed form, and non-singular terms are retained as well.
Chapter 5 presents several numerical applications aimed at estimating the effects of load biaxiality, of the electric field and of the permittivity assumed for the crack. Through the application of fracture criteria, the influence of the above-listed conditions on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
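For orientation, the classical near-tip asymptotic structure that such complex-potential solutions generalize to the electro-elastic case (standard fracture-mechanics form, not the thesis' full expressions):

    \sigma_{ij}(r,\theta) = \frac{K}{\sqrt{2\pi r}}\, f_{ij}(\theta) + T\,\delta_{1i}\delta_{1j} + O(\sqrt{r}),

where K is the stress intensity factor of the relevant mode, f_{ij} are angular functions, and the constant T-stress is the leading non-singular term whose retention, as noted above, can influence the predicted angle of initial crack extension.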

Relevance:

30.00%

Abstract:

I applied the SBAS-DInSAR method to the Mattinata Fault (MF) (Southern Italy) and to the Doruneh Fault System (DFS) (Central Iran). In the first case, I observed limited internal deformation and determined a right-lateral kinematic pattern, with a compressional pattern in the northern sector of the fault. Using the Okada model, I inverted the observed velocities, defining a right-lateral strike-slip solution for the MF. Even if it fits the data within the uncertainties, the modeled slip rate of 13-15 mm yr-1 seems too high with respect to the geological record. Concerning the western termination of the DFS, the SAR data confirm the mainly left-lateral transcurrent kinematics of this fault segment, but reveal a compressional component. My analytical model fits the observed data successfully and quantifies the slip at ~4 mm yr-1 of pure horizontal displacement and ~2.5 mm yr-1 of vertical displacement. The horizontal velocity is compatible with the geological record. I applied classic SAR interferometry to the October-December 2008 Balochistan (Central Pakistan) seismic swarm; I discerned the different contributions of the three Mw > 5.7 earthquakes, determining fault positions, lengths, widths, depths and slip distributions, and constraining the other source parameters using different Global CMT solutions. A well-constrained solution has been obtained for the 09/12/2008 aftershock, whereas I tested two possible fault solutions for the 28-29/10/2008 mainshocks. It is not possible to favor one of the solutions without independent constraints derived from geological data. Finally, I approached the study of the earthquake cycle in transcurrent tectonic domains using analog modeling, with alimentary gelatins as crust-analog material. I successfully combined the study of finite deformation with that of the earthquake cycle and sudden dislocation. Many seismic cycles were reproduced, in which a characteristic earthquake is recognizable in terms of displacement, coseismic velocity and recurrence time.
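For reference, the classical elastic screw-dislocation relation often used to read interseismic velocity profiles across strike-slip faults (a textbook model stated here only for context; the inversions above rely on the Okada formulation):

    v(x) = \frac{s}{\pi}\,\arctan\!\left(\frac{x}{D}\right),

where v is the fault-parallel surface velocity at distance x from the fault trace, s the deep slip rate and D the locking depth.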

Relevance:

30.00%

Abstract:

This research focused on the study of the behavior and collapse of masonry arch bridges. The latest decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the changes in the means of transport. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard method for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight their strengths and weaknesses. The methods examined are mainly three: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of Plastic Analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Every method is applied to the case studies through computer-based representations that allow a user-friendly application of the principles explained. A particular closed-form approach, based on an elasto-plastic material model and developed by some Belgian researchers, is also studied. To compare the three methods, two different case studies have been analyzed: i) a generic masonry arch bridge with a single span; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all the models are two-dimensional, in order to have results comparable between the different methods examined. The different methods have been compared with each other in terms of collapse load and hinge positions.
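As a hedged reminder of the limit-analysis idea behind the Mechanism Method (a generic kinematic-theorem statement under the usual rigid, no-tension, infinite-compressive-strength assumptions for masonry; not the thesis' specific implementation): for an assumed four-hinge mechanism, the collapse load multiplier λ follows from the virtual-work balance

    \lambda \sum_i P_i\,\delta_i + \sum_j W_j\,\eta_j = 0,

where P_i are the live loads with virtual displacements δ_i and W_j the self-weights with virtual displacements η_j; the collapse multiplier is then the minimum of λ over all kinematically admissible hinge configurations.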

Relevance:

30.00%

Abstract:

Microparticles based on chitosan/pectin polyelectrolyte complexes for the nasal delivery of tacrine hydrochloride. The aim of this study was the search for new solid formulations for the nasal administration of tacrine hydrochloride, in order to reduce its extensive hepatic first-pass effect and to increase its bioavailability in the Central Nervous System. Tacrine was encapsulated in mucoadhesive microparticles based on electrolytic complexes of chitosan and pectin. The microparticles were prepared by two different technological approaches (spray-drying and spray-drying/freeze-drying) and analyzed in terms of size, morphology and physico-chemical characteristics. Chitosan nanoparticles cross-linked with sodium cromoglycate for the treatment of allergic rhinitis. Sodium cromoglycate is one of the drugs used for the treatment of allergic rhinitis. As is known, mucociliary clearance causes a rapid removal of drugs in solution from the nasal cavity, thus increasing the number of daily administrations and, consequently, reducing patient compliance. To overcome this problem, sodium cromoglycate was included in nanoparticles of chitosan, a polymer capable of adhering to the nasal mucosa, prolonging the contact of the formulation with the application site and reducing the number of daily administrations. The nanoparticles obtained were characterized in terms of size, yield, encapsulation efficiency, drug loading, zeta potential and mucoadhesive characteristics. Quantitative analysis of amorphous budesonide by differential scanning calorimetry. A new quantitative solid-state method based on Differential Scanning Calorimetry (DSC) was developed, capable of selectively and accurately quantifying the amount of amorphous budesonide present in a solid mixture. During the development of the method, problems related to the validation of analytical methods on solid samples were addressed, such as the mixing of solid powders for the preparation of standard mixtures and the calculation of precision.

Relevance:

30.00%

Abstract:

This thesis, after presenting recent advances obtained for the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the two-dimensional problem with guillotine restrictions and that of the unrestricted two-dimensional problem is studied from a worst-case perspective. Finally, the thesis presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of problems, and gives a tight approximation of the optimum in the remaining cases.
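As a hedged illustration of what the guillotine restriction means operationally (a generic split step common to many guillotine heuristics, with invented function names; not the partial-enumeration algorithm of the thesis): placing an item in a free rectangle and cutting the remainder with a single edge-to-edge cut yields two new guillotine-feasible free rectangles.

    # Hypothetical sketch of one guillotine split step: after placing an item
    # of size (w, h) in the bottom-left corner of a free rectangle (W, H),
    # a single edge-to-edge cut leaves two guillotine-feasible free rectangles.
    # Illustrative only.

    def guillotine_split(W, H, w, h, horizontal_cut=True):
        assert w <= W and h <= H, "item must fit in the free rectangle"
        if horizontal_cut:
            # Cut the full width at height h: a strip beside the item
            # and the whole region above the cut.
            return [(W - w, h), (W, H - h)]
        else:
            # Cut the full height at width w: the whole region to the
            # right of the cut and the strip above the item.
            return [(W - w, H), (w, H - h)]

    # Usage: splitting a 10x8 rectangle after placing a 4x3 item.
    print(guillotine_split(10, 8, 4, 3, horizontal_cut=True))   # [(6, 3), (10, 5)]
    print(guillotine_split(10, 8, 4, 3, horizontal_cut=False))  # [(6, 8), (4, 5)]

The choice of cut orientation at each placement is exactly the kind of decision a partial-enumeration scheme can branch on.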