15 results for Bulk parameter approach

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 30.00%

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not previously been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is later compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contributions. A simulation study evaluates the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying the experimental conditions (e.g., the kind of margins (distinct, overlapping or nested) and the value of the dependence parameter), and the results are evaluated by means of several performance measures. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' for short) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model-based clustering using different performance measures, such as the percentage of correctly identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
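The criterion named here, the maximized log-likelihood of the copula, can be illustrated compactly. The sketch below scores pseudo-observations under an exchangeable Gaussian copula over a grid of dependence parameters; it is a simplified stand-in for the CoClust (which is implemented as R functions and handles general copula models), and all data are synthetic.

```python
import numpy as np
from scipy import stats

# Sketch of the scoring step at the heart of copula-based clustering: given
# pseudo-observations u (ranks rescaled to (0,1)), evaluate the maximized
# log-likelihood of a copula. A Gaussian copula with exchangeable correlation
# `rho` is used here for simplicity; this is NOT the CoClust implementation.
def gaussian_copula_loglik(u, rho):
    """Log-density of an exchangeable Gaussian copula at pseudo-observations u."""
    k = u.shape[1]
    R = (1 - rho) * np.eye(k) + rho * np.ones((k, k))
    z = stats.norm.ppf(u)
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    quad = np.einsum('ni,ij,nj->n', z, Rinv - np.eye(k), z)
    return np.sum(-0.5 * (logdet + quad))

# Synthetic k = 2 "clusters" with true dependence 0.7
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=200)
u = stats.rankdata(x, axis=0) / (len(x) + 1)      # empirical margins

rhos = np.linspace(0.05, 0.95, 19)
best = max(rhos, key=lambda r: gaussian_copula_loglik(u, r))
print(f"dependence parameter maximizing the copula log-likelihood: {best:.2f}")
```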

Relevance: 30.00%

Abstract:

This thesis describes modelling tools and methods suited to complex systems (systems that are typically represented by a plurality of models). The basic idea is that all models representing the system should be linked by well-defined model operations in order to build a structured repository of information, a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. The thesis in particular addresses the problem of integrating distributed parameter systems into a model hierarchy, and shows two possible mechanisms for doing so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model order reduction for discretized models obtainable from commercial finite-element packages.
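As context for readers unfamiliar with the framework, the sketch below shows the standard port-Hamiltonian input-state-output form, dx/dt = (J - R) grad H(x) + B u, on a toy mass-spring-damper. The system, its parameters and the explicit-Euler integrator are illustrative assumptions, not models or tools from the thesis (a structure-preserving discretization, as the thesis pursues, would use a dedicated integrator).

```python
import numpy as np

# Minimal port-Hamiltonian system: dx/dt = (J - R) dH/dx + B u.
# Toy mass-spring-damper with state x = [q, p]; all parameters assumed.
k, m, c = 1.0, 1.0, 0.1                   # stiffness, mass, damping
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.array([[0.0, 0.0], [0.0, c]])      # positive semi-definite dissipation
B = np.array([[0.0], [1.0]])              # input map (force on the mass)

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])       # H(x) = k q^2 / 2 + p^2 / (2 m)

def step(x, u, dt=1e-3):
    """One explicit-Euler step (a structure-preserving scheme would differ)."""
    return x + dt * ((J - R) @ grad_H(x) + (B @ u).ravel())

x = np.array([1.0, 0.0])                  # released from q = 1 at rest
for _ in range(1000):
    x = step(x, np.array([0.0]))
print("state after 1 s:", x)
```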

Relevance: 30.00%

Abstract:

The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on a chip doubles every 24 months. This trend has been possible thanks to the downsizing of MOSFET dimensions (scaling); however, new issues and challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:

• devices incorporating materials with properties different from those of silicon, for the channel and the source/drain regions;

• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it permits Short-Channel-Effects to be kept under control without adopting high doping levels in the channel.

Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band-gap different from that of the channel material. This solution increases the injection velocity of the particles travelling from the source into the channel, and therefore the performance of the transistor in terms of delivered drain current. The first part of this thesis work addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code to simulate conduction band discontinuities are described, together with the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results of Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and provides a brief overview of the methods proposed to model these phenomena.
In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
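The abstract only names the Monte Carlo treatment of conduction band discontinuities. As a rough illustration of the standard semiclassical rule such codes apply at an abrupt heterojunction (transmit and lose the offset energy if the kinetic energy along transport exceeds the offset, otherwise reflect), consider the sketch below; the effective mass, offset and velocity distribution are assumed values, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Semiclassical rule for an abrupt conduction-band discontinuity in a Monte
# Carlo device simulator (illustrative sketch, not the thesis code).
Q = 1.602e-19        # elementary charge [C]
M = 0.2 * 9.109e-31  # effective mass [kg] (assumed)
DEC = 0.15 * Q       # conduction band offset [J] (assumed 150 meV)

def cross_heterojunction(vx):
    """Velocity after a particle with velocity vx > 0 meets the barrier."""
    e_kin = 0.5 * M * vx**2
    if e_kin > DEC:                              # transmitted, energy reduced by dEc
        return np.sqrt(2.0 * (e_kin - DEC) / M)
    return -vx                                   # reflected at the discontinuity

# Ensemble of particles impinging on the barrier from the source side
v_in = np.abs(rng.normal(4.0e5, 1.5e5, size=10000))
v_out = np.array([cross_heterojunction(v) for v in v_in])
print(f"transmitted fraction: {np.mean(v_out > 0):.2f}")
```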

Relevance: 30.00%

Abstract:

Leber's hereditary optic neuropathy (LHON) is a mitochondrial disease characterized by a rapid loss of central vision and optic atrophy, due to the selective degeneration of retinal ganglion cells. The age of onset is around 20 years; the degenerative process is fast, and the second eye usually becomes affected within weeks or months. Although this pathology is well known and has been well characterized, there are still open questions about its pathophysiology, such as the male prevalence, the incomplete penetrance and the tissue selectivity. This maternally inherited disease is caused by mutations in mitochondrially encoded genes of NADH ubiquinone oxidoreductase (complex I) of the respiratory chain. Ninety percent of LHON cases are caused by one of the three common mitochondrial DNA mutations (11778/ND4, 14484/ND6 and 3460/ND1); the remaining 10% are caused by rare pathogenic mutations, reported in the literature in one or a few families. Moreover, there is also a small subset of patients reported with new putative pathogenic nucleotide changes, which await confirmation. We here clarify some molecular aspects of LHON, mainly the incomplete penetrance and the role of rare mtDNA mutations or variants in LHON expression, and attempt a possible therapeutic approach using the cybrid cell model. We generated novel structural models for mitochondrially encoded complex I subunits, and a conservation analysis and pathogenicity prediction were carried out for reported LHON mutations. This in-silico approach allowed us to locate LHON pathogenic mutations in defined and conserved protein domains, and it can be a useful tool in the analysis of novel mtDNA variants with an unclear pathogenic/functional role. Four rare LHON pathogenic mutations have been identified, confirming that the ND1 and ND6 genes are mutational hot spots for LHON. All mutations had been previously described at least once, and we validated their pathogenic role, suggesting the need for their screening in LHON diagnostic protocols. Two novel mtDNA variants with a possible pathogenic role have also been identified in two independent branches of a large pedigree. Functional studies are necessary to define their contribution to LHON in this family. It has also been demonstrated that the combination of rare polymorphic mtDNA variants is relevant in determining the maternal recurrence of myoclonus in unrelated LHON pedigrees. Thus, we suggest that particular mtDNA backgrounds and/or the presence of specific rare mutations may increase the pathogenic potential of the primary LHON mutations, thereby giving rise to the extraocular clinical features characteristic of the LHON "plus" phenotype. We identified the first molecular parameter that clearly discriminates LHON-affected individuals from asymptomatic carriers, the mtDNA copy number. This provides a valuable mechanism for future investigations of the variable penetrance in LHON. However, the increased mtDNA content in LHON individuals was not correlated with the functional polymorphism G1444A of PGC-1 alpha, the master regulator of mitochondrial biogenesis, but may be due to the expression of genes involved in this signaling pathway, such as PGC-1 alpha/beta and Tfam. Future studies will be necessary to identify the biochemical effects of the rare pathogenic mutations and to validate the novel candidate mutations described here, in terms of cellular bioenergetic characterization of these variants. Moreover, we were not able to induce mitochondrial biogenesis in cybrid cell lines using bezafibrate. However, other cell models are available, such as fibroblasts harboring LHON mutations, and other approaches can be used to trigger mitochondrial biogenesis.

Relevance: 30.00%

Abstract:

The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems at the molecular level, using atomistic computer simulations, to reach a better understanding of material behavior from a microscopic point of view. In perspective, this should make it possible to clarify the relation between micro- and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, we developed four different lines of work in the thesis. The first concerns the orientational order and alignment mechanism of rigid solutes of small dimensions dissolved in a nematic phase formed by the 4-pentyl-4'-cyanobiphenyl (5CB) nematic liquid crystal. The orientational distributions of the solutes have been obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We have also verified the agreement between order parameters and dipolar coupling values measured in NMR experiments. The MD-determined effective orientational potentials have been compared with the predictions of the Maier-Saupe and Surface Tensor models. The second line concerns the development of a correct parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family: sexithiophene (T6). T6 forms two extensively studied crystalline polymorphs and possesses liquid crystalline phases that are still not well characterized. From simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal-smectic transition is associated with a considerable density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal. The third line explores a new approach for calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a hydrogenated silicon nanoparticle of spherical shape suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle in a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and viscosity anisotropy of the solvent 5CB. The last line deals with the study of the order induced on nematic molecules by a hydrogenated silicon surface. Gaining predictive power for the anchoring behavior of liquid crystals at surfaces would be a very desirable capability, as many device-related properties depend on the molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found planar anchoring of the first layers of 5CB, where surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB-vacuum interface, finding a homeotropic orientation of the nematic at this interface.
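The viscosity extraction step in the virtual falling-sphere experiment reduces to Stokes' law once the terminal velocity is known; the sketch below illustrates it with made-up numbers (the force, radius and velocity are assumptions, not values from the thesis simulations).

```python
import numpy as np

# Stokes-law estimate of viscosity from the "virtual falling sphere" idea:
# once a sphere dragged by a constant force F reaches terminal velocity v,
# eta = F / (6 * pi * R * v).
def stokes_viscosity(force, radius, terminal_velocity):
    return force / (6.0 * np.pi * radius * terminal_velocity)

F = 5.0e-12      # constant force applied to the nanoparticle [N] (assumed)
R = 2.0e-9       # nanoparticle radius [m] (assumed)
v = 8.5e-3       # measured terminal velocity [m/s] (assumed)
print(f"eta = {stokes_viscosity(F, R, v):.3e} Pa s")
```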

Relevance: 30.00%

Abstract:

In territories where food production is mostly scattered across many small/medium-size or even domestic farms, large amounts of heterogeneous residues are produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the production processes periodically carried out. Coupling high-efficiency micro-cogeneration units with easy-to-handle biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach is the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances involved in the process algorithm are also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on the analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each step was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step (see the sketch after this paragraph). Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost totally achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its pyrolysis heat) all have comparable weights in the process development, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e., internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream; as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
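Referring back to the kinetically limited char gasification step discussed above, a toy first-order Arrhenius estimate shows why temperature is the controlling parameter: the time to reach a given char conversion drops sharply as the gasification zone temperature rises. The rate parameters below are assumed for illustration and are not the thesis model's.

```python
import numpy as np

# Toy first-order Arrhenius estimate of char conversion time:
# k(T) = A * exp(-Ea / (R T)),  t(X) = -ln(1 - X) / k(T).
A = 1.0e4        # pre-exponential factor [1/s] (assumed)
EA = 150e3       # activation energy [J/mol] (assumed)
RGAS = 8.314     # universal gas constant [J/(mol K)]

def conversion_time(T, X=0.95):
    """Time to reach char conversion X at temperature T [K]."""
    k = A * np.exp(-EA / (RGAS * T))
    return -np.log(1.0 - X) / k

for T in (1000.0, 1100.0, 1200.0):   # candidate gasification zone temperatures
    print(f"T = {T:.0f} K -> t(95%) = {conversion_time(T):.0f} s")
```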
Unlike other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (in H2S form), alkali metals, nitrogen (in NH3 form) and acid gases (in HCl form). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ('paths'), following technical constraints mainly determined from the performance analysis of the cleaning units and from the presumable synergistic effects of contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in path design was tar removal from the gas stream, to prevent filter plugging and/or pipe clogging. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent significant air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case, a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines also proved unfeasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for ammonia abatement at high temperature) were not suitable for the large-scale units, because of the large reactor temperature increase caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, technically demonstrating the possibility of cleaning the gas to the required standard even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, including total pressure drop, total energy losses, number of units and secondary materials consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and Nahcolite adsorbers for hydrochloric acid; the very high efficiency of the latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy content of the respective gas streams, the latter computed on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.

Relevance: 30.00%

Abstract:

The possibility of combining different functionalities in a single device is of great relevance for the further development of organic electronics in integrated components and circuitry. Organic light-emitting transistors (OLETs) have been demonstrated to combine in a single device the electrical switching functionality of a field-effect transistor and the capability of light generation. A novel strategy in OLET realization is the tri-layer vertical hetero-junction. This configuration is similar to the bi-layer one except for the presence of a new middle layer between the two transport layers. This "recombination" layer presents a high emission quantum efficiency and an OLED-like (Organic Light-Emitting Diode) vertical bulk mobility. The key idea of the vertical tri-layer hetero-junction approach to realizing OLETs is that each layer has to be optimized according to its specific function (charge transport, energy transfer, radiative exciton recombination). Clearly, matching the overall device characteristics with the functional properties of the single materials composing the active region of the OFET is a great challenge that requires a deep investigation of the morphological, optical and electrical features of the system. As in the case of bi-layer based OLETs, the interfaces between the dielectric and the bottom transport layer and between the recombination and the top transport layer are clearly crucial for guaranteeing good ambipolar field-effect electrical characteristics. Moreover, the interfaces between the bottom transport and the recombination layer and between the recombination and the top transport layer should provide favourable conditions for charge percolation into the recombination layer and for exciton formation there. An organic light-emitting transistor based on the tri-layer approach, with an external quantum efficiency outperforming the OLED state of the art, has recently been demonstrated [Capelli et al., Nat. Mater. 9 (2010) 496-503], widening the scientific and technological interest in this field of research.
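For orientation, the external quantum efficiency cited in the closing sentence is conventionally factorized in the OLED/OLET literature as the product below; this textbook decomposition is added here only for context and is not spelled out in the abstract.

```latex
\mathrm{EQE} = \gamma \times \eta_{S/T} \times \Phi_{PL} \times \eta_{\mathrm{out}}
```

Here γ is the charge balance factor, η_S/T the fraction of excitons that spin statistics allow to decay radiatively, Φ_PL the photoluminescence quantum yield of the emitter (the "recombination" layer in the tri-layer stack), and η_out the optical outcoupling efficiency.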

Relevance: 30.00%

Abstract:

Nanotechnologies are rapidly expanding because of the opportunities that the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and quality control of nanotech products on the market. As a consequence of the increasing number of applications of nanomaterials, EU regulatory authorities are introducing an obligation for companies that use nanomaterials to acquire analytical platforms for assessing their size parameters. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber F4 (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS to the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the weaknesses and strengths of the method. Afterwards, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, Metal-Organic Frameworks, are discussed. Lastly, the applicability of HF5-ICP-MS to the rapid screening of samples of relevant nanorisk is shown: rather than a deep and comprehensive characterization, a quick and smart methodology is presented that, within a few steps, provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
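To give an idea of how the MALS side of such platforms turns angular scattering data into a size parameter, the sketch below performs a Zimm-type extrapolation on synthetic data: for small particles the reciprocal scattered intensity is linear in sin²(θ/2), with a slope proportional to the squared radius of gyration. The wavelength, detector angles and "sample" are assumed values, not instrument data from the thesis.

```python
import numpy as np

# Zimm-type size extraction from multiangle light scattering: for small
# particles P(theta) ~ 1 - (16 pi^2 n^2 / (3 lambda^2)) Rg^2 sin^2(theta/2),
# so Rg follows from the slope of 1/intensity vs. sin^2(theta/2).
n = 1.33            # refractive index of the carrier (water, assumed)
lam = 658e-9        # laser wavelength in vacuum [m] (typical MALS value)
theta = np.deg2rad(np.array([35.0, 50.0, 75.0, 90.0, 105.0, 130.0]))

rg_true = 60e-9     # synthetic "sample" radius of gyration [m]
coef = 16 * np.pi**2 * n**2 / (3 * lam**2)
x = np.sin(theta / 2.0)**2
inv_intensity = 1.0 + coef * rg_true**2 * x    # ideal noise-free Zimm signal

slope, intercept = np.polyfit(x, inv_intensity, 1)
rg_fit = np.sqrt(slope / (coef * intercept))
print(f"fitted Rg = {rg_fit * 1e9:.1f} nm")
```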

Relevance: 30.00%

Abstract:

In this thesis we have dealt with several problems concerning liquid crystal (LC) phases, either in the bulk or at their interfaces, by the use of atomistic molecular dynamics (MD) simulations. We first focused our attention on simulating and characterizing the bulk smectic phase of 4-n-octyl-4'-cyanobiphenyl (8CB), which allowed us to investigate the antiparallel molecular arrangement typical of SmAd smectic phases. A second topic of study was the characterization of the 8CB interface with vacuum by simulating freely suspended thin films, which allowed us to determine the influence of the interface on the orientational and positional order. Then we investigated the LC-water and LC-electrolyte solution interfaces. These interfaces have recently found application in the development of sensors for several compounds, including biological molecules, and here we tried to understand the re-orientation mechanism of LC molecules at the interface, which underlies the functioning of these sensors. The characterization of this peculiar interface incidentally led us to develop a polarizable force field for the pentyl-cyanobiphenyl mesogen, whose parametrization and validation are reported here in detail. We have shown that this force field is a significant improvement over its previous, static-charge, non-polarizable version in terms of density, orientational order parameter and translational diffusion.
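The orientational order parameter used to validate the force field is conventionally obtained from the Q tensor of the molecular long axes; the sketch below shows this standard computation on synthetic axes standing in for an MD trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nematic order parameter <P2> as routinely computed in LC simulations:
# build Q = <(3 u u^T - I) / 2> over the molecular long axes and take its
# largest eigenvalue (its eigenvector is the director).
def order_parameter(axes):
    u = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    Q = 1.5 * np.einsum('ni,nj->ij', u, u) / len(u) - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(Q)[-1]    # largest eigenvalue = <P2>

# Synthetic sample: unit vectors clustered around the z axis
axes = rng.normal(0.0, 1.0, (5000, 3)) + np.array([0.0, 0.0, 4.0])
print(f"<P2> = {order_parameter(axes):.2f}")
```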

Relevance: 30.00%

Abstract:

A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach, the proposed Active Control System represents a new way of looking at reconfigurable controllers for aerospace applications. The presence of a Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances or system parameters), the main feature of the proposed Active Control System, is a characteristic shared by three well-known control schemes: Active Fault Tolerant Control, Indirect Adaptive Control and Active Disturbance Rejection Control. The standard NonLinear Geometric Approach (NLGA) has been thoroughly investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take into account feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of Detection and Isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality, in the sense of least squares, and finite estimation time, in the sense of sliding modes. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimates coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, several aerospace applications are presented to show the applicability and the effectiveness of the proposed NLGA-based Active Control System.
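The "Least Squares - Sliding Mode" algorithm itself is not detailed in this abstract. Purely as a generic illustration of the sliding-mode side of such estimators (not the thesis algorithm), the sketch below runs a first-order sliding-mode observer on a scalar plant and recovers an unknown disturbance from the low-pass-filtered discontinuous injection; the plant, gains and disturbance are all invented for the example.

```python
import numpy as np

# First-order sliding-mode observer for dx/dt = -a*x + d(t): once sliding is
# established, the filtered discontinuous injection reconstructs d(t) in
# finite time. Gain L must dominate the disturbance magnitude (assumed here).
a, dt = 1.0, 1e-4
L = 5.0                       # observer gain (assumed, > max |d|)
tau = 0.01                    # low-pass time constant for the injection
x, x_hat, d_filt = 0.0, 0.5, 0.0

for k in range(int(1.75 / dt)):
    d = 2.0 * np.sin(2 * np.pi * k * dt)          # true unknown disturbance
    inj = L * np.sign(x - x_hat)                  # discontinuous injection
    x += dt * (-a * x + d)                        # plant
    x_hat += dt * (-a * x_hat + inj)              # sliding-mode observer
    d_filt += dt / tau * (inj - d_filt)           # filtered injection ~ d(t)

print(f"estimated disturbance at t = 1.75 s: {d_filt:+.2f} (true {d:+.2f})")
```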

Relevance: 30.00%

Abstract:

Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure are the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model is treated in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT), to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for successfully calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
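As a pocket-sized illustration of the Bayesian calibration workflow described above (not of Prelued or HYDRALL themselves), the sketch below calibrates a single parameter of a toy GPP model with a random-walk Metropolis sampler on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Bayesian calibration: model GPP as gpp = lue * par and calibrate the
# light-use efficiency `lue` against noisy synthetic "observations".
par = rng.uniform(5.0, 30.0, 100)            # driver (photosynthetic radiation)
lue_true, sigma = 1.8, 2.0
obs = lue_true * par + rng.normal(0.0, sigma, par.size)

def log_posterior(lue):
    if not (0.0 < lue < 10.0):               # uniform prior on (0, 10)
        return -np.inf
    resid = obs - lue * par
    return -0.5 * np.sum((resid / sigma)**2) # Gaussian log-likelihood (+ const)

chain, lue = [], 5.0                         # start far from the truth
for _ in range(20000):
    prop = lue + rng.normal(0.0, 0.05)       # random-walk proposal
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(lue):
        lue = prop                           # Metropolis accept
    chain.append(lue)

post = np.array(chain[5000:])                # discard burn-in
print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")
```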

Relevance: 30.00%

Abstract:

The study of polymorphism has an important role in several fields of materials science, because structural differences lead to different physico-chemical properties of a system. This PhD work was dedicated to the investigation of polymorphism in Indigo, Thioindigo and Quinacridone, as case studies among the organic pigments employed as semiconductors, and in Paracetamol, Phenytoin and Nabumetone, chosen among commonly used APIs. The aim of the research was to improve the understanding of the structures of bulk crystals and thin films, adopting Raman spectroscopy as the method of choice while resorting to other experimental techniques to complement the gathered information. Different crystalline polymorphs, in fact, may be conveniently distinguished by their Raman spectra in the region of the lattice phonons (10-150 cm-1), whose frequencies, probing the inter-molecular interactions, are very sensitive to even slight modifications of the molecular packing. In particular, we have used Confocal Raman Microscopy, a powerful yet simple technique for the investigation of crystal polymorphism in organic and inorganic materials, capable of monitoring physical modifications, chemical transformations and phase inhomogeneities in crystal domains at the micrometre scale. In this way, we have investigated bulk crystals and thin-film samples obtained with a variety of crystal growth and deposition techniques. Pure polymorphs and samples with phase mixing were found and fully characterized. Raman spectroscopy was complemented mainly by XRD measurements for bulk crystals and by AFM, GIXD and TEM for thin films. Structures and phonons of the investigated polymorphs were computed by DFT methods, and the comparison between theoretical and experimental results was used to assess the relative stability of the polymorphs and to assist the spectroscopic investigation. The Raman measurements were thus found to be able to clarify ambiguities in the phase assignments which the other methods were otherwise unable to resolve.
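As a toy illustration of the phase-identification step (assigning a measured lattice-phonon spectrum to a polymorph by comparison with references), the sketch below uses a normalized cross-correlation; the peak positions are invented and correspond to none of the compounds studied.

```python
import numpy as np

# Assign an unknown lattice-phonon Raman spectrum (10-150 cm^-1) to the
# reference polymorph with the highest normalized cross-correlation.
wn = np.linspace(10.0, 150.0, 700)           # wavenumber axis [cm^-1]

def spectrum(peaks, width=4.0):
    """Synthetic spectrum: sum of Gaussian bands at the given positions."""
    return sum(np.exp(-0.5 * ((wn - p) / width)**2) for p in peaks)

references = {                               # hypothetical polymorph library
    "alpha": spectrum([32.0, 58.0, 96.0, 121.0]),
    "beta":  spectrum([27.0, 64.0, 88.0, 132.0]),
}
unknown = (spectrum([32.5, 57.0, 96.5, 120.0])
           + 0.05 * np.random.default_rng(0).normal(size=wn.size))

def similarity(a, b):
    a, b = a - a.mean(), b - b.mean()
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(references, key=lambda k: similarity(unknown, references[k]))
print("assigned phase:", best)
```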

Relevance: 30.00%

Abstract:

Objectives: CO2-EVAR has been proposed for the treatment of AAA, especially in patients with CKD. Issues regarding standardization, such as the visualization of the lowest renal artery (LoRA) and the image quality of angiographies performed from the pigtail or the introducer-sheath, are still unresolved. The aim of the study was to analyze the different steps of CO2-EVAR in order to create an operative protocol standardizing the procedure. Methods: Patients undergoing CO2-EVAR were prospectively enrolled in 5 European centers (2018-2021). CO2-EVAR was performed using an automated injector. LoRA visualization and image quality (graded 1-4) were analyzed and compared at different procedure steps: preoperative CO2-angiography from pigtail/introducer-sheath (1st step), angiographies from the pigtail at 0%, 50% and 100% main body (MB) deployment (2nd step), contralateral hypogastric artery (CHA) visualization with CO2 injection from the femoral introducer-sheath (3rd step), and completion angiogram from pigtail/introducer-sheath (4th step). Intra-/postoperative adverse events were evaluated. Results: Sixty-five patients undergoing CO2-EVAR were enrolled, 55/65 (84.5%) male, median age 75 (11.5) years. Median ICM was 20 (54) cc; 19/65 (29.2%) procedures were performed with zero iodine. 1st step: median image quality was significantly higher with CO2 injected from the femoral introducer [pigtail 2(3) vs. 3(3) introducer, p=.008]. 2nd step: LoRA was more frequently detected at 50% (93% vs. 73.2%, p=.002) and 100% (94.1% vs. 78.4%, p=.01) of MB deployment compared with the first angiography from the pigtail; image quality was significantly higher at 50% [3(3) vs. 2(3), p<.001] and 100% [4(3) vs. 2(3), p=.001] of MB deployment. CHA was detected in 93% of cases (3rd step). Mean image quality was significantly higher when the final angiogram (4th step) was performed from the introducer (pigtail 2.6±1.1 vs. 3.1±0.9 introducer, p<.001). Rates of intra- and postoperative adverse events (pain, vomiting, diarrhea) were 7.7% and 12.5%. Conclusions: Preimplant CO2-angiography should be performed from the introducer-sheath. The steric bulk of the MB during its deployment should be exploited to improve image quality and LoRA visualization with CO2. The CHA can be satisfactorily visualized with CO2. The completion CO2-angiogram should be performed from the femoral introducer-sheath. This operative protocol allows CO2-EVAR to be performed with minimal ICM and a low rate of mild complications.

Relevance: 30.00%

Abstract:

The main topic of this thesis is confounding in linear regression models. Confounding arises when the relationship between an observed process (the covariate) and an outcome process (the response) is influenced by an unmeasured process (the confounder) associated with both. As a consequence, the estimators of the regression coefficients of the measured covariates may be severely biased and less efficient, and may lend themselves to misleading interpretations. Confounding is an issue when the primary target of the work is the estimation of the regression parameters. The central point of the dissertation is the evaluation of the sampling properties of the parameter estimators. This work aims to extend the spatial confounding framework to general structured settings and to understand the behaviour of confounding as a function of the structure parameters of the data generating process in several scenarios, focusing on the joint covariate-confounder structure. In line with the spatial statistics literature, our purpose is to quantify the sampling properties of the regression coefficient estimators and, in turn, to identify the most prominent quantities, depending on the generative mechanism, that impact confounding. Once the sampling properties of the estimator conditional on the covariate process are derived as ratios of dependent quadratic forms in Gaussian random variables, we provide an analytic expression for the marginal sampling properties of the estimator using Carlson's R function. Additionally, we propose a representative quantity for the magnitude of confounding as a proxy of the bias: its first-order Laplace approximation. To conclude, we work under several frameworks, considering spatial and temporal data with specific assumptions regarding the covariance and cross-covariance functions used to generate the processes involved. This study allows us to claim that the variability of the confounder-covariate interaction and that of the covariate play the most relevant role in determining the principal marker of the magnitude of confounding.
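A minimal simulation conveys the phenomenon studied here, under assumed covariance choices (an exponential kernel on a 1-D grid; none of the thesis's specific settings): when the covariate and an unmeasured confounder share structure, the OLS slope is biased, and the size of the bias is governed by the covariate-confounder interaction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy confounding in a linear model: y = beta*x + z + noise, where the
# unmeasured confounder z shares spatial structure with the covariate x,
# so OLS of y on x alone is biased away from beta = 1.
n, beta = 200, 1.0
s = np.linspace(0.0, 10.0, n)
K = np.exp(-np.abs(s[:, None] - s[None, :]))   # shared exponential covariance
Lc = np.linalg.cholesky(K + 1e-8 * np.eye(n))

bias = []
for _ in range(500):
    common = Lc @ rng.normal(size=n)           # structure shared by x and z
    x = common + 0.5 * rng.normal(size=n)
    z = common + 0.5 * rng.normal(size=n)      # unmeasured confounder
    y = beta * x + z + 0.3 * rng.normal(size=n)
    beta_hat = np.dot(x, y) / np.dot(x, x)     # OLS slope (no intercept)
    bias.append(beta_hat - beta)

print(f"mean bias of the OLS estimator: {np.mean(bias):+.2f}")
```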

Relevance: 30.00%

Abstract:

This research activity aims at providing reliable estimates of particular state variables and parameters concerning the dynamics and performance optimization of a MotoGP-class motorcycle, integrating the classical model-based approach with new methodologies involving artificial intelligence. The first topic of the research focuses on the estimation of the thermal behavior of the MotoGP carbon braking system. Numerical tools are developed to assess the instantaneous surface temperature distribution in the motorcycle's front brake discs. Within this application, other important brake parameters, such as the disc convection coefficient and the power distribution in the disc-pad contact region, are identified using Kalman filters. Subsequently, a physical model of the brake is built to estimate the instantaneous braking torque. However, the results obtained with this approach are strongly limited by the knowledge of the friction coefficient (μ) between the disc rotor and the pads. Since the value of μ is a highly nonlinear function of many variables (namely temperature, pressure and angular velocity of the disc), an analytical model for the friction coefficient appears impractical to establish. To overcome this challenge, an innovative hybrid solution is implemented, combining the benefits of artificial intelligence (AI) with the classical model-based approach. Indeed, the disc temperature estimated through the previously implemented thermal model is processed by a machine learning algorithm that outputs the actual value of the friction coefficient, thus improving the braking torque computation performed by the physical model of the brake. Finally, the last topic of this research activity regards the development of an AI algorithm to estimate the current sideslip angle of the motorcycle's front tire. While a single-track motorcycle kinematic model and IMU accelerometer signals theoretically enable sideslip calculation, the presence of accelerometer noise leads to significant drift over time. To address this issue, a long short-term memory (LSTM) network is implemented.
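As a schematic illustration of the Kalman-filtering step described above, the sketch below estimates the temperature of a lumped "disc" from a noisy sensor; the thermal model, its parameters and the braking pattern are invented for the example and are far simpler than the thesis's surface-temperature and convection-coefficient estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar Kalman filter on a lumped thermal model: the state is the disc
# temperature T, with C dT/dt = P_brake - h*A*(T - T_air), observed through
# a noisy surface-temperature sensor. All parameter values are assumptions.
C, hA, T_air, dt = 800.0, 6.0, 30.0, 0.01   # heat capacity, convection, air temp, step
q_var, r_var = 0.05, 25.0                   # process / measurement noise variances

def step(T, power):
    """One explicit-Euler step of the lumped thermal model."""
    return T + dt * (power - hA * (T - T_air)) / C

T_true, T_est, cov = 300.0, 250.0, 100.0    # start from a wrong estimate
a = 1.0 - dt * hA / C                       # linear state-transition coefficient
for k in range(2000):
    power = 4000.0 if (k // 500) % 2 == 0 else 0.0   # braking on/off pattern
    T_true = step(T_true, power) + rng.normal(0.0, np.sqrt(q_var))
    z = T_true + rng.normal(0.0, np.sqrt(r_var))     # noisy sensor reading

    T_pred = step(T_est, power)             # predict
    cov = a * cov * a + q_var
    gain = cov / (cov + r_var)              # update
    T_est = T_pred + gain * (z - T_pred)
    cov = (1.0 - gain) * cov

print(f"true {T_true:.1f} degC, estimated {T_est:.1f} degC")
```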