61 results for Ph D


Relevance: 60.00%

Abstract:

Visual correspondence is a key computer vision task that aims at identifying projections of the same 3D point in images taken either from different viewpoints or at different time instants. This task has been the subject of intense research activity in recent years in scenarios such as object recognition, motion detection, stereo vision, pattern matching and image registration. The approaches proposed in the literature typically aim at improving the state of the art by increasing the reliability, the accuracy or the computational efficiency of visual correspondence algorithms. The research work carried out during the Ph.D. course and presented in this dissertation deals with three specific visual correspondence problems: fast pattern matching, stereo correspondence and robust image matching. The dissertation presents original contributions to the theory of visual correspondence, as well as applications dealing with 3D reconstruction and multi-view video surveillance.
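As an illustration of the pattern-matching building block discussed above, the sketch below computes the zero-mean normalized cross-correlation (ZNCC) between a template and every image window by exhaustive search; this is a generic, unoptimized correspondence measure given for context, not the fast matching algorithm developed in the dissertation, and the function names are purely illustrative.

    import numpy as np

    def zncc(window, template):
        # Zero-mean normalized cross-correlation between two equally sized patches.
        w = window.astype(float) - window.mean()
        t = template.astype(float) - template.mean()
        denom = np.sqrt((w * w).sum() * (t * t).sum())
        return (w * t).sum() / denom if denom > 0 else 0.0

    def match_template(image, template):
        # Exhaustive search: return the top-left corner of the best-matching window.
        H, W = image.shape
        h, w = template.shape
        best_score, best_pos = -1.0, (0, 0)
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                score = zncc(image[r:r + h, c:c + w], template)
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos, best_score

Fast pattern-matching techniques typically aim at reducing the cost of this exhaustive evaluation while preserving the quality of the match.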

Relevance: 60.00%

Abstract:

The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the potential of the technology. After the choice of the technology to be used in a particular application, the circuit designer has few degrees of freedom when carrying out the design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a "compromise", a search for the best solution that trades off the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications: the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings of the adopted technology, in order to ensure adequate levels of reliability. In particular this work deals with one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that most heavily affects the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications describe different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from existing knowledge, the target of the research activity summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories on power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic load line and of its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this topic shows how difficult it is to obtain a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting the transfer of technology from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
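Purely as an illustration of load-line reasoning, the sketch below evaluates the classical class-A estimates of optimum load resistance and maximum linear output power from the supply voltage, knee voltage and maximum drain current; these textbook relations and the sample values are assumptions used for context, not the design methodology developed in the dissertation.

    def class_a_load_line(v_supply, v_knee, i_max):
        # Textbook class-A load line: voltage swings between v_knee and
        # 2*v_supply - v_knee while current swings between 0 and i_max.
        r_opt = 2.0 * (v_supply - v_knee) / i_max   # optimum load resistance [ohm]
        p_out = (v_supply - v_knee) * i_max / 4.0   # maximum linear output power [W]
        return r_opt, p_out

    # Hypothetical GaN-like bias point: 28 V supply, 3 V knee, 1.2 A maximum current.
    r_opt, p_out = class_a_load_line(28.0, 3.0, 1.2)
    print(f"R_opt = {r_opt:.1f} ohm, P_out = {p_out:.1f} W")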

Relevance: 60.00%

Abstract:

The aim of this Ph.D. project has been the design and characterization of new and more efficient luminescent tools, in particular sensors and labels, for analytical chemistry, medical diagnostics and imaging. Both the increasing temporal and spatial resolution demanded by those fields, and the sensitivity required to reach single-molecule resolution, can be provided by the wide range of techniques based on luminescence spectroscopy. As far as the development of new chemical sensors is concerned, as chemists we were interested in the preparation of new, efficient sensing materials. In this context, we continued to develop new molecular chemosensors for different classes of analytes by exploiting the supramolecular approach. In particular we studied a family of luminescent tetrapodal hosts based on aminopyridinium units with pyrenyl groups for the detection of anions. These systems exhibited noticeable changes in their photophysical properties depending on the nature of the anion; in particular, addition of chloride resulted in a conformational change, giving an initial increase in excimer emission. A good selectivity for dicarboxylic acids was also found. In the search for higher sensitivity, we also turned our attention to systems able to produce amplification effects. In this context we described the metal-ion binding properties of three photoactive poly(arylene ethynylene) co-polymers with different complexing units and we highlighted, for one of them, a ten-fold amplification of the response upon addition of Zn2+, Cu2+ and Hg2+ ions. In addition, we were able to demonstrate the formation of complexes with Yb3+ and Er3+ and an efficient sensitization of their typical metal-centered NIR emission upon excitation of the polymer structure, a feature of particular interest for possible applications in optical imaging and in optical amplification for telecommunication purposes. An amplification effect was also observed during this research in silica nanoparticles derivatized with a suitable zinc probe. In this case we were able to prove, for the first time, that nanoparticles can work as "off-on" chemosensors with signal amplification. Fluorescent silica nanoparticles can thus be seen as innovative multicomponent systems in which the organization of photophysically active units gives rise to fruitful collective effects. These valuable effects can be exploited for biological imaging, medical diagnostics and therapeutics, as evidenced also by some results reported in this thesis. In particular, the observed amplification effect has been obtained thanks to a suitable organization of molecular probe units on the surface of the nanoparticles. In the effort to gain deeper insight into the mechanisms which lead to the final amplification effects, we also attempted to find a correlation between the synthetic route and the final organization of the active molecules in the silica network, and thus with the mutual interactions which result in the emerging collective behavior responsible for the desired signal amplification. In this context, we first investigated the process of formation of silica nanoparticles doped with a pyrene derivative and we showed that the dyes are not uniformly dispersed inside the silica matrix; thus, core-shell structures can form spontaneously in a one-step synthesis. Moreover, as far as the design of new labels is concerned, we reported a new synthetic approach to obtain a class of robust, biocompatible silica core-shell nanoparticles with long-term stability. Taking advantage of this new approach, we also reported the synthesis and photophysical properties of core-shell NIR-absorbing and NIR-emitting materials that proved to be very valuable for in-vivo imaging. In general, the dye-doped silica nanoparticles prepared in the framework of this project combine unique properties, such as a very high brightness, due to the possibility of including many fluorophores per nanoparticle, high stability, because of the shielding effect of the silica matrix, and, to date, no toxicity, with a simple and low-cost preparation. All these features make these nanostructures suitable for reaching the low detection limits that are nowadays required for effective clinical and environmental applications, thereby fulfilling the initial expectations of this research project.

Relevance: 60.00%

Abstract:

In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that many real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they generalize MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. problems with non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different approaches that one can study to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them; these techniques play a fundamental role in the solution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, Part 2 presents four different methods aimed at solving different classes of MINLP problems. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these methods has advantages and disadvantages; thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of the tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software); we present the main characteristics of the solvers for each special case of MINLP.
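For reference, a generic non-convex MINLP can be stated in the standard textbook form below; this formulation is added only to fix notation and is consistent with, but not quoted from, the thesis.

    \begin{align}
    \min_{x,\,y}\quad & f(x, y) \\
    \text{s.t.}\quad & g_i(x, y) \le 0, \qquad i = 1, \dots, m, \\
    & x \in \mathbb{R}^n, \quad y \in \mathbb{Z}^p,
    \end{align}

where f and the g_i may be non-linear and are not assumed to be convex, so the continuous relaxation obtained by dropping the integrality requirement on y is itself a non-convex NLP; MILPs and convex MINLPs are recovered as the special cases in which the non-linearities disappear or convexity holds.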

Relevance: 60.00%

Abstract:

Autism is a neurodevelopmental disorder characterized by impaired verbal communication, limited reciprocal social interaction, restricted interests and repetitive behaviours. Twin and family studies indicate a large genetic contribution to ASDs (Autism Spectrum Disorders). During my Ph.D. I have been involved in several projects in which I used different genetic approaches in order to identify susceptibility genes for autism on chromosomes 2, 7 and X. 1) High-density SNP association and CNV analysis of two autism susceptibility loci. The International Molecular Genetic Study of Autism Consortium (IMGSAC) previously identified linkage loci on chromosomes 7 and 2, termed AUTS1 and AUTS5, respectively. In this study, we evaluated the patterns of linkage disequilibrium (LD) and the distribution of haplotype blocks, utilising data from the HapMap project, across the two strongest linkage peaks on chromosomes 2 and 7. More than 3000 SNPs were selected in each locus in all known genes, as well as SNPs in non-genic highly conserved sequences. All markers were genotyped to perform a high-density association analysis and to explore copy number variation within these regions. The study sample consisted of 127 and 126 multiplex families, showing linkage to the AUTS1 and AUTS5 regions, respectively, and 188 gender-matched controls. Association and CNV analysis implicated several new genes, including IMMP2L and DOCK4 on chromosome 7 and ZNF533 and NOSTRIN on chromosome 2. My contribution to this project focused in particular on the characterization of the best candidate gene in each locus. In the AUTS5 locus I carried out a transcript study of ZNF533 in different human tissues to verify which isoforms and start exons were expressed; high transcript variability and a new, previously undescribed exon were identified in this analysis. Furthermore, I selected 31 probands carrying the risk haplotype and performed a mutation screen of all known exons in order to identify novel coding variants associated with autism. In the AUTS1 locus a duplication was detected in one multiplex family, transmitted from the father to an affected son. This duplication interrupts two genes, IMMP2L and DOCK4, and warranted further analysis. I therefore screened the cohort of the IMGSAC collection (285 multiplex families), using a QMPSF assay (Quantitative Multiplex PCR of Short fluorescent Fragments), to analyse whether CNVs in this genic region segregate with the autism phenotype and to compare their frequency with a sample of 475 UK controls. Evidence for a role of DOCK4 in autism susceptibility was supported by independent replication of association at rs2217262 and by the finding of a deletion segregating in a sib-pair family. 2) Analysis of X chromosome inactivation. Skewed X chromosome inactivation (XCI) is observed in females carrying gene mutations involved in several X-linked syndromes. We aimed to estimate the role of X-linked genes in ASD susceptibility by ascertaining the XCI pattern in a sample of 543 informative mothers of children with ASD and in a sample of 164 affected girls. The study sample included families from different European consortia. I analysed the XCI pattern in a sample of Italian mothers from singleton families with ASD and in control groups (144 adult females and 40 young females). We observed no significant excess of skewed XCI in families with ASD. Interestingly, two mothers and one girl carrying known mutations in X-linked genes (NLGN3, ATRX, MECP2) showed highly skewed XCI, suggesting that ascertainment of XCI could reveal families with X-linked mutations. Linkage analysis was carried out in the subgroup of multiplex families with skewed XCI (≥80:20) and a modestly increased allele sharing was obtained in the Xq27-Xq28 region, with a peak Z score of 1.75 close to rs719489. In this region FMR1 and MECP2 have been associated with autism in some cases and therefore represent candidates for the disorder. I performed a mutation screen of MECP2 in 33 unrelated probands from IMGSAC and Italian families showing XCI skewness. Recently, Xq28 duplications including MECP2 have been identified in families with mental retardation, with asymptomatic carrier females showing extreme (>85%) skewing of XCI. For this reason I used the sample of probands from X-skewed families to perform CNV analysis by real-time quantitative PCR. No duplications were found in our sample. I also confirmed all data using the MLPA assay (Multiplex Ligation-dependent Probe Amplification) as an alternative method. 3) ASMT as a functional candidate gene for autism. Recently, a possible involvement of the acetylserotonin O-methyltransferase (ASMT) gene in susceptibility to ASDs has been reported: mutation screening of the ASMT gene in 250 individuals from the PARIS collection revealed several rare variants with a likely functional role; moreover, significant association was reported for two SNPs (rs4446909 and rs5989681) located in one of the two alternative promoters of the gene. To further investigate these findings, I carried out a replication study using a sample of 263 affected individuals from the IMGSAC collection and 390 control individuals. Several rare mutations were identified, including the splice-site mutation IVS5+2T>C and the L326F substitution previously reported by Melke et al. (2007), but in our study the same rare variants were also found in control individuals. Interestingly, a new R319X stop mutation was found in a single autism proband of Italian origin and is absent from the entire control sample. Furthermore, no replication was found in our case-control study typing the SNPs in the ASMT promoter B.
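As an illustration of how copy-number estimates are commonly derived from real-time quantitative PCR data, the sketch below applies the standard comparative Ct (ΔΔCt) calculation; the formula, parameter names and Ct values are generic textbook assumptions for illustration only, not data or methods taken from the study.

    def relative_copy_number(ct_target, ct_reference, ct_target_cal, ct_reference_cal,
                             calibrator_copies=2):
        # Comparative Ct method: normalize the target locus to a reference locus,
        # then to a calibrator sample assumed to carry two copies.
        delta_ct_sample = ct_target - ct_reference
        delta_ct_calibrator = ct_target_cal - ct_reference_cal
        ddct = delta_ct_sample - delta_ct_calibrator
        return calibrator_copies * 2 ** (-ddct)

    # Hypothetical Ct values: a target amplifying one cycle earlier than expected
    # relative to the calibrator suggests a duplication (about 4 copies here).
    print(relative_copy_number(24.1, 22.0, 25.1, 22.0))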

Relevance: 60.00%

Abstract:

The Ph.D. dissertation analyses the reasons why political actors (governments, legislatures and political parties) consciously decide to give away a source of power by increasing the political significance of the courts. It focuses on a single case of particular significance: the passage of the Constitutional Reform Act 2005 in the United Kingdom. This Act has deeply changed the governance and the organization of the English judicial system, and has provided a much clearer separation of powers and a stronger independence of the judiciary from the executive and the legislature. What is more, this strengthening of judicial independence was decided in a period in which the political role of the English judges was evidently increasing. I argue that the reform can be interpreted as a «paradigm shift» (Hall 1993) that has changed the way in which the judicial power is considered. The most widespread conceptions in the sub-system of English judicial policy have shifted, and a new paradigm has become dominant. The new paradigm includes: (i) a stronger separation of powers, (ii) a collective (as well as individual) conception of the independence of the judiciary, (iii) a reduction of the political accountability of the judges, (iv) the formalization of the guarantees of judicial independence, (v) a principle-driven (instead of pragmatic) approach to the reforms, and (vi) the transformation of a non-codified constitution into a codified one. Judicialization through political decisions represents an important, but not fully explored, field of research. The literature, in particular, has focused on factors unable to explain the English case: the competitiveness of the party system (Ramseyer 1994), political uncertainty at the time of constitutional design (Ginsburg 2003), cultural divisions within the polity (Hirschl 2004), and federal institutions and the division of powers (Shapiro 2002). All these contributions link the decision to enhance the political relevance of the judges to some kind of diffusion of political power. In contemporary England, characterized by a relatively high concentration of power in the government, the reasons for such a reform must be located elsewhere. I argue that the Constitutional Reform Act 2005 can be interpreted as the result of three different kinds of reasons: (i) the social and demographic transformations of the English judiciary, which have made most of the previous mechanisms of governance inefficient, (ii) the role played by the judges in the policy process, and (iii) the cognitive and normative influences originating from the European context, as a consequence of the United Kingdom's membership of the European Union and the Council of Europe. My thesis is that only a full analysis of all three aspects can explain the decision to reform the judicial system and the content of the Constitutional Reform Act 2005. Only the cultural influences coming from the European legal complex, above all, can explain the paradigm shift previously described.

Relevance: 60.00%

Abstract:

The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies; in this case, however, no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed: wheel and rail have to be considered as elastic bodies governed by Navier's equations and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but requires very large computational costs and memory consumption. Due to this computational load and memory consumption, and referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field.
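As a concrete illustration of the semi-elastic normal contact law mentioned above, the sketch below evaluates the classical Hertzian point-contact relation between normal force and indentation, F = (4/3)·E*·√R·δ^(3/2); the stiffness expression and the numerical values are generic textbook assumptions, not parameters of the models developed in this thesis.

    import numpy as np

    def hertz_normal_force(delta, E=2.1e11, nu=0.3, R=0.3):
        # Hertzian point contact between two identical elastic bodies
        # (sphere-on-plane approximation): F = (4/3) * E_star * sqrt(R) * delta^(3/2),
        # with effective modulus 1/E_star = 2 * (1 - nu^2) / E.
        E_star = E / (2.0 * (1.0 - nu ** 2))
        return (4.0 / 3.0) * E_star * np.sqrt(R) * np.asarray(delta) ** 1.5

    # Normal force for indentations between 1 and 100 micrometres (illustrative values).
    print(hertz_normal_force(np.linspace(1e-6, 1e-4, 5)))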
However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity are described. Concerning the global models, two new models belonging to the semi-elastic approach are presented; the models satisfy the following specifications:
1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail;
2) the models have to handle generic railway tracks and generic wheel and rail profiles;
3) the models have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular the models have to evaluate the number and the position of the contact points and, for each point, the contact forces and torques;
4) the models have to be implementable directly online within the multibody models, without look-up tables;
5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with real-time (RT) and hardware-in-the-loop (HIL) applications;
6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail).
The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications is presented:
1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail;
2) the model has to handle generic railway tracks and generic wheel and rail profiles;
3) the model has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements);
4) the model has to be implementable directly online within the multibody models;
5) the model has to assure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models);
6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail).
In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important to obtain a good integration between multibody and differential modeling. The contact models have then been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while, as regards the contact models, C S-functions have been used; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software package for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity. We would also like to thank INTEC GmbH, the company that develops the Simpack Rail software, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.

Relevance: 60.00%

Abstract:

The ever-increasing spread of automation in industry places the electrical engineer in a central role as a promoter of technological development in a sector, the use of electrical energy, which is the basis of all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with increasingly complex structures forces the electrical engineer to face new challenges, whose solution critically depends on the implementation of digital control techniques while keeping the final product inexpensive and efficient. The successful application of solutions based on non-conventional static converters is attracting increasing interest in science and industry due to the promising opportunities; at the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course several themes were developed that, while attracting the recent and growing interest of the scientific community, still leave ample room for further research activity and for industrial applications. The first area of research concerns the control of three-phase induction motors with high dynamic performance and sensorless control in the high-speed range. Operating an induction machine without position or speed sensors is of interest to industry because of the increased reliability and robustness of this solution, combined with lower production and purchase costs compared to the other technologies available on the market. In this dissertation, control techniques are proposed that are able to exploit the total dc-link voltage and, at the same time, the maximum torque capability over the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators. Furthermore, in order to validate its effectiveness, the proposed solution is assessed in terms of performance and complexity and compared with two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area is the development of technology for vehicular applications. In this field, good dynamic performance and low power consumption are among the most important goals of an effective algorithm. In this direction, a control scheme for induction motors is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. The main features of the proposed control scheme are the capability to exploit the maximum torque in the whole speed range, a weak dependence on the motor parameters, a good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a higher number of phases, thus making it possible to build electronic converters which would otherwise be very difficult to achieve due to the limits of present power electronics.
Multiphase drives also have an intrinsic reliability, given by the possibility that a fault on one phase, caused by the failure of a converter component, can be managed without loss of machine efficiency or the appearance of a pulsating torque. Controlling the spatial harmonics of the air-gap magnetic field with order higher than one makes it possible to reduce torque ripple and to obtain high-torque-density motors and multi-motor applications. In one of the following chapters a control scheme is presented that increases the motor torque by adding a third-harmonic component to the air-gap magnetic field. Above the base speed the control system reduces the motor flux in such a way as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. The multi-motor applications considered consist of a well-defined number of multiphase machines with series-connected stator windings; with a suitable permutation of the phases, these machines can be independently controlled by a single multi-phase inverter. This solution is presented in the dissertation together with an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter. Finally, the modulation strategies for a multi-phase inverter are illustrated. The problem of space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up-table solution are proposed. The inverter output voltage capability is investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions. All these aspects are considered in the following chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on the three-phase inverter. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature; furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated. Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications and Chapter 10 analyzes different fault-tolerant control strategies. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
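As a simple illustration of the flux-weakening behaviour described above (reducing the flux above the base speed to preserve the torque capability), the sketch below computes a generic flux reference and torque limit as a function of speed; the inverse-proportional law and the numerical values are common textbook assumptions, not the control law actually developed in this dissertation.

    def flux_and_torque_limits(speed, base_speed, rated_flux, rated_torque):
        # Below base speed: rated flux and constant torque limit.
        # Above base speed: flux reduced as base_speed/speed (constant-power region),
        # so the available torque limit decreases roughly in the same proportion.
        if speed <= base_speed:
            return rated_flux, rated_torque
        k = base_speed / speed
        return rated_flux * k, rated_torque * k

    # Illustrative values: 1500 rpm base speed, 0.9 Wb rated flux, 50 Nm rated torque.
    for n in (750, 1500, 3000, 4500):
        print(n, flux_and_torque_limits(n, 1500.0, 0.9, 50.0))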

Relevance: 60.00%

Abstract:

Tissue engineering is a discipline that aims at regenerating damaged biological tissues by using a cell construct engineered in vitro, made of cells grown in a porous 3D scaffold. The role of the scaffold is to guide cell growth and differentiation by acting as a bioresorbable temporary substrate that will eventually be replaced by new tissue produced by the cells. As a matter of fact, obtaining a successful engineered tissue requires a multidisciplinary approach that integrates the basic principles of biology, engineering and materials science. The present Ph.D. thesis aimed at developing and characterizing innovative polymeric bioresorbable scaffolds made of hydrolysable polyesters. The potential of both commercial polyesters (i.e. poly(ε-caprolactone), polylactide and some lactide copolymers) and non-commercial polyesters (i.e. poly(ω-pentadecalactone) and some of its copolymers) was explored and discussed. Two techniques were employed to fabricate the scaffolds: supercritical carbon dioxide (scCO2) foaming and electrospinning (ES). The former is a powerful technology that enables the production of 3D microporous foams while avoiding the use of solvents that can be toxic to mammalian cells. The scCO2 process, which is commonly applied to amorphous polymers, was successfully modified to foam a highly crystalline poly(ω-pentadecalactone-co-ε-caprolactone) copolymer, and the effect of the process parameters on scaffold morphology and thermo-mechanical properties was investigated. In the course of the present research activity, sub-micrometric fibrous non-woven meshes were also produced using ES technology. Electrospun materials are considered highly promising scaffolds because they resemble the 3D organization of the native extracellular matrix. A careful control of the process parameters allowed the fabrication of defect-free fibres with diameters ranging from hundreds of nanometres to several microns, having either a smooth or a porous surface. Moreover, the versatility of ES technology made it possible to produce electrospun scaffolds from different polyesters, as well as "composite" non-woven meshes obtained by concomitantly electrospinning fibres differing in both morphology and polymer material. The 3D architecture of the electrospun scaffolds fabricated in this research was controlled in terms of mutual fibre orientation by suitably modifying the instrumental apparatus. This aspect is particularly interesting, since the micro/nano-architecture of the scaffold is known to affect cell behaviour. Since last-generation scaffolds are expected to induce a specific cell response, the present research activity also explored the possibility of producing electrospun scaffolds that are bioactive towards cells. Bio-functionalized substrates were obtained by loading polymer fibres with growth factors (i.e. biomolecules that elicit a specific cell behaviour) and it was demonstrated that, despite the high voltages applied during electrospinning, the growth factor retains its biological activity once released from the fibres upon contact with the cell culture medium. A second functionalization approach, aiming ultimately at controlling cell adhesion on electrospun scaffolds, consisted in covering the fibre surface with highly hydrophilic polymer brushes of glycerol monomethacrylate synthesized by Atom Transfer Radical Polymerization. Future investigations will exploit the hydroxyl groups of the polymer brushes to functionalize the fibre surface with the desired biomolecules. Electrospun scaffolds were employed in cell culture experiments, performed in collaboration with biochemical laboratories, aimed at evaluating the biocompatibility of the new electrospun polymers and at investigating the effect of fibre orientation on cell behaviour. Moreover, at a preliminary stage, electrospun scaffolds were also cultured with mammalian tumour cells to develop in vitro tumour models aimed at better understanding the role of the natural ECM in tumour malignancy in vivo.

Relevance: 60.00%

Abstract:

The following Ph.D. work was mainly focused on catalysis as a key technology to achieve the objectives of sustainable (green) chemistry. After introducing the concepts of sustainable (green) chemistry and an assessment of new sustainable chemical technologies, the relationship between catalysis and sustainable (green) chemistry is briefly discussed and illustrated via an analysis of some selected and relevant examples. Afterwards, as a continuation of the ongoing interest of Dr. Marco Bandini's group in organometallic and organocatalytic processes, I addressed my efforts to the design and development of novel catalytic green methodologies for the synthesis of enantiomerically enriched molecules. In the first two projects the attention was focused on the employment of solid supports to carry out reactions that still remain a prerogative of homogeneous catalysis. Firstly, particular emphasis was placed on the discovery of catalytic enantioselective variants of the nitroaldol condensation (commonly termed the Henry reaction), using a complex consisting of polyethylene-supported diamino thiophene (DATx) ligands and copper as the active species. In the second project, a new class of surfaces electrochemically modified with DATx palladium complexes is presented. The DATx-graphite system proved to be efficient in promoting the Suzuki reaction. Moreover, in collaboration with Prof. Wolf at the University of British Columbia (Vancouver), cyclic voltammetry studies are reported. This study disclosed new opportunities for carbon–carbon bond-forming processes using heterogeneous, electrodeposited catalyst films. Straightforward metal-free catalysis then allowed an exploration of the world of organocatalysis. In fact, three different novel methodologies, using cinchona, guanidine and phosphine derivatives, were envisioned in the three following projects. An interesting variant of the nitroaldol condensation with simple trifluoromethyl ketones, together with their application in a non-conventional activation of indolyl cores by Friedel–Crafts functionalization, led to two novel synthetic protocols. These approaches allowed the preparation of synthetically useful trifluoromethyl derivatives bearing quaternary stereocenters. Lastly, in the sixth project the first γ-alkylation of allenoates with conjugated carbonyl compounds was envisioned. In the last part of this Ph.D. thesis, based on an extraordinary collaboration with Prof. Balzani and Prof. Gigli, I was involved in the synthesis and characterization of a new type of heteroleptic cyclometalated Ir(III) complexes bearing bis-oxazolines (BOXs) as ancillary ligands. The new heteroleptic complexes were fully characterized and, in order to examine the electroluminescent properties of FIrBOX(CH2), an Organic Light Emitting Device was realized.

Relevance: 60.00%

Abstract:

The activity of the Ph.D. student Juri Luca De Coi involved the research field of policy languages and can be divided into three parts. The first part of the Ph.D. work investigated the state of the art in policy languages and ended up with: (i) identifying the requirements that up-to-date policy languages have to fulfill; (ii) defining a policy language able to fulfill such requirements (namely, the Protune policy language); and (iii) implementing an infrastructure able to enforce policies expressed in the Protune policy language. The second part of the Ph.D. work focused on simplifying the activity of defining policies and ended up with: (i) identifying a subset of the controlled natural language ACE to express Protune policies; (ii) implementing a mapping between ACE policies and Protune policies; and (iii) adapting the ACE Editor to guide users step by step when defining ACE policies. The third part of the Ph.D. work tested the feasibility of the chosen approach by applying it to meaningful real-world problems, among which: (i) the development of a security layer on top of RDF stores; and (ii) efficient policy-aware access to metadata stores. The research activity was performed in close collaboration with the Leibniz Universität Hannover and further European partners within the projects REWERSE, TENCompetence and OKKAM.

Relevance: 60.00%

Abstract:

This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type we first addressed the problem of capacity in the presence of several nuisances. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then we introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we proposed a method to optimize constellation design in the presence of strong distortion, such as the non-linear distortion introduced by the satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate associated with a given constellation, achieving improved accuracy with respect to the traditional Union Bound at no additional complexity. Finally, we designed a low-complexity SNR estimator which saves one half of the multiplications with respect to the ML estimator while achieving similar estimation accuracy.
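As a point of reference for the PAPR discussion, the sketch below generates a plain OFDM symbol from random QPSK data via an IFFT and computes its peak-to-average power ratio; the mapping and parameters are standard textbook choices used only for illustration, not the novel subcarrier data mapping proposed in the thesis.

    import numpy as np

    def papr_db(x):
        # Peak-to-average power ratio of a complex baseband signal, in dB.
        p = np.abs(x) ** 2
        return 10.0 * np.log10(p.max() / p.mean())

    rng = np.random.default_rng(0)
    n_subcarriers = 256
    # Random QPSK symbols on the subcarriers, converted to the time domain by an IFFT.
    qpsk = (rng.choice([-1.0, 1.0], n_subcarriers)
            + 1j * rng.choice([-1.0, 1.0], n_subcarriers)) / np.sqrt(2)
    ofdm_symbol = np.fft.ifft(qpsk)
    print(f"PAPR = {papr_db(ofdm_symbol):.1f} dB")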

Relevance: 60.00%

Abstract:

Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more efficient agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients are still considered "non-responders" to therapy and draw no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent use of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to inhibition of the enzyme is frequent; other metabolic pathways, such as glucuronidation, can also be affected. The Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of the plasma levels of parent drugs and metabolites, in order to monitor them over time and to compare these findings with clinical data. This allows chemical-clinical correlations to be established (such as those between administered dose and therapeutic and side effects), which are essential to obtain the maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. program, several analytical methods based on HPLC were developed, validated and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants and anxiolytics-hypnotics. The biological matrices processed were blood, plasma, serum, saliva, urine, hair and rat brain. Among antipsychotics, both atypical and classical agents were considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide) and quetiapine. While the need for accurate TDM of schizophrenic patients is increasingly recognized by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgment that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity has also focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-CPP), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytics-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs, especially in cases of polypharmacy. The results obtained during these three years of the Ph.D. program are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.

Relevance: 60.00%

Abstract:

The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and a reduction in the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for the on-field analysis of clinical, environmental or food samples, as well as for the drug discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices able to obtain more data from each sample, genetically engineered whole-cell biosensors, used as biospecific recognition elements, were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and to consequently activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification. The first chapter of this dissertation discusses the obtainment of improved bioluminescent proteins emitting at different wavelengths, in terms of increased thermostability, enhanced emission decay kinetics and spectral resolution. The second chapter is mainly focused on the use of these proteins in the development of whole-cell based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to non-specific effects of the sample matrix, an additional bioluminescent reporter has been introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of using a combination of two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, has been demonstrated by developing a dual-reporter yeast-based biosensor for the measurement of androgenic activity and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, using two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system.
In order to develop a portable system that can be used outside the laboratory environment, even by non-skilled personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors. This multiplexed biosensing platform was successfully used to measure the overall toxicity of a given sample, as well as to obtain dose-response curves for heavy metals and to detect hormonal activity in clinical samples (PCT/IB2010/050625: "Portable device based on immobilized cells for the detection of analytes." Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation some future development steps are also discussed, with the goal of a point-of-care (POCT) device that combines portability, minimal sample pre-treatment and highly sensitive multiplexed assays in a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to structure the separation, have been investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, which is amenable to miniaturization, the GrFFF technique appears particularly suited for implementation in POCT devices and may be used as an integrated pre-analytical module applied directly to drive the target analytes of raw samples to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, providing an increase in the overall analytical output.
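As an illustration of how dose-response curves such as those mentioned above are commonly analysed, the sketch below fits a four-parameter logistic model to simulated luminescence readings with scipy; the model choice, parameter names and data are generic assumptions added for context, not the calibration actually performed in this project.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, bottom, top, ec50, hill):
        # Four-parameter logistic (4PL) dose-response model.
        return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

    # Simulated luminescence versus analyte concentration (arbitrary units).
    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
    signal = np.array([102.0, 110.0, 160.0, 340.0, 700.0, 930.0, 990.0])

    params, _ = curve_fit(four_pl, conc, signal, p0=[100.0, 1000.0, 0.5, 1.0])
    print("bottom, top, EC50, Hill slope:", params)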

Relevance: 60.00%

Abstract:

Synthetic biology is a young field of applied research that aims to design and build artificial biological devices useful for human applications. Chapter 1 of the thesis presents how synthetic biology emerged in past years and how the development of the Registry of Standard Biological Parts aimed to introduce a practical starting solution for applying the basics of engineering to molecular biology. The same chapter recalls how biological parts can make up a genetic program, describes the molecular cloning technique useful for this purpose, and gives an overview of the mathematical modeling adopted to describe gene circuit behavior. Although the design of gene circuits has become feasible, the increasing complexity of gene networks calls for a rational approach to gene circuit design. A bottom-up approach has been proposed, suggesting that the behavior of a complicated system can be predicted from the features of its parts. The option to use modular parts in large-scale networks will be facilitated by a detailed and shared characterization of their functional properties. Such a prediction requires well-characterized mathematical models of the parts and of how they behave when assembled together. In Chapter 2, the feasibility of the bottom-up approach in the design of a synthetic program in Escherichia coli bacterial cells is described. The rational design of gene networks is, however, far from being established. The synthetic biology approach can use mathematical formalism to identify biological information not accessible through experimental measurements. In this context, Chapter 3 describes the design of a synthetic sensor for identifying molecules of interest inside eukaryotic cells. The Registry of Standard Biological Parts collects standard and modular biological parts. To spread the use of BioBricks, the iGEM competition was started. The ICM Laboratory, where Francesca Ceroni completed her Ph.D., participated with teams of students, and Chapter 4 summarizes the projects developed.
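As a minimal example of the kind of mathematical modeling used to describe gene circuit behaviour, the sketch below integrates a standard Hill-function model of a repressed gene with scipy; the equation form and the parameter values are generic textbook assumptions rather than models taken from the thesis.

    from scipy.integrate import solve_ivp

    def repressed_gene(t, y, alpha, K, n, delta, repressor):
        # dP/dt = alpha * K^n / (K^n + R^n) - delta * P
        # (Hill-type repression of protein production plus first-order degradation).
        (p,) = y
        production = alpha * K ** n / (K ** n + repressor ** n)
        return [production - delta * p]

    # Illustrative parameters: maximum rate 10 a.u./min, K = 40 a.u., Hill coefficient 2,
    # degradation 0.1 1/min, constant repressor level 20 a.u., simulated for 120 min.
    sol = solve_ivp(repressed_gene, (0.0, 120.0), [0.0],
                    args=(10.0, 40.0, 2, 0.1, 20.0))
    print(sol.y[0][-1])  # protein level approaching its steady state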