23 results for New Space Vector Modulation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The ever-increasing spread of automation in industry puts the electrical engineer in a central role as a promoter of technological development in a sector, the use of electricity, which underpins all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution hinges on the implementation of digital control techniques meeting the requirements of low cost and efficiency of the final product. The successful application of solutions based on non-conventional static converters is attracting increasing interest in science and industry owing to the promising opportunities it offers. At the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course several themes have been developed that, while attracting the recent and growing interest of the scientific community, leave ample room for further research and for industrial applications. The first area of research concerns the control of three-phase induction motors with high dynamic performance and sensorless control in the high-speed range. Operating the induction machine without position or speed sensors is of interest to industry because of the increased reliability and robustness of this solution, combined with a lower cost of production and purchase compared with the other technologies available on the market. In this dissertation, control techniques are proposed that are able to exploit the total dc-link voltage and, at the same time, the maximum torque capability over the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators. Furthermore, in order to validate its effectiveness, the proposed solution is assessed in terms of performance and complexity and compared with two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area concerns the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals of an effective algorithm. In this direction, a control scheme for induction motors is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. The main features of the proposed control scheme are the capability to exploit the maximum torque in the whole speed range, a weak dependence on the motor parameters, a good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a higher number of phases, thus making possible the construction of electronic converters that would otherwise be very difficult to realize owing to the limits of present power electronics.
Multiphase drives have an intrinsic reliability given by the possibility that a fault of a phase, caused by the failure of a component of the converter, can be handled without impairing the operation of the machine or introducing a pulsating torque. The control of the spatial harmonics of the air-gap magnetic field with order higher than one makes it possible to reduce torque noise and to obtain high-torque-density motors and multi-motor applications. In one of the next chapters a control scheme able to increase the motor torque by adding a third harmonic component to the air-gap magnetic field will be presented. Above the base speed the control system reduces the motor flux in such a way as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. The multi-motor applications considered consist of a well-defined number of multiphase machines whose stator windings are connected in series; with a suitable permutation of the phases, these machines can be independently controlled with a single multi-phase inverter. In this dissertation this solution will be discussed, and an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter will be presented. Finally, the modulation strategies for a multi-phase inverter will be illustrated. The problem of the space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up table solution will be proposed. The inverter output voltage capability will be investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions. All these aspects are considered in the next chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on three-phase inverters. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature. Furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated. Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of the power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications, and in Chapter 10 different fault-tolerant control strategies are analyzed. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
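For context, the multiple space vectors on which multiphase machine models and modulation strategies of this kind are usually built can be written in the following standard textbook form (reported here only as general background, with symbols not taken from the thesis): for an n-phase system with phase variables x_1, ..., x_n,

$$\bar{x}_h \;=\; \frac{2}{n}\sum_{k=1}^{n} x_k\, e^{\,j\,h\,(k-1)\,\frac{2\pi}{n}}, \qquad h = 0, 1, 2, \dots$$

where the vector of order h = 1 describes the fundamental spatial harmonic of the air-gap field and the vector of order h = 3 is associated with the third harmonic component exploited above for torque enhancement.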
Abstract:
The present dissertation aims to explore, theoretically and experimentally, the problems and the potential advantages of different types of power converters for "Smart Grid" applications, with particular emphasis on multilevel architectures, which are attracting rising interest also for industrial applications. The models of the main multilevel architectures (Diode-Clamped and Cascaded) are presented, and the modulation strategies best suited to operation as a network interface are identified. In particular, the close correlation between the PWM (Pulse Width Modulation) approach and the SVM (Space Vector Modulation) approach is highlighted. An innovative multilevel topology called MMC (Modular Multilevel Converter) is investigated, and the single-phase, three-phase and "back-to-back" configurations are analyzed. Specific control techniques that can appropriately manage the charge level of the numerous capacitors and handle the power flow in a flexible way are defined and experimentally validated. Another converter that is attracting interest in the "Power Conditioning Systems" field is the "Matrix Converter". In this architecture too, the output voltage is multilevel. It offers a high-quality input current, a bidirectional power flow and the possibility of controlling the input power factor (i.e. of participating in active and reactive power regulation). The implemented control system, which allows fast data acquisition for diagnostic purposes, is described and experimentally verified.
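The PWM/SVM correlation mentioned above is commonly illustrated, for the basic two-level three-phase case, by the well-known result that a carrier-based modulator reproduces centered space vector modulation when the "min-max" zero-sequence component is injected into the reference phase voltages (stated here only as background, not as the thesis' own derivation):

$$v_k^{**} \;=\; v_k^{*} \;-\; \frac{\max_j v_j^{*} + \min_j v_j^{*}}{2}, \qquad k = a, b, c,$$

and analogous zero-sequence injections extend the equivalence to multilevel carrier dispositions.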
Abstract:
The research work focuses on a novel multiphase-multilevel ac motor drive system particularly suitable for low-voltage, high-current power applications. Specifically, a six-phase asymmetrical induction motor with an open-end stator winding configuration is fed by four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous reference frame control algorithm shares the total dc source power among the four VSIs in each switching cycle with three degrees of freedom. The first degree of freedom concerns the current sharing between the two three-phase stator windings. The second and third degrees of freedom, obtained through a modified multilevel space vector pulse width modulation, share the voltage between the individual VSIs of each three-phase stator winding while providing proper multilevel output waveforms. A complete model of the whole ac motor drive, based on the three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation package working in the MATLAB environment. The proposed synchronous reference frame control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator, and the effectiveness of the entire ac motor drive system was tested. Simulation results are given in detail to show symmetrical and asymmetrical power sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capabilities in post-fault conditions, and a complete set of simulation results is provided for the cases in which one, two or three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed, able to control the four VSIs for PWM communication and synchronization. An open-loop control scheme based on the inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Both simulation and experimental results are in good agreement with the theoretical developments.
Abstract:
Analysis of the peak-to-peak output current ripple amplitude for multiphase and multilevel inverters is presented in this PhD thesis. The current ripple is calculated on the basis of the alternating voltage component, and its peak-to-peak value is determined by the current slopes and the application times of the voltage levels in a switching period. Detailed analytical expressions of the peak-to-peak current ripple distribution over a fundamental period are given as a function of the modulation index. For all the cases, reference is made to centered and symmetrical switching patterns, generated either by carrier-based or space vector PWM. Starting from the definition and the analysis of the output current ripple in three-phase two-level inverters, the theoretical developments have been extended to the case of multiphase inverters, with emphasis on five- and seven-phase inverters. The instantaneous current ripple is introduced for a generic balanced multiphase load consisting of a series RL impedance and an ac back-emf (RLE). Simplified and effective expressions for the maximum of the output current ripple have been defined, and the peak-to-peak current ripple diagrams are presented and discussed. The analysis of the output current ripple has also been extended to multilevel inverters, specifically three-phase three-level inverters. Also in this case, the current ripple analysis is carried out for a balanced three-phase system consisting of a series RL impedance and an ac back-emf (RLE), representing both motor loads and grid-connected applications, and the peak-to-peak current ripple diagrams are presented and discussed. In addition, simulations and experiments have been carried out to prove the validity of the analytical developments in all the cases. The cases with different numbers of phases and different numbers of levels are compared with one another, some useful conclusions are pointed out, and some application examples are given.
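The ripple evaluation described above rests on integrating the alternating (ripple) component of the inverter output voltage across the series inductance of the load, under the usual assumption that the resistive and back-emf contributions are negligible within one switching period. With symbols introduced here only for illustration (not taken from the thesis):

$$\tilde{i}(t) \;=\; \frac{1}{L}\int_{0}^{t}\big[v(\tau)-\bar{v}\big]\,d\tau, \qquad i_{pp} \;=\; \max_{t\in T_s}\tilde{i}(t)\;-\;\min_{t\in T_s}\tilde{i}(t),$$

where L is the load inductance, v̄ is the average output voltage over the switching period T_s, and the peak-to-peak value follows from the current slopes and the application times of the voltage levels within T_s.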
Abstract:
This thesis explores the function of the theatre in Derek Walcott's literary achievements. Focusing on the semiotic theories that characterize the study of drama as a literary text and as a staged text, the initial approach aims at creating a relationship between semiotics and postcolonial theories. In particular, Pavis's concept of intercultural semiotics and Peter Brook's innovative visions of the regenerative function of the space of the theatre represent a useful theoretical basis for considering the specificity of postcolonial theatre as an innovative space, where new cultural meanings emerge. Derek Walcott's dramatic production is studied according to this approach, in order to be defined as a new hybrid, syncretic and multicultural space. After considering the development of drama from a postcolonial and Caribbean perspective, this study begins with an insight into Walcott's views on theatre, taking into consideration his linguistic depth, linked to the European tradition, but also his strong concern with the Caribbean public's cultural needs. The double tension characterizing Walcott's cultural identity as well as his art represents an essential element for analysing his dramatic texts. With an ambivalent approach, which takes into consideration language and performance, this thesis offers an insight into Walcott's plays to detect their postcolonial and multicultural elements. The analysis of the different texts is divided into two chapters (third and fourth). The third chapter, mainly focused on postcolonial themes, explores issues such as language, identity and space, whereas the fourth chapter centers on multiculturalism in text and performance. Dealing with interracial interactions and with issues like the re-writing of classical texts and the manipulation of personal and collective memory as a way to re-establish new historical perspectives, the last part of the thesis aims at demonstrating that Walcott has created a new space in the theatre made up of the harmonic fusion of different and opposed cultural elements, which are visible in the literary as well as in the staged text. The textual perspective of Walcott's drama fits into Pavis's definition of intercultural semiotics, as the faithful representation of a multicultural creole society: that of the West Indies.
Abstract:
Asset allocation choices are a recurring problem for every investor, who is continuously engaged in combining different asset classes to arrive at an investment consistent with his or her preferences. The need to support asset managers in carrying out their tasks has fed, over time, a vast literature proposing numerous portfolio construction strategies and models. This thesis attempts to provide a review of some innovative forecasting models and of some strategies in the field of tactical asset allocation, and then to evaluate their practical implications. First, we verify whether relationships exist between the dynamics of some macroeconomic variables and the financial markets. The aim is to identify an econometric model capable of guiding managers' strategies in the construction of their investment portfolios. The analysis considers the US market over a period characterized by rapid economic transformations and by high volatility of stock prices. Second, the validity of momentum and contrarian trading strategies is examined in futures markets, in particular those of the Eurozone, which lend themselves well to their implementation thanks to the absence of constraints on shorting operations and to the reduced transaction costs. The investigation shows that both anomalies appear to be stable. The anomalous returns persist even when the traditional asset pricing models are used, such as the CAPM, the Fama-French model and the Carhart model. Finally, using the EGARCH-M approach, forecasts of the volatility of the returns of the stocks belonging to the Dow Jones are formulated. These forecasts are then used as input to determine the views to be inserted into the Black-Litterman model. The results obtained show, for different values of the scalar tau, average excess returns of the new combined vector higher than the vector of market equilibrium excess returns, albeit with higher levels of risk.
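The "new combined vector" referred to above is the posterior expected-return vector of the Black-Litterman model; in its standard formulation (reported here as general background, with symbols not taken from the thesis):

$$\mu_{BL} \;=\; \left[(\tau\Sigma)^{-1} + P^{\mathsf{T}}\Omega^{-1}P\right]^{-1}\left[(\tau\Sigma)^{-1}\Pi + P^{\mathsf{T}}\Omega^{-1}Q\right],$$

where Π is the vector of market-equilibrium excess returns, Σ the covariance matrix, τ the scalar mentioned above, P and Q encode the views (here derived from the EGARCH-M volatility forecasts) and Ω their uncertainty.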
Abstract:
Multi-phase electrical drives are potential candidates for employment in innovative electric vehicle powertrains, in response to the demand for high efficiency and reliability in this type of application. In addition to multi-phase technology, multilevel technology has also been developed in the last decades. These two technologies are somewhat complementary, since both allow the power rating of the system to be increased without increasing the current and voltage ratings of the individual power switches of the inverter. In this thesis, several topics concerning the inverter, the motor and the fault diagnosis of an electric vehicle powertrain are addressed. In particular, the attention is focused on multi-phase and multilevel technologies and their potential advantages with respect to traditional technologies. First of all, the mathematical models of two multi-phase machines, a five-phase induction machine and an asymmetrical six-phase permanent magnet synchronous machine, are developed using the Vector Space Decomposition approach. Then, a new modulation technique for multi-phase multilevel T-type inverters is developed, which solves the voltage balancing problem of the DC-link capacitors and ensures flexible management of the capacitor voltages. The technique is based on the proper selection of the zero-sequence component of the modulating signals. Subsequently, a diagnostic technique for detecting the state of health of the rotor magnets in a six-phase permanent magnet synchronous machine is established. The technique is based on analysing the electromotive force induced in the stator windings by the rotor magnets. Furthermore, an innovative algorithm able to extend the linear modulation region of five-phase inverters, taking advantage of the multiple degrees of freedom available in multi-phase systems, is presented. Finally, the mathematical model of an eighteen-phase squirrel cage induction motor is defined. This activity aims to develop a motor drive able to change the number of poles of the machine during operation.
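The abstract does not detail how the zero-sequence component is selected; as a rough, hypothetical sketch of the general principle only (choosing a common-mode offset for the modulating signals, within its feasible range, so as to counteract a measured capacitor voltage unbalance), one might write:

```python
import numpy as np

def zero_sequence_offset(m, v_unbalance, gain=0.1):
    """Pick a common-mode offset for the modulating signals m (values in [-1, 1]).

    The offset is limited to the range that keeps every modulating signal
    inside [-1, 1]; its sign and magnitude are chosen proportionally to the
    measured DC-link capacitor voltage unbalance (hypothetical convention:
    positive v_unbalance means the upper capacitor is overcharged).
    """
    m = np.asarray(m, dtype=float)
    lo = -1.0 - m.min()          # most negative admissible offset
    hi = 1.0 - m.max()           # most positive admissible offset
    offset = -gain * v_unbalance # simple proportional action
    return float(np.clip(offset, lo, hi))

# Example: five-phase sinusoidal references with a small measured unbalance
theta = 0.7
m = 0.8 * np.cos(theta - 2 * np.pi * np.arange(5) / 5)
m_balanced = m + zero_sequence_offset(m, v_unbalance=5.0)
print(m_balanced)
```

This is only an illustration of the degree of freedom being exploited, not the balancing law actually proposed in the thesis.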
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though the methods to analyse these data are nowadays well developed and close to reaching a standard organization (through the effort of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to come across a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease regards the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to the quality controls, ending with the preprocessing steps that are used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing most of their open problems. Chapter 3 introduces a method to address the issue of unbalanced designs in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists in a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. In Chapter 4 a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4] is described. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor (the Support Vector Machine, SVM [3]). Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a tumor-grade three-class problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class G2 as a test set in order to obtain, for each G2 sample, the probability of being a member of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
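A minimal sketch of the reiterated-SAM scoring loop described above (assuming a hypothetical sam_differential_probes routine standing in for a single SAM run; names and signatures are illustrative, not those of the thesis):

```python
import numpy as np

def multisam_scores(lpc, mpc, sam_differential_probes, n_iter=1000, rng=None):
    """Reiterated-SAM scoring, as an illustrative sketch.

    lpc, mpc : (samples x probes) expression matrices for the less/more
               populated class.
    sam_differential_probes : hypothetical callable returning the indices of
               probes called differentially expressed by one SAM comparison.
    Returns an integer score per probe in [0, n_iter], i.e. in how many of the
    n_iter balanced comparisons that probe was selected.
    """
    rng = np.random.default_rng(rng)
    n_lpc, n_probes = lpc.shape
    scores = np.zeros(n_probes, dtype=int)
    for _ in range(n_iter):
        # draw a random MPC subset of the same size as the LPC
        subset = rng.choice(mpc.shape[0], size=n_lpc, replace=False)
        selected = sam_differential_probes(lpc, mpc[subset])
        scores[selected] += 1
    return scores
```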
Abstract:
Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave spectrometers, is currently an unsolved problem. The challenge results from the large variability of the microwave emissivity spectra of snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific for high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as the truth to calibrate and validate all the proposed algorithms. The methodological approach followed can be summarised in two steps. In the first step, an empirical search for a threshold, aimed at discriminating the no-snow case, was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce appropriate results, a more statistically sound approach was attempted. Two different techniques, which allow the probability of snowfall above and below a Brightness Temperature (BT) threshold to be computed, have been applied to the available data. The first technique is based upon a Logistic Distribution to represent the probability of snow given the predictors. The second technique, called the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that does not require any hypothesis on the shape of the probabilistic model (such as, for instance, the Logistic one) and only requires the estimation of the BT thresholds. The results obtained show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
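The first technique mentioned above models the probability of snow as a logistic function of the brightness-temperature predictors; in a generic form (symbols chosen here only for illustration):

$$P(\text{snow}\mid BT_1,\dots,BT_m) \;=\; \frac{1}{1+\exp\!\left[-\Big(\beta_0 + \sum_{i=1}^{m}\beta_i\, BT_i\Big)\right]},$$

with the coefficients fitted against the CloudSat-derived truth, while the BMBP approach estimates the same probability without assuming this functional form, requiring only the BT thresholds.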
Abstract:
Alzheimer's disease (AD) and cancer represent two of the main causes of death worldwide. They are complex multifactorial diseases, and several biochemical targets have been recognized to play a fundamental role in their development. Owing to their complex nature, a promising therapeutic approach could be represented by the so-called "Multi-Target-Directed Ligand" (MTDL) approach. This new strategy is based on the assumption that a single molecule could hit several targets responsible for the onset and/or progression of the pathology. In particular, in AD most currently prescribed drugs aim to increase the level of acetylcholine in the brain by inhibiting the enzyme acetylcholinesterase (AChE). However, clinical experience shows that AChE inhibition is a palliative treatment, and the simple modulation of a single target does not address AD aetiology. Research into newer and more potent anti-AD agents is thus focused on compounds whose properties go beyond AChE inhibition (such as inhibition of the enzyme β-secretase and inhibition of the aggregation of beta-amyloid). Therefore, the MTDL strategy seems a more appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature. This thesis describes the design of new MTDLs able to tackle the multifactorial nature of AD. The new MTDLs designed are less flexible analogues of caproctamine, one of the first MTDLs possessing biological properties useful for AD treatment. These new compounds are able to inhibit the enzymes AChE and β-secretase and to inhibit both AChE-induced and self-induced beta-amyloid aggregation. In particular, the most potent compound of the series is able to inhibit AChE in the subnanomolar range, to inhibit β-secretase at micromolar concentration and to inhibit both AChE-induced and self-induced beta-amyloid aggregation at micromolar concentration. Cancer, like AD, is a very complex pathology, and many different therapeutic approaches are currently used for its treatment. However, due to its multifactorial nature, the MTDL approach could in principle be applied to this pathology as well. A further aim of this thesis has been the development of new molecules possessing different structural motifs, able to interact simultaneously with some of the multitude of targets responsible for the pathology. The designed compounds displayed cytotoxic activity in different cancer cell lines. In particular, the most potent compounds of the series have been further evaluated and were able to bind DNA, proving 100-fold more potent than the reference compound mitonafide. Furthermore, these compounds were able to trigger apoptosis through caspase activation and to inhibit PIN1 (preliminary result). This last protein is a very promising target because it is overexpressed in many human cancers, it functions as a critical catalyst for multiple oncogenic pathways, and in several cancer cell lines the depletion of PIN1 determines arrest of mitosis followed by apoptosis induction. In conclusion, this study may represent a promising starting point for the development of new MTDLs hopefully useful for cancer and AD treatment.
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement of 8m-class telescopes: in order to obtain the maximum possible resolution allowed by these telescopes, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we are able to use the full effective potential of these instruments, extracting as much information as possible from astronomical sources. In an AO system there are two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). In order to do this, the REC requires a "common language" between these two main AO components, i.e. a mapping between the sensor space and the mirror space, called the interaction matrix (IM). Therefore, in order to operate correctly, an AO system has a main requirement: the measurement of an IM, which provides the calibration of the whole AO system. The IM measurement is a milestone for an AO system and must be done regardless of the telescope size or class. Usually, this calibration step is performed by adding to the telescope an auxiliary artificial source of light (i.e. a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For larger telescopes (more than 8 m, like the Extremely Large Telescopes, ELTs) the fiber-based IM measurement requires challenging optical setups that in some cases are also impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate the possibility of a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique can be used to calibrate the AO system of a telescope of any size. We want to test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and the testing of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we have been able to reproduce an environment very similar to that of the telescope, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This made possible the second phase of my PhD work: the measurement of the IM applying the sinusoidal modulation technique. At first we measured the IM using an auxiliary fiber source to calibrate the system, without any kind of disturbance injected. After that, we tried to use this calibration technique to measure the IM directly "on sky", i.e. adding an atmospheric disturbance to the AO system. The results obtained in this PhD work measuring the IM directly in the Arcetri solar tower system are crucial for future developments: the possibility of acquiring the IM directly on sky means that we are able to calibrate an AO system also for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible.
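As a minimal illustration of the roles of IM and REC described above (a generic sketch of the classical push-pull, fiber-based calibration that the sinusoidal modulation technique aims to replace on sky; functions, dimensions and the regularization value are hypothetical, not the LBT implementation):

```python
import numpy as np

def calibrate_interaction_matrix(apply_mode, read_slopes, n_modes, amplitude=1e-7):
    """Poke each deformable-mirror mode and record the wavefront-sensor slopes.

    apply_mode(k, a) : hypothetical function commanding mode k with amplitude a.
    read_slopes()    : hypothetical function returning the sensor slope vector
                       as a NumPy array.
    Returns the interaction matrix IM (n_slopes x n_modes).
    """
    columns = []
    for k in range(n_modes):
        apply_mode(k, +amplitude)
        s_plus = read_slopes()
        apply_mode(k, -amplitude)
        s_minus = read_slopes()
        # differential response removes static offsets
        columns.append((s_plus - s_minus) / (2 * amplitude))
    return np.column_stack(columns)

def reconstructor(IM, rcond=1e-3):
    """Least-squares reconstructor mapping measured slopes to mirror commands."""
    return np.linalg.pinv(IM, rcond=rcond)
```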
Finally, we should not forget the reason why we need all this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects we observe, because we will be able to resolve more detailed characteristics, discovering, analyzing and understanding the behavior of the components of the universe.
Abstract:
The use of agents targeting EGFR represents a new frontier in colon cancer therapy. Among these, monoclonal antibodies (mAbs) and EGFR tyrosine kinase inhibitors (TKIs) seemed to be the most promising. However, they have demonstrated limited utility in therapy, the former being effective only at toxic doses, the latter proving ineffective in colon cancer. This thesis presents studies on a new EGFR inhibitor, FR18, a molecule containing the same naphthoquinone core as shikonin, an agent with great anti-tumor potential. In HT-29, a human colon carcinoma cell line, flow cytometry, immunoprecipitation, Western blot analysis and confocal spectral microscopy have demonstrated that FR18 is active at concentrations as low as 10 nM and inhibits EGF binding to EGFR while leaving the receptor kinase activity unperturbed. At concentrations ranging from 30 nM to 5 μM, it activates apoptosis. FR18 therefore seems to have possible therapeutic applications in colon cancer. In addition, a surface plasmon resonance (SPR) investigation of the direct EGF/EGFR complex interaction using different experimental approaches is presented. A commercially available purified EGFR was immobilised by amine coupling chemistry on an SPR sensor chip, and its interaction with EGF was found to have a KD = 368 ± 0.65 nM. SPR technology allows the study of biomolecular interactions in real time and label-free, with a high degree of sensitivity and specificity, and thus represents an important tool for drug discovery studies. On the other hand, the EGF/EGFR complex interaction represents a challenging but important system that can lead to significant general knowledge about receptor-ligand interactions and to the design of new drugs intended to interfere with EGFR binding activity.
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from ground, received by the spacecraft and coherently retransmitted back to ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10^-15 at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute with a value of 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide a continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and the performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) package developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate its performance.
A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis has been carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the requirements of MORE. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
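For reference, the two-sample Allan deviation quoted in the requirements above is defined, for fractional frequency values ȳ_k averaged over the integration time τ (here 1000 s), as

$$\sigma_y(\tau) \;=\; \sqrt{\tfrac{1}{2}\left\langle\left(\bar{y}_{k+1}-\bar{y}_k\right)^2\right\rangle},$$

so the 3×10^-15 (Cassini) and 8×10^-15 (MORE troposphere budget) figures are dimensionless measures of the residual frequency instability of the two-way radio link.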
Abstract:
Reaching and grasping an object is an action that can be performed in the light, under visual guidance, as well as in darkness, under proprioceptive control only. Area V6A is a visuomotor area involved in the control of reaching movements. Besides neurons activated by the execution of reaching movements, V6A shows passive somatosensory and visual responses. This suggests for V6A a multimodal capability of integrating sensory and motor-related information. We wanted to know whether this integration occurs in reaching movements, and in the present study we tested whether the visual feedback influenced the reaching activity of V6A neurons. In order to better address this question, we interpreted the neural data in the light of the kinematics of reaching performance. We used an experimental paradigm that could examine V6A responses in two different visual backgrounds, light and dark. In these conditions, the monkey performed an instructed-delay reaching task, moving the hand towards different target positions located in the peripersonal space. During the execution of the reaching task, the visual feedback gave rise to a variety of patterns of modulation, some of them unexpected. In fact, reach-related discharges having already been demonstrated in V6A in the absence of visual feedback, we expected two types of neural modulation: 1) the addition of light to the environment enhances the reach-related discharges recorded in the dark; 2) the light leaves the neural response unmodified. Unexpectedly, the results show a complex pattern of modulation that argues against a simple additive interaction between visual and motor-related signals.
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors affecting protein stability is one of the most important and challenging tasks for highlighting the role of mutations in relation to human maladies. The problem is at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be detailed at the molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein functions and thereby affect the biological processes at the basis of cell physiology. New techniques have been developed with the aim of detailing single nucleotide polymorphisms (SNPs) at large in all the human chromosomes, and as a result the information stored in specific databases is increasing exponentially. Mutations found at the DNA level, when occurring in transcribed regions, may then lead to mutated proteins, and this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. Several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations that are found to be disease related during DNA analysis are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease related and, independently, whether it affects protein stability. Therefore, whether the perturbation of protein stability is related to what is routinely referred to as a disease is still an open question. In this work we have tried for the first time to explore the relation between mutations at the protein level and their relevance to disease with a large-scale computational study of the data from different databases. To this aim, in the first part of the thesis, for each mutation type we have derived two probabilistic indices (for 141 out of 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the "in vitro" thermodynamic data available, and the disease index (Pd), which indicates the probability of a mutation being disease related, given all the mutations that have been clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of the mutations that are somatic cancer related. In this way, each of the 150 mutation types can be coded by two values that allow a direct comparison with database information. Furthermore, we also implement a computational method that, starting from the protein structure, is suited to predicting the effect of a mutation on protein stability, and we find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome, leading to the diseasome.
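A minimal sketch of how per-mutation-type indices of this kind could be tabulated from annotated records (field names are hypothetical, not those of the thesis or of any specific database; in practice Pp and Pd are derived from two distinct data sources, whereas here a single annotated table is used purely for illustration):

```python
from collections import defaultdict

def mutation_indices(records):
    """Compute empirical Pp and Pd per mutation type (e.g. 'A->V').

    records : iterable of dicts with hypothetical keys
              'mutation'   : amino-acid substitution type,
              'perturbing' : True if the in vitro thermodynamic data show a
                             stability change for this record,
              'disease'    : True if the record is clinically disease related.
    Returns {mutation_type: (Pp, Pd)} as relative frequencies.
    """
    counts = defaultdict(lambda: [0, 0, 0])  # [n_total, n_perturbing, n_disease]
    for r in records:
        c = counts[r["mutation"]]
        c[0] += 1
        c[1] += bool(r["perturbing"])
        c[2] += bool(r["disease"])
    return {m: (c[1] / c[0], c[2] / c[0]) for m, c in counts.items()}
```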