995 results for Current signal


Relevance: 70.00%

Abstract:

This thesis work aims to find a procedure for isolating specific features of the current signal from a plasma focus for medical applications. The structure of the current signal inside a plasma focus is exclusive to this class of machines, so a specific analysis procedure has to be developed. The hope is to find one or more features that show a correlation with the delivered dose. The study of the correlation between the current discharge signal and the dose delivered by a plasma focus could be of some importance not only for the practical application of dose prediction but also for expanding the knowledge about plasma focus physics. Various classes of time-frequency analysis techniques are implemented in order to solve the problem.
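As a minimal, hedged illustration of one such time-frequency technique, the sketch below applies a short-time Fourier transform (via scipy.signal.spectrogram) to a synthetic waveform that merely stands in for a measured discharge current; the sampling rate, window length and candidate feature are illustrative assumptions, not values taken from the thesis.

```python
# Hedged sketch: a short-time Fourier transform applied to a placeholder
# waveform standing in for a measured plasma-focus discharge current.
# The sampling rate, window length and candidate feature are assumptions.
import numpy as np
from scipy import signal

fs = 50e6                                  # assumed sampling rate [Hz]
t = np.arange(0, 2e-4, 1 / fs)             # 200 microseconds of record
# Placeholder "discharge": damped oscillation plus a sharp dip near the pinch.
i_t = np.exp(-t / 5e-5) * np.sin(2 * np.pi * 4e5 * t)
i_t -= 0.3 * np.exp(-((t - 1e-4) / 2e-6) ** 2)

f, tau, Sxx = signal.spectrogram(i_t, fs=fs, nperseg=512, noverlap=448)

# One candidate feature: the instant at which in-band energy collapses,
# which could later be tested for correlation with the delivered dose.
band = (f > 1e5) & (f < 1e6)
energy_vs_time = Sxx[band].sum(axis=0)
print("time of minimum in-band energy: %.1f us"
      % (tau[np.argmin(energy_vs_time)] * 1e6))
```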

Relevance: 60.00%

Abstract:

This work presents the results of an investigation of processes in the melting zone during Electron Beam Welding (EBW) through analysis of the secondary current in the plasma. The studies show that the spectrum of the secondary emission signal during steel welding has a pronounced periodic component at a frequency of around 15–25 kHz. The signal contains quasi-periodic sharp peaks (impulses). These impulses have stochastically varying amplitude and follow each other in series, with random intervals between series. The impulses carry a considerable current (up to 0.5 A). It was established that during electron-beam welding with focal spot scanning these impulses follow each other almost periodically. It was shown that the probability of occurrence of these high-frequency perturbations increases with the concentration of energy in the interaction zone. The paper also presents hypotheses for the mechanism of formation of the high-frequency oscillations in the secondary current signal in the plasma.
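The sketch below is a hedged illustration of how such a dominant periodic component could be located with a standard Welch power-spectral-density estimate; the synthetic impulse train and the sampling rate are assumptions used only to make the example runnable, not data from the study.

```python
# Hedged sketch: Welch PSD estimate of a synthetic impulse-train signal used
# only to show how a dominant component in the 15-25 kHz band could be located.
# The waveform and sampling rate are assumptions, not data from the study.
import numpy as np
from scipy import signal

fs = 200e3                                  # assumed sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
# Placeholder secondary-current signal: noise plus quasi-periodic spikes near 20 kHz.
i_sec = 0.05 * rng.standard_normal(t.size)
i_sec += 0.5 * (np.sin(2 * np.pi * 20e3 * t) > 0.95)

f, pxx = signal.welch(i_sec, fs=fs, nperseg=4096)
band = (f >= 15e3) & (f <= 25e3)
f_peak = f[band][np.argmax(pxx[band])]
print("dominant component in the 15-25 kHz band: %.1f kHz" % (f_peak / 1e3))
```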

Relevance: 60.00%

Abstract:

The performance of a device based on modified injection-locking techniques is studied by means of numerical simulations. The device incorporates master and slave configurations, each one with a DFB laser and an electroabsorption modulator (EAM). This arrangement allows the generation of high-peak-power, narrow optical pulses according to a periodic or pseudorandom bit stream provided by a current signal generator. The device is able to considerably increase the modulation bandwidth of free-running gain-switched semiconductor lasers by using multiplexing in the time domain. Opportunities for integration in small packages or single chips are discussed.

Relevance: 60.00%

Abstract:

The research was initiated when the Department of Tactics of the National Defence University proposed the topic for study. The aim of the research has been to increase knowledge of the development of signal tactics after the introduction of the formation-level signal system (YVI) in the 1980s–2000s, as part of the development of operational-tactical principles and practices. The research has sought to deepen understanding of changes in tactical principles from the perspective of signal tactics. The study examined the signal tactics of Army formations equipped with the YVI systems and the changes that occurred in them. In examining these changes, the study focused on the concept of signal tactics, on signal-tactical principles, and on the signal commander and his field of activity. Signal-tactical principles and their changes were also compared with general tactical principles and with changes in their emphasis. The research is qualitative in nature. The research problems were approached with a phenomenographic research method, the aim of which is to describe, analyse and understand different conceptions of phenomena and the mutual relationships between those conceptions. The source material consisted of the experience-based conceptions of 18 signal-tactics experts concerning signal tactics and its development after the introduction of the YVI systems. From these conceptions, the actual conclusions of the study were formed by inductive reasoning on the basis of meaning and description categories and the researcher's pre-understanding. Based on the participants' conceptions and earlier definitions of tactics and signal tactics, it was concluded that signal tactics is the optimal planning, application and use of the signal capacity available for carrying out a mission, as signal power, in order to achieve the desired goals and win the signal battles. Practising signal tactics requires knowledge of the means related to the signal battle and the skill to apply them in practice. Based on the research results, the central signal-tactical principles, in order of importance, were: clarity of goal and mission; preparedness for unexpected changes in the situation; simplicity; and activity and initiative. The signal-tactical principles that had most increased in importance were: concentration of the effect of forces; division of troops and forces (reserve); preparedness for unexpected changes in the situation; concealment and deception; and security. The clearest change in the signal commander's duties was considered to be the shift from a detailed planner of signal connections to an overall leader of the formation's signal activities. Based on the research results and earlier definitions, it was concluded that the signal commander leads the formation's signal activities in accordance with the requirements set by the commander and, as a member of the formation's command group, is responsible for the signal-tactical solutions needed to achieve the desired goals and win the signal battles. The signal commander is required to know the means related to the signal battle and to be able to apply them in practice. According to the study, the factors that most significantly influenced the formation's signal tactics were the introduction of the formation's signal systems, the development of new headquarters and signal units, the growing importance of the fixed communications network and the command and control systems field, the development of the technologies in use, and the growing data transfer needs of troops and command echelons. As regards signal tactics, the deterministic view of battle and the battlespace can be said to have shifted, in line with the changes in general tactical principles, in a more diverse, bolder and more voluntaristic direction.

Relevance: 60.00%

Abstract:

Traditional mathematical tools, like Fourier analysis, have proven to be efficient when analyzing steady-state distortions; however, the growing use of electronically controlled loads and the new dynamics appearing in industrial-environment signals suggest the need for a powerful tool to analyze non-stationary distortions, overcoming the limitations of frequency-domain techniques. Wavelet theory provides a new approach to harmonic analysis, focusing on the decomposition of a signal into non-sinusoidal components that are translated and scaled in time, generating a time-frequency basis. The correct choice of the waveshape to be used in the decomposition is very important and is discussed in this work. A brief theoretical introduction to the Wavelet Transform is presented and some cases (practical and simulated) are discussed. Distortions commonly found in industrial environments, such as the current waveform of a switched-mode power supply and the input phase-voltage waveform of a motor fed by an inverter, are analyzed using wavelet theory. Applications such as extracting the fundamental frequency of a non-sinusoidal current signal, or using the ability of compact representation to detect non-repetitive disturbances, are presented.
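As a hedged sketch of the kind of analysis described (not the authors' exact procedure), the example below uses PyWavelets to decompose a distorted current waveform, recovering an approximation close to the fundamental and localizing a short, non-repetitive disturbance in the fine-scale details; the waveform, the 'db4' wavelet and the decomposition depth are illustrative assumptions.

```python
# Hedged sketch using PyWavelets: decompose a distorted current waveform,
# keep only the approximation to estimate the fundamental, and use the
# finest-scale details to localize a short disturbance. The waveform, the
# 'db4' wavelet and the decomposition depth are illustrative assumptions.
import numpy as np
import pywt

fs = 12800                                  # assumed rate: 256 samples per 50 Hz cycle
t = np.arange(0, 0.2, 1 / fs)
# Placeholder distorted current: fundamental + 5th/7th harmonics + a notch.
i_dist = (np.sin(2 * np.pi * 50 * t)
          + 0.2 * np.sin(2 * np.pi * 250 * t)
          + 0.1 * np.sin(2 * np.pi * 350 * t))
i_dist[1000:1010] -= 0.8                    # non-repetitive disturbance

coeffs = pywt.wavedec(i_dist, 'db4', level=5)
# Reconstruct from the approximation only (details zeroed): ~0-200 Hz content,
# i.e. something close to the fundamental component.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
fundamental_estimate = pywt.waverec(approx_only, 'db4')[:i_dist.size]

# The finest-scale detail coefficients localize the disturbance in time.
d1 = coeffs[-1]
print("largest fine-scale detail coefficient at index:", int(np.argmax(np.abs(d1))))
```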

Relevance: 60.00%

Abstract:

Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization are the key elements in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP): they combine DSP phasor estimation and protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay are detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Different DSP phasor estimation filters are then implemented and evaluated based on their ability to extract the desired frequency components from the measured current signal quickly and accurately. The main focus of the phasor estimation evaluation is to identify the difference between non-recursive and recursive filtering methods. The protective logic of the DSP differential current relay is then implemented, and the required settings are made in accordance with the transformer application. Finally, the DSP differential current relay is evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have a significant advantage over non-recursive filtering methods, both when evaluated individually and when applied in the DSP differential relay. Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause a false trip due to overshoot if speed is the only objective. The relay sensitivity, however, is independent of the filtering method and depends on the settings of the relay's differential characteristic (pickup threshold and percent slope).
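The sketch below illustrates, under stated assumptions, the two filtering styles compared here: a non-recursive full-cycle DFT phasor estimator and its recursive one-sample update; the sample rate and test signal are hypothetical, and the relay's differential logic itself is not reproduced.

```python
# Hedged sketch of the two filtering styles compared above: a non-recursive
# full-cycle DFT phasor estimator and its recursive one-sample update.
# Samples per cycle and the test signal are assumptions; the relay's
# percent-slope differential logic is not reproduced here.
import numpy as np

N = 16                                      # assumed samples per power cycle
n = np.arange(10 * N)
x = np.sin(2 * np.pi * n / N + 0.3)         # placeholder current samples

def phasor_nonrecursive(x, k, N):
    """Full-cycle DFT over the window ending at sample k (re-sums the window)."""
    m = np.arange(k - N + 1, k + 1)
    return (2.0 / N) * np.sum(x[m] * np.exp(-1j * 2 * np.pi * m / N))

def phasor_recursive(prev, x_new, x_old, k, N):
    """Update the previous phasor with one correction term per new sample."""
    return prev + (2.0 / N) * (x_new - x_old) * np.exp(-1j * 2 * np.pi * k / N)

k = 3 * N
p_nonrec = phasor_nonrecursive(x, k, N)
p_rec = phasor_recursive(phasor_nonrecursive(x, k - 1, N), x[k], x[k - N], k, N)
print("non-recursive:", np.round(p_nonrec, 4), " recursive:", np.round(p_rec, 4))
```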

Relevance: 60.00%

Abstract:

Purpose: The rapid distal falloff of a proton beam allows sparing of normal tissues distal to the target. However, proton beams aimed directly at critical structures are avoided because of range uncertainties, such as CT-number conversion and anatomy variations. We propose to eliminate range uncertainty and enable prostate treatment with a single anterior beam by detecting the proton range at the prostate-rectal interface and adaptively adjusting the range in vivo and in real time. Materials and Methods: A prototype device, consisting of an endorectal liquid scintillation detector and dual inverted Lucite wedges for range compensation, was designed to test the feasibility and accuracy of the technique. A liquid-scintillator-filled volume was coupled to an optical fiber and placed inside the rectum of an anthropomorphic pelvic phantom. The photodiode current signal was recorded as a function of the proton beam's distal depth, and the spatial resolution of the technique was calculated by relating the variance in detecting proton spills to the beam's maximum penetration depth. The relative water-equivalent thickness of the wedges was measured in a water phantom and prospectively tested to determine the accuracy of the range corrections. Treatment simulation studies were performed to test the potential dosimetric benefit in sparing the rectum. Results: The spatial resolution of the detector in phantom measurements was 0.5 mm. The precision of the range correction was 0.04 mm. The residual margin needed to ensure CTV coverage was 1.1 mm. The composite distal margin for 95% treatment confidence was 2.4 mm. Planning studies based on a previously estimated 2 mm margin (90% treatment confidence) for 27 patients showed rectal sparing of up to 51% at 70 Gy and 57% at 40 Gy relative to IMRT and bilateral proton treatments. Conclusion: We demonstrated the feasibility of our design. This technique allows proton treatment using a single anterior beam, significantly reducing the rectal dose.

Relevance: 60.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

The techniques based on extensions of intervals have allowed accurate models of signal and quantization-noise propagation to be obtained in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise terms of each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, and hence considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
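A minimal sketch of the "incremental" idea described above, not the HOPLITE implementation: a greedy word-length search on a toy dot-product system that spends few Monte-Carlo samples while far from the solution and only uses the full budget near the end; the system, error metric, target and budgets are illustrative assumptions.

```python
# Hedged sketch of the incremental idea only (not the HOPLITE implementation):
# a greedy word-length search over a toy 3-term dot product that uses few
# Monte-Carlo samples early on and the full budget only near the end.
# The system, error metric, target and budgets are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
coeffs = np.array([0.7, -0.4, 0.25])

def quantize(v, frac_bits):
    """Round v onto a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(v / step) * step

def mc_error(word_lengths, n_samples):
    """Monte-Carlo estimate of the output RMS error for one word-length assignment."""
    x = rng.uniform(-1, 1, size=(n_samples, coeffs.size))
    exact = x @ coeffs
    quant = sum(quantize(x[:, i] * coeffs[i], fb) for i, fb in enumerate(word_lengths))
    return np.sqrt(np.mean((exact - quant) ** 2))

target_rmse = 1e-3
wl = [16, 16, 16]                           # start from a safely wide assignment
for budget in (200, 2000, 20000):           # relaxed -> strict confidence stages
    improved = True
    while improved:
        improved = False
        for i in range(len(wl)):            # try shaving one bit off each signal
            trial = list(wl)
            trial[i] -= 1
            if mc_error(trial, budget) <= target_rmse:
                wl, improved = trial, True
print("word lengths found:", wl, " final RMSE: %.2e" % mc_error(wl, 20000))
```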

Relevance: 60.00%

Abstract:

Induction motors play an important role in industry, a fact that highlights the importance of correctly diagnosing and classifying faults while still at an early stage of their evolution, making it possible to increase productivity and, above all, to avoid serious damage to processes and machines. The proposal of this thesis is therefore to present an intelligent multi-classifier for diagnosing fault-free motors, stator winding short-circuit faults, rotor faults and bearing faults in three-phase induction motors driven by different models of frequency inverter, through analysis of the amplitudes of the stator current signals in the time domain. To assess the classification accuracy across the various fault severity levels, the performance of four distinct machine learning techniques was compared, namely: (i) Fuzzy ARTMAP network, (ii) Multilayer Perceptron network, (iii) Support Vector Machine, and (iv) k-Nearest Neighbours. Experimental results obtained from 13,574 experimental trials are presented to validate the study, considering a wide range of operating frequencies as well as load torque regimes on five different motors.
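As a hedged sketch of such a comparison (not the thesis code or data), the example below cross-validates three of the four techniques with scikit-learn on synthetic feature vectors standing in for time-domain stator-current amplitudes; Fuzzy ARTMAP is omitted because it has no scikit-learn implementation.

```python
# Hedged sketch of the comparison described above for three of the four
# techniques (Fuzzy ARTMAP has no scikit-learn implementation and is omitted).
# The feature matrix is synthetic and only stands in for the time-domain
# stator-current amplitude features used in the thesis.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
conditions = ["healthy", "stator short-circuit", "rotor fault", "bearing fault"]
n_per_class = 300
# Synthetic 6-feature "current amplitude" vectors, one cluster per condition.
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, 6))
               for k in range(len(conditions))])
y = np.repeat(np.arange(len(conditions)), n_per_class)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```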

Relevance: 60.00%

Abstract:

The scope of this paper is to present a Pulse Width Modulation (PWM) based method for Active Power (AP) and Reactive Power (RP) measurements as can be applied in power meters. The main aim of the material presented is twofold: first, to present a realization methodology for the proposed algorithm, and second, to verify the algorithm's robustness and validity. The method takes advantage of the fact that the frequencies present in a power line lie in a specific fundamental frequency range (a range centred on 50 Hz or 60 Hz) and that, in the presence of harmonics, the frequencies dominating the power-line spectrum can be specified on the basis of the fundamental. In contrast to a number of existing methods, a time delay or shifting of the input signal is not required by the method presented; the delay of the current signal by π/2 with respect to the voltage signal, required by many existing measurement techniques, does not apply in the case of the PWM method either.
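For contrast, the sketch below shows the conventional approach the abstract refers to, in which active power is the mean of the instantaneous product v·i and reactive power is obtained by delaying the current by a quarter of the fundamental period (the π/2 shift mentioned above); it is not the PWM method itself, and the waveforms and sampling rate are illustrative assumptions.

```python
# Hedged sketch of the conventional approach referred to above (NOT the PWM
# method): active power as the mean of v*i, reactive power via a quarter-period
# (pi/2) delay of the current. Waveforms and sampling rate are assumptions.
import numpy as np

fs, f0 = 10_000, 50                         # assumed sampling rate and mains frequency
t = np.arange(0, 1.0, 1 / fs)               # exactly 50 fundamental cycles
V, I, phi = 230 * np.sqrt(2), 10 * np.sqrt(2), np.deg2rad(30)
v = V * np.sin(2 * np.pi * f0 * t)
i = I * np.sin(2 * np.pi * f0 * t - phi)    # current lagging by 30 degrees

P = np.mean(v * i)                          # active power
quarter = int(round(fs / f0 / 4))           # samples in a quarter period
Q = -np.mean(v * np.roll(i, quarter))       # current delayed by pi/2; this sign
                                            # convention makes Q > 0 for lagging current

print(f"P = {P:.1f} W   (expected {230 * 10 * np.cos(phi):.1f})")
print(f"Q = {Q:.1f} var (expected {230 * 10 * np.sin(phi):.1f})")
```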

Relevance: 60.00%

Abstract:

This PhD thesis reports the main activities carried out during the three-year "Mechanics and advanced engineering sciences" course at the Department of Industrial Engineering of the University of Bologna. The research project title is "Development and analysis of high efficiency combustion systems for internal combustion engines" and the main topic is knock, one of the main challenges for boosted gasoline engines. Through experimental campaigns, modelling activity and test-bench validation, four different aspects have been addressed to tackle the issue. The main path goes towards the definition and calibration of a knock-induced damage model, to be implemented in the on-board control strategy but also usable for engine calibration and potentially during engine design. The capabilities of the ionization current signal have been investigated to fully replace the pressure sensor and to develop a robust on-board closed-loop combustion control strategy, both in knock-free and knock-limited conditions. Water injection is a powerful solution to mitigate knock intensity and exhaust temperature, improving fuel consumption; its capabilities have been modelled and validated at the test bench. Finally, an empirical model is proposed to predict the engine knock response as a function of several operating-condition and control parameters, including the injected water quantity.

Relevance: 40.00%

Abstract:

This paper presents a new approach to developing Field Programmable Analog Arrays (FPAAs) which avoids an excessive number of programming elements in the signal path, thus enhancing performance. The paper also introduces a novel FPAA architecture, devoid of the conventional switching and connection modules. The proposed FPAA is based on simple current-mode sub-circuits. A straightforward methodology has been employed for the programming of the Configurable Analog Cell (CAC). The current-mode approach has enabled the FPAA presented here to operate over almost three decades of frequency. We have demonstrated the feasibility of the FPAA by implementing some signal processing functions.

Relevance: 30.00%

Abstract:

Cardiac arrhythmias are one of the main causes of death worldwide. Several studies have shown that inflammation plays a key role in different cardiac diseases, and Toll-like receptors (TLRs) seem to be involved in cardiac complications. In the present study, we investigated whether the activation of TLR4 induces cardiac electrical remodeling and arrhythmias, and which signaling pathway is involved in these effects. Membrane potential was recorded in Wistar rat ventricle. Ca(2+) transients, as well as the L-type Ca(2+) current (ICaL) and the transient outward K(+) current (Ito), were recorded in isolated myocytes after 24 h of exposure to the TLR4 agonist lipopolysaccharide (LPS, 1 μg/ml). TLR4 stimulation in vitro promoted a cardiac electrical remodeling that led to action potential prolongation associated with arrhythmic events, such as delayed afterdepolarizations and triggered activity. After 24 h of LPS incubation, Ito amplitude, as well as Kv4.3 and KChIP2 mRNA levels, were reduced. The decrease in Ito caused by LPS was prevented by inhibition of interferon regulatory factor 3 (IRF3), but not by inhibition of interleukin-1 receptor-associated kinase 4 (IRAK4) or nuclear factor kappa B (NF-κB). Extrasystolic activity was present in 25% of the cells but, apart from that, Ca(2+) transients and ICaL were not affected by LPS; however, Na(+)/Ca(2+) exchanger (NCX) activity was apparently increased. We conclude that TLR4 activation decreased Ito, which increased AP duration via a MyD88-independent, IRF3-dependent pathway. The longer action potential, associated with enhanced Ca(2+) efflux via NCX, could explain the presence of arrhythmias in the LPS group.

Relevance: 30.00%

Abstract:

Sigma phase is a deleterious phase that can form in duplex stainless steels during heat treatment or welding. In order to track this transformation, the ferrite and sigma percentages and the hardness were measured on samples of UNS S31803 duplex stainless steel submitted to heat treatment. These results were compared with measurements obtained from ultrasound and eddy-current techniques, i.e., velocity and impedance, respectively. Additionally, backscattered signals produced by wave propagation were acquired during the ultrasonic inspection, as was magnetic Barkhausen noise during the magnetic inspection. Both signal types were processed via a combination of detrended-fluctuation analysis (DFA) and principal component analysis (PCA). The techniques used proved to be sensitive to changes in the samples related to sigma phase formation due to heat treatment. Furthermore, these methods have the advantage of being nondestructive.
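As a hedged sketch of the DFA-plus-PCA combination named in the abstract (not the authors' processing chain), the example below computes DFA fluctuation curves for synthetic one-dimensional signals standing in for the acquired backscatter or Barkhausen records, and projects them onto their first two principal components with scikit-learn.

```python
# Hedged sketch of the DFA + PCA combination named in the abstract (not the
# authors' processing chain). Synthetic 1-D signals stand in for the acquired
# backscatter / Barkhausen records; window sizes are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def dfa_fluctuations(x, scales):
    """Return the DFA fluctuation F(s) for each window size s (order-1 detrending)."""
    y = np.cumsum(x - np.mean(x))           # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq_res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                  for seg in segments]
        F.append(np.sqrt(np.mean(sq_res)))
    return np.array(F)

rng = np.random.default_rng(0)
scales = np.array([16, 32, 64, 128, 256])
# Two placeholder groups, e.g. "before" and "after" heat treatment.
signals = ([rng.standard_normal(4096) for _ in range(10)]
           + [np.cumsum(rng.standard_normal(4096)) * 0.05 for _ in range(10)])
features = np.log([dfa_fluctuations(s, scales) for s in signals])

scores = PCA(n_components=2).fit_transform(features)
print("first two principal components of the DFA curves:")
print(np.round(scores, 2))
```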

Relevance: 30.00%

Abstract:

Importin alpha is the nuclear import receptor that recognizes classical monopartite and bipartite nuclear localization signals (NLSs). The structure of mouse importin alpha has been determined at 2.5 Angstrom resolution. The structure shows a large C-terminal domain containing armadillo repeats, and a less structured N-terminal importin beta-binding domain containing an internal NLS bound to the NLS-binding site. The structure explains the regulatory switch between the cytoplasmic, high-affinity form, and the nuclear, low-affinity form for NLS binding of the nuclear import receptor predicted by the current models of nuclear import. Importin beta conceivably converts the low- to high-affinity form by binding to a site overlapping the autoinhibitory sequence. The structure also has implications for understanding NLS recognition, and the structures of armadillo and HEAT repeats.