912 results for Technology and state
The Technology Gap and the Growth of the Firm: A Case Study of China's Mobile-phone Handset Industry
Abstract:
We examine how local Chinese firms confronted with a technology gap have achieved growth, using the Chinese handset industry as a case study. Lacking technology of their own, Chinese local firms have turned to outside firms for development, design, and manufacturing, while focusing themselves on sales and marketing, where their familiarity with the Chinese market gives them an advantage. Consequently, by establishing a growth condition in which their choice of firm boundaries counterbalances the technology gap, they have been able to expand their market share relative to foreign firms.
Abstract:
Developing countries are experiencing unprecedented levels of economic growth. As a result, they will be responsible for most of the future growth in energy demand and greenhouse gas (GHG) emissions. Curbing GHG emissions in developing countries has become one of the cornerstones of a future international agreement under the United Nations Framework Convention on Climate Change (UNFCCC). However, setting caps for developing countries' GHG emissions has encountered strong resistance in the current round of negotiations. Continued economic growth that allows poverty eradication remains the main priority for most developing countries, and caps are perceived as a constraint on future growth prospects. The development, transfer and use of low-carbon technologies have more positive connotations and are seen as a potential path towards low-carbon development. So far, the success of the UNFCCC process in improving the levels of technology transfer (TT) to developing countries has been limited. This thesis analyses the causes of such limited success and seeks to improve understanding of what constitutes TT in the field of climate change, to establish the factors that enable it in developing countries, and to determine which policies could be implemented to reinforce these factors. Despite the wide recognition of the importance of technology and knowledge transfer to developing countries in the climate change mitigation policy agenda, this issue has not received sufficient attention in academic research. Current definitions of climate change TT barely take into account the perspective of the actors involved in actual climate change TT activities, while existing measurements do not reflect the diversity of channels through which transfers happen or the outputs and effects they convey.
Furthermore, the enabling factors for TT in non-BRIC (Brazil, Russia, India, China) developing countries have seldom been investigated, and policy recommendations to improve the level and quality of TT to developing countries have not been adapted to the specific needs of the highly heterogeneous countries commonly grouped under the label "developing countries". This thesis contributes to the climate change TT debate from the perspective of a smaller emerging economy (Chile) and through a quantitative analysis of enabling factors for TT in a large sample of developing countries. Two methodological approaches are used to study climate change TT: comparative case study analysis and quantitative analysis. The comparative case studies analyse TT processes in ten cases based in Chile, all of which share the same economic, technological and policy frameworks, making it possible to draw conclusions on the enabling factors and obstacles operating in TT processes. The quantitative analysis uses three methodologies – principal component analysis, multiple regression analysis and cluster analysis – to assess the performance of developing countries on a number of enabling factors, to examine the relationship between these factors and indicators of TT, and to group developing countries with similar performance. The findings of this thesis are structured to answer four main research questions: What constitutes technology transfer and how does it happen? Is it possible to measure technology transfer, and what are the main challenges in doing so? Which factors enable climate change technology transfer to developing countries? And how do different developing countries perform on these enabling factors, and how can differentiated policy priorities be defined accordingly?
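The quantitative pipeline this abstract describes (principal components over country-level enabling factors, followed by clustering of countries with similar profiles) can be sketched with NumPy; the data, the factor names in the comments, and the tiny k-means below are illustrative placeholders, not the thesis's dataset or implementation:

```python
import numpy as np

# Hypothetical enabling-factor indicators for six developing countries
# (columns could be, e.g., R&D intensity, FDI inflows, schooling, grid
# access); all values are illustrative placeholders, not thesis data.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

# Principal component analysis: centre the data, then project it onto
# the leading eigenvectors of the sample covariance matrix.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = Xc @ eigvecs[:, order[:2]]    # scores on the two leading PCs

# Cluster analysis: a tiny k-means grouping countries whose
# enabling-factor profiles (PC scores) are similar.
def kmeans(points, k, iters=50):
    centres = points[:k].copy()
    for _ in range(iters):
        dists = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels

labels = kmeans(scores, k=2)
```

The regression step of the methodology (relating such factors to TT indicators) would fit naturally on top of the same score matrix.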
Abstract:
ABSTRACT Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored across different domains and through various real-world applications. Extracting useful knowledge from such huge amounts of data usually cannot be done manually and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model, or classifier, from available training data; second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). Besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and as stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically use Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data.
They are proposed from two different points of view. The first method, named CB-MBC, is based on a greedy forward wrapper selection approach, while the second, named MB-MBC, is a constraint-based filter approach built on Markov blankets. Both methods are applied to two important real-world problems: the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy, as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and in need of updating. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any of these occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged.
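The drift test at the heart of CPL-DS (a Kullback-Leibler divergence between windows, with a bootstrap-calibrated threshold) can be illustrated for the feature-drift case. This is a generic sketch of the idea rather than the thesis's implementation; window sizes, bin counts, and the bootstrap scheme are arbitrary choices for illustration:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two (unnormalized) histograms."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def feature_drift(old, new, bins=10, n_boot=200, alpha=0.05, seed=0):
    """Flag feature drift when KL(old-window || new-window), computed on
    binned feature values, exceeds the (1 - alpha) quantile of a
    bootstrap null distribution built by resampling the old window."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([old, new]), bins=bins)
    p = np.histogram(old, bins=edges)[0].astype(float)
    q = np.histogram(new, bins=edges)[0].astype(float)
    observed = kl_divergence(p, q)
    null = []
    for _ in range(n_boot):
        a = rng.choice(old, size=old.size, replace=True)
        b = rng.choice(old, size=new.size, replace=True)
        pa = np.histogram(a, bins=edges)[0].astype(float)
        pb = np.histogram(b, bins=edges)[0].astype(float)
        null.append(kl_divergence(pa, pb))
    return observed > float(np.quantile(null, 1 - alpha))

rng = np.random.default_rng(1)
stable = feature_drift(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
drifted = feature_drift(rng.normal(0, 1, 500), rng.normal(3, 1, 500))
```

A shift of three standard deviations in the feature distribution produces an observed divergence far above the bootstrap threshold, triggering the retraining branch that CPL-DS handles with EM.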
CPL-DS is general in that it can be applied to several classification models. Using two different models, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective at detecting different kinds of drift from partially labeled data streams, while maintaining good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
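The drift monitor used by LA-MB-MBC and GA-MB-MBC (the Page-Hinkley test over a stream of log-likelihood scores) is simple to sketch in its standard form; the parameter values and the synthetic stream below are illustrative, not the thesis's settings:

```python
class PageHinkley:
    """Page-Hinkley change detector for a drop in a monitored score.
    Signals when the cumulative deviation of the score below its running
    mean exceeds `threshold`; `delta` is a small tolerance against noise."""

    def __init__(self, delta=0.005, threshold=10.0):
        self.delta = delta
        self.threshold = threshold
        self.mean = 0.0      # running mean of the score
        self.n = 0
        self.cum = 0.0       # cumulative sum of (mean - score - delta)
        self.min_cum = 0.0   # minimum of the cumulative sum so far

    def update(self, score):
        self.n += 1
        self.mean += (score - self.mean) / self.n
        self.cum += self.mean - score - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold

# Synthetic stream of per-sample log-likelihoods: the model fits well
# (around -1.0) until sample 200, after which a concept drift degrades
# the fit (around -4.0).
stream = [-1.0] * 200 + [-4.0] * 100
detector = PageHinkley()
fired_at = next((t for t, ll in enumerate(stream, 1)
                 if detector.update(ll)), None)
```

Once the detector fires, the locally adaptive variant would revise only the affected parts of the network, while the globally adaptive one relearns the classifier.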
Abstract:
ABSTRACT The main objective of the present work is to study and exploit two-dimensional electron gas (2DEG) structures based on In-related nitride compounds. Many open questions remain concerning indium nitride and its alloys, some of which are addressed in this study. In particular, technology- and material-related topics are the focus of interest, regarding both InN material and InAl(Ga)N/GaN heterostructures (HSs) as well as their application to advanced devices. After an analysis of the dependence of InN properties on processing treatments (plasma-based and thermal), the problem of achieving rectifying electrical behaviour is taken into consideration. Its difficulty is due to the presence of surface electron accumulation (SEA) in the form of a 2DEG, caused by Fermi level pinning. The use of electrochemical methods, compared to standard microelectronic techniques, helped in the successful realization of this task; in particular, reversible modulation of the SEA is accomplished. In heterostructures such as InAl(Ga)N/GaN, the 2DEG is present at the interface between GaN and InAl(Ga)N even without an external bias (normally-on structures). The technology for fabricating normally-off (E-mode) high-electron-mobility transistors (HEMTs) in such heterostructures is investigated. An alkali-based wet-etching method is analysed, highlighting the structural modifications undergone by the barrier. Precise control of the etched material is crucial in order to obtain a recessed structure for HEMT applications with minimal defect density and inhomogeneity. The dependence of the etch rate on the as-grown properties is observed and discussed. Fundamental investigations of InN, related to the physics of this degenerate material, are also presented. With the help of electrolyte gating (EG), the shift in Raman peaks is correlated with a variation in surface electron density.
As far as device applications are concerned, given the current state of the technology and the material quality of InN, not yet suitable for working devices, the focus is directed to applications of InAl(Ga)N/GaN HSs. Thanks to the advantages of a very thin barrier layer compared to standard AlGaN/GaN technology, this structure is suitable for high-sensitivity applications, the 2DEG channel being closer to the surface. In fact, the pH sensitivity obtained is comparable to the state of the art in terms of surface potential variations, and, due to the ultrathin barrier, the current variation with pH can be recorded with no need for an external reference electrode. Moreover, 2DEG photoconductive structures present a high photoconductive gain, due mostly to the high electric field at the interface and hence the strong separation of photogenerated electron-hole pairs. The use of Schottky metallizations (Schottky photodiodes and metal-semiconductor-metal structures) reduces the dark current compared to photoconductors, and the thin barrier helps to increase the carrier extraction efficiency. Gain is obtained in all the device structures investigated. Even though they present persistent photoconductivity (PPC), the devices proved faster than the PPC-related decay values reported in the literature for photoconductive systems.
Abstract:
Innovation studies have been of interest not only to scholars from various fields such as economics, management and sociology, but also to industrial practitioners and policy makers. In this vast and fruitful field, the theory of diffusion of innovations, driven by a sociological approach, has played a vital role in our understanding of the mechanisms behind industrial change. In this paper, our aim is to give a state-of-the-art review of diffusion-of-innovation models in a structural and conceptual way, with special reference to photovoltaics. We argue first, as underlying background, how diffusion of innovations theory differs from other innovation studies. Second, we give a brief taxonomical review of modelling methodologies together with comparative discussion. Finally, we put this wealth of modelling in the context of photovoltaic diffusion and suggest some future directions.
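Among the modelling methodologies such a review covers, the Bass model is the canonical aggregate diffusion model and is frequently applied to PV adoption. A minimal sketch follows; the parameter values are illustrative, not fitted to photovoltaic data:

```python
import math

def bass_cumulative(t, p, q, m=1.0):
    """Cumulative adoptions at time t under the Bass diffusion model:
    p is the coefficient of innovation (external influence), q the
    coefficient of imitation (word of mouth), m the market potential."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative parameters (not fitted to PV data); q > p yields the
# classic S-shaped diffusion curve.
p, q = 0.03, 0.38
adoption = [bass_cumulative(t, p, q) for t in range(31)]
```

Adoption starts at zero, rises along an S-curve, and saturates at the market potential, which is the qualitative pattern most PV diffusion studies set out to reproduce.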
Abstract:
This paper presents a theoretical analysis of a storage-integrated solar thermophotovoltaic (SISTPV) system operating in steady state. These systems combine thermophotovoltaic (TPV) technology and high-temperature phase-change materials (PCM) for thermal storage in the same unit, offering great potential in terms of efficiency, cost reduction and storage energy density. The main attraction of the proposed system is its simplicity and modularity compared to conventional Concentrated Solar Power (CSP) technologies, mainly due to the absence of moving parts. In this paper we analyze the use of silicon as the phase-change material. Silicon is an excellent candidate because of its high melting point (1680 K) and its very high latent heat of fusion of 1800 kJ/kg, about ten times greater than that of conventional PCMs like molten salts. For a simple system configuration, we demonstrate that overall conversion efficiencies of up to ~35% are approachable, and higher efficiencies are expected by incorporating more advanced devices such as multijunction TPV cells or narrow-band selective emitters, adopting near-field TPV configurations, or enhancing the convective/conductive heat transfer within the PCM. We also discuss the optimum system configurations and provide general guidelines for designing these systems. Preliminary estimates of night-time operation indicate that it is possible to achieve over 10 h of operation with a relatively small quantity of silicon.
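The energy-density argument can be made concrete with a back-of-the-envelope calculation. Silicon's latent heat is the figure given in the abstract; the molten-salt value is a typical literature figure, used here only to illustrate the "about ten times" claim:

```python
# Back-of-the-envelope latent-heat storage comparison. Silicon's latent
# heat (1800 kJ/kg) is the figure given above; the molten-salt value
# (~180 kJ/kg, typical of solar salt) is an illustrative literature-style
# figure, not a number from the paper.
L_SILICON = 1800.0     # latent heat of fusion, kJ/kg
L_MOLTEN_SALT = 180.0  # kJ/kg (illustrative)

def latent_storage_kwh(mass_kg, latent_kj_per_kg):
    """Energy stored across the phase change, in kWh (1 kWh = 3600 kJ)."""
    return mass_kg * latent_kj_per_kg / 3600.0

ratio = L_SILICON / L_MOLTEN_SALT              # silicon vs molten-salt PCM
per_tonne = latent_storage_kwh(1000.0, L_SILICON)  # kWh per tonne of Si
```

Half a megawatt-hour of latent storage per tonne of silicon is what makes the >10 h night-time operation estimate plausible with a relatively small PCM mass.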
Abstract:
During the last decades, photovoltaic (PV) modules and their associated architectural materials have increasingly been incorporated into the building envelope, such as façades, roofs and skylights, in urban centers. This paper analyzes the state of the art of the PV elements and construction materials advertised as BIPV products by the most important companies in the world. For this purpose, 136 companies and 445 PV elements have been investigated and analyzed from a technical and architectural point of view. The study is divided into two main groups according to the industry producing the product: BIPV modules, which come from PV module manufacturers and consist of standard PV modules with some variations in aesthetic features, support or dimensions; and PV construction elements, which consist of conventional constructive elements intentionally manufactured with architectural features for photovoltaic integration. Ahead of the conclusions, the solar tile is the most common PV construction element, crystalline silicon is the most widely used PV technology, and BIPV urban furniture is the fastest-growing market in recent years. However, there is a clear absence of innovative elements that meet both the constructive purpose and the quality standards of PV technology at the same time.
A methodology to analyze, design and implement very fast and robust controls of Buck-type converters
Abstract:
ABSTRACT Modern digital electronics present a challenge to designers of power systems. The increasingly high performance of microprocessors, FPGAs (Field-Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits) requires power supplies to comply with very demanding static and dynamic requirements. Specifically, these power supplies are low-voltage/high-current DC-DC converters that need to be designed to exhibit low voltage ripple and low output-voltage deviation under high slew-rate load transients. Additionally, depending on the application, other requirements need to be met, such as providing the load with "Dynamic Voltage Scaling" (DVS), where the converter needs to change its output voltage as fast as possible without overshoot, or "Adaptive Voltage Positioning" (AVP), where the output voltage is slightly reduced the greater the output power.
Of course, from the point of view of industry, the figures of merit of these converters are cost, efficiency and size/weight. Ideally, industry needs a converter that is cheaper, more efficient, smaller and that can still meet the dynamic requirements of the application. In this context, several approaches to improve the figures of merit of these power supplies are followed in industry and academia, such as improving the topology of the converter, improving the semiconductor technology and improving the control. Indeed, the control is a fundamental part in these applications, as a very fast control makes it easier for a given topology to comply with the strict dynamic requirements and, consequently, gives the designer a larger margin of freedom to improve the cost, efficiency and/or size of the power supply. This thesis investigates how to design and implement very fast controls for the Buck converter. It proves that sensing the output voltage is all that is needed to achieve an almost time-optimal response, and a unified design guideline for controls that only sense the output voltage is proposed. Then, in order to ensure robustness in very fast controls, a very accurate modeling and stability analysis of DC-DC converters is proposed that takes into account sensing networks and critical parasitic elements. Also, using this modeling approach, an optimization algorithm is proposed that takes into account component tolerances and distorted measurements. With the use of this algorithm, state-of-the-art very fast analog controls are compared and their capability to achieve a fast dynamic response is ranked depending on the output capacitor used. Additionally, a technique to improve the dynamic response of controllers is also proposed. All the proposals are corroborated by extensive simulations and experimental prototypes.
Overall, this thesis serves as a methodology for engineers to design and implement fast and robust controls for Buck-type converters.
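The load-transient behavior discussed above can be illustrated with a minimal averaged-model simulation of a Buck converter under a load step. This is only a sketch under assumed, illustrative component values (Vin, L, C, kp, load resistances) and a plain proportional voltage-mode controller with duty-cycle feed-forward; it is not the controls or parameters studied in the thesis:

```python
def simulate_buck(vin=12.0, vref=1.2, L=1e-6, C=1000e-6,
                  r_light=1.2, r_heavy=0.12, kp=0.05,
                  dt=5e-8, t_step=0.5e-3, t_end=3e-3):
    """Forward-Euler integration of the averaged Buck model with a
    proportional voltage-mode controller and a resistive load step."""
    d0 = vref / vin                  # feed-forward duty cycle
    v, iL = vref, vref / r_light     # start in light-load steady state
    max_dev, t = 0.0, 0.0
    while t < t_end:
        r = r_light if t < t_step else r_heavy        # load step at t_step
        d = min(max(d0 + kp * (vref - v), 0.0), 1.0)  # clamped duty cycle
        iL += dt * (d * vin - v) / L   # averaged inductor dynamics
        v += dt * (iL - v / r) / C     # output-capacitor dynamics
        if t >= t_step:
            max_dev = max(max_dev, abs(vref - v))
        t += dt
    return v, max_dev

v_final, max_dev = simulate_buck()
print(f"settled at {v_final:.4f} V; worst post-step deviation {max_dev * 1e3:.1f} mV")
```

Even this toy model shows the trade-off the abstract describes: the output-capacitor value and the speed of the control loop together set the worst-case voltage deviation during the load transient.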
Resumo:
LINCOLN UNIVERSITY - On March 25, 1965, a bus loaded with Lincoln University students and staff arrived in Montgomery, Ala. to join the Selma march for racial and voting equality. Although the Civil Rights Act of 1964 was in force, African-Americans continued to feel the effects of segregation. The 1960s was a decade of social unrest and change. In the Deep South, specifically Alabama, racial segregation was a cultural norm resistant to change. Governor George Wallace never concealed his personal viewpoints or the political stance of the white majority, declaring “Segregation now, segregation tomorrow, segregation forever.” The march was aimed at securing for African-Americans their constitutionally protected right to vote. However, Alabama’s deep-rooted culture of racial bias began to be challenged by a shift in American attitudes towards equality. Both blacks and whites wanted to end discrimination by using passive resistance, a movement utilized by Dr. Martin Luther King Jr. That passive resistance was often met with violence, sometimes at the hands of law enforcement and local citizens. The Selma to Montgomery march was a result of a protest for voting equality. The Student Nonviolent Coordinating Committee (SNCC) and the Southern Christian Leadership Conference (SCLC), along with students and others, marched through the streets to bring awareness to the voter registration campaign, which was organized to end discrimination in voting based on race. Violent acts by police officers and others were among the everyday challenges protesters were facing. Forty-one participants from Lincoln University arrived in Montgomery to take part in the 1965 march for equality. Students from Lincoln University’s Journalism 383 class spent part of their 2015 spring semester researching the historical event. Here are their stories: Peter Kellogg “We’ve been watching the television, reading about it in the newspapers,” said Peter Kellogg during a February 2015 telephone interview.
“Everyone knew the civil rights movement was going on, and it was important that we give him (Robert Newton) some assistance … and Newton said we needed to get involved and do something,” said Kellogg, a lecturer at Lincoln University in the 1960s, describing how the bus trip originated. “That’s why the bus happened,” Kellogg said. “Because of what he (Newton) did - that’s why Lincoln students went and participated.” “People were excited and the people along the sidewalk were supportive,” Kellogg said. However, the mood flipped from excitement to fear and intimidation. “It seemed as though at every office building there was a guy in a blue uniform with binoculars standing in the crowd with troops and police. And if looks could kill me, we could have all been dead.” He says the hatred and intimidation were intense. Kellogg, being white, was an immediate target among many white people. He didn’t realize how dangerous the event in Alabama was until he and the others on the bus heard about the death of Viola Liuzzo. The married mother of five from Detroit was shot and killed by members of the Ku Klux Klan while shuttling activists to the Montgomery airport. “We found out about her death on the ride back,” Kellogg recalled. “Because it was a loss of life, and it shows the violence … we could have been exposed to that danger!” After returning to LU, Kellogg’s outlook on life took a dramatic turn. Kellogg noted King’s belief that a person should be willing to die for important causes. “The idea is that life is about something larger and more important than your own immediate gratification, and career success or personal achievements,” Kellogg said. “The civil rights movement … it made me, it made my life more significant because it was about something important.” The civil rights movement influenced Kellogg to change his career path and to become a black history lecturer. To this day, he has no regrets and believes that his choices made him a better individual.
The bus ride to Alabama, he says, began with the actions of just one student. Robert Newton Robert Newton was the initiator, recruiter and leader of the Lincoln University movement to join Dr. Martin Luther King’s march in Selma. “In the ’60s, many of the civil rights activists came out of college,” said Newton during a recent phone interview. Many of the events that involved segregation compelled college students to fight for equality. “We had selected boycotts of merchants, when blacks were not allowed to try on clothes,” Newton said. “You could buy clothes at department stores, but no blacks could work at the department stores as salespeople. If you bought clothes there you couldn’t try them on, you had to buy them first and take them home and try them on.” Newton said the students risked their lives to be a part of history and influence change. He recognized not only the historic participation of his fellow Lincolnites, but also the other college students and historically black colleges and universities that played a vital role in history. “You had the S.N.C.C. organization, in terms of voting rights and other things, including a lot of participation and working off the bureau,” Newton said. Other schools and places such as UNT, Greenville and Howard University and other historically black schools had groups that came out as leaders. Newton believes that much has changed from 50 years ago. “I think we’ve certainly come a long way from what I’ve seen from the standpoint of growing up outside of Birmingham, Alabama,” Newton said. He believes that college campuses today are more organized in their approach to social causes. “The campus appears to be somewhat more integrated amongst students in terms of organizations and friendships.” Barbara Flint Dr. Barbara Flint grew up in the southern part of Arkansas and came to Lincoln University in 1961.
She describes her experience at Lincoln as “being at Lincoln when the world was changing.” She was an active member of Lincoln’s History Club, which focused on current events and issues and influenced her decision to join the Selma march. “The first idea was to raise some money and then we started talking about ‘why can’t we go?’ I very much wanted to be a living witness in history.” Reflecting on the march and journey to Montgomery, Flint describes it as being filled with tension. “We were very conscious of the fact that once we got on the road past Tennessee we didn’t know what was going to happen,” said Flint during a February 2015 phone interview. “Many of the students had not been beyond Missouri, so they didn’t have that sense of what happens in the South. Having lived there you knew the balance as well as what is likely to happen and what is not likely to happen. As my father used to say, ‘you have to know how to stay on that line of balance.’” Upon arriving in Alabama she remembers the feeling of excitement and relief from everyone on the bus. “We were tired and very happy to be there and we were trying to figure out where we were going to join and get into the march,” Flint said. “There were so many people coming in and then we were also trying to stay together; that was one of the things that really stuck out for me, not just for us but the people who were coming in. You didn’t want to lose sight of the people you came with.” Flint says she was keenly aware of her surroundings. For her, it was more than just marching forward. “I can still hear those helicopters now,” Flint recalled. “Every time the helicopters would come over the sound would make people jump and look up - I think that demonstrated the extent of the tenseness that was there at the time because the helicopters kept coming over every few minutes.” She said that the marchers sang “we are not afraid,” but that fear remained with every step.
“Just having been there and being a witness and marching you realize that I’m one of those drops that’s going to make up this flood and with this flood things will move,” said Flint. As a student at Lincoln in 1965, Flint says the Selma experience undoubtedly changed her life. “You can’t expect to do exactly what you came to Lincoln to do,” Flint says. “That march - along with all the other marchers and the action that was taking place - directly changed the paths that I and many other people at Lincoln would take.” She says current students and new generations need to reflect on their personal role in society. “Decide what needs to be done and ask yourself ‘how can I best contribute to it?’” Flint said. She notes technology and social media can be used to reach audiences in ways unavailable to her generation in 1965. “So you don’t always have to wait for someone else to step out there and say ‘let’s march,’ you can express your vision and your views and you have the means to do so (so) others can follow you.” Jaci Newsom Jaci Newsom came to Lincoln in 1965 from Atlanta. She came to Lincoln to major in sociology, and being in Jefferson City was largely different from what she had grown up with. “To be able to come into a restaurant, sit down and be served a nice meal was eye-opening to me,” said Newsom during a recent interview. She eventually became accustomed to the relaxed attitude of Missouri and was shocked by the situation she encountered on an out-of-town trip. “I took a bus trip from Atlanta to Pensacola and I encountered the worst racism that I have ever seen. I was at a bus stop, I went in to be served and they would not serve me. There was a policeman sitting there at the table and he told me that privately owned places could select not to serve you.” Newsom describes her experience of marching in Montgomery as being one with a purpose. “We felt as though we achieved something - we felt a sense of unity,” Newsom said.
“We were very excited (because) we were going to hear from Martin Luther King. To actually be in the presence of him and the other civil rights workers there was just such enthusiasm and excitement yet there was also some apprehension of what we might encounter.” Many of the marchers showed their inspiration and determination while pressing forward towards the grounds of the Alabama Capitol building. Newsom recalled that the marchers were singing the lyrics “ain’t gonna let nobody turn me around” and “we shall overcome.” “I started seeing people just like me,” Newsom said. “I don’t recall any of the scowling, the hitting, the things I would see on TV later. I just saw a sea of humanity marching towards the Capitol. I don’t remember what Martin Luther King said but it was always the same message: keep the faith; we’re going to get where we’re going and let us remember what our purpose is.” Newsom offers advice on what individuals can do to make their society a more productive and peaceful place. “We have come a long way and we have ways to change things that we did not have before,” Newsom said. “You need to work in positive ways to change.” Referencing the recent unrest in Ferguson, Mo., she believes that people become destructive as a way to show and vent anger. Her generation, she says, was raised to react in lawful ways and to believe in hope. “We have faith to do things in a way that was lawful and it makes me sad what people do when they feel without hope, and there is hope,” Newsom says. “Non-violence does work - we need to include everyone to make this world a better place.” Newsom graduated from Lincoln in 1969 and describes her experience at Lincoln as, “I grew up and did more growing at Lincoln than I think I did for the rest of my life.”
Resumo:
This paper describes a range of opportunities for military and government applications of human-machine communication by voice, based on visits and contacts with numerous user organizations in the United States. The applications include some that appear to be feasible by careful integration of current state-of-the-art technology and others that will require a varying mix of advances in speech technology and in integration of the technology into applications environments. Applications that are described include (1) speech recognition and synthesis for mobile command and control; (2) speech processing for a portable multifunction soldier's computer; (3) speech- and language-based technology for naval combat team tactical training; (4) speech technology for command and control on a carrier flight deck; (5) control of auxiliary systems, and alert and warning generation, in fighter aircraft and helicopters; and (6) voice check-in, report entry, and communication for law enforcement agents or special forces. A phased approach for transfer of the technology into applications is advocated, where integration of applications systems is pursued in parallel with advanced research to meet future needs.
Innovative analytical strategies for the development of sensor devices and mass spectrometry methods
Resumo:
The work presented in this doctoral thesis focuses on the development of innovative analytical strategies based on sensor devices and mass spectrometry techniques in the fields of biology and food safety. The first chapter addresses methodological and applicative aspects of sensor-based procedures for the identification and determination of biomarkers associated with celiac disease. In this context, two immunosensors were developed, one with piezoelectric transduction and one with amperometric transduction, for the detection of the anti-tissue-transglutaminase antibodies associated with this disease. The innovation of these devices lies in the immobilization of the tTG enzyme in its open conformation (Open-tTG), which has been shown to be the one principally involved in pathogenesis. Based on the results obtained, both systems proved to be a valid alternative to the screening tests currently used in the diagnosis of celiac disease. Remaining within the context of celiac disease, further research in this doctoral thesis concerned the development of reliable methods for the quality control of “gluten-free” products. The second chapter deals with the development of a mass spectrometry method and of a competitive immunosensor for the detection of prolamins in “gluten-free” foods. An LC-ESI-MS/MS method was developed, based on targeted analysis with selected-reaction-monitoring signal acquisition, for the identification of gluten from several cereals potentially toxic to celiac patients. In addition, the work focused on a competitive immunosensor for the detection of gliadin, as a rapid screening method for flours. Both systems were optimized using rice-flour mixtures spiked with gliadin, avenins, hordeins and secalins in the case of the LC-MS/MS system, and with gliadin alone in the case of the sensor.
Finally, the analytical systems were validated by analyzing both raw materials (flours) and foods (biscuits, pasta, bread, etc.). The approach developed in mass spectrometry opens the way to a multiplexed screening test for assessing the safety of products declared “gluten-free”, while further studies will be needed to find extraction conditions compatible with the competitive immunoassay, which for now is applicable only to the analysis of flours extracted with ethanol. The third chapter of this thesis concerns the development of new methods for the detection of HPV, Chlamydia and Gonorrhoeae in biological fluids. Paper strips were chosen as the substrate because they can provide a valid detection platform, offering advantages thanks to their low cost, the possibility of producing portable devices, and the ability to read the result visually without the need for instrumentation. The methodology developed is very simple, does not require complex instrumentation, and is based on isothermal rolling-circle amplification (RCA) of the target. Also of fundamental importance is the use of colored nanoparticles which, having been functionalized with a DNA sequence complementary to the amplified RCA target, allow its detection with the naked eye by means of paper filters. These strips were tested on real samples, allowing discrimination between positive and negative samples within 10-15 minutes, opening a new route towards tests highly competitive with those currently on the market.
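The competitive immunosensor described above produces a signal that decreases as analyte (e.g. gliadin) concentration increases; such assays are commonly calibrated with a four-parameter logistic (4PL) curve. The sketch below illustrates that model with hypothetical calibration parameters (`a`, `d`, IC50 `c`, slope `b`), not values from the thesis:

```python
def logistic4(conc, a, d, c, b):
    """Competitive 4PL response: a = signal at zero analyte,
    d = signal at saturation, c = IC50, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def inverse4(signal, a, d, c, b):
    """Back-calculate analyte concentration from a measured signal."""
    return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration: max signal 1.0, background 0.05,
# IC50 = 10 ppm gliadin, slope factor 1.2.
params = (1.0, 0.05, 10.0, 1.2)
signal_at_ic50 = logistic4(10.0, *params)  # halfway between a and d
print(round(signal_at_ic50, 3), round(inverse4(signal_at_ic50, *params), 2))
```

In practice the four parameters would be fitted to standards (e.g. the gliadin-spiked rice-flour mixtures mentioned above), and `inverse4` would convert sample signals into concentrations for screening against a regulatory threshold.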
Resumo:
Teaching architecture is experiencing a moment of opportunity. New methods, such as constructivist pedagogy, based on complexity and integration, are yet to be explored. In this context of opportunity, architectural education has a duty to integrate complexity into its curriculum. Teaching methods should also assume the indeterminacy and contingency inherent in any complex process. If we accept this condition as part of any teaching method, the notion of truth or falsehood becomes irrelevant, and teaching can instead focus on the contingency of language. Traditionally, technology is defined as the language of science. If we assume contingency to be one of the characteristics of language, we can say that technology is also contingent. Therefore, technology teaching can focus on redefining its own vocabulary, and redefining the technological vocabulary could be an area of opportunity for education in architecture. Students could redefine their own tools, the technology, in order to innovate with them later: first redefine the vocabulary (the technology), and then construct the new language (the technique). In the case of Building Technology subjects, teaching should also incorporate a more holistic approach to enhance interdisciplinary transfer. Technical transfer, whether from nature or from other technologies into the field of architecture, is considered a field of great educational possibilities. Moreover, students gain a much broader technical approach, one that transgresses the boundaries of the architectural discipline.
Resumo:
We explore the role of business services in knowledge accumulation and growth and the determinants of knowledge diffusion including the role of distance. A continuous time model is estimated on several European countries, Japan, and the US. Policy simulations illustrate the benefits for EU growth of the deepening of the single market, the reduction of regulatory barriers, and the accumulation of technology and human capital. Our results support the basic insights of the Lisbon Agenda. Economic growth in Europe is enhanced to the extent that: trade in services increases, technology accumulation and diffusion increase, regulation becomes both less intensive and more uniform across countries, and human capital accumulation increases in all countries.
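As a purely illustrative sketch (not the continuous-time model estimated in the paper), a minimal knowledge-diffusion dynamic can show how distance-attenuated diffusion lets a follower economy close the knowledge gap with a leader. All names and parameter values below are hypothetical:

```python
import math

def simulate_diffusion(k_leader=1.0, k_follower=0.2, g=0.02,
                       delta=0.5, distance=1.0, decay=0.3,
                       dt=0.01, years=100):
    """Euler integration: both knowledge stocks grow at rate g; the
    follower additionally absorbs the gap to the leader at a speed
    delta * exp(-decay * distance), so diffusion weakens with distance."""
    absorb = delta * math.exp(-decay * distance)
    for _ in range(int(years / dt)):
        gap = k_leader - k_follower
        k_follower += dt * (g * k_follower + absorb * gap)
        k_leader += dt * g * k_leader
    return k_follower / k_leader

ratio = simulate_diffusion()
print(f"follower/leader knowledge ratio after 100 years: {ratio:.3f}")
```

In this toy dynamic the follower/leader ratio converges to 1 at a rate set by the absorption term, so a larger `distance` or weaker diffusion (`delta`) slows convergence, echoing the paper's point that diffusion and its determinants, including distance, shape growth.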
Resumo:
With a growing number of threats to governance in the international system resulting from globalization and technological innovation, it is no surprise that states have come to rely more heavily on each other and the global community for support. While the EU is partially constrained by the ultimate outcome of its own integration process, limited knowledge on this issue, and the national interests of its Member States, other governments are also experiencing difficulty in domestic implementation of international resolutions. To better understand the impact of the most recent sanctioning efforts, this paper will explore the development of the non-proliferation regime, examine implementation mechanisms of non-proliferation agreements, and analyze the impact of increased cooperation among states to thwart the spread of WMD technology and material. Case studies of unilateral measures undertaken by the US and EU against Iran will provide insight into the political and economic implications of economic sanctions imposed by individual governments. New and emerging methods for preventing rogue states and non-state actors from acquiring the means to develop WMD will also be discussed in an effort to inform future policy debates on this critical topic.