23 results for Collaborative learning and applications
at Universidad Politécnica de Madrid
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve new challenges that might arise, in both academic and real-world applications. There are several machine learning techniques, depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also related tasks such as validation. When only some of the available data are labeled while the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because the labeling process is costly or impractical, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of labels, is the relevance of data features. Data are characterized by features, but not all of them are necessarily relevant, or equally relevant, to the learning process. A recent clustering trend related to data relevance, called subspace clustering, holds that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used for the whole clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As mentioned above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search for subspaces first, before searching for clusters. It assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters with traditional clustering techniques.
The second algorithm uses the available data labels to search for subspaces and clusters at the same time, in an iterative process. It assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the subspace search into a model-based clustering approach. The proposals are tested on different real and synthetic databases, and comparisons with other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems today: modeling the human brain. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which not only makes any modeling attempt impossible but also hinders day-to-day work for lack of a common way to name neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
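As a rough illustration of the soft-clustering idea in the second proposal (integrating known labels into model-based clustering), the sketch below runs an EM loop for a Gaussian mixture in which the responsibilities of labeled instances are clamped to their known class. It is a minimal sketch of semi-supervised model-based clustering in general, with no subspace search, and all names and parameters are illustrative rather than the thesis algorithm.

```python
# Minimal sketch of semi-supervised model-based (soft) clustering:
# labeled instances have their cluster responsibilities fixed, unlabeled
# ones get posterior responsibilities from the current Gaussian mixture.
# Illustrative only -- not the algorithm proposed in the thesis.
import numpy as np
from scipy.stats import multivariate_normal

def semisupervised_gmm(X, y, k, n_iter=50):
    # y[i] is the known cluster label in {0..k-1}, or -1 if unlabeled
    n, d = X.shape
    rng = np.random.default_rng(0)
    means = X[rng.choice(n, k, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibilities for every instance
        r = np.column_stack([
            weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
            for j in range(k)])
        r /= r.sum(axis=1, keepdims=True)
        labeled = y >= 0
        r[labeled] = np.eye(k)[y[labeled]]      # clamp known labels
        # M-step: update mixture parameters from responsibilities
        nk = r.sum(axis=0) + 1e-12
        weights = nk / n
        means = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return r, means
```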
Abstract:
Social software tools have become an integral part of students' personal lives and their primary communication medium. Likewise, these tools are increasingly entering the enterprise world (within the recent trend known as Enterprise 2.0) and becoming part of everyday work routines. Aiming to keep pace with job requirements and to position learning as an integral part of students' lives, the field of education is challenged to embrace social software. Personal Learning Environments (PLEs) emerged as a concept that makes use of social software to facilitate collaboration, knowledge sharing, group formation around common interests, active participation and reflective thinking in online learning settings. Furthermore, social software allows for establishing and maintaining one's presence in the online world. By being aware of a student's online presence, a PLE is better able to personalize the learning settings, e.g., through recommendation of content to use or people to collaborate with. Aiming to explore the potential of online presence for providing recommendations in PLEs, within the scope of the OP4L project we have developed a software solution based on a synergy of Semantic Web technologies, online presence and socially oriented learning theories. In this paper we present the current results of this research work.
Abstract:
Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integrating inputs from other neurons and determines which neurons receive a neuron's output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously defines a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature in many complex real-world problems, and probability theory provides a framework for modeling and reasoning with it. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data, and the continuous data can be linear (e.g., the length or width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. A simulation algorithm is then used to build virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. We then address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixture-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline (B-spline) interpolation, where a density is approximated as a linear combination of basis splines.
The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have special properties that rule out the use of classical statistics, so different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, must be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers, and the proposed classifiers are empirically evaluated on real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features, and a web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons from the values of their morphological measurements. A methodology for building a model which captures the opinions of all the experts is then presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering the Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network representing the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet modeling the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors among the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known; instead, a set of class labels is available for each instance. This setting is inspired by the neuron classification problem, where each expert individually provides a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors that represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
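As a hedged illustration of one building block described above, the naive Bayes classifier with directional predictors, the sketch below fits a class-conditional von Mises density to a single angular feature and classifies by maximum posterior. It uses scipy.stats.vonmises maximum-likelihood fitting; the dissertation's exact estimators and the hybrid discrete/Gaussian/directional case are not reproduced, and all names are illustrative.

```python
# Sketch of a naive Bayes classifier whose directional predictor follows a
# class-conditional von Mises distribution (illustrative; parameter
# estimation here is scipy's generic MLE, not the dissertation's method).
import numpy as np
from scipy.stats import vonmises

def fit_von_mises_nb(theta, y):
    # theta: angles in radians, y: class labels
    model = {}
    for c in np.unique(y):
        t = theta[y == c]
        kappa, mu, _ = vonmises.fit(t, fscale=1)   # fix scale; fit kappa, mu
        model[c] = (np.mean(y == c), kappa, mu)    # (prior, concentration, mean)
    return model

def predict_von_mises_nb(model, theta):
    classes = list(model)
    # log posterior (up to a constant) per class for every angle
    scores = np.column_stack([
        np.log(prior) + vonmises.logpdf(theta, kappa, loc=mu)
        for prior, kappa, mu in (model[c] for c in classes)])
    return np.array(classes)[np.argmax(scores, axis=1)]
```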
Abstract:
This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained on a labeled corpus, with a rule-based expert system, which is used to improve the results of the base classifier by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for those noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language for expressing lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus, for comparison with other approaches, and categorization using IPTC metadata, the EUROVOC thesaurus and others. Results show that this approach achieves a precision comparable to that of top-ranked methods, with the added value that it does not require a demanding human expert workload to train.
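To make the hybrid pattern concrete, here is a minimal sketch of a base classifier whose per-document predictions are post-filtered by hand-written term rules, in the spirit described above. The scikit-learn kNN pipeline and the toy rule format are assumptions for illustration; the paper's actual rule language (positive, negative and relevant multiword terms) is richer.

```python
# Sketch of the hybrid pattern: a kNN base classifier whose predictions are
# post-filtered by simple per-category term rules. The rule format and the
# category names are illustrative, not the paper's rule language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

RULES = {  # category -> (positive terms, negative terms); toy example
    "sports": ({"match", "league"}, {"stock", "election"}),
}

def classify(train_texts, train_labels, new_texts):
    vec = TfidfVectorizer()
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(vec.fit_transform(train_texts), train_labels)
    preds = knn.predict(vec.transform(new_texts))
    results = []
    for text, cat in zip(new_texts, preds):
        words = set(text.lower().split())
        pos, neg = RULES.get(cat, (set(), set()))
        if words & neg and not words & pos:
            cat = None          # rule filters a likely false positive
        results.append(cat)
    return results
```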
Abstract:
A high-resolution focused beam line has recently been installed on the AIFIRA ("Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine") facility at CENBG. This nanobeam line, based on a doublet–triplet configuration of Oxford Microbeam Ltd. OM-50™ quadrupoles, offers the opportunity to focus protons, deuterons and alpha particles in the MeV energy range to a sub-micrometer beam spot. The beam optics design has been studied in detail and optimized using detailed ray-tracing simulations, and the full mechanical design of the beam line was reported at the ICNMTA conference in Debrecen in 2008. During the last two years, the lenses have been carefully aligned and the target chamber has been fully equipped with particle and X-ray detectors, microscopes and precise positioning stages. The beam line is now operational and has been used for its first applications to ion beam analysis. Interestingly, this set-up turned out to be a very versatile tool for a wide range of applications. Indeed, even if it was not intended during the design phase, the ion optics configuration offers the opportunity to work either with a high-current microbeam (using the triplet only) or with a lower-current beam of sub-micrometer resolution (using the doublet–triplet configuration). The performance of the CENBG nanobeam line is presented for both configurations. Quantitative data on the beam lateral resolution at different beam currents are provided. Finally, the first results obtained for different types of application are shown, including nuclear reaction analysis at the micrometer scale and the first results on biological samples.
Abstract:
Modern power converters must fulfill many requirements. Most applications where these converters are used demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads like microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps the supply voltage of the microprocessor must be kept within tight limits to ensure its correct operation. Meeting these requirements is not an easy task; complex solutions such as advanced topologies (e.g., multiphase converters) and advanced control strategies are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of power converters does not rely only on the control strategy; the power topology must also be suited to enabling a fast dynamic response. Moreover, in recent years, a fast dynamic response has come to mean not only handling fast load steps but also fast output voltage steps. At least two applications requiring fast voltage changes can be named. The first is low-power microprocessors: the supply voltage is changed according to the workload while the operating frequency of the microprocessor is changed at the same time, achieving an important reduction in voltage-dependent losses. This technique is known as Dynamic Voltage Scaling (DVS). Another application where important energy savings can be achieved by changing the supply voltage is radio-frequency power amplifiers. For example, RF architectures based on 'Envelope Tracking' and 'Envelope Elimination and Restoration' techniques can take advantage of supply-voltage modulation to achieve important energy savings in the power amplifier. However, to achieve these efficiency improvements, a power converter with high efficiency and high enough bandwidth (hundreds of kHz or even tens of MHz) is necessary to provide an adequate supply voltage. The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology; here, dynamic response refers both to load steps and to voltage steps, and it is also of interest to modulate the output voltage of the converter within a specific bandwidth. To accomplish this, the question of what limits the dynamic response of power converters must be answered. Analyzing this question leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance of the converter, which lies in series between the input and the output. This series inductance determines the gain of the converter and provides its regulation capability. Although the energy stored in the filter inductance enables regulation and the filtering of the output voltage, it imposes the limitation that concerns this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the current slew rate through the inductor. Different solutions have been proposed in the literature to reduce the limit imposed by the filter inductor, including many new topologies and improvements to known topologies.
Complex control strategies have also been proposed with the objective of improving the dynamic response of power converters. In the proposed topologies, the energy stored in the series inductor is reduced; examples are multiphase converters, Buck converters operating at very high frequency, or adding a low-impedance path in parallel with the series inductance. Control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples are hysteresis control, V² control and minimum-time control. In some of the proposed topologies, a reduction in the value of the series inductance is achieved, and with it the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter), the dynamic response is improved at the cost of worse efficiency. In this Thesis, a drastic solution is proposed: to completely eliminate the series inductance of the converter. This is a more radical solution than those proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make it difficult to use the topology in single-converter solutions; however, the topology is suitable for power architectures where the energy conversion is performed by more than one converter. When the series inductor is eliminated, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is thus to propose an energy conversion strategy that works without a series inductance. Without it, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all the devices were ideal. If the energy transfer from input to output occurred instantaneously at a load step, conceptually it would not be necessary to store energy at the output of the converter (no output capacitor COUT would be needed), and if the input source were ideal, the input capacitor CIN would not be necessary either (this last feature, no CIN with an ideal VIN, is common to all power converters). In a real implementation, however, parasitic inductances such as the leakage inductance of the transformer and the parasitic inductance of the PCB cannot be avoided, as they are inherent to the implementation of the converter; these parasitic elements do not significantly affect the proposed concept. This Thesis therefore proposes operating the converter without a series inductance to improve its dynamic response; on the other hand, the continuous regulation capability of the converter is lost. It is termed continuous because, as explained throughout the Thesis, discrete regulation is indeed possible: a converter without a filter inductance, and without energy stored in the magnetic element, can produce a limited number of output voltages, and the changes between these output voltage levels are fast.
The proposed energy conversion strategy is implemented by means of a multiphase converter where the coupling of the phases is done by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of the strategy is first analyzed and then verified by simulation and by experimental prototypes. Once the strategy is proved valid, different options for implementing the magnetic structure are analyzed, and three different discrete transformer arrangements are studied and implemented. A converter based on this energy conversion strategy is designed with a different approach than classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency. Low operating frequencies can be chosen to favor efficiency; high operating frequencies (MHz) can be chosen to favor converter size. For this reason, a specific design procedure is proposed for this 'inductorless' conversion strategy. Finally, applications where the features of the proposed conversion strategy (high efficiency with fast dynamic response) are advantageous are identified. Examples are two-stage power architectures, where a high-efficiency converter is needed as the first stage and a second stage provides the fine regulation, and RF power amplifiers, where the supply voltage is modulated following an envelope reference to save power and a high-efficiency converter capable of fast voltage steps is required. The main contributions of this Thesis are the following: the proposal of a conversion strategy that, ideally, stores no energy in the magnetic element; the validation and implementation of the proposed energy conversion strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the proposed strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed conversion strategy open the possibility of solving the problems related to microprocessor powering in a different way. For example, its high efficiency makes it a good candidate for power conditioning, as the first stage in a two-stage power architecture for powering microprocessors.
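As a back-of-the-envelope illustration of the limitation that motivates this work (with illustrative component values, not figures from the Thesis), the current slew rate available from a filter inductor is bounded by the voltage applied across it:

$$\frac{di_L}{dt} = \frac{v_L}{L}$$

For example, with $L = 1\,\mu\mathrm{H}$ and $v_L = 12\,\mathrm{V}$ available across the inductor, the slew rate cannot exceed $12\,\mathrm{A}/\mu\mathrm{s}$, an order of magnitude below the $100\,\mathrm{A}/\mu\mathrm{s}$ load steps quoted above; removing the series inductance removes this bound.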
Abstract:
The full text of this article is available in the PDF provided.
Abstract:
The aim of this study is to evaluate the effects of applying two active learning methodologies (cooperative learning and project-based learning) on the achievement of the problem-solving competence. The study was carried out at the Technical University of Madrid, where these methodologies were applied in two Operating Systems courses. The first hypothesis tested was whether the implementation of active learning methodologies favours the achievement of 'problem solving'. The second hypothesis focused on testing whether students with higher problem-solving competence obtain better academic performance. The results indicated that the active learning methodologies did not produce any significant change in the generic competence 'problem solving' during the period analysed; in this regard, we consider that students should work with these methodologies for a longer period, in addition to receiving specific training. Nevertheless, a close correlation between problem-solving self-appraisal and academic performance was detected.
Abstract:
In the School of Mines of the Technical University of Madrid (UPM), the first year of several degrees has been implemented and adapted to the European Higher Education Area (EHEA). All of the degrees include a first-semester course which gathers all the contents of basic mechanics, from the first kinematics concepts to the plane motion of rigid bodies. Before the Bologna process took place, the authors had established the final assessment of the theoretical contents through open questions of a theoretical-practical character. In the present work, the elaboration of a large database of theoretical-practical questions that students can access online is presented. The questions are divided into thirteen questionnaires, each composed of a number of questions randomly chosen from a certain group in the database. Each group corresponds to a learning objective that the student knows in advance. After answering a questionnaire and checking the grade assigned according to their performance, students can see the correct response displayed on the screen and thoroughly explained by the professors. This represents 10% of the final grade. As students can access the questionnaires as many times as they want, the main goal is the self-assessment of each learning objective, thereby getting students involved in their own learning process so they can decide how much time they need to reach the required level.
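A minimal sketch of the questionnaire mechanism described above: each questionnaire draws a random subset of questions from the database group tied to one learning objective, so repeated attempts see different questions. The data layout and names below are hypothetical.

```python
# Minimal sketch: compose a questionnaire by sampling questions at random
# from the pool associated with one learning objective. Hypothetical schema.
import random

QUESTION_BANK = {  # learning objective -> pool of question ids (toy data)
    "kinematics": ["k01", "k02", "k03", "k04", "k05"],
    "rigid_body_plane_motion": ["r01", "r02", "r03", "r04"],
}

def build_questionnaire(objective, n_questions, seed=None):
    pool = QUESTION_BANK[objective]
    rng = random.Random(seed)          # no seed -> a fresh draw per attempt
    return rng.sample(pool, min(n_questions, len(pool)))

print(build_questionnaire("kinematics", 3))
```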
Abstract:
Basic effects and dynamical and electrical contact issues in the physics of (electrodynamic space) bare tethers are discussed. Scientific experiments and power and propulsion applications, including a paradoxical use of bare tethers in outer-planet exploration, are considered.
Abstract:
The main objective of the present work is to study and exploit two-dimensional electron gas (2DEG) structures based on indium-rich nitride compounds. Several open questions related to indium nitride and its alloys are addressed, focusing on technology and material-related topics for both InN and InAl(Ga)N/GaN heterostructures (HSs), as well as their application to advanced devices. After analyzing the dependence of InN properties on device-processing treatments (plasma-based and thermal), the problem of obtaining a rectifying (electrically blocking) contact is considered. Its difficulty stems from the presence of surface electron accumulation (SEA) in the form of a 2DEG, caused by Fermi-level pinning. Electrochemical methods, compared to standard microelectronic techniques, made this task possible; in particular, reversible modulation of the SEA was accomplished. In heterostructures such as InAl(Ga)N/GaN, the 2DEG is present at the interface between GaN and InAl(Ga)N even without an external bias (normally-on structures). The technology for fabricating normally-off (E-mode) high-electron-mobility transistors (HEMTs) on these heterostructures is investigated. An alkali-based wet-etching method is analysed, highlighting the structural modifications undergone by the barrier. Precise control of the etched material is crucial, in this sense, for obtaining a recessed structure for HEMT applications with the lowest defect density and inhomogeneity. The dependence of the etch rate on the as-grown properties of the samples is observed and discussed. Fundamental investigations of InN, a degenerate material, are also presented: with the help of electrolyte gating (EG), the shift of the Raman peaks is correlated with a variation in surface electron density. Regarding device applications, since the current state of the technology and the material quality of InN are not yet suitable for working devices, the focus is directed to InAl(Ga)N/GaN HSs. Thanks to a much thinner barrier layer than in standard AlGaN/GaN technology, this structure is suitable for high-sensitivity applications, as the 2DEG channel lies closer to the surface. Indeed, the pH sensitivity obtained is comparable to the state of the art in terms of surface-potential variation and, owing to the ultrathin barrier, the current variation with pH can be recorded without an external reference electrode. Moreover, 2DEG photoconductive structures exhibit high photoconductive gain, due mostly to the high electric field at the interface and hence the strong separation of photogenerated electrons and holes. Schottky metallizations (Schottky photodiodes and metal-semiconductor-metal structures) reduce the dark current compared to photoconductors, and the thin barrier increases the carrier-extraction efficiency. Gain is obtained in all the device structures investigated. Although the devices exhibit persistent photoconductivity (PPC), their decay is faster than standard PPC-related values reported in the literature.
Learning and Assessing Competencies: New challenges for Mathematics in Engineering Degrees in Spain.
Abstract:
The introduction of new degrees adapted to the European Higher Education Area (EHEA) has involved a radically different approach to the curriculum. The new programs are structured around the competencies to be acquired. Based on these competencies, teachers must define and develop learning objectives, design teaching methods and establish appropriate evaluation systems. While most Spanish universities have incorporated methodological innovations and evaluation systems different from traditional exams, there is considerable confusion about how to teach and assess competencies and learning outcomes, as teaching and assessment have traditionally focused on knowledge. In this paper we analyze the state of the art in the mathematics courses of the new engineering degrees at some Spanish universities.
Abstract:
This work deals with elements reinforced with both rebars and Recycled Steel Fibres (RSFs). Its main objective is to improve the cracking behaviour of elements subjected to pure bending or to combined bending and axial force, resulting in better serviceability for structures with strict crack-width-control requirements. A particularly interesting type among these are the so-called integral structures, i.e., long jointless structures (bridges and buildings) subjected to gravitational loads and to imposed deformations due to shrinkage, creep and temperature. RSFs are obtained from end-of-life tyres; because the recycling process focuses on the rubber rather than on the steel, they come out crooked and with variable length. Although the effectiveness of RSFs had already been proven by previous research, the innovation of this work consists in proposing the combined action of conventional rebars and RSFs to improve cracking behaviour. The objective is therefore to improve the sustainability of RC structures by using recycled materials on the one hand, and by improving durability on the other. A state of the art on cracking in RC elements is first drawn, and then expanded to elements reinforced with both rebars and fibres (R/FRC elements). The simplified method for the analysis of columns of long jointless structures already proposed by Pérez et al. is also summarized, with special focus on the points that conflict with taking the action of fibres into account. Afterwards, a model describing the sectional deformability and cracking of R/FRC elements is presented, which also accounts for the effect of shrinkage (negative tension stiffening). The model is then used to extend the simplified method for columns, so that a comprehensive methodology for analysing this type of element is available. A preliminary experimental campaign on small beams subjected to pure bending is described, aimed at validating the effectiveness and usability in concrete of two different types of RSF, and their behaviour compared with commercial steel fibres.
With the results and lessons learnt from this campaign in mind, the main experimental campaign is then described, consisting of cracking tests on eight full-scale (1:1) beams in pure bending (varying RSF content, Ø/s,eff and concrete cover) and twelve columns subjected to imposed displacement and axial force (varying RSF content, Ø/s,eff and squashing load ratio). The results of the main campaign are presented and discussed, with particular focus on the improvement in cracking behaviour for the beams and columns, and in structural stiffness for the columns. They are then compared with the predictions of the proposed model. The main parameters studied to describe the cracking and sectional behaviour in the beam tests are crack spacing, mean steel strain and crack width, while for the column tests they are the moment-curvature law, the stress in the rebars and the crack width at the column embedment. The comparison shows satisfactory agreement between experimental results and model predictions. Moreover, the improvement in cracking behaviour due to the addition of RSFs is pointed out for elements with low reinforcement ratios in pure bending, elements with low squashing load ratios, and crack-width control in elements with large concrete covers, results with an immediate impact on engineering practice (design of slabs, tanks, integral structures, etc.). Applications of RSFs to actual structures are finally presented. Two cases of elements in pure bending are discussed, namely a simply supported beam and a water-treatment tank; in both cases the addition of RSFs to the concrete improves cracking behaviour. Then, using the simplified model for the serviceability analysis of columns of jointless structures, the maximum achievable jointless length is obtained for typical bridge and building cases. In particular, it is shown how the limits of current engineering practice (especially in buildings) can be extended by considering the actual behaviour of RC supports. Finally, the same cases are modified to consider the use of RSFs, and the improvements both in maximum achievable length and in crack width for a given length and imposed strain at the deck or first floor are shown.
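For context, the three quantities used above to describe the beam tests (crack spacing, mean steel strain and crack width) are linked by the standard serviceability relation used in codes such as Eurocode 2; this is general background, not the specific model proposed in the thesis:

$$w_k = s_{r,\max}\,\left(\varepsilon_{sm} - \varepsilon_{cm}\right)$$

where $w_k$ is the characteristic crack width, $s_{r,\max}$ the maximum crack spacing and $\varepsilon_{sm} - \varepsilon_{cm}$ the difference between the mean strains of steel and concrete. Broadly, fibre reinforcement improves crack control by reducing these factors.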
Abstract:
The Atomic Physics Group at the Institute of Nuclear Fusion (DENIM) in Spain has accumulated experience over the years in developing a collection of computational models and tools for determining relevant microscopic properties of, mainly, ICF and laser-produced plasmas under a variety of conditions. In this work, several applications of those models are presented.
Abstract:
Innovation studies have attracted interest not only from scholars in various fields such as economics, management and sociology, but also from industrial practitioners and policy makers. In this vast and fruitful field, the theory of diffusion of innovations, driven by a sociological approach, has played a vital role in our understanding of the mechanisms behind industrial change. In this paper, our aim is to give a state-of-the-art review of diffusion-of-innovation models in a structural and conceptual way, with special reference to photovoltaics. We first argue, as underlying background, how diffusion-of-innovations theory differs from other innovation studies. Secondly, we give a brief taxonomical review of modelling methodologies together with comparative discussions. Finally, we put this wealth of modelling in the context of photovoltaic diffusion and suggest some future directions.
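For readers unfamiliar with this model family, the best-known diffusion-of-innovations model is the Bass model, in which new adoption is driven by an innovation coefficient p and an imitation coefficient q. The sketch below is general background with toy parameter values, not a model fitted in the paper.

```python
# Minimal discrete-time Bass diffusion model: new adopters per period are
# driven by innovation (p) and imitation (q). Background sketch with toy
# values, not a result from the paper.
def bass_adopters(p, q, market_size, periods):
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / market_size) * (market_size - cumulative)
        cumulative += new
        path.append(new)
    return path

# e.g., a photovoltaic-style slow takeoff: small p, larger q (toy values)
print(bass_adopters(p=0.003, q=0.38, market_size=1_000_000, periods=10))
```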