26 results for software implementation
at Universidad Politécnica de Madrid
Abstract:
Usability is the capability of the software product to be understood, learned, used and attractive to the user when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute, and poor usability contributes significantly to software failing in actual use. One of the main disciplines involved in usability is Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that applications should include to improve their usability, yet incorporating them into software remains far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. To bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; the problem, however, remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate eleven usability features with high impact on software design. After their development, the Usability-Oriented Software Development Process and the Software Usability Guidelines were validated across multiple academic projects and shown to help software developers include these usability features in their applications; their use significantly reduced development time and improved the quality of the resulting designs. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications efficiently.
Abstract:
This final project consists of the design, development and implementation of a computer application whose function is the identification of image, audio and video files and the interpretation and presentation of their associated metadata. The software developed, EXTRACTORDATOS_LBS, recognizes the format of the file under study by analyzing the identification bytes contained in the file's header. Based on the information registered in that header, the application interprets the metadata associated with the file, displaying on screen those fields of interest for analysis. Prior to the software implementation, a theoretical analysis of the formats of various multimedia files, as collected in multiple standards and recommendations, is undertaken. After this identification, the EXTRACTORDATOS_LBS application is developed; it reports the parameters of interest contained in the file headers. The development is illustrated with the conceptual diagrams associated with the architecture of the implemented software, together with the on-screen output for a series of sample files and the application's user manual. The electronic version of this document is accompanied by the executable that performs the file analysis.
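As an illustration of the header-based identification the abstract describes, the following minimal Python sketch checks a file's leading bytes against a few well-known format signatures. It is illustrative only, not the actual EXTRACTORDATOS_LBS code; the signatures themselves are standard, publicly documented constants.

# Minimal sketch of magic-byte file type identification (illustrative only;
# not the actual EXTRACTORDATOS_LBS implementation).
MAGIC_SIGNATURES = {
    b"\xFF\xD8\xFF": "JPEG image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"ID3": "MP3 audio (with ID3 tag)",
    b"RIFF": "RIFF container (WAV/AVI)",
}

def identify(path: str) -> str:
    """Return a best-guess format name from the file header bytes."""
    with open(path, "rb") as f:
        header = f.read(16)  # long enough for the signatures above
    for magic, name in MAGIC_SIGNATURES.items():
        if header.startswith(magic):
            return name
    return "unknown format"

if __name__ == "__main__":
    import sys
    print(identify(sys.argv[1]))

A real tool would then branch on the detected format to parse the container-specific metadata fields, as the application described above does.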
Abstract:
The creation of atlases, or digital models where information from different subjects can be combined, is a field of increasing interest in biomedical imaging. When a single image does not contain enough information to appropriately describe the organism under study, it is necessary to acquire images of several individuals, each of them containing complementary data with respect to the rest of the cohort. This approach allows creating digital prototypes, ranging from anatomical atlases of human patients and organs, obtained for instance from Magnetic Resonance Imaging, to gene expression cartographies of embryo development, typically achieved from Light Microscopy. Within this context, in this PhD Thesis we propose, develop and validate new dedicated image processing methodologies that, based on image registration techniques, bring information from multiple individuals into alignment within a single digital atlas model. We also elaborate a dedicated software visualization platform to explore the resulting wealth of multi-dimensional data, and novel analysis algorithms to automatically mine the generated resource in search of biological insights. In particular, this work focuses on gene expression data from developing zebrafish embryos imaged at cellular resolution with Two-Photon Laser Scanning Microscopy. Having quantitative measurements relating multiple gene expressions to cell position and their evolution in time is a fundamental prerequisite to understanding the multi-scale processes of embryogenesis. However, the number of gene expressions that can be simultaneously stained in one acquisition is limited by optical and labeling constraints. These limitations motivate atlasing strategies that can recreate a virtual gene expression multiplex. The developed computational tools have been tested in two different scenarios. The first is early zebrafish embryogenesis, where the resulting atlas constitutes a link between phenotype and genotype at the cellular level. The second is the late zebrafish brain, where the resulting atlas allows studies relating gene expression to brain regionalization and neurogenesis. The proposed computational frameworks have been adapted to the requirements of both scenarios, such as the integration of partial views of the embryo into a whole-embryo model with cellular resolution, or the registration of anatomical traits with deformable transformation models that do not depend on any specific labeling. The software implementation of the atlas generation tool (Match-IT) and the visualization platform (Atlas-IT), together with the gene expression atlas resources developed in this Thesis, are to be made freely available to the scientific community. Lastly, a novel proof-of-concept experiment integrates for the first time 3D gene expression atlas resources with cell lineages extracted from live embryos, opening the door to correlating genetic and cellular spatio-temporal dynamics.
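The registration pipelines in the thesis are far richer (deformable, label-independent), but as a minimal illustration of the core idea of bringing point sets from two individuals into a common frame, here is a rigid Kabsch alignment in NumPy. It assumes, purely for illustration, that point correspondences (e.g., matched cell centers) are already known.

import numpy as np

def kabsch_align(P, Q):
    """Rigidly align point set P onto Q (both N x 3, corresponding rows).

    Returns rotation R and translation t such that P @ R.T + t ~ Q.
    Illustrative of the alignment step in atlas building; the thesis uses
    far richer (deformable, intensity-based) transformation models.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)           # centroids
    H = (P - cP).T @ (Q - cQ)                         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - cP @ R.T
    return R, t

# Usage: recover a known rotation + translation between two "embryos"
rng = np.random.default_rng(0)
P = rng.random((100, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 0.5])
R, t = kabsch_align(P, Q)
assert np.allclose(P @ R.T + t, Q, atol=1e-8)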
Abstract:
In the development of any electronic device or piece of equipment, the reliability of its components must be evaluated, that is, the percentage of units that keep all of their functionality within specifications after a given period of life. Reliability evaluation by means of accelerated tests is the tool that allows the lifetime of a device or piece of equipment to be estimated prior to its commercialization. Quantifying reliability is critical to identify the costs of a given warranty period and to offer customers the desired quality level. The objective of this final project is the design of a versatile automatic instrumentation system for performing and characterizing accelerated tests, capable of supporting a wide range of tests with which to evaluate the reliability of electronic devices and equipment. In addition to industrial use, where reliability is evaluated prior to commercialization, the system can be used for teaching in this area and, fundamentally, for accelerated testing in electronic device research. The versatility of our hardware and software implementation is a strength: the instrumentation system can perform numerous types of accelerated tests without the instrumentation having to be replaced every time a different test is required. The components chosen for an accelerated test are subjected to stress (voltage, current, humidity, temperature, etc.) and their aging can be observed, which allows the lifetime of the device to be evaluated over a short period while emulating its working conditions; besides studying reliability, one can also identify how its main characteristics degrade before failure. The software used in this project has been implemented in LabVIEW, a graphical programming language for instrumentation. The software application is explained in detail throughout this report, so that its use and, if necessary, its adaptation pose no problem for the user. The last part of this report contains the user guide and an accelerated test presented as an example, explaining how the instruments were interconnected with the components under test and verifying the correct operation of the software by taking the necessary measurements.
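The abstract does not commit to a specific lifetime model, but a standard choice for temperature-accelerated tests of electronic devices is the Arrhenius model; as an illustrative sketch (an assumption for exposition, not taken from the project itself), the acceleration factor between stress and use temperatures is:

    AF = \exp\!\left[ \frac{E_a}{k} \left( \frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}} \right) \right], \qquad t_{\mathrm{use}} \approx AF \cdot t_{\mathrm{stress}}

where E_a is the activation energy of the dominant failure mechanism, k is Boltzmann's constant, and both temperatures are in kelvin. A device aged for t_stress hours under elevated temperature then emulates roughly AF times as many hours under normal operating conditions, which is what lets an accelerated test estimate lifetime in a short period.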
Abstract:
Feedback Shift Registers (FSR) have traditionally been used to implement pseudorandom sequence generators. These generators are used in stream ciphers in systems with tight resource constraints, such as Remote Keyless Entry. In communicating electronic devices, the primary channel is the one used to transmit the information. Side-Channel Attacks (SCA) use additional information leaking from the actual implementation, including power consumption, electromagnetic emissions or timing information. SCA are a serious threat to FSR-based applications, as an attacker usually has physical access to the devices. The main objective of this PhD thesis is to provide a set of countermeasures that can be applied automatically using already available resources, avoiding a significant cost overhead and extending the useful life of deployed systems. Where possible, we take advantage of the inherent parallelism of FSR-based algorithms, as the state of an FSR differs from the previous state in only one bit. We have contributed at three different levels: architecture (using a reconfigurable co-processor), compiler optimizations, and the bit level, making the most of the resources available in the processor. We have developed a framework to evaluate implementations of an algorithm including the effects introduced by the compiler, assuming an expert attacker with deep knowledge of the application and the device. Regarding SCA, we have presented a new differential SCA that performs better than traditional SCA on software FSR-based algorithms, where the leaked values are similar between rounds. SORU2 is a reconfigurable vector co-processor developed to reduce energy consumption in loop-based applications with parallelism. In addition, we propose its use for secure implementations of FSR-based algorithms. The cost overhead is negligible, as the co-processor is not exclusively dedicated to the encryption algorithm. We present a co-processor configuration that executes multiple simultaneous encryptions, using different implementations and keys. From a basic implementation, which is shown to be completely vulnerable to SCA, we obtain an implementation on which the SCA we applied were unsuccessful. At the compiler level, we use the framework to evaluate the effect of sequences of compiler optimization passes on a software implementation. Many optimization passes are available, and the number of possible sequences combining them is extremely high, so the framework includes an algorithm for selecting the sequences that merit detailed evaluation. Because compiler optimizations transform the software implementation, different optimization sequences automatically generate different implementations; we propose randomly switching between the generated implementations to increase resistance against SCA, and we propose two countermeasures based on this mechanism. The results show that, although they increase resistance against SCA, the resulting implementations cannot be considered secure. At the bit level, we propose to exploit the bit-level parallelism of FSR-based implementations using a pseudo-bitslice implementation on a wireless node processor. The bitslice implementation is obtained automatically from the Algebraic Normal Form of the algorithm. The results show a performance improvement and the avoidance of timing leakage from data-dependent execution, but increased vulnerability against differential SCA. We provide a secure version of the algorithm by randomly discarding part of the algorithm's executions; the performance overhead is negligible compared with the original implementations. In summary, we have proposed a set of original countermeasures at different levels that introduce randomness into FSR-based algorithms without substantially increasing the required resources.
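To make the "one-bit difference between consecutive rounds" concrete, here is a minimal Fibonacci LFSR keystream generator in Python. It is a generic illustrative FSR, not the specific cipher analyzed in the thesis; the tap positions shown are the classic maximal-length example for a 16-bit register.

# Minimal Fibonacci LFSR keystream generator (illustrative generic FSR;
# not the specific algorithm attacked in the thesis).
def lfsr_stream(state, taps, nbits):
    """Yield pseudorandom bits from an nbits-wide LFSR.

    state: initial register contents (nonzero integer < 2**nbits)
    taps:  bit positions XORed together to form the feedback bit
    Note how each round shifts the register by one position, so consecutive
    states differ only in the single freshly inserted feedback bit; this is
    the parallelism (and the SCA leakage pattern) the thesis exploits.
    """
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1                    # feedback = XOR of taps
        out = state & 1                               # output the low bit
        state = (state >> 1) | (fb << (nbits - 1))    # shift in the feedback
        yield out

# Usage: 16-bit maximal-length LFSR (taps for x^16 + x^14 + x^13 + x^11 + 1)
gen = lfsr_stream(0xACE1, taps=(0, 2, 3, 5), nbits=16)
keystream = [next(gen) for _ in range(16)]
print(keystream)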
Abstract:
Current communication systems face many new challenges, such as competing standards and the scarcity of spectrum; in particular, the rapid evolution of personal wireless communication systems forces systems to be updated faster than ever before, and conventional hardware-based wireless communication systems find it difficult to adapt. The emergence of Software Defined Radio (SDR) enabled a third revolution in wireless communication, moving from hardware to software and building flexible, reliable, upgradable, reusable, reconfigurable and low-cost platforms. The Universal Software Radio Peripheral (USRP) products are commonly used with the GNU Radio software suite to create complex SDR systems. GNU Radio is a toolkit in which digital signal processing blocks are written in C++ and connected to each other with Python. This makes it easy to develop sophisticated signal processing systems, because many blocks have already been written by others and can quickly be put together to create a complete system. Although GNU Radio's main function is not to be a simulator, in the absence of RF hardware components it supports research on signal processing algorithms using pre-stored data and data produced by a signal generator. This thesis introduces the SDR platform from the hardware (USRP) and software (GNU Radio) sides, as well as some basic modulation techniques used in wireless communication systems. Based on the examples provided by GNU Radio, several related experiments were carried out, for example GSM scanning and FM radio reception on the USRP, together with a certain degree of improvement building on the experience of other investigators in order to observe an OFDM spectrum and simulate real-time video transmission. GNU Radio combined with USRP hardware proved to be a valuable lab platform for implementing complex radio system prototypes in a short time.
Software Defined Radio (SDR) is an emerging technology that is having a revolutionary impact on conventional radio technology. A good example of software radio is the open-source GNU Radio system, which employs a free software development toolkit. In this work a commercial development kit (Ettus Research) was used, consisting of a signal processing module and simple hardware. The module uses a Linux-based development environment on which a wide variety of software radio applications can be implemented. The development hardware comprises a general-purpose microprocessor, a programmable device (FPGA) and a radio-frequency front end covering 50 to 2200 MHz, and connects to the PC through an 8 Mb/s USB interface. On the Ettus platform one can run GNU Radio applications, which are implemented mainly in the Python programming language, while the signal processing module is built in C++ and employs a microprocessor with floating-point arithmetic; developers can therefore quickly and easily build real-time, high-capacity wireless communication applications. In this Master's thesis the SDR hardware platform (USRP) and the software (GNU Radio) were evaluated, using some basic modulation techniques for wireless communication systems. Starting from the examples provided by GNU Radio, several related experiments were performed, for example spectrum scanning and FM signal demodulation, always using the USRP hardware. Once simple applications had been evaluated, more complex applications described in the literature were improved and optimized to some degree, including the generation of an OFDM spectrum and the simulation and transmission of real-time video signals. With these results we are now in a position to tackle the development of complex applications.
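As a flavor of how GNU Radio flowgraphs connect C++ blocks from Python, here is a minimal sketch assuming the GNU Radio Python bindings are installed; the blocks used (signal source, throttle, null sink) are standard GNU Radio blocks, and with real hardware the simulated source would be replaced by a UHD/USRP source block.

# Minimal GNU Radio flowgraph sketch (assumes the GNU Radio Python bindings;
# with real hardware the signal source would be a UHD/USRP source block).
from gnuradio import gr, analog, blocks

class ToneFlowgraph(gr.top_block):
    def __init__(self, samp_rate=32000):
        gr.top_block.__init__(self, "Tone demo")
        # Simulated RF input: a 1 kHz complex sinusoid
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
        # Throttle keeps a hardware-less flowgraph from consuming 100% CPU
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, throttle, sink)

if __name__ == "__main__":
    tb = ToneFlowgraph()
    tb.start()
    input("Press Enter to stop...")
    tb.stop()
    tb.wait()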
Abstract:
This project presents a technical report on the Leap Motion camera and its corresponding Software Development Kit; the Leap Motion is a device with a depth camera oriented toward human-machine interfaces. The purpose is to develop a human-machine interface based on a hand gesture recognition system. After an exhaustive study of the Leap Motion camera, several example programs were written with the intention of verifying the capabilities described in the technical report, exercising the Application Programming Interface and evaluating the precision of the different measurements obtained from the camera data. Finally, a prototype of a gesture recognition system is developed. The fingertip position and orientation data obtained from the Leap Motion are used to describe a gesture by means of a descriptor vector, which is fed to a Support Vector Machine used as a multi-class classifier.
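A sketch of the classification stage described above follows, using scikit-learn. The descriptor layout and the training data here are hypothetical examples; the abstract only states that fingertip positions and orientations are packed into a vector and fed to a multi-class SVM.

# Sketch of the gesture classification stage (descriptor layout and data
# are hypothetical; only the multi-class SVM usage follows the abstract).
import numpy as np
from sklearn.svm import SVC

def descriptor(fingertips):
    """Flatten 5 fingertip (x, y, z) positions into one 15-D vector,
    centered on their mean for some translation invariance (an
    illustrative design choice, not the project's actual descriptor)."""
    pts = np.asarray(fingertips, dtype=float)      # shape (5, 3)
    return (pts - pts.mean(axis=0)).ravel()

# Hypothetical training data: 200 gestures over 4 gesture classes
rng = np.random.default_rng(0)
X = np.stack([descriptor(rng.normal(size=(5, 3))) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = SVC(kernel="rbf", C=10.0)   # SVC is multi-class via one-vs-one
clf.fit(X, y)
print(clf.predict(X[:3]))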
Abstract:
One of the objectives of the European Higher Education Area is the promotion of collaborative and informal learning through the implementation of educational practices, and 3D virtual environments are an ideal space for such activities. At the same time, the funding problems of Spanish universities have led to a search for new ways to optimize available resources. The Technical University of Madrid requires laboratories which, due to their dangerousness, duration or the control of the processes involved, are difficult to run in real life. For this reason, we have developed several 3D laboratories in a virtual environment, built on the open-source platform OpenSim. This paper describes the use of the OpenSim platform for these new teaching experiences and the new design of the software architecture. This architecture requires adapting the platform to the needs of the users and of the different laboratories of our University. We explain the structure of the implemented architecture and the process of creating and configuring it. The proposed architecture is decentralized: each laboratory is hosted at a different educational center. The architecture adds several services, among others automated user creation and management, and communication between external services and platforms written in different programming languages. As a result, we improve the user experience and extend the functionality of the laboratories.
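One of the services mentioned, automated user creation, could be sketched against OpenSim's RemoteAdmin XML-RPC interface. This is a hedged sketch, not the paper's implementation: it assumes RemoteAdmin is enabled in the grid's configuration, the method and parameter names follow commonly documented RemoteAdmin usage and should be treated as assumptions, and the URL and secret are placeholders.

# Sketch of automated user creation via OpenSim's RemoteAdmin XML-RPC
# interface (assumptions: RemoteAdmin enabled; method/parameter names per
# commonly documented usage; URL and secret are placeholders).
import xmlrpc.client

OPENSIM_URL = "http://grid.example.edu:9000"   # placeholder
ADMIN_SECRET = "change-me"                     # placeholder shared secret

def create_student(first, last, password, email):
    proxy = xmlrpc.client.ServerProxy(OPENSIM_URL)
    return proxy.admin_create_user({
        "password": ADMIN_SECRET,      # RemoteAdmin shared secret
        "user_firstname": first,
        "user_lastname": last,
        "user_password": password,
        "user_email": email,
        "start_region_x": 1000,        # home region coordinates
        "start_region_y": 1000,
    })

# create_student("Ada", "Lovelace", "s3cret", "ada@example.edu")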
Abstract:
For years, the Human-Computer Interaction (HCI) community has crafted usability guidelines that clearly define what characteristics a software system should have in order to be easy to use. However, the Software Engineering (SE) community keeps falling short of successfully incorporating these recommendations into software projects. From an SE perspective, the process of incorporating usability features into software is not always straightforward, as a large number of these features have heavy implications for the underlying software architecture. For example, successfully including an "undo" feature in an application requires the design and implementation of many complex, interrelated data structures and functionalities. Our work focuses on providing developers with a set of software design patterns to assist them in designing more usable software, contributing to the proper inclusion of specific usability features with high impact on the software design. Preliminary validation data show that usage of the guidelines also has positive effects on development time and overall software design quality.
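To give a taste of the architectural weight the "undo" example carries, here is a minimal command-pattern sketch in Python. It is one common design route for undo, shown for illustration only; it is not the paper's actual guidelines or patterns.

# Minimal command-pattern sketch for an "undo" feature (one common design
# route; illustrative only, not the paper's guidelines).
class InsertTextCommand:
    def __init__(self, doc, pos, text):
        self.doc, self.pos, self.text = doc, pos, text

    def execute(self):
        self.doc.buffer = (self.doc.buffer[:self.pos] + self.text
                           + self.doc.buffer[self.pos:])

    def undo(self):
        self.doc.buffer = (self.doc.buffer[:self.pos]
                           + self.doc.buffer[self.pos + len(self.text):])

class Document:
    def __init__(self):
        self.buffer = ""
        self._history = []      # every state-changing action lands here

    def run(self, command):
        command.execute()
        self._history.append(command)

    def undo_last(self):
        if self._history:
            self._history.pop().undo()

doc = Document()
doc.run(InsertTextCommand(doc, 0, "Hello"))
doc.run(InsertTextCommand(doc, 5, ", world"))
doc.undo_last()
print(doc.buffer)   # -> "Hello"

Even this toy version shows why undo cuts across the architecture: every state-changing operation must be reified as an object with an inverse, and a history structure must thread through the whole application.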
Abstract:
The six-port network is an interesting radiofrequency architecture with multiple possibilities. Since it was first introduced in the 1970s as an alternative network analyzer, the six-port network has been used in many applications, such as homodyne receivers, radar systems, direction-of-arrival estimation, UWB (Ultra-Wide-Band) and MIMO (Multiple Input Multiple Output) systems. Currently, it is considered one of the best candidates for implementing a Software Defined Radio (SDR). This thesis comprises an exhaustive study of this promising architecture, including its fundamentals and the state of the art. In addition, the design and development of an SDR 0.3-6 GHz six-port receiver prototype, implemented in conventional technology, is presented; the system is experimentally characterized and validated for RF signal demodulation with good performance. The analysis of the six-port architecture is complemented by a theoretical and experimental comparison with other radiofrequency architectures suitable for SDR. Several novel contributions are introduced, addressing highly topical issues in the six-port technique: the development and optimization of real-time I-Q regeneration techniques for multiport networks, and the search for new techniques and technologies contributing to the miniaturization of the six-port architecture. In particular, the novel contributions of this thesis can be summarized as: - Introduction of a new real-time auto-calibration method for multiport receivers, particularly suitable for broadband designs and high data rate applications. - Introduction of a new direct baseband I-Q regeneration technique for five-port receivers. - Contribution to the miniaturization of six-port receivers through multilayer LTCC (Low Temperature Cofired Ceramic) technology, with the implementation of a compact (30x30x1.25 mm) broadband (0.3-6 GHz) six-port receiver in LTCC technology. The results and conclusions derived from this thesis have been satisfactory and quite fruitful in terms of publications: fourteen works have been published across international journals, international conferences and national conferences. Additionally, a paper has been submitted to an internationally recognized journal and is currently under review.
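For orientation, in an idealized six-port receiver the baseband I-Q components can be regenerated as linear combinations of the detected output powers; a commonly cited idealized form (a sketch under ideal phase and amplitude balance, not the auto-calibration method developed in this thesis) is:

    I \propto P_3 - P_4, \qquad Q \propto P_5 - P_6, \qquad s = I + jQ

where P_3 ... P_6 are the readings of the four power detectors. Real hardware imbalances are precisely what the real-time calibration and I-Q regeneration techniques proposed in the thesis must compensate.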
Abstract:
Laboratory practice is a very important part of training in all educational programs. Despite this importance, setting up a laboratory is not an easy task, since equipping it can require a large budget, both initially and over time. Distance (online) education, and in particular virtual laboratories, that is, simulations of a real laboratory using mathematical models, emerged as a solution. Virtual laboratories have been developed in teaching for their features and flexibility, but not all areas have as many possibilities or facilities as electronics. Most laboratories currently accessible over the Internet within distance learning or e-learning are virtual. The main advantage of the laboratory developed here is that practices are carried out by controlling real instruments and circuits remotely. The project consists of building a software system that implements a remote laboratory in the area of analog electronics, to be used as a complement to the training activities carried out in the laboratories of educational centers. The complete system also includes hardware controlled through standard communication buses, which allows different analog circuits to be implemented so that practices operate on real physical circuits. To make the laboratory as realistic as possible, the application the student uses is a 3D viewer, intended to increase the sense of reality when performing laboratory practices remotely. The developed system uses a communication system based on a client-server model: • Server: processes the actions performed by the client, and controls and monitors the instruments and devices of the hardware system. • Client: the end user who, through a 3D viewer, sends the actions to be performed to the server for processing.
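The client-server action exchange can be sketched minimally as follows; the JSON message format and port are hypothetical illustrations, since the real system drives instruments over standard buses and the client is the 3D viewer described above.

# Minimal client/server sketch of the action-message exchange (hypothetical
# JSON protocol and port; the real server drives instrumentation hardware
# and the real client is a 3D viewer).
import json, socket

def serve(host="0.0.0.0", port=5050):
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            action = json.loads(conn.recv(4096).decode())
            # Here the real server would drive the instruments and circuits.
            reply = {"status": "ok", "echo": action["name"]}
            conn.sendall(json.dumps(reply).encode())

def send_action(name, host="localhost", port=5050):
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"name": name}).encode())
        return json.loads(s.recv(4096).decode())

# e.g. send_action("set_waveform_frequency")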
Abstract:
A key factor in the success of any organizational process improvement effort is the implementation of a Process Asset Library that provides a central repository accessible by anyone in the organization. This repository includes process support materials to help process deployment: the organization's standard software process, software process related documentation, descriptions of the software life cycles, guidelines, examples, templates, and any artefacts that the organization considers useful for process improvement. This paper describes the structure and contents of a Web-based Process Asset Library for small businesses and small groups within large organizations. The library is structured using CMMI as a reference model in order to implement the Process Areas described by that model.
Abstract:
From the educational point of view, the most widespread method in developing countries is on-site education. Technical and economic resources cannot support conventional distance learning infrastructures, and the situation is even worse for university courses, which usually suffer from a lack of qualified faculty, especially in technical degrees. The literature suggests that e-learning is a suitable solution for this problem, but its methods were developed to meet the educational needs of the First World and cannot be applied directly to other contexts. The proposed methodology is a variant of traditional e-learning adapted to the needs of developing countries: E-learning for Cooperation and Development (c&d-learning), oriented toward educational institutions without adequate technical or human resources. In this paper we describe the c&d-learning implementation architecture, based on three main phases: hardware, communication and software, i.e., computing and technical equipment, Internet access, and e-learning platform adaptation. The proper adaptation of educational content to c&d-learning is discussed, and a real case of application in which the authors are involved is described: Ngozi University in Burundi.
Abstract:
The focus of this paper is to outline the main structure of an alternative software process improvement method for small- and medium-size enterprises. This method is based on the action package concept, which helps institutionalize effective practices at an affordable implementation cost. The paper also presents the results and lessons learned when the method was applied to three enterprises in the requirements engineering domain.
Abstract:
Acquiring information system technologies through the services of an external supplier can be one of the best options to reduce the implementation and maintenance costs of software solutions, and it allows a company to make more efficient use of its resources. The focus of this paper is to outline a methodology structure for software acquisition management. The methodology proposed here is the result of studying, and converging the weaknesses and strengths of, several models that include the software acquisition process (CMMI, SA-CMM, ISO/IEC TR 15504, COBIT, and ITIL).