59 results for SOFTWARE QUALITY CLASSIFICATION


Relevance:

30.00%

Publisher:

Abstract:

This paper presents an environmental contingency forecasting tool based on Neural Networks (NN). The forecasting tool analyzes hourly and daily Sulphur Dioxide (SO2) concentration and meteorological data time series. Pollutant concentrations and meteorological variables are organized into classes by a Self-Organizing Map (SOM) NN. These classes are then used in the training phase of a General Regression Neural Network (GRNN) classifier to provide an air quality forecast. Here, a time series set obtained from the Environmental Monitoring Network (EMN) of the city of Salamanca, Guanajuato, México, is used. The results verify the potential of this method against other statistical classification methods and also resolve the correlation among variables.
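As a rough illustration of the GRNN step described above (not the authors' implementation), a GRNN prediction reduces to a Gaussian-kernel-weighted average of the training targets. The data values and the bandwidth `sigma` below are hypothetical.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """General Regression Neural Network estimate: a kernel-weighted
    average of the training targets (Nadaraya-Watson form)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return float(np.dot(w, y_train) / np.sum(w))  # weighted average

# Toy example: SO2 "classes" encoded as numbers (hypothetical data).
X = np.array([[0.0], [1.0], [10.0], [11.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(grnn_predict(X, y, np.array([0.5])))   # close to class 0
print(grnn_predict(X, y, np.array([10.5])))  # close to class 1
```

Points far from the query receive negligible weight, so the prediction is dominated by nearby training samples; the single `sigma` parameter is what makes GRNN training fast compared with iterative NN training.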

Relevance:

30.00%

Publisher:

Abstract:

Over the last ten years, Salamanca has been considered among the most polluted cities in México. This paper presents a Self-Organizing Map (SOM) Neural Network application to classify pollution data and automate the determination of air pollution levels for Sulphur Dioxide (SO2) in Salamanca. Meteorological parameters are well known to be important factors contributing to air quality estimation and prediction. In order to observe the behavior and clarify the influence of wind parameters on SO2 concentrations, a SOM Neural Network has been applied to a full year of data. The main advantage of the SOM is that it integrates data from different sensors and provides readily interpretable results. In particular, it is a powerful mapping and classification tool that presents information in an accessible way and facilitates the task of establishing an order of priority between the distinguished groups of concentrations, depending on their need for further research or remediation actions in subsequent management steps. The results show a significant correlation between pollutant concentrations and some environmental variables.
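A minimal sketch of the SOM idea used above, on assumed toy data (this is not the paper's code): each node of a small grid holds a weight vector, and the best-matching unit (BMU) and its grid neighbours are pulled toward each sample, so similar sensor readings end up mapped to nearby nodes.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map with decaying learning rate and
    shrinking neighbourhood."""
    rng = np.random.default_rng(seed)
    h, g = grid
    weights = rng.random((h * g, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(g)], dtype=float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best-matching unit
        lr = lr0 * (1.0 - t / iters)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / iters) + 1e-3    # shrinking neighbourhood
        dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        nb = np.exp(-dist2 / (2.0 * sigma ** 2))     # neighbourhood function
        weights += lr * nb[:, None] * (x - weights)  # pull nodes toward sample
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its nearest map node."""
    d = np.sqrt(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
# Hypothetical standardized pollutant/meteorology samples forming two regimes.
data = np.vstack([rng.normal(0.2, 0.05, (50, 3)), rng.normal(0.8, 0.05, (50, 3))])
w = train_som(data)
print(quantization_error(data, w))  # lower than for an untrained map
```

After training, the map nodes concentrate around the data regimes, which is what allows grouping SO2/wind conditions into interpretable classes.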

Relevance:

30.00%

Publisher:

Abstract:

Autonomous mobile robots and remotely operated robots are successfully used in very diverse scenarios, such as home cleaning, movement of goods in warehouses or space exploration. However, it is difficult to ensure the absence of defects in the programs controlling these devices, as it happens in most computing sectors. There exist different quality measures of a system performing the functions for which it was designed; among them, reliability. For most physical systems, reliability degrades as the system ages, generally due to wear effects. In software systems this does not usually happen: defects are typically not acquired through use but inserted during development. Let us assume that we focus on the coding stage of the software development process. We could then consider a study to determine the reliability of different, equally valid algorithms, taking into account the flaws that programmers may introduce. This basic study may have several applications, such as choosing the algorithm least sensitive to programming defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to programming defects must be used. In this thesis, we studied the influence of certain types of software defects on the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) designed to work in a specific mobile robot. 
The hypothesis is that similar defect patterns affect the reliability of the controllers differently, and this has been confirmed by the results. From the viewpoint of experimental planning, we followed these steps. First, we conducted the tests needed to determine whether controllers of the same family (PID, Fuzzy or LQR) offered similar reliability under the same experimental conditions. Then, a class representative was chosen at random within each controller family to perform a more comprehensive test set, with the purpose of getting data to compare more extensively the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot, and the need to prevent damage to a device with a significant cost, led us to build a multicomputer robot simulator. This simulator has been used both to obtain well-tuned controllers and to carry out the reliability experiments.
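The fault-injection idea behind such an experiment can be sketched as follows. This is a hypothetical illustration, not the thesis's simulator: a PID speed controller runs on a toy first-order plant, and a typical coding defect (`=` typed instead of `+=` in the integral accumulation) is injected to compare tracking reliability. All gains and plant parameters are invented.

```python
def simulate(kp, ki, kd, defect=False, steps=200, dt=0.05, setpoint=1.0):
    """Simulate a PID speed controller on a first-order plant
    v' = (-v + u) / tau, and return the final tracking error."""
    tau = 0.5
    v, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - v
        if defect:
            integ = err * dt    # injected defect: '=' typed instead of '+='
        else:
            integ += err * dt   # correct integral accumulation
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        v += dt * (-v + u) / tau
        prev_err = err
    return abs(setpoint - v)

ok = simulate(2.0, 1.0, 0.1)
bad = simulate(2.0, 1.0, 0.1, defect=True)
print(ok, bad)  # the defective controller leaves a much larger steady-state error
```

Repeating such runs over a catalogue of defect patterns and counting the runs whose error exceeds a tolerance gives a simple reliability estimate per controller type, which is the spirit of the comparison described above.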

Relevance:

30.00%

Publisher:

Abstract:

This article presents software that determines the statistical behavior of qualitative survey data, previously transformed into quantitative data using a Likert scale. The main intention is to offer users a useful tool to obtain statistical characteristics and forecasts of financial risks in a fast and simple way. Additionally, this paper presents a definition of operational risk. The article also explains different techniques for conducting surveys with a Likert scale (Avila, 2008) to capture expert opinion through the transformation of qualitative data into quantitative data. It is easy to assess a single expert's opinion about a risk, but when users have many surveys and matrices it becomes very difficult to obtain results, because common data must be compared. A representative statistical value must then be extracted from the common data to obtain the weight of each risk. Finally, this article describes the development of the Qualitative Operational Risk Software (QORS), which has been designed to determine the root of risks in organizations and their value at operational risk, OpVaR (Jorion, 2008; Chernobai et al., 2008), when the input data comes from expert opinion and its associated matrices.
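The transformation pipeline described above can be sketched roughly as follows; the Likert mapping, the survey data and the quantile-based OpVaR estimate are all hypothetical simplifications, not QORS internals.

```python
import statistics

# Hypothetical 5-point Likert mapping from qualitative labels to numbers.
LIKERT = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_weights(surveys):
    """Average each risk's Likert scores across experts and normalize
    so the weights sum to 1."""
    means = {risk: statistics.mean(LIKERT[a] for a in answers)
             for risk, answers in surveys.items()}
    total = sum(means.values())
    return {risk: m / total for risk, m in means.items()}

def op_var(losses, level=0.95):
    """Empirical operational VaR: the `level` quantile of a loss sample."""
    s = sorted(losses)
    idx = min(int(level * len(s)), len(s) - 1)
    return s[idx]

surveys = {  # hypothetical expert opinions per risk factor
    "fraud":       ["high", "very high", "high"],
    "system fail": ["medium", "medium", "low"],
}
weights = risk_weights(surveys)
print(weights)
print(op_var([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], 0.9))  # 90th percentile
```

This shows the core difficulty the article mentions: once many expert matrices must be combined, a representative statistic (here a normalized mean) is needed before any risk weight or VaR figure can be computed.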

Relevance:

30.00%

Publisher:

Abstract:

During the last century, much research in the business, marketing and technology fields has developed the innovation research line, and a large amount of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields. This also holds in the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master's Thesis presents a study of the innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and the understanding of trends in their usage. Firstly, a general review of the literature has been conducted in order to define the research area and identify research gaps. Secondly, a Systematic Literature Review (SLR) has been chosen as the research method, reporting reliable conclusions by systematically collecting quality evidence about the innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support a systematic identification and an adequate exploitation of the innovation sources most suitable for software engineering. Several artefacts, such as lists, taxonomies and relevance assessments of the innovation sources most suitable for software engineering, have been built, and their usage trends in the last decades and their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations to systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities. 
Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, while organization managers and software practitioners can use the provided outcomes in a systematic way to improve their innovation capability, consequently increasing value creation in the processes they run to provide products and services useful to their environment. In summary, this Master's Thesis researches the innovation sources in the software engineering field, providing useful resources to support effective innovation sources management. Moreover, several aspects should be studied in depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge. This can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work in order to encourage open and systematic approaches to identify and exploit the innovation sources in the software engineering field.

Relevance:

30.00%

Publisher:

Abstract:

On 12 January 2010, an earthquake hit the city of Port-au-Prince, capital of Haiti. The earthquake reached a magnitude Mw 7.0 and the epicenter was located near the town of Léogâne, approximately 25 km west of the capital. The earthquake occurred in the boundary region separating the Caribbean plate and the North American plate. This plate boundary is dominated by left-lateral strike-slip motion and compression, and accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate (DeMets et al., 2000). Initially, the location and focal mechanism of the earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone (EPGFZ). However, Hayes et al. (2010) combined seismological observations, geologic field data and space geodetic measurements to show that the rupture process instead involved slip on multiple faults. The authors also showed that the remaining shallow shear strain will be released in future surface-rupturing earthquakes on the EPGFZ. In December 2010, a Spanish cooperation project financed by the Universidad Politécnica de Madrid started with a clear objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency management and resource management. One of the tasks of the project was devoted to the vulnerability assessment of the current building stock and the estimation of seismic risk scenarios. The study was carried out by following the capacity spectrum method as implemented in the software SELENA (Molina et al., 2010). The method requires a detailed classification of the building stock into predominant building typologies (according to the materials of the structure and walls, number of stories and age of construction) and the use of the building (residential, commercial, etc.). 
Combined with knowledge of the soil characteristics of the city, the simulation of a scenario earthquake then provides the seismic risk scenarios (damaged buildings). The initial results of the study show that one of the largest sources of uncertainty is the difficulty of achieving a precise classification of building typologies, due to unregulated craft construction. It is also observed that, although the occurrence of large earthquakes usually helps to decrease the vulnerability of cities through the collapse of low-quality buildings and the reconstruction of seismically designed ones, in the case of Port-au-Prince the seismic risk in most districts remains high, revealing very vulnerable areas. Therefore, the local authorities must direct their efforts towards quality control of new buildings, reinforcement of the existing building stock, establishment of seismic codes and development of emergency planning, also through the education of the population.
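The damage-estimation step of the capacity spectrum method can be illustrated with a generic fragility-curve computation (a simplification, not SELENA's actual API): the probability of reaching each damage state is derived from lognormal exceedance curves evaluated at the scenario spectral displacement. The medians and dispersion below are hypothetical.

```python
import math

def lognormal_fragility(sd, median, beta):
    """P(damage state reached or exceeded | spectral displacement sd),
    modelled as a lognormal CDF, as in HAZUS-style fragility curves."""
    return 0.5 * (1.0 + math.erf(math.log(sd / median) / (beta * math.sqrt(2.0))))

def damage_state_probs(sd, medians, beta=0.7):
    """Turn exceedance probabilities for ordered damage states
    (slight < moderate < extensive < complete) into discrete probabilities."""
    exceed = [lognormal_fragility(sd, m, beta) for m in medians]  # decreasing
    bounds = [1.0] + exceed + [0.0]
    states = ["none", "slight", "moderate", "extensive", "complete"]
    return {s: bounds[i] - bounds[i + 1] for i, s in enumerate(states)}

# Hypothetical medians (cm), ascending, for one vulnerable building typology.
probs = damage_state_probs(sd=2.0, medians=[0.5, 1.0, 2.0, 4.0])
print(probs)
```

Multiplying such probabilities by the number of buildings of each typology in each district is what yields the "damaged buildings" scenarios mentioned above; the uncertainty in the typology classification propagates directly into these counts.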

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data; secondly, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized into uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or into stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we mainly used Bayesian network classifiers as models. 
The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second one, named MB-MBC, is a filter constraint-based approach based on Markov blankets. Both methods are applied to two important real-world problems, namely the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used to solve the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. 
The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets by their highly rapid generation process and their concept-drifting aspect; that is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional or dual. Then, if any drift occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files should be continuously classified into malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, as well as having good classification performance. 
Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
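The drift monitoring used by LA-MB-MBC and GA-MB-MBC relies on the Page-Hinkley test over a score stream such as the average log-likelihood. A minimal, generic sketch of such a detector (hypothetical parameters; not the thesis code):

```python
class PageHinkley:
    """Page-Hinkley test for detecting a sustained decrease in a
    monitored score (e.g., a classifier's average log-likelihood)."""
    def __init__(self, delta=0.005, threshold=1.0):
        self.delta = delta          # tolerated deviation magnitude
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0              # cumulative deviation m_T
        self.max_cum = 0.0          # running maximum M_T

    def update(self, x):
        """Feed one score; return True if a downward drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n       # incremental mean
        self.cum += x - self.mean + self.delta
        self.max_cum = max(self.max_cum, self.cum)
        return (self.max_cum - self.cum) > self.threshold

ph = PageHinkley(delta=0.005, threshold=1.0)
stream = [1.0] * 30 + [0.0] * 30    # score drops at t = 30 (simulated drift)
alarms = [ph.update(x) for x in stream]
print(alarms.index(True))           # first alarm shortly after the drift
```

Once the alarm fires, a locally adaptive method would revise only the affected parts of the model, while a globally adaptive one would relearn it from scratch, which is exactly the LA/GA distinction described above.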

Relevance:

30.00%

Publisher:

Abstract:

Context: There is no specialized survey of experiments conducted in the software industry. Goal: Identify the major features of software industry experiments, such as time distribution, independent and dependent variables, subject types, design types and challenges. Method: Systematic literature review, taking the form of a scoping study. Results: We identified 10 experiments and five quasi-experiments up to July 2012. Most were run from 2003 onwards. The main features of these studies are that they test technologies related to quality and management and analyse outcomes related to effectiveness and effort. Most experiments have a factorial design. The major challenges faced by experimenters are to minimize the cost of running the experiment for the company and to schedule the experiment so as not to interfere with production processes.

Relevance:

30.00%

Publisher:

Abstract:

Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, covering several models and metamodels. We consider a large spectrum of models and metamodels for describing service quality, ranging from ontological approaches defining quality measures, metrics and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities, as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information and telecommunication systems, networks and security, and service-oriented and grid computing.
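To make the notion of quality-based service specification concrete, here is a deliberately minimal, hypothetical SLA structure with a compliance check; it does not follow any particular metamodel surveyed in the article.

```python
from dataclasses import dataclass, field

@dataclass
class QoSTerm:
    """One quality term of an SLA: a named metric, its unit,
    and the agreed upper bound (e.g., a latency ceiling)."""
    metric: str
    unit: str
    max_value: float

@dataclass
class SLA:
    provider: str
    consumer: str
    terms: list = field(default_factory=list)

    def violated(self, measurements):
        """Return the metrics whose measured value exceeds the agreed bound."""
        return [t.metric for t in self.terms
                if measurements.get(t.metric, 0.0) > t.max_value]

sla = SLA("ACME-Cloud", "ShopApp",
          [QoSTerm("latency", "ms", 200.0), QoSTerm("error_rate", "%", 1.0)])
print(sla.violated({"latency": 250.0, "error_rate": 0.5}))  # ['latency']
```

Real SLA metamodels add much more (validity periods, penalties, templates with open parameters), but the core pattern of machine-checkable quality terms is the same.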

Relevance:

30.00%

Publisher:

Abstract:

The potential shown by Lean in different domains has aroused interest in the software industry. However, it remains unclear how Lean can be effectively applied in a domain such as software development, which is fundamentally different from manufacturing. This study explores how Lean principles are implemented in software development companies and the challenges that arise when applying Lean Software Development. To that end, a case study was conducted at Ericsson R&D Finland, which successfully adopted Scrum in 2009 and subsequently started a comprehensive transition to Lean in 2010. Focus groups were conducted with company representatives to help devise a questionnaire supporting the creation of a Lean mindset in the company (Team Amplifier). Afterwards, the questionnaire was used in 16 teams based in Finland, Hungary and China to evaluate the status of the transformation. By using Lean thinking, Ericsson R&D Finland has made important improvements to the quality of its products, customer satisfaction and transparency within the organization. Moreover, build times have been reduced more than tenfold and the number of commits per day has increased roughly five times. The study makes two main contributions to research. First, the main factors that have enabled Ericsson R&D's achievements are analysed. Elements such as the 'network of product owners', 'continuous integration', 'work in progress limits' and 'communities of practice' have been identified as being of fundamental importance. Second, three categories of challenges in using Lean Software Development were identified: 'achieving flow', 'transparency' and 'creating a learning culture'.
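As a toy illustration of one of the enabling elements above, a work-in-progress limit can be expressed as a simple pull rule (hypothetical sketch, not Ericsson's tooling):

```python
class KanbanColumn:
    """A work stage with a WIP limit: pulling new work is refused once
    the limit is reached, which is what creates flow in a Lean process."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        """Try to start `item`; refuse if the WIP limit is already reached."""
        if len(self.items) >= self.wip_limit:
            return False  # finish work in progress before starting more
        self.items.append(item)
        return True

dev = KanbanColumn("development", wip_limit=2)
print(dev.pull("feature-A"), dev.pull("feature-B"), dev.pull("feature-C"))
# True True False
```

The refusal is the point: blocked pulls surface bottlenecks immediately instead of letting partially finished work accumulate.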

Relevance:

30.00%

Publisher:

Abstract:

Los alimentos son sistemas complejos, formados por diversas estructuras a diferentes escalas: macroscópica y microscópica. Muchas propiedades de los alimentos, que son importantes para su procesamiento, calidad y tratamiento postcosecha, están relacionados con su microestructura. La presente tesis doctoral propone una metodología completa para la determinación de la estructura de alimentos desde un punto de vista multi-escala, basándose en métodos de Resonancia Magnética Nuclear (NMR). Las técnicas de NMR son no invasivas y no destructivas y permiten el estudio tanto de macro- como de microestructura. Se han utilizado distintos procedimientos de NMR dependiendo del nivel que se desea estudiar. Para el nivel macroestructural, la Imagen de Resonancia Magnética (MRI) ha resultado ser muy útil para la caracterización de alimentos. Para el estudio microestructural, la MRI requiere altos tiempos de adquisición, lo que hace muy difícil la transferencia de esta técnica a aplicaciones en industria. Por tanto, la optimización de procedimientos de NMR basados en secuencias relaxometría 2D T1/T2 ha resultado ser una estrategia primordial en esta tesis. Estos protocolos de NMR se han implementado satisfactoriamente por primera vez en alto campo magnético. Se ha caracterizado la microestructura de productos alimentarios enteros por primera vez utilizando este tipo de protocolos. Como muestras, se han utilizado dos tipos de productos: modelos de alimentos y alimentos reales (manzanas). Además, como primer paso para su posterior implementación en la industria agroalimentaria, se ha mejorado una línea transportadora, especialmente diseñada para trabajar bajo condiciones de NMR en trabajos anteriores del grupo LPF-TAGRALIA. Se han estudiado y seleccionado las secuencias más rápidas y óptimas para la detección de dos tipos de desórdenes internos en manzanas: vitrescencia y roturas internas. La corrección de las imágenes en movimiento se realiza en tiempo real. 
Likewise, artificial vision protocols were used for the automatic classification of apples potentially affected by watercore. This document is divided into several chapters: Chapter 2 explains the background of this thesis and the framework of the project in which it was developed. Chapter 3 covers the state of the art. Chapter 4 establishes the objectives of this doctoral thesis. The results are divided into five subsections (within Chapter 5) corresponding to works published either in peer-reviewed journals, at international conferences, or as peer-reviewed book chapters. Section 5.1 studies the development of watercore in apples by MRI and relates it to the position of the fruit within the tree canopy. Section 5.2 presents a work on macro- and microstructure in food models. Section 5.3 is an article under review in a peer-reviewed journal, presenting a non-destructive microstructural study by 2D T1/T2 relaxometry. Section 5.4 compares watercore-affected apples using two techniques: X-ray tomography and MRI. Finally, Section 5.5 presents a study of on-line MRI sequences for the evaluation of internal quality in apples. The following chapters offer a discussion and conclusions (Chapters 6 and 7, respectively) covering all chapters of this doctoral thesis. Finally, three appendices have been added: the first introduces the basic principles of nuclear magnetic resonance (NMR), and the other two present studies on the effect of fibre on the rehydration of extruded breakfast cereals using various techniques. Both works were presented at an international conference.
The most relevant results of this doctoral thesis can be divided into three main blocks: results on macrostructure, results on microstructure, and results on on-line MRI.
Results on macrostructure:
- Magnetic resonance imaging (MRI) was successfully applied to the characterization of macrostructure. In particular, 3D reconstruction of magnetic resonance images made it possible to identify and characterize two distinct types of watercore in apples, central and radial, which are characterized by the percentage of damage and the connectivity (Euler number).
- MRI provided better contrast for watercore-affected apples than X-ray computed tomography (X-ray CT) images, as verified on identical apple samples. Moreover, the X-ray tomography acquisition time was around 12 times longer (25 minutes) than the acquisition of the magnetic resonance images (2 minutes 2 seconds).
Results on microstructure:
- 2D T1/T2 relaxometry sequences were successfully used for the study of microstructure (subcellular level). These sequences were used for the first time at high field and on whole food pieces, becoming a non-destructive way of carrying out microstructure studies.
- The use of MRI together with 2D T1/T2 relaxometry allows non-destructive multiscale studies of food.
Results on on-line MRI:
- The use of on-line magnetic resonance imaging was feasible for the identification of two types of internal disorders in apples: watercore and internal breakdown. FLASH imaging sequences proved suitable for the on-line identification of watercore in apples. This was performed without slice selection, since watercore may develop anywhere in the apple volume.
The acquisition time was reduced so that up to 1.3 fruits per second (768 ms per fruit) could be acquired. UFLARE imaging sequences were suitable for the on-line detection of internal breakdown in apples. In this case slice selection was used, since this disorder is usually located in the central part of the apple volume. The acquisition time was reduced to 0.67 fruits per second (1475 ms per fruit). In both cases (FLASH and UFLARE), algorithms for real-time image motion correction were necessary. ABSTRACT Food is a complex system formed by several structures at different scales: macroscopic and microscopic. Many properties of foods that are relevant to process engineering or to quality and postharvest treatments are related to their microstructure. This Ph.D thesis proposes a complete methodology for food structure determination, in a multiscale way, based on the Nuclear Magnetic Resonance (NMR) phenomenon, since NMR techniques are non-invasive and non-destructive and allow the study of both macro- and microstructure. Different NMR procedures are used depending on the structural level under study. At the macrostructure level, Magnetic Resonance Imaging (MRI) revealed its usefulness for food characterization. For microstructure insight, MRI requires long acquisition times, which is a hindrance for transfer to industrial applications. Therefore, the optimization of NMR procedures based on 2D T1/T2 relaxometry sequences was a key strategy in this thesis. These NMR relaxometry protocols were successfully implemented at high magnetic field, and the microstructure of entire food products has been characterized for the first time using them. Two different types of food products have been studied: food models and actual food (apples).
Furthermore, as a first step towards implementation in the food industry, a grading line system, specially designed in previous works of the LPF-TAGRALIA group to operate under NMR conditions, is improved. The study and selection of the most suitable rapid sequences to detect two different types of disorders in apples (watercore and internal breakdown) is performed, and real-time image motion correction is applied. In addition, artificial vision protocols for the automatic classification of apples potentially affected by watercore are applied. This document is divided into seven chapters: Chapter 2 explains the thesis background and the framework of the project in which it was developed. Chapter 3 comprises the state of the art. Chapter 4 establishes the objectives of this Ph.D thesis. The results are divided into five sections (in Chapter 5) that correspond to published, peer-reviewed works. Section 5.1 assesses watercore development in apples with MRI and studies the effect of fruit location in the canopy. Section 5.2 is an MRI and 2D relaxometry study for macro- and microstructure assessment in food models. Section 5.3 is a non-destructive microstructural study using 2D T1/T2 relaxometry on watercore-affected apples. Section 5.4 compares X-ray CT and MRI on the watercore disorder of different apple cultivars. Section 5.5 is a study of on-line MRI sequences for the evaluation of apple internal quality. The subsequent chapters offer a general discussion and conclusions (Chapters 6 and 7, respectively) of all the works performed in the frame of this Ph.D thesis (two peer-reviewed journal articles, one book chapter and one international congress contribution). Finally, three appendices are included, in which an introduction to NMR principles is offered and two published proceedings regarding the effect of fibre on the rehydration of extruded breakfast cereals are presented.
The most relevant results can be summarized in three blocks: results on macrostructure, results on microstructure and results on on-line MRI.
Results on macrostructure:
- MRI was successfully used for macrostructure characterization. Indeed, 3D reconstruction of MRI in apples allowed the identification of two different types of watercore (radial and block), which are characterized by the percentage of damage and the connectivity (Euler number).
- MRI provides better contrast for watercore than X-ray CT, as verified on identical samples. Furthermore, the X-ray CT image acquisition time was around 12 times longer (25 minutes) than the MRI acquisition time (2 minutes 2 seconds).
Results on microstructure:
- 2D T1/T2 relaxometry sequences were successfully applied for microstructure (subcellular level) characterization. These sequences were applied for the first time at high field on entire food pieces, providing a non-destructive way to study microstructure.
- The use of MRI together with 2D T1/T2 relaxometry sequences allows a non-destructive multiscale study of food.
Results on on-line MRI:
- The use of on-line MRI was successful for the identification of two different internal disorders in apples: watercore and internal breakdown. FLASH imaging was a suitable technique for the on-line detection of the watercore disorder in apples, with no slice selection, since watercore is a physiological disorder that may develop anywhere in the apple volume. 1.3 fruits were imaged per second (768 ms per fruit). UFLARE imaging was a suitable sequence for the on-line detection of the internal breakdown disorder in apples. Slice selection was used, as internal breakdown is usually located in the central slice of the apple volume. 0.67 fruits were imaged per second (1475 ms per fruit). In both cases (FLASH and UFLARE), motion correction was performed in real time, during the acquisition of the images.

Relevância:

30.00%

Publicador:

Resumo:

Documentation in software projects presents a series of problems that affect product quality and the software process. Documentation is frequently regarded merely as an additional volume of information available to the organization and the development team, one that slows down project execution. In this sense, the role of documentation in a project is conceived as one of the most costly and time-consuming activities, which is afterwards not used extensively. In many cases, documentation is relegated to the background and is undervalued compared with face-to-face communication. There is also a relationship between documentation quality and the software process: difficulties arise in the adoption of good software process practices, together with the excess of documentation perceived by the managers of projects in which a software process improvement programme is to be undertaken. It should be remembered that the quality of documentation is closely related to the use developers can make of it. This thesis addresses the problem by proposing a change of viewpoint on the software process, in which documentation ceases to be a by-product of the process activities and tasks and becomes the element that structures the process itself. From this new viewpoint, the definition of the documents themselves, their properties and their relationships makes it possible to establish and guide software processes of any kind. To this end, a metamodel for the definition of document-centric methodologies is developed. This metamodel is checked against a set of software documentation quality attributes to verify that it improves them and, consequently, that the quality of software documentation in general is improved.
Finally, this document-centric metamodel is used to describe an agile methodology (Scrum) and to validate the capability and flexibility of the metamodel by applying the change of viewpoint on the software process proposed in this thesis. ABSTRACT The documentation in software projects has a number of problems affecting the quality of the product and the software process. Often, documentation is considered only as an additional volume of information available to the organization and the development team, one which slows project execution. In this sense, the role of documentation in a project is conceived as one of the most expensive and time-consuming activities, which is then not used extensively. Documentation is, in many cases, relegated to the background and undervalued compared with face-to-face communication. There is also a relationship between the quality of the documentation and the software process: there are difficulties in adopting good software process practices, and project managers engaged in Software Process Improvement activities perceive an excess of documentation. We have to remember that the quality of the documentation is closely related to the use developers can make of it. This thesis addresses the problem by proposing a change of view on the software process, in which documentation goes from being a by-product of the activities and tasks of the process to being the element that structures the process itself. Through this new view, the definition of the documents themselves, their properties and their relationships allows us to establish processes and guidance for developing software of any kind. To achieve this, a metamodel for defining document-centric methodologies has been developed. This metamodel is checked against a number of software documentation quality attributes to prove that there is an improvement on these and, therefore, that the quality of the software documentation is improved.
Finally, this document-centric metamodel is used to describe an agile methodology (Scrum) in order to validate the capability and flexibility of the metamodel, using the proposed change of view on the software process described in this thesis.
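The document-centric viewpoint described above can be illustrated with a small sketch. All names and structures here are invented for illustration (this is not the thesis metamodel itself): documents are defined with a responsible role and dependency relations, and a valid process ordering falls out of those relations.

```python
# Hypothetical illustration of a document-centric process definition:
# documents, their properties and their precedence relations drive the
# process ordering. Names are invented; not the actual thesis metamodel.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    produced_by: str                                  # role responsible for it
    depends_on: list = field(default_factory=list)    # documents required first

def process_order(docs):
    """Topologically order documents so each comes after its dependencies."""
    ordered, seen = [], set()
    def visit(doc):
        if doc.name in seen:
            return
        for dep in doc.depends_on:
            visit(dep)
        seen.add(doc.name)
        ordered.append(doc.name)
    for d in docs:
        visit(d)
    return ordered

# A minimal Scrum-like example: the Sprint Backlog depends on the Product
# Backlog; the Sprint Review notes depend on the Sprint Backlog.
backlog = Document("Product Backlog", "Product Owner")
sprint = Document("Sprint Backlog", "Development Team", [backlog])
review = Document("Sprint Review Notes", "Scrum Team", [sprint])
print(process_order([review]))
# → ['Product Backlog', 'Sprint Backlog', 'Sprint Review Notes']
```

The point of the sketch is that no explicit workflow is written anywhere: the sequence of process steps is derived entirely from the declared document relations.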

Relevância:

30.00%

Publicador:

Resumo:

Strict technical quality assurance procedures are essential for PV plant bankability. For large-scale PV plants, this is typically accomplished in three consecutive phases: an energy yield forecast, performed at the beginning of the project, typically by means of a simulation exercise carried out with dedicated software; a reception test campaign, performed at the end of commissioning, consisting of a set of tests to determine the efficiency and reliability of the PV plant devices; and a performance analysis of the first years of operation, which consists of comparing the real energy production with that calculated from the recorded operating conditions, taking the maintenance records into account. In the last six years, IES-UPM has carried out both indoor and on-site quality control campaigns for more than 60 PV plants, with an accumulated power of more than 300 MW, in close contact with Engineering, Procurement and Construction contractors and financial entities. This paper presents the lessons learned from that experience.
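The performance-analysis phase described above can be sketched as follows. This is a minimal illustration under assumed values (nominal power, a generic temperature coefficient, invented hourly records), not the actual IES-UPM procedure, which also accounts for maintenance records and device-level test results:

```python
# Sketch of a performance analysis: expected energy is computed from the
# recorded operating conditions (irradiance, cell temperature) with a simple
# PV performance model, then compared with the measured production.
# All parameter values below are assumptions for illustration.
G_STC = 1000.0   # W/m^2, irradiance at Standard Test Conditions
T_STC = 25.0     # deg C, cell temperature at STC
GAMMA = -0.004   # 1/deg C, assumed power temperature coefficient

def expected_energy_kwh(p_nom_kw, records):
    """records: iterable of (hours, irradiance W/m^2, cell temp degC)."""
    total = 0.0
    for hours, g, t_cell in records:
        power_kw = p_nom_kw * (g / G_STC) * (1.0 + GAMMA * (t_cell - T_STC))
        total += power_kw * hours
    return total

def performance_index(measured_kwh, expected_kwh):
    """Ratio of real production to model-expected production."""
    return measured_kwh / expected_kwh

# Two invented hourly records for a 1 MW nominal plant.
records = [(1.0, 800.0, 45.0), (1.0, 600.0, 40.0)]
exp_kwh = expected_energy_kwh(1000.0, records)
print(round(performance_index(exp_kwh * 0.95, exp_kwh), 2))  # → 0.95
```

In practice a sustained performance index well below 1 over the first years of operation is what triggers the comparison against maintenance records mentioned in the abstract.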

Relevância:

30.00%

Publicador:

Resumo:

BioMet®Phon is a software application developed for the characterization of voice in voice quality evaluation. It was initially conceived as plain research code to estimate the glottal source from voice recordings and to obtain the biomechanical parameters of the vocal folds from the spectral density of that estimate. This code grew into what is now the Glottex®Engine package (G®E). Further demands from users in the laryngology and speech therapy fields prompted the development of a specific Graphical User Interface (GUI) to encapsulate user interaction with the G®E. This gave rise to BioMet®Phon, an application which extracts the glottal source from voice and offers a complete parameterization of this signal, including distortion, cepstral, spectral, biomechanical, time-domain, contact and tremor parameters. The semantic capabilities of the biomechanical parameters are discussed. Case studies from its application to the fields of laryngology and speech therapy are given and discussed. Validation results in voice pathology detection are also presented. Applications to laryngology, speech therapy, and the monitoring of neurological deterioration in the elderly are proposed.
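Among the distortion parameters mentioned above are the classical jitter and shimmer measures. The following is a generic sketch of their textbook definitions (mean absolute cycle-to-cycle variation relative to the mean), not the Glottex®Engine implementation, whose interface is not described in the abstract; the pitch periods and amplitudes are invented values:

```python
# Generic jitter/shimmer computation: two classical voice distortion
# parameters, expressed as percentages. Not the Glottex(R)Engine API.

def jitter_percent(periods_ms):
    """Jitter: mean absolute cycle-to-cycle pitch-period difference,
    relative to the mean pitch period."""
    diffs = [abs(a - b) for a, b in zip(periods_ms, periods_ms[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods_ms) / len(periods_ms))

def shimmer_percent(amplitudes):
    """Shimmer: mean absolute cycle-to-cycle peak-amplitude difference,
    relative to the mean peak amplitude."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

periods = [10.0, 10.2, 9.9, 10.1]   # hypothetical pitch periods in ms
amps = [1.00, 0.97, 1.02, 0.99]     # hypothetical cycle peak amplitudes
print(round(jitter_percent(periods), 2))  # → 2.32
```

Elevated values of these measures relative to normative thresholds are one of the simplest cues used in voice pathology detection, which is why they sit alongside the spectral and biomechanical parameters in the full parameterization.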

Relevância:

30.00%

Publicador:

Resumo:

In the last decades, software systems have become an intrinsic element of our daily lives. Software exists in our computers, in our cars, and even in our refrigerators. Today's world has become heavily dependent on software and yet we still struggle to deliver quality software products on time and within budget. When searching for the causes of such an alarming scenario, we find concurrent voices pointing to the role of the project manager. But what is project management, and what makes it so challenging? Part of the answer to this question requires a deeper analysis of why software project managers have been largely ineffective. Answering it might assist current and future software project managers in avoiding, or at least effectively mitigating, problematic scenarios that, if unresolved, will eventually lead to additional failures. This is where anti-patterns come into play and where they can be a useful tool in identifying and addressing software project management failure. Unfortunately, anti-patterns are still a fairly recent concept, and thus the available information is scarce and loosely organized. This thesis attempts to help remedy that situation. The objective of this work is to help organize existing, documented software project management anti-patterns by answering our two research questions: · What are the different anti-patterns in software project management? · How can these anti-patterns be categorized?