896 results for Software testing. Test generation. Grammars
Abstract:
Nowadays, the use of technology has been essential to the advancement of societies and has drawn in people without computing knowledge, so-called "non-expert" users; for this reason, researchers have needed to produce studies that allow systems to be adapted to the problems that arise in the computing field. A recurring need of every system user is information management, which can be handled through a database and a specific language such as SQL (Structured Query Language); however, this forces the user without such knowledge to turn to a specialist for its design and construction, which translates into costs and complex methods. This raises a question: what can be done when projects are small and resources and processes are limited? Building on research carried out at the University of Washington [39], where SQL statements are synthesized from input-output examples, this thesis aims to automate the process and apply a different learning technique, using an evolutionary approach in which an adapted genetic algorithm produces valid SQL statements that satisfy the conditions established by the input-output examples given by the user. The result of this approach is a tool called EvoSQL, which was validated in this study. Of the 28 exercises used in [39], 23 yielded perfect results and 5 were unsuccessful, which represents an effectiveness of 82.1%. This effectiveness is 10.7 percentage points higher than that of SQLSynthesizer, the tool developed in [39], and 75% higher than the next closest tool, Query by Output (QBO) [31].
The average execution time per exercise was 3 minutes and 11 seconds, which is longer than that of SQLSynthesizer; however, a genetic algorithm involves phases that widen the time ranges, so the time obtained is acceptable for applications of this kind. In conclusion, an automatic tool based on an evolutionary approach was obtained, with good results and a simple process for the "non-expert" user.
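The evolutionary search described above can be illustrated with a minimal sketch (not EvoSQL's actual implementation; the table, predicate grammar, and parameters are invented for illustration): a population of candidate WHERE predicates is scored against the user's input-output example and evolved until one reproduces the expected rows.

```python
import random

# Toy input table and the desired output (the user's input-output example).
ROWS = [{"id": 1, "age": 17}, {"id": 2, "age": 25},
        {"id": 3, "age": 40}, {"id": 4, "age": 15}]
EXPECTED = [{"id": 2, "age": 25}, {"id": 3, "age": 40}]

OPS = {"<": lambda a, b: a < b, ">": lambda a, b: a > b, ">=": lambda a, b: a >= b}

def run_query(op, threshold):
    """Evaluate the candidate 'SELECT * FROM t WHERE age <op> <threshold>'."""
    return [r for r in ROWS if OPS[op](r["age"], threshold)]

def fitness(ind):
    """Fraction of rows classified (kept/discarded) as in the expected output."""
    got = run_query(*ind)
    hits = sum((r in EXPECTED) == (r in got) for r in ROWS)
    return hits / len(ROWS)

def evolve(generations=200, pop_size=20, seed=0):
    """Tiny genetic loop: keep the fitter half, mutate it into children."""
    rng = random.Random(seed)
    pop = [(rng.choice(list(OPS)), rng.randint(0, 60)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 1.0:          # a predicate reproduces the example
            break
        survivors = pop[: pop_size // 2]
        children = [(rng.choice(list(OPS)), p[1] + rng.randint(-5, 5))
                    for p in survivors]     # mutate operator and threshold
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

On this toy example the loop quickly finds a predicate such as `age > 17` that selects exactly the expected rows; EvoSQL applies the same idea to full SQL statements.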
Abstract:
Introduction: The Centros Infantiles de Cultura Productiva (CICP) programme, specific to Tocancipá and run by the Municipal Office of Social Development, consists of centres that care for children aged 5 to 12 in SISBEN levels 1 and 2, who receive nutritional support and help with schoolwork, within the framework of the global Millennium Development initiatives, the 2006-2010 National Development Plan and the 2008-2011 Municipal Development Plan. Methodology: A pre-experimental post-test study with a comparison group to evaluate the nutritional status of children in the CICP programme in the central zone of the municipality in 2010. It evaluated 286 schoolchildren with the anthropometric indicators height-for-age and BMI-for-age according to the WHO 2007 standards. Nutritional classification was determined by z-score with the Anthro Plus software, and a t-test for independent populations was run in SPSS 15. Results: Across all observations, the prevalence of risk of low height-for-age was 28.7%, low height-for-age 12.9%, overweight 12.6%, obesity 2.4%, risk of thinness 15% and thinness 2.8%. Analysis of the menu offered in the CICP programme on one observation day showed that it provides 100% of daily requirements, without considering each child's food intake at home. Conclusions: There is no statistically significant difference between the groups. The prevalence of obesity in the CICP beneficiary group is 3.9% vs. 1.3% in the control group, related to the nutritional contributions, which contrasts with a proportion of thinness of 4.7% in the CICP group vs. 1.3% in the control group. A follow-up study is needed to confirm the findings of the present study.
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure exercising the logic of each advice - statements, branches, and def-use pairs - at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach. (C) 2010 Elsevier Inc. All rights reserved.
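The advice/pointcut mechanism itself can be approximated in plain Python with decorators; the following toy sketch (not the paper's PCDU model or AspectJ - all names are illustrative) weaves a tracing advice into every method whose name matches a pointcut pattern:

```python
import fnmatch
import functools

CALLS = []  # trace of advice executions, one entry per affected join point

def advice(fn):
    """'Before' advice: runs the crosscutting logic at each matched join point."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        CALLS.append(fn.__name__)          # crosscutting concern (tracing)
        return fn(*args, **kwargs)
    return wrapper

def pointcut(pattern):
    """Class decorator: weave `advice` into every method matching `pattern`."""
    def weave(cls):
        for name, member in list(vars(cls).items()):
            if callable(member) and fnmatch.fnmatch(name, pattern):
                setattr(cls, name, advice(member))
        return cls
    return weave

@pointcut("set_*")                          # quantifies over join points
class Account:
    def __init__(self):
        self.balance = 0
    def set_balance(self, value):
        self.balance = value
    def report(self):
        return self.balance

acct = Account()
acct.set_balance(10)
acct.report()
```

Note that `report` is untouched while `set_balance` silently gains extra behavior - exactly the implicitness that motivates the coverage analysis above.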
Abstract:
This work studies and evaluates techniques for accelerating functional timing analysis (FTA) algorithms based on automatic test pattern generation (ATPG). To that end, three well-known algorithms are covered: the D-algorithm, PODEM and FAN. After analysing these algorithms and studying several acceleration techniques, the DETA algorithm (Delay Enumeration-Based Timing Analysis) is proposed, which determines the critical delay of circuits containing complex gates. DETA is defined as an ATPG-based algorithm with concurrent path sensitization. In the implementation of the algorithm, it was possible to validate the delay computation model for circuits containing complex gates using the implicit macro-expansion approach. In addition, some partial results show that, for some circuits, DETA exhibits only a small dependence on the number of inputs compared with the dependence seen in the simulation procedure. It is thus possible to avoid an extensive search before finding the test and, in this way, to apply acceleration methods to the algorithm successfully.
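The core ATPG idea - finding an input vector that distinguishes the good circuit from a faulty one - can be shown with a brute-force toy sketch (the circuit and fault are invented; real algorithms such as the D-algorithm, PODEM and FAN exist precisely to prune this exhaustive search):

```python
from itertools import product

def good_circuit(a, b, c):
    # c1 = AND(a, b); out = OR(c1, c)
    return (a and b) or c

def faulty_circuit(a, b, c):
    # same circuit with the AND-gate output stuck-at-0
    return False or c

def find_test():
    """Brute-force ATPG: find an input vector whose good/faulty outputs differ."""
    for vec in product([False, True], repeat=3):
        if good_circuit(*vec) != faulty_circuit(*vec):
            return vec                      # this vector detects the fault
    return None                             # fault is undetectable

test = find_test()
```

The search must both activate the fault (drive the AND output to 1) and propagate the difference to the output (hold c at 0); structured algorithms derive such vectors directly instead of enumerating all 2^n inputs.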
Abstract:
Background: High plasma uric acid (UA) is a prerequisite for gout and is also associated with the metabolic syndrome and its components, and consequently with risk factors for cardiovascular diseases. Hence, the management of UA serum concentrations would be essential for the treatment and/or prevention of human diseases and, to that end, it is necessary to know the main factors that drive increases in uricemia. The aim of this study was to evaluate the main factors associated with higher uricemia values by analyzing diet, body composition and biochemical markers. Methods: 415 individuals of both genders, aged 21 to 82 years, who participated in a lifestyle modification project were studied. Anthropometric evaluation consisted of weight and height measurements with subsequent BMI estimation. Waist circumference was also measured. Muscle mass (Muscle Mass Index - MMI) and fat percentage were measured by bioimpedance. Dietary intake was estimated by 24-hour recalls with subsequent quantification of the servings on the Brazilian food pyramid and the Healthy Eating Index. Uric acid, glucose, triglycerides (TG), total cholesterol, urea, creatinine, gamma-GT, albumin, calcium and HDL-c were quantified in serum by the dry-chemistry method. LDL-c was estimated by the Friedewald equation and ultrasensitive C-reactive protein (CRP) by the immunochemiluminescence method. Statistical analysis was performed with the SAS software package, version 9.1. Logistic regression (odds ratio) was performed with a 95% confidence interval (CI) in order to estimate the odds of presenting UA above the last quartile (♂ UA > 6.5 mg/dL and ♀ UA > 5 mg/dL). The level of significance adopted was 5%. Results: Individuals with BMI ≥ 25 kg/m² (OR = 2.28, 1.13-4.6) and lower MMI (OR = 13.4, 5.21-34.56) showed greater odds of high UA levels even after all adjustments (gender, age, CRP, gamma-GT, LDL, creatinine, urea, albumin, HDL-c, TG, arterial hypertension and glucose).
As regards biochemical markers, higher triglycerides (OR = 2.76, 1.55-4.90), US-CRP (OR = 2.77, 1.07-7.21) and urea (OR = 2.53, 1.19-5.41) were associated with greater odds of high UA (adjusted for gender, age, BMI, waist circumference, MMI, glomerular filtration rate, and MS). No association was found between diet and UA. Conclusions: The main factors associated with increased UA were altered BMI (overweight and obesity), muscle hypotrophy (MMI), and higher levels of urea, triglycerides, and CRP. No dietary components were found among the uricemia predictors. © 2013 de Oliveira et al.; licensee BioMed Central Ltd.
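An odds ratio with its 95% confidence interval, as reported above (e.g. OR = 2.28, 1.13-4.6), can be derived from a 2x2 contingency table; a minimal sketch using Woolf's logit method, with made-up counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study's data):
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
```

A multivariable model such as the one in the study additionally adjusts each OR for the listed covariates, which a raw 2x2 table cannot do.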
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Engenharia Elétrica - FEIS
Abstract:
This project gives a brief overview of several concepts, such as Renewable Energy Resources, Distributed Energy Resources and Distributed Generation, and describes the general architecture of an electrical microgrid, either isolated or connected to the Medium Voltage Network. Moreover, it focuses on a project carried out by the GRECDH Department in collaboration with the CITCEA Department, both belonging to the Universitat Politècnica de Catalunya, concerning isolated microgrids employing renewable energy resources in two communities in northern Peru. Several solutions found with optimization software for different generation systems (wind and photovoltaic) and different energy-demand scenarios are discussed and analyzed from an electrical point of view. Furthermore, some proposals are made to improve microgrid performance, in particular to increase the voltage at each load connected to the microgrid. The extra costs required by the proposed solutions are calculated and their effect on the total microgrid cost is taken into account; finally, some considerations are given about the impact of the project on the population and on people's daily life.
Abstract:
The project "LMS adaptive filtering applications to improve the response of accelerometers" was carried out with the aim of removing unwanted signals from the information signal coming from accelerometers in automotive applications, using LMS adaptive filter algorithms. The project comprised three areas of work, carried out from the first to the last day. In the first area, we designed low-pass, high-pass, band-pass and band-stop filters of the Butterworth, Chebyshev (type I and type II) and elliptic families. The purpose of this first part was to get to know - or in our case, to revisit - the Matlab environment and the pre-built functions it offers, as well as to learn about the characteristics of these filters, in order to test them later on the DSP. In the second stage, we focused on the design of our LMS adaptive filter, experimenting first in Matlab to understand its behaviour. Once this part was clear, we loaded the code onto the DSP, compiled it and debugged it using Visual DSP. During this second stage we began to excite the system inputs with signals from Cool Edit Pro and, to see how the LMS adaptive filter behaved, with signals from a function generator, so as to obtain a phase offset between the two input signals; Cool Edit Pro itself was also used to obtain phase-shifted signals, but since that software could not be used in stage three, the tests were carried out with the function generator.
Finally, in the third stage, after verifying the desired behaviour of our adaptive filter with simulated input signals, we moved to a laboratory, where we used signals from a 4000A accelerometer and, of course, from the function generator, which served to form our reference signal, allowing one of the frequencies emitted by the accelerometer to be cancelled. The LMS adaptive filter behaved adequately and as expected. In tests with phase-shifted input signals we observed a curious response at the system output: the greater the phase offset between the signals, the more noticeable the frequency to be removed remained; this was solved by increasing the filter order. We can conclude that although the digital filters tested in the first stage are useful, to obtain a response as close to ideal as possible the filter order must be taken into account, and it must be very high so that frequencies close to the cutoff frequency are not attenuated. In contrast, with LMS adaptive filters, if we want, for example, to remove one signal out of three, it is enough to feed the frequency to be removed into one of the filter inputs, namely the reference signal. In this way, one of the three signals can be removed without affecting the other two.
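The LMS noise-cancelling setup described above - a primary input carrying the signal plus interference, and a reference input correlated with the interference - can be sketched as follows (a toy Python illustration, not the project's DSP code; filter order, step size and frequencies are invented):

```python
import math

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS noise canceller: adapt an FIR filter on `reference` so its output
    tracks the interference inside `primary`; the error signal that remains
    is the cleaned-up signal of interest."""
    w = [0.0] * order                       # adaptive FIR weights
    buf = [0.0] * order                     # delay line of reference samples
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                # newest reference sample first
        y = sum(wi * xi for wi, xi in zip(w, buf))   # interference estimate
        e = d - y                           # error = cleaned signal estimate
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]  # LMS update
        out.append(e)
    return out

# Synthetic demo: a slow "information" sinusoid plus a faster interference tone.
n = 4000
signal = [math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
noise = [0.8 * math.sin(2 * math.pi * 0.11 * k) for k in range(n)]
primary = [s + v for s, v in zip(signal, noise)]
cleaned = lms_cancel(primary, noise)        # reference = the interference tone
```

After the initial adaptation transient, `cleaned` closely follows `signal`: the tone fed in through the reference input is cancelled while the other component is left untouched, matching the behaviour reported above.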
Abstract:
Requirements elicitation is a process with social connotations involving different people (stakeholders), a circumstance that causes certain problems to arise when requirements conceptualization is carried out. A requirements conceptualization process is proposed, structured in two phases: (a) Problem-Oriented Analysis, whose goal is to understand the problem stated by the user in the domain in which it occurs, and (b) Product-Oriented Analysis, whose goal is to obtain the functionality the user expects from the software product to be developed, taking into account its relation to the reality expressed by the user in his or her discourse. Six techniques are proposed that articulate each of the tasks making up the phases of the proposed process.
Abstract:
UPMSat-2, the successor of UPM-Sat 1, is a joint project for the development of an experimental micro-satellite, carried out by several research groups at the Universidad Politécnica de Madrid. Within this framework, the author of this document has carried out three fundamental tasks to make it possible to put the satellite into orbit. The scope of this work is to enable the use of the on-board computer's non-volatile memory and to verify the operation of all the satellite's systems. To this end, booting from the non-volatile memory has been implemented, together with a driver for using that memory and a set of software validation and hardware integration tests. The satisfactory results have made it possible to include the developed software and tests in the overall UPMSat-2 software, thus contributing to the satellite's readiness to be put into orbit.
Abstract:
The molecular clock does not tick at a uniform rate in all taxa but may be influenced by species characteristics. Eusocial species (those with reproductive division of labor) have been predicted to have faster rates of molecular evolution than their nonsocial relatives because of greatly reduced effective population size; if most individuals in a population are nonreproductive and only one or few queens produce all the offspring, then eusocial animals could have much lower effective population sizes than their solitary relatives, which should increase the rate of substitution of nearly neutral mutations. An earlier study reported faster rates in eusocial honeybees and vespid wasps but failed to correct for phylogenetic nonindependence or to distinguish between potential causes of rate variation. Because sociality has evolved independently in many different lineages, it is possible to conduct a more wide-ranging study to test the generality of the relationship. We have conducted a comparative analysis of 25 phylogenetically independent pairs of social lineages and their nonsocial relatives, including bees, wasps, ants, termites, shrimps, and mole rats, using a range of available DNA sequences (mitochondrial and nuclear DNA coding for proteins and RNAs, and nontranslated sequences). By including a wide range of social taxa, we were able to test whether there is a general influence of sociality on rates of molecular evolution and to test specific predictions of the hypothesis: (1) that social species have faster rates because they have reduced effective population sizes; (2) that mitochondrial genes would show a greater effect of sociality than nuclear genes; and (3) that rates of molecular evolution should be correlated with the degree of sociality. We find no consistent pattern in rates of molecular evolution between social and nonsocial lineages and no evidence that mitochondrial genes show faster rates in social taxa.
However, we show that the most highly eusocial Hymenoptera do have faster rates than their nonsocial relatives. We also find that social parasites (that utilize the workers from related species to produce their own offspring) have faster rates than their social relatives, which is consistent with an effect of lower effective population size on rate of molecular evolution. Our results illustrate the importance of allowing for phylogenetic nonindependence when conducting investigations of determinants of variation in rate of molecular evolution.
Abstract:
This paper presents research on applying collaborative learning and authoring during all delivery phases of e-learning programmes or e-courses offered by educational institutions. The possibilities for modelling an e-project as a specific management process, based on planned, dynamically changing, or accidentally arising sequences of learning activities, are discussed. New approaches to project-based and collaborative learning and authoring are presented. Special types of test questions are introduced which allow test generation and authoring based on learners' answers accumulated within a given e-course. Experiments are carried out in an e-learning environment named BEST.
Investigation of factors influencing loyalty – The role of involvement, perceived risk and knowledge
Abstract:
Our research aimed to reveal the effects that can be observed during the buying process for food products and that can influence customers' decisions. We focused on the role of enduring involvement in customers' behavioural loyalty, that is, the repurchase of food brands. To understand this relationship in a more sophisticated way, we included two mediating constructs in our conceptual model: perceived risk and perceived knowledge of food products. Data were collected among undergraduate students through an online survey, and we used SPSS/AMOS software to test the model. The results only partly supported our hypotheses: although the effects of involvement on loyalty and on the two mediating constructs were strong enough, loyalty could not be explained well by perceived risk and knowledge. The roles of further mediating/moderating variables should be determined and investigated in the next stage of the research series.