999 results for "Sistemas de avaliação" (evaluation systems)
Abstract:
Brazil is going through the transition from analog to digital television transmission. This new technology, in addition to providing high-quality audio and video, also allows applications to run on the television. Equipment called a Set-Top Box is needed to receive this new signal and create the environment necessary to run applications. At first, the only way to interact with these applications is through the remote control. However, the remote control has serious usability problems when used with some types of applications. This research proposes the implementation of software resources capable of creating an environment that allows a smartphone to interact with these applications. Besides this implementation, a comparative study is carried out between using the remote control and using smartphones to interact with digital television applications, taking into account usability-related parameters. After analyzing the data collected in the comparative study, it is possible to identify which device provides the more engaging interactive experience for users.
Abstract:
Product derivation tools are responsible for automating the development process of software product lines. The configuration knowledge, which is responsible for mapping the problem space to the solution space, plays a fundamental role in product derivation approaches. Each product derivation approach adopts different strategies and techniques to manage the variabilities in code assets. There is a lack of empirical studies analyzing these different approaches. This dissertation aims to compare automatic product derivation approaches systematically through two empirical studies. The studies are analyzed from two perspectives: (i) a qualitative one, which analyzes the characteristics of the approaches using specific criteria; and (ii) a quantitative one, which quantifies specific properties of the product derivation artifacts produced by the different approaches. A set of criteria and metrics is also proposed to support the qualitative and quantitative analyses. Two software product lines, from the web and mobile application domains, are the targets of our study.
Abstract:
Multi-agent system designers need to determine the quality of systems in the earliest phases of the development process. Agent architectures are also part of the design of these systems and therefore also need to have their quality evaluated. Motivated by the important role that emotions play in our daily lives, embodied agent researchers have aimed to create agents capable of affective and natural interaction with users that produces a beneficial or desirable result. To this end, several studies proposing agent architectures with emotions have appeared without accompanying methods suitable for assessing these architectures. The objective of this study is to propose a methodology for evaluating emotional agent architectures, one that assesses the quality attributes of the architectural design and, through human-computer interaction evaluation, the effects on the subjective experience of users of applications that implement them. The methodology is based on a well-defined model of metrics. In assessing the quality of the architectural design, the attributes evaluated are extensibility, modularity and complexity. In assessing the effects on users' subjective experience, which involves implementing the architecture in an application (we suggest the domain of computer games), the metrics are: enjoyment, felt support, warmth, caring, trust, cooperation, intelligence, interestingness, naturalness of emotional reactions, believability, reduction of frustration and likeability, as well as average time and average number of attempts. We experimented with this approach by evaluating five emotional agent architectures: BDIE, DETT, Camurra-Coglio, EBDI and Emotional-BDI. Two of the architectures, BDIE and EBDI, were implemented in a version of the game Minesweeper and evaluated for human-computer interaction. In the results, DETT stood out with the best architectural design.
Users who played the version of the game with emotional agents performed better than those who played without agents. In the assessment of the users' subjective experience, the differences between the architectures were insignificant.
Abstract:
In recent years, the academic community and the software industry have shown substantial interest in approaches and technologies related to model-driven development (MDD). At the same time, the industry's relentless pursuit of technologies that raise productivity and quality in software product development continues. This work explores these two observations through an experiment carried out using MDD technology and the evaluation of its use in solving an actual problem in the security context of enterprise systems. By building and using a tool, a visual DSL named CALV3, inspired by the software factory approach (a synergy between software product lines, domain-specific languages and MDD), we evaluate the gains in abstraction and productivity through a systematic case study conducted with a development team. The results and lessons learned from evaluating this tool in industry are the main contributions of this work.
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an alternative architecture for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip call for component reuse, parallelism and scalability, enhancing reusability in dedicated application designs. In the literature, many proposals have been made, suggesting different configurations for network-on-chip architectures. Among the networks-on-chip considered, the IPNoSys architecture is an unconventional one, since it allows operations to be executed while the communication process is performed. This study aims to evaluate the execution of dataflow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Dataflow-based applications are characterized by a continuous stream of data on which operations are executed. We expect that these types of applications can be improved when running on IPNoSys, because they have a programming model similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the execution model of the IPNoSys network, allowing the implementation of instruction-level parallelism. To this end, implementations of dataflow applications were analyzed and compared.
Abstract:
Mining Software Repositories (MSR) is a research area that analyzes software repositories in order to derive information relevant to software engineering research and practice. The main goal of repository mining is to extract static information from repositories (e.g., a code repository or a change request system) and turn it into valuable information that supports decision making in software projects. Another research area, Process Mining (PM), aims to uncover the characteristics of the underlying processes of business organizations, supporting process improvement and documentation. Recent works have carried out several analyses using MSR and PM techniques: (i) to investigate the evolution of software projects; (ii) to understand the actual underlying process of a project; and (iii) to build defect prediction models. However, few works have focused on analyzing the contributions of software developers by means of MSR and PM techniques. In this context, this dissertation develops two empirical studies assessing the contribution of software developers to an open-source and a commercial project using those techniques. The contributions of developers are assessed from three different perspectives: (i) buggy commits; (ii) the size of commits; and (iii) the most important bugs. For the open-source project, 12,827 commits and 8,410 bugs were analyzed, while 4,663 commits and 1,898 bugs were analyzed for the commercial project. Our results indicate that, for the open-source project, the developers classified as core developers contributed more buggy commits (although they also contributed the majority of commits), more code to the project (commit size) and more solved important bugs, while for the commercial project the results could not indicate statistically significant differences between developer groups.
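The per-developer perspectives described above (number of commits, buggy commits and commit size) amount to a simple aggregation over commit records. The sketch below illustrates the idea only; the record layout and sample data are invented for this example and are not taken from the dissertation's actual mining pipeline.

```python
from collections import defaultdict

# Hypothetical commit records: (developer, is_buggy, lines_changed)
commits = [
    ("alice", True, 120),
    ("alice", False, 300),
    ("bob", False, 40),
    ("alice", True, 15),
]

def contribution_summary(records):
    """Aggregate per-developer contribution metrics:
    total commits, buggy commits and total commit size."""
    summary = defaultdict(lambda: {"commits": 0, "buggy": 0, "size": 0})
    for dev, buggy, size in records:
        entry = summary[dev]
        entry["commits"] += 1
        entry["buggy"] += int(buggy)
        entry["size"] += size
    return dict(summary)

stats = contribution_summary(commits)
```

A real study would derive `is_buggy` from links between commits and bug reports, and would follow the aggregation with statistical tests to compare developer groups.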
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
One way to deal with the high complexity of current software systems is through self-adaptive systems. A self-adaptive system must be able to monitor itself and its environment, analyze the monitored data to determine the need for adaptation, decide how the adaptation will be performed and, finally, make the necessary adjustments. One way to adapt a system is to generate, at runtime, the process that will perform the adaptation. An advantage of this approach is the possibility of taking into account features that can only be evaluated at runtime, such as the emergence of new components that allow new architectural arrangements not foreseen at design time. In this work, our main objective is to use a framework for dynamic process generation to generate architectural adaptation plans in an OSGi environment. Our main interest is to evaluate how this framework behaves in new environments.
Abstract:
Clinical evaluation of the lower limbs in venous insufficiency does not by itself identify the systems involved or the anatomical levels, so complementary exams are needed. These exams may be invasive or non-invasive. The invasive ones, such as phlebography and ambulatory venous pressure measurement, despite their good accuracy, cause discomfort and complications. Among the non-invasive ones, the most notable are continuous-wave Doppler ultrasound, photoplethysmography, air plethysmography and duplex scanning. Doppler ultrasound evaluates blood flow velocity indirectly. Photoplethysmography evaluates venous refilling time, providing an objective parameter for quantifying venous reflux. Air plethysmography makes it possible to quantify any reduction in capacitance, the reflux and the performance of the calf muscle pump. Duplex scanning is considered the gold standard among the non-invasive exams because it allows quantitative and qualitative evaluation, providing anatomical and functional information and thus a more complete and detailed assessment of the deep and superficial venous systems.
Abstract:
Life cycle assessment (LCA) is a methodology for evaluating the environmental impact of products and production systems over the entire life cycle, from raw material acquisition to final disposal. This work investigated the progress of LCA studies in Brazil through a bibliographic survey of events and journals that are official or recognized by the Brazilian Association of Production Engineering (Associação Brasileira de Engenharia de Produção) and of the SciELO Brazil database. Eighty articles were identified, most from institutions in the South and Southeast regions. The Universidade de São Paulo (USP) and the Universidade Federal de Santa Catarina (UFSC) had the largest number of publications among the 50 institutions identified. Seventeen articles effectively applied the LCA methodology in a case study: 11 used it to evaluate a production process and 6 to compare materials or processes.
Abstract:
Nowadays, given current techniques, manometry has been relegated to a secondary role during cardiac catheterization. Nevertheless, it still provides important information for the identification and evaluation of cardiovascular diseases. The data collected during the exams yield quantitative and qualitative variables that can be compared with normal standards. Manometric systems are composed of a transducer, an amplifier and a recorder, which together must faithfully reproduce the morphology and the values of the variables analyzed. To achieve this goal, adequate performance of all components is necessary. If a given piece of information is extremely relevant, the operator must spend enough time to obtain it unambiguously. Thus, the operator must be familiar with manometric systems and with the sources of error related to recording techniques, catheters, connectors and fluids. Based on the fundamentals analyzed in this manuscript, we emphasize that attention must be paid to the pressure waves used in interpreting the pathophysiology of cardiovascular diseases.
Abstract:
Nowadays, there are many aspect-oriented middleware implementations that take advantage of the modularity provided by the aspect-oriented paradigm. Although these works always present an assessment of the middleware according to some quality attribute, there is no specific set of metrics to assess them comprehensively, across various quality attributes. This work proposes a suite of metrics for the assessment of aspect-oriented middleware systems at different development stages: design, refactoring, implementation and runtime. The work presents the metrics and how they are applied at each development stage. The suite is composed of metrics associated with static properties (modularity, maintainability, reusability, flexibility, complexity, stability and size) and dynamic properties (performance and memory consumption). These metrics are based on existing assessment approaches for object-oriented and aspect-oriented systems. The proposed metrics are applied in the context of OiL (Orb in Lua), a middleware based on CORBA and implemented in Lua, and AO-OiL, a refactoring of OiL that follows a reference architecture for aspect-oriented middleware systems. The case study performed with OiL and AO-OiL is an oil well monitoring system. This work also presents the CoMeTA-Lua tool, which automates the collection of coupling and size metrics from Lua source code.
Abstract:
This article presents the development, validation and use of a methodology for evaluating the quality of primary care services of the Brazilian Unified Health System (Sistema Único de Saúde, SUS): the Primary Care Services Quality Evaluation Questionnaire (QualiAB). It is intended for primary care services organized according to different care models, including the Family Health program. It contains 50 indicators on the supply and organization of care and programmatic work and 15 on management, in the form of multiple-choice questions self-answered via the web by the service's local team. Each answer is assigned a value of zero, one or two; the overall mean gives the service a quality grade expressed as the distance from the best standard, which corresponds to a mean of two. The questionnaire was built through an iterative consensus process that included qualitative methodologies, a pilot test, application in 127 services, construct validation and reliability assessment. Answered in 2007 by 598 (92%) of the services in 115 municipalities of São Paulo state, it showed good power to discriminate quality levels. Adopted in 2010 as part of a primary care support program of the São Paulo State Department of Health, it was answered by 95% (2,735) of the services in 586 municipalities (90.8% of the state). The results were sent to the municipalities. QualiAB provides a valid, simple evaluation with the possibility of immediate feedback for managers and professionals. It showed feasibility, acceptability, good discriminating power and usefulness in helping manage the SUS primary care network in São Paulo. The experience indicates applicability to primary care networks throughout Brazil.
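The scoring scheme described in this abstract (each answer worth 0, 1 or 2, with quality read as the distance from a perfect mean of 2) can be sketched as a small function. The function name and sample answers below are illustrative only; they are not part of QualiAB itself.

```python
def qualiab_grade(answers):
    """Compute a QualiAB-style quality grade.

    Each answer is scored 0, 1 or 2. The service's grade is the
    mean score; quality is read as the distance from the best
    possible standard, a mean of 2.
    """
    mean = sum(answers) / len(answers)
    return mean, 2.0 - mean  # (grade, distance from best standard)

# Hypothetical set of six answers
grade, gap = qualiab_grade([2, 1, 2, 0, 2, 1])
```

A service answering every question at the top level would get a grade of 2.0 and a gap of 0.0, the "best standard" the abstract refers to.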
Abstract:
To decide which root system evaluation method to use, one must weigh the objectives of the work, the crop in question and the conditions under which it grows. The study of roots is very important for understanding the various growth and development phenomena of the above-ground part of the plant, but it requires extremely careful procedures because, besides being laborious, its results are influenced by the physical-chemical variability of the soil. The objective of this research was to compare the results of five root system evaluation methods in two sugarcane varieties, at four depths and under two harvesting systems: mechanized harvesting of green cane and manual harvesting of burned cane. Four methods were compared against monolith extraction with dry root mass weighing: monolith with length measurement, auger with dry mass weighing, profile with length measurement from digital images, and profile with root counting. Linear regressions were found to express the relationship between the methods studied adequately, except when the auger was used. The profile methods were the most suitable for detecting differences between treatments.
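The linear regressions relating pairs of root-evaluation methods correspond to an ordinary least-squares fit between paired measurements. The sketch below shows the computation; the paired values are invented for illustration and do not come from the study's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b between paired
    measurements from two root-evaluation methods."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: covariance of (x, y) over variance of x
    a = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    b = mean_y - a * mean_x
    return a, b

# Hypothetical paired measurements (e.g., reference method vs. profile method)
a, b = linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

In a study like this one, the fit quality (e.g., the coefficient of determination) is what tells whether a faster method can substitute for the reference monolith method.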
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)