996 results for "Testing in software systems"
Abstract:
With ever-increasing demands for high-complexity consumer electronic products, market pressures demand faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing analog circuits usually requires functional verification of the circuit and special testing procedures. For RF analog circuits commonly used in wireless applications, testing is further complicated by the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit dependent, requiring reconfiguration of the circuit under test, and are generally not usable in RF circuits. In the SoC environment, since processing and memory resources are available, they could be used in the test. However, the overhead of adding extra A/D and D/A converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work, a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it can reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics such as intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional op-amps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products nowadays.
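As a rough illustration of the frequency-domain evaluation mentioned above, the sketch below runs a two-tone test on a toy nonlinearity and reads the third-order intermodulation (IM3) products from an FFT of the digitized samples. The sample rate, tone frequencies and cubic coefficient are illustrative assumptions, not values from the thesis.

```python
import numpy as np

fs, n = 1_000_000, 1000                     # 1 MHz sample rate, 1 ms record
t = np.arange(n) / fs
f1, f2 = 90_000, 100_000                    # two-tone stimulus (FFT bins 90 and 100)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.01 * x**3                         # toy cubic nonlinearity of the circuit under test

spec = np.abs(np.fft.rfft(y)) / (n / 2)     # single-sided amplitude spectrum

def tone_dbc(freq, ref_freq=f1):
    """Amplitude of the tone at `freq` relative to the fundamental, in dBc."""
    k, kref = round(freq * n / fs), round(ref_freq * n / fs)
    return 20 * np.log10(spec[k] / spec[kref])

for f in (2 * f1 - f2, 2 * f2 - f1):        # third-order intermodulation products
    print(f"IM3 at {f / 1000:.0f} kHz: {tone_dbc(f):.1f} dBc")
```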
Abstract:
The execution of tests is an essential step in the adoption of new communication protocols and distributed systems. The way they behave in the presence of faults, so common in geographically distributed environments, must be known and taken into account. Tests under fault conditions must be performed, and implementations must stay within their specification under these conditions, explicitly guaranteeing the operation of their error detection and recovery mechanisms. A powerful technique for performing such tests is fault injection. Fault injection tools allow the designer or test engineer to measure the effectiveness of a system's mechanisms before it is put into actual operation. This work presents the design, development and testing of the FIRMAMENT fault injector. This tool executes, inside the operating system kernel, microprograms, or faultlets, over each processed message in order to emulate communication fault scenarios, using a scripting approach. The tool is implemented as a kernel module of the Linux operating system, with full access to the incoming and outgoing packet flows in a clean and non-intrusive way, allowing the testing of systems based on the IPv4 and IPv6 protocols. Its performance is significant, since the tool avoids invoking the fault injection mechanisms on flows that are not of interest to the tests and dispenses with copying the data of the communication packets to be inspected and manipulated. The applicability of the tool, given by the ease of integrating it into a production environment, is a consequence of its availability as a kernel module, which can be loaded as a plugin into an unmodified kernel. The instructions supported by FIRMAMENT give it high expressive power for fault scenarios. These instructions allow messages to be inspected and selected deterministically or statistically. In addition, they provide several actions to be performed on the communication packets and on the injector's internal variables, making it imitate the behavior of real faults, such as message dropping and duplication, delayed delivery and content modification. These characteristics make the tool suitable for performing experiments on distributed protocols and systems.
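A minimal sketch of the faultlet idea described above, assuming a simplified user-space model: a small script is evaluated over each packet and decides whether to drop, duplicate or delay it. FIRMAMENT's actual instruction set and in-kernel operation are not reproduced here; the selection rules and probabilities are illustrative.

```python
import random

def faultlet(packet, state):
    """Return a list of (delay_ms, packet) to deliver; an empty list drops the packet."""
    state["seen"] += 1
    if state["seen"] % 10 == 0:                # deterministic selection: every 10th packet
        return []                              # drop
    if random.random() < 0.05:                 # statistical selection: ~5% of packets
        return [(0, packet), (0, packet)]      # duplicate
    if packet.startswith(b"DATA"):
        return [(200, packet)]                 # delay delivery by 200 ms
    return [(0, packet)]                       # deliver unchanged

state = {"seen": 0}
for i in range(1, 21):
    pkt = b"DATA payload %d" % i
    print(i, faultlet(pkt, state))
```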
Abstract:
The material presents an overview of data input/output (I/O) subsystems, highlighting their management. The text also covers the operating system's role in I/O operations (issuing commands to the devices, servicing interrupts generated by the devices, handling errors in the operations of those devices, and providing an interface for using the devices), the hardware aspects, and the software aspects and their goals (device independence; uniform naming; error handling; synchronous (blocking) and asynchronous (interrupt-driven) transfers; shared and dedicated devices). Finally, it highlights a way of structuring I/O software into interrupt handlers, device drivers, device-independent I/O software and user-level I/O software.
Abstract:
The material presents thread management in Windows. An operating system involves asynchronous and sometimes parallel activities, and the notion of a software process is used in operating systems to express the management and control of those activities. The process is one of the fundamental concepts in the design of a modern operating system. Threads can be managed by the operating system or by the user application. Beyond these concepts, the material highlights the motivation for using threads, the parameters of the CreateThread call, and thread priority and scheduling.
Abstract:
This thesis describes the development of two software applications whose goal is to demonstrate the operation of two basic tools of Digital Systems. The first application, named KarnUMa, demonstrates the use of Karnaugh maps, which are employed to simplify Boolean algebraic expressions. This application is available in two versions targeting two distinct platforms: the first, KarnUMa, available for desktop computers, and the second, Pocket KarnUMa, available for mobile terminals in the form of a MIDlet or an Android Package. The second application, named ParTec, demonstrates the partition technique, which is used to eliminate redundant states in state machines. This second application targets only the desktop computer. This document includes a survey of the applications currently available in the areas of interest, a description of the technologies used to develop the software, a presentation of that software describing what is innovative about it and, finally, a description of how the applications were published.
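To illustrate the kind of result a Karnaugh map (and hence KarnUMa) produces, the sketch below uses sympy's SOPform to derive a minimal sum-of-products expression from a set of minterms and don't-cares; this is a generic example, not KarnUMa's own algorithm or interface.

```python
from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols("w x y z")

# Minterms and don't-cares given as bit patterns over (w, x, y, z)
minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 1, 1]]
dontcares = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 1]]

# Minimal sum-of-products cover, the same result a Karnaugh map grouping yields
print(SOPform([w, x, y, z], minterms, dontcares))   # (y & z) | (z & ~w)
```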
An approach to promote alignment between business strategy and information technology
Abstract:
Currently, with the increasing complexity of doing business, organizations are seeking information systems that help them respond quickly to new demands in the processes of producing products and services. An information system is no longer just a support tool; it has become an integral part of doing business. However, in spite of significant technological evolution in recent years, the information systems that support business do not respond efficiently to the constant changes that occur in many organizations. One of the main problems currently faced by information systems is the lack of alignment between business strategy and information technology. The concept of strategic alignment can be defined as the fit between business strategies and objectives and the strategies, objectives and functions of information technology, in such a way as to contribute to increasing the competitiveness of the organization over time. Strategic alignment, together with strategic planning, is an important management instrument. Approaches for operationalizing this alignment are currently being developed but are still in their initial stages, since it is a relatively new concept in the literature. Another point that needs to be taken into consideration during strategic alignment is traceability between the business elements and IT. Traceability is necessary, for example, when one wishes to know exactly which goal defined in the business strategy was left out or not met because of a modification made in the IT strategy. Very few proposals present concrete means, supported by software systems, of obtaining strategic alignment while taking this traceability into consideration. Therefore, the objective of this work is to propose a strategic alignment process supported by a software system capable of providing traceability between the organizational objectives and the business processes, based on formalization standards defined through a model-oriented approach.
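A minimal sketch of the traceability question raised above, assuming a simple mapping from business goals to the business processes that realize them; the goal and process names are hypothetical, not the metamodel proposed in the work.

```python
# Hypothetical goal-to-process traceability links
goals_to_processes = {
    "reduce-order-lead-time": {"order-fulfillment", "inventory-sync"},
    "improve-customer-retention": {"crm-campaigns"},
    "enter-new-market": set(),                 # defined in the strategy, not yet realized
}

# Processes still supported by IT after a change in the IT strategy
implemented_processes = {"order-fulfillment", "crm-campaigns"}

def uncovered_goals(mapping, implemented):
    """Goals left out because some supporting process is missing from the IT layer."""
    return sorted(goal for goal, procs in mapping.items()
                  if not procs or not procs <= implemented)

print(uncovered_goals(goals_to_processes, implemented_processes))
# ['enter-new-market', 'reduce-order-lead-time']  (inventory-sync was dropped)
```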
Abstract:
The development of software systems with domain-specific languages has become increasingly common. Domain-specific languages (DSLs) increase domain expressiveness, raising the abstraction level by facilitating the generation of models or low-level source code and thus increasing the productivity of systems development. Consequently, methods for the development of software product lines and software system families have also proposed the adoption of domain-specific languages. Recent studies have investigated the limitations of feature model expressiveness and proposed the use of DSLs as a complement to or substitute for feature models. However, in complex projects a single DSL is often insufficient to represent the different views and perspectives of development, making it necessary to work with multiple DSLs. In order to address the new challenges in this context, such as the management of consistency between DSLs and the need for methods and tools that support development with multiple DSLs, several generative approaches have been proposed over the past years. However, none of them considers issues related to the composition of DSLs. Thus, in order to address this problem, the main objectives of this dissertation are: (i) to investigate the integrated use of feature models and DSLs during the domain and application engineering phases of the development of generative approaches; (ii) to propose a method for the development of generative approaches with DSL composition; and (iii) to investigate and evaluate the use of modern technology based on model-driven engineering to implement strategies for integrating feature models and DSL composition.
Abstract:
A self-adaptive software system is able to change its structure and/or behavior at runtime in response to changes in its requirements, environment or components. One way to achieve self-adaptation is to use a sequence of actions (known as an adaptation plan), typically defined at design time. This is the approach adopted by Cosmos, a framework to support the configuration and management of resources in distributed environments. In order to deal with the variability inherent to self-adaptive systems, such as the appearance of new components that allow configurations that were not envisioned at development time, this dissertation aims to give Cosmos the capability of generating adaptation plans at runtime. To this end, it was necessary to reengineer the Cosmos framework in order to allow its integration with a mechanism for the dynamic generation of adaptation plans. In this context, our work focused on that reengineering of Cosmos. Among the changes made to Cosmos, we highlight the changes in the metamodel used to represent components and applications, which was redefined based on an architectural description language. These changes were propagated to the implementation of a new Cosmos prototype, which was then used to develop a case study application as a proof of concept. Another effort undertaken was to make Cosmos more attractive by integrating it with another platform, in the case of this dissertation the OSGi platform, which is well known and accepted by industry.
Abstract:
One way to deal with the high complexity of current software systems is through self-adaptive systems. Self-adaptive systems must be able to monitor themselves and their environment, analyze the monitored data to determine the need for adaptation, decide how the adaptation will be performed and, finally, make the necessary adjustments. One way to perform the adaptation of a system is to generate, at runtime, the process that will carry out the adaptation. One advantage of this approach is the possibility of taking into account features that can only be evaluated at runtime, such as the emergence of new components that allow new architectural arrangements not foreseen at design time. The main objective of this work is to use a framework for the dynamic generation of processes to generate architectural adaptation plans in an OSGi environment. Our main interest is to evaluate how this framework for dynamic process generation behaves in new environments.
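The monitor/analyze/plan/execute cycle described above can be sketched as below; this is a generic illustration, not the framework evaluated in the work, and the sensor, threshold and generated plan are placeholders.

```python
import random

def monitor():
    return {"response_ms": random.uniform(50, 400)}     # placeholder sensor reading

def analyze(data, threshold_ms=250):
    return data["response_ms"] > threshold_ms           # is adaptation needed?

def plan(data):
    # Built at runtime from the current state, rather than chosen from a
    # fixed set of design-time plans.
    return [("add_replica", "serviceA"), ("reroute", "serviceA", "replica-2")]

def execute(actions):
    for action in actions:
        print("executing", action)

for _ in range(5):                                      # one adaptation loop per tick
    data = monitor()
    if analyze(data):
        execute(plan(data))
```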
Abstract:
Mainstream programming languages provide built-in exception handling mechanisms to support robust and maintainable implementation of exception handling in software systems. Most of these modern languages, such as C#, Ruby, Python and many others, are often claimed to have more appropriate exception handling mechanisms. They reduce programming constraints on exception handling in order to favor agile changes in the source code. These languages provide what we call maintenance-driven exception handling mechanisms. It is expected that the adoption of these mechanisms improves software maintainability without hindering software robustness. However, there is still little empirical knowledge about the impact that adopting these mechanisms has on software robustness. This work addresses this gap by conducting an empirical study aimed at understanding the relationship between changes in C# programs and their robustness. In particular, we evaluated how changes in normal and exceptional code were related to exception handling faults. We applied a change impact analysis and a control flow analysis to 100 versions of 16 C# programs. The results showed that: (i) most of the problems hindering software robustness in those programs are caused by changes in normal code, (ii) many potential faults were introduced even when improving exception handling in C# code, and (iii) faults are often facilitated by the maintenance-driven flexibility of the exception handling mechanism. Moreover, we present a series of change scenarios that decrease program robustness.
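As a hedged illustration of the kind of fault studied (in Python rather than the C# subjects analyzed), the sketch below shows a change in normal code introducing a new exceptional flow that the existing handler does not cover.

```python
# Version 1: the caller's handler was written for this behavior.
def load_config_v1(path):
    with open(path) as f:          # may raise OSError
        return f.read()

# Version 2: the normal code is "improved" to also accept URLs, and now raises
# ValueError for remote sources -- a flow the existing handler never expected.
def load_config_v2(source):
    if source.startswith(("http://", "https://")):
        raise ValueError("remote sources not supported yet")   # new exception path
    with open(source) as f:
        return f.read()

def start(source):
    try:
        return load_config_v2(source)
    except OSError:                # handler still matches only the old fault model
        return ""                  # ValueError now escapes: an exception handling fault
# start("https://example.org/cfg") would terminate with an uncaught ValueError.
```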
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Objective: To analyze the association between bone mass and functional capacity in older adults aged 80 years or over. Methods: The sample consisted of 93 older adults aged between 80 and 91 years (83.2 ± 2.5 years), 61 women (83.3 ± 2.7 years) and 32 men (83.1 ± 2.2 years), from the city of Presidente Prudente. Bone mass was assessed by dual-energy X-ray absorptiometry (DXA), which measured bone mineral content (BMC) and bone mineral density (BMD) of the femur and the spine (L1-L4). Functional capacity was assessed with the walking speed, static balance and lower-limb strength tests contained in the Health, Well-Being and Aging (SABE) questionnaire. The bone mass and functional capacity variables were categorized according to the median values and the scores obtained in the tests, respectively. The chi-square test was used for the statistical analysis, the software used was SPSS (13.0), and the significance level was set at 5%. Results: Older men with better performance in the functional tests had higher femoral BMC values than those with poorer performance, a result not found among the women. Conclusion: Thus, in very old men, femoral bone mass is associated with functional capacity. Regular assessment of bone mineral mass and the practice of physical activity throughout life would be measures to prevent falls in older adults.
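The categorized chi-square analysis described in the methods can be sketched as follows; the contingency counts are invented for demonstration and are not the study's data (the study used SPSS, while the sketch uses scipy).

```python
import numpy as np
from scipy.stats import chi2_contingency

#                 high performance   low performance
table = np.array([[12,               4],      # femoral BMC above the median
                  [ 5,              11]])     # femoral BMC at/below the median

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
print("association at the 5% level:", p < 0.05)
```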
Abstract:
Different vocabularies and contexts are barriers to communication between people or software systems. A common understanding of the domain under discussion is necessary so that a correct interpretation of the information can be obtained. An ontology formally models the structure of a domain and makes explicit the shared understanding, in the form of concepts and relations that emerge from its observation. It constitutes a kind of framework used to map the meaning of the information being exchanged. The formal precision with which ontologies are defined, by means of axioms, allows machine processing, resulting in systems interoperability. Structured this way, knowledge is easily transferred between people or systems from different contexts. Ontologies have several applications nowadays. They are considered the infrastructure of the Semantic Web, which is composed of Web resources with embedded meaning. This allows the automatic execution of complex tasks, benefiting from effective communication between Web software agents. Among other applications, they have also been used to structure the knowledge generated in several areas, such as Biology and Software Engineering.
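A small sketch of what "concepts and relations made explicit" can look like in practice, using rdflib to state a class, a property and a fact that any agent sharing the vocabulary can interpret; the namespace and terms are hypothetical examples, not drawn from the text.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Concepts (classes) and a relation between them, made explicit as triples
g.add((EX.Gene, RDF.type, RDFS.Class))
g.add((EX.Protein, RDF.type, RDFS.Class))
g.add((EX.encodes, RDF.type, RDF.Property))
g.add((EX.encodes, RDFS.domain, EX.Gene))
g.add((EX.encodes, RDFS.range, EX.Protein))

# A shared fact expressed in the common vocabulary
g.add((EX.TP53, RDF.type, EX.Gene))
g.add((EX.TP53, EX.encodes, EX.p53))

for s, p, o in g.triples((None, EX.encodes, None)):
    print(s, "encodes", o)
```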
Abstract:
Aspect mining aims to identify potential crosscutting concerns in program source code, and refactoring to aspects aims to encapsulate them into aspects. Aspect mining is not an automatic process, since the user needs to analyze and understand the results generated by techniques/tools and confirm crosscutting concerns in order to refactor them into aspects. This work proposes a visual approach that handles the results generated by two aspect mining techniques proposed in the literature. Through multiple coordinated views, different levels of detail for exploring software systems support the analysis and understanding of such results for later refactoring. The coordination model, implemented in the SoftVis4CA tool, is presented in this work, together with the visualizations and the preliminary results obtained.
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. The analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account both the control and the data structures, dependencies and operations in a composition. Computation cost analysis for service compositions can support predictive monitoring and proactive adaptation by automatically inferring computation cost in the form of upper- and lower-bound functions of the value or size of the input messages. These cost functions can be used for adaptation by selecting the service candidates that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data, which can be used to predict, at invocation time, potential or imminent Service Level Agreement (SLA) violations. In mission-critical applications, effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on its structure, data known at runtime and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaging in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation and component selection, which are based on functional dependencies and on the information content of the composition's messages, internal data and activities, in the presence of complex control constructs, such as loops, branches and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
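A minimal sketch of the cost-driven candidate selection and invocation-time SLA check described above; the candidate names, cost-bound functions and budget are illustrative assumptions, not results of the analyses presented in the thesis.

```python
# Upper-bound computation-cost functions of the input-message size n,
# one per candidate service for the same task in the composition.
upper_bounds = {
    "serviceA": lambda n: 5.0 + 0.40 * n,        # low setup cost, linear growth
    "serviceB": lambda n: 120.0 + 0.05 * n,      # high setup cost, cheaper per unit
}

def select_candidate(bounds, n):
    """Pick the candidate whose cost upper bound is minimal for this input size."""
    return min(bounds, key=lambda name: bounds[name](n))

def sla_violation_predicted(bounds, name, n, budget):
    """At invocation time, flag a potential SLA violation from the cost bound."""
    return bounds[name](n) > budget

for n in (100, 1000):
    best = select_candidate(upper_bounds, n)
    print(n, best, "violation predicted:",
          sla_violation_predicted(upper_bounds, best, n, budget=150.0))
```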