978 results for Software Testing
Abstract:
From the 1990s onwards, computational tools began to appear on the market with the aim of speeding up the design of construction engineering projects. Until the end of the 1970s, computers were enormous machines that only organizations with great economic power could afford. In the 1980s the PC (Personal Computer) appeared on the market; these small machines began to be acquired by companies in general, and in Portugal, by the end of that decade, individuals who owned their own PC could already be found. In the 1990s, the stream of graduates leaving higher education institutions fostered the appearance of software companies dedicated to building software according to the needs of the market itself, resulting in custom-made commercial software and commercial off-the-shelf (COTS) software. Commercial software, being used by a large number of people (easily reaching the thousands in the case of COTS), is in a position to evolve according to the systematic demands of the market itself, reaching high levels of compliance with quality requirements, namely functionality, reliability, usability, maintainability, efficiency, portability and quality in use. The use of commercial software in construction engineering design is nowadays an absolutely widespread practice. Selecting the software can become a complex process, especially in areas where the offer is large. The use of well-defined evaluation criteria can streamline the process and provide greater assurance at the moment of the final decision. This document presents a proposed methodology for evaluating and comparing software packages.
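As a loose illustration of how well-defined evaluation criteria of this kind might be operationalized, the sketch below scores candidate packages with a weighted sum over the quality attributes named in the abstract. The weights, scores, and package names are hypothetical assumptions for the example, not values from the proposed methodology.

```python
# Hypothetical weighted-criteria evaluation; the attribute list follows the
# quality characteristics named in the abstract, but all weights and scores
# below are illustrative assumptions.
WEIGHTS = {
    "functionality": 0.25, "reliability": 0.20, "usability": 0.15,
    "maintainability": 0.15, "efficiency": 0.10, "portability": 0.05,
    "quality_in_use": 0.10,
}

def score(candidate: dict) -> float:
    """Weighted sum of per-attribute scores (each on a 0-10 scale)."""
    return sum(WEIGHTS[attr] * candidate[attr] for attr in WEIGHTS)

candidates = {
    "package_A": {"functionality": 8, "reliability": 7, "usability": 9,
                  "maintainability": 6, "efficiency": 7, "portability": 8,
                  "quality_in_use": 8},
    "package_B": {"functionality": 9, "reliability": 8, "usability": 6,
                  "maintainability": 7, "efficiency": 8, "portability": 5,
                  "quality_in_use": 7},
}

# Rank candidates from best to worst total score.
for name, attrs in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(attrs):.2f}")
```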
Abstract:
In recent years, Software Architecture has attracted increasing attention from academia and industry as the unifying concept to structure the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each one with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach builds on the work of other researchers and deliberately changes the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be executed efficiently. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
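The Chemical Abstract Machine mentioned above rewrites multisets of terms. The minimal sketch below is a loose illustration of that idea, not the formalism developed in this work: an architecture is a multiset of component and connection terms, and a reconfiguration is a rewrite rule that consumes one sub-multiset and produces another. The term syntax is an assumption made for the example.

```python
from collections import Counter

# A "solution" is a multiset of terms; a rule consumes a sub-multiset (lhs)
# and produces another (rhs), in the spirit of the Chemical Abstract Machine.
def applicable(lhs: Counter, solution: Counter) -> bool:
    return all(solution[t] >= n for t, n in lhs.items())

def apply_rule(rule, solution: Counter) -> Counter:
    lhs, rhs = rule
    assert applicable(lhs, solution), "rule does not match the solution"
    return (solution - lhs) + rhs

# Illustrative architecture: a client wired to server1.
arch = Counter(["client", "server1", "conn(client,server1)"])

# Reconfiguration rule: replace server1 and its connection by server2,
# without touching the rest of the running system.
swap_server = (Counter(["server1", "conn(client,server1)"]),
               Counter(["server2", "conn(client,server2)"]))

arch = apply_rule(swap_server, arch)
print(sorted(arch.elements()))  # ['client', 'conn(client,server2)', 'server2']
```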
Abstract:
Despite the steady increase in experimental deployments, most research work on WSNs has focused only on communication protocols and algorithms, with a clear lack of effective, feasible and usable system architectures, integrated in a modular platform able to address both functional and non-functional requirements. In this paper, we outline EMMON [1], a full WSN-based system architecture for large-scale, dense and real-time embedded monitoring [3] applications. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. We then present EM-Set, the EMMON engineering toolset, which includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was crucial for the development of EMMON, which was designed to use standard commercially available technologies while maintaining as much flexibility as possible to meet specific application requirements. Finally, the EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node testbed.
Abstract:
The recent trend of chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a given task set. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
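A minimal sketch of the snapshot idea follows: a shared object retains its last k committed versions, so read-only transactions can read a recent consistent version without ever blocking updaters. The class and method names are assumptions for illustration, and k stands in for the version count that the paper shows how to calculate; this is not the paper's protocol.

```python
import threading

class VersionedObject:
    """Keeps the last k committed versions; readers never take the lock."""
    def __init__(self, initial, k: int):
        self._k = k
        self._versions = ((0, initial),)   # immutable tuple of (ts, value)
        self._lock = threading.Lock()      # serializes update transactions only
        self._ts = 0

    def commit_update(self, value):
        with self._lock:                   # writers synchronize among themselves
            self._ts += 1
            # Rebind an immutable snapshot: a concurrent reader sees either
            # the old tuple or the new one, never a half-updated structure.
            self._versions = (self._versions + ((self._ts, value),))[-self._k:]

    def read_snapshot(self, at_ts=None):
        versions = self._versions          # single atomic reference read
        if at_ts is None:
            return versions[-1]            # most recent committed version
        for ts, value in reversed(versions):
            if ts <= at_ts:
                return (ts, value)
        raise LookupError("version older than the k retained ones")
```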
Abstract:
Simulators are indispensable tools to support the development and testing of cooperating objects such as wireless sensor networks (WSNs). However, it is often not possible to compare the results of different simulation tools. The goal of this paper is therefore the specification of a generic simulation platform for cooperating objects. We propose a platform that consists of a set of simulators that together fulfil the desired simulator properties. We show that, to achieve comparable results, the use of a common specification language for the software under test is not feasible. Instead, we argue that using common input formats for the simulated environment and common output formats for the results is useful. A simulation tool consisting of a set of existing simulators that can consume a common scenario input and produce a common output thus brings us a step closer to the vision of comparable simulation results.
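As a loose illustration of the common-format idea (every field name below is invented for the example; the paper does not specify this schema), each member simulator would parse the same scenario description and emit results in the same shape, making outputs directly comparable:

```python
import json

# Hypothetical common scenario input shared by all member simulators.
scenario = {
    "field_size_m": [100, 100],
    "nodes": [{"id": i, "pos": [10 * i, 5 * i], "radio_range_m": 30}
              for i in range(5)],
    "traffic": {"pattern": "periodic", "period_s": 1.0},
}

# Hypothetical common output format: per-node delivery statistics that can
# be compared across simulators regardless of their internal models.
results = {
    "simulator": "sim_A",
    "delivery_ratio": {str(n["id"]): 0.97 for n in scenario["nodes"]},
}

print(json.dumps(results, indent=2))
```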
Abstract:
The foreseen evolution of chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler assure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
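A minimal sketch of a scheduling-aware contention manager follows: on a conflict it aborts the transaction whose task has the later absolute deadline, pushing the retry onto the task with more slack. The interface and the EDF-style deadline rule are assumptions for illustration, not the policy proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    task_id: int
    abs_deadline: float   # on-line scheduling information (e.g., from EDF)
    retries: int = 0

def resolve_conflict(attacker: Transaction, holder: Transaction) -> Transaction:
    """Return the transaction that must abort and retry.

    Deadline-aware policy: the transaction belonging to the task with the
    later absolute deadline loses, bounding interference on the more
    urgent task.
    """
    loser = attacker if attacker.abs_deadline > holder.abs_deadline else holder
    loser.retries += 1
    return loser

# Example: the more urgent task (deadline 5.0) wins the conflict.
t1 = Transaction(task_id=1, abs_deadline=5.0)
t2 = Transaction(task_id=2, abs_deadline=12.0)
assert resolve_conflict(t2, t1) is t2
```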
Abstract:
The hidden-node problem has been shown to be a major source of Quality-of-Service (QoS) degradation in Wireless Sensor Networks (WSNs) due to factors such as the limited communication range of sensor nodes, link asymmetry and the characteristics of the physical environment. In wireless contention-based Medium Access Control protocols, if two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision, usually called a hidden-node or blind collision. This problem greatly affects network throughput, energy efficiency and message transfer delays, which may be particularly dramatic in large-scale WSNs. This technical report tackles the hidden-node problem in WSNs and proposes H-NAMe, a simple yet efficient distributed mechanism to overcome it. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes, and then scales to multiple clusters via a cluster grouping strategy that guarantees no transmission interference between overlapping clusters. We also show that the H-NAMe mechanism can be easily applied to the IEEE 802.15.4/ZigBee protocols with only minor add-ons while ensuring backward compatibility with the standard specifications. We demonstrate the feasibility of H-NAMe via an experimental test-bed, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. We believe that the results in this technical report will be quite useful in efficiently enabling IEEE 802.15.4/ZigBee as a WSN protocol.
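The grouping step can be pictured as partitioning a cluster's nodes so that any two nodes placed in the same group can hear each other, eliminating blind collisions within a group. The greedy, centralized sketch below illustrates that reading only; the actual H-NAMe protocol negotiates groups in a distributed fashion, which this toy does not capture.

```python
def group_non_hidden(nodes, hears):
    """Greedily partition nodes into groups of mutually visible nodes.

    `hears(a, b)` is True when a and b are within each other's range;
    nodes in the same group therefore cannot cause blind collisions with
    one another. Centralized toy version of the grouping idea.
    """
    groups = []
    for n in nodes:
        for g in groups:
            if all(hears(n, m) for m in g):
                g.append(n)
                break
        else:
            groups.append([n])   # n is hidden from some member of every group
    return groups

# Example visibility: nodes on a line hear neighbours at distance <= 2.
positions = {1: 0, 2: 1, 3: 2, 4: 5, 5: 6}
hears = lambda a, b: abs(positions[a] - positions[b]) <= 2
print(group_non_hidden(list(positions), hears))  # [[1, 2, 3], [4, 5]]
```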
Abstract:
The aim of the TeleRisk Project on labour relations and professional risks in the context of teleworking in Portugal, supported by IDICT (Institute for Development and Inspection of Working Conditions, Ministry of Labour), is to study the practices and forms of teleworking in the manufacturing sectors in Portugal. The project also chose the software industry as a reference sector, although it does not intend to exclude from the study any other sector of activity or the so-called "hybrid" forms of work, provided the latter have some of the characteristics of telework. The project thus takes into account the so-called "traditional" sectors of activity, namely textiles and machinery and metal engineering (machinery and equipment), which are not usually associated with this type of work. Telework in these "traditional" sectors may, however, include variations that are not found in technology-based sectors. One of the methods used to evaluate the dynamics associated with telework was a questionnaire survey aimed at employers in the sectors analysed. This paper presents some of the results of those surveys. It is important to mention that, as a preliminary analysis, it does not claim to have exhausted all the issues in the survey, but rather to show the main tendencies in the teleworking practices of Portuguese industry.
Abstract:
Consider a multihop network comprising Ethernet switches. The traffic is described by flows, and each flow is characterized by its source node, its destination node, its route, and parameters in the generalized multiframe model. Output queues on Ethernet switches are scheduled by static-priority scheduling, and tasks executing on the processor in an Ethernet switch are scheduled by stride scheduling. We present a schedulability analysis for this setting.
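For orientation only: classic response-time analysis for static-priority scheduling bounds the worst-case response time of flow i by the smallest fixed point of a recurrence of the form below, with C_i the transmission time of flow i, hp(i) the set of higher-priority flows, and T_j the period of interfering flow j. The paper's analysis generalizes this style of reasoning to the generalized multiframe model and to the stride-scheduled tasks inside the switches; this standard recurrence is shown only to illustrate the style of analysis.

```latex
% Standard static-priority response-time recurrence (illustration only):
R_i^{(0)} = C_i, \qquad
R_i^{(k+1)} = C_i + \sum_{j \in \mathrm{hp}(i)}
              \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j
% Iterate to the fixed point R_i; flow i is schedulable if R_i \le D_i.
```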
Abstract:
Gravity loads can affect a reinforced concrete structure's response to seismic actions; however, traditional procedures for testing beam behaviour do not take this effect into consideration. An experimental campaign was carried out to assess the influence of the gravity load on RC beam-to-column connections subjected to cyclic loading. The experiments comprised a conventional quasi-static test protocol based on the imposition of a reversed cyclic displacement history, and an alternative cyclic test procedure that starts from the gravity load effects. The test results are presented, compared and analysed in this paper. The cyclic test procedure that includes the gravity load effects on the RC beam ends reproduces the demands on the beams' critical zones more realistically than the traditional procedure. The consideration of the vertical load effects in the test procedure led to an accumulation of negative (hogging) deformation. This phenomenon is consistent with the behaviour of a portal frame system under cyclic loads combined with a significant level of vertical load, leading to the formation of unidirectional plastic hinges. In addition, the hysteretic behaviour of the tested RC beam ends was simulated numerically using the nonlinear structural analysis software OpenSees. The beam-column model simulates the global element behaviour very well, with a reasonable approximation to the hysteretic loops obtained experimentally.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Informatics Engineering (Engenharia Informática).
Abstract:
In this work, an experimental study was performed on the influence of plug-filling, loading rate and temperature on the tensile strength of single-strap (SS) and double-strap (DS) repairs on aluminium structures. Whilst the main purpose of this work was to evaluate the feasibility of plug-filling for the strength improvement of these repairs, a parallel study was carried out to assess the sensitivity of the adhesive to external factors that can affect the repairs' performance, such as the rate of loading and the environmental temperature. The experimental programme included repairs with different values of overlap length (L_O = 10, 20 and 30 mm), with and without plug-filling, whose results were interpreted in the light of experimental evidence of the fracture modes and of typical stress distributions for bonded repairs. The influence of the testing speed on the repairs' strength was also addressed (considering 0.5, 5 and 25 mm/min). To account for temperature effects, tests were carried out at room temperature (≈23°C), 50°C and 80°C. This permitted a comparative evaluation of the adhesive tested below and above the glass transition temperature (T_g), established by the manufacturer as 67°C. The combined influence of these two parameters on the repairs' strength was also analysed. Based on the results obtained from this work, design guidelines for repairing aluminium structures were proposed.
Abstract:
The performance of the Weather Research and Forecast (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by a significant wind energy resource. Grid nudging and the integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes under evaluation were those regarding the boundary layer. The influence of the local terrain complexity and of the simulation domain resolution on the model results was also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Globally, the model reproduced the local wind regime well, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to larger deviations between simulations and observations. Within the tested options, results show that the use of grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and that the parameterization set composed of the MM5, Yonsei University and Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance. The results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
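For reference, the three verification statistics named above are conventionally defined, for N paired simulated values s_i and observations o_i, as:

```latex
\mathrm{Bias} = \frac{1}{N}\sum_{i=1}^{N} (s_i - o_i), \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (s_i - o_i)^2}, \qquad
\mathrm{SDE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \bigl((s_i - o_i) - \mathrm{Bias}\bigr)^2}
```

With these definitions, \mathrm{RMSE}^2 = \mathrm{Bias}^2 + \mathrm{SDE}^2, so the SDE isolates the random part of the error from the systematic offset captured by the Bias.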
Abstract:
OBJECTIVE: To determine the coverage of syphilis testing during prenatal care and the prevalence of syphilis in pregnant women in Brazil. METHODS: This is a national hospital-based cohort study conducted in Brazil with 23,894 postpartum women between 2011 and 2012. Data were obtained from interviews with postpartum women, hospital records, and prenatal care cards. All postpartum women with a reactive serological test result recorded on the prenatal care card or a syphilis diagnosis during hospitalization for childbirth were considered cases of syphilis in pregnancy. The chi-square test was used to compare disease prevalence and testing coverage by region of residence, self-reported skin color, maternal age, and type of prenatal and childbirth care unit. RESULTS: Prenatal care covered 98.7% of the postpartum women. Syphilis testing coverage was 89.1% (one test) and 41.2% (two tests), and the prevalence of syphilis in pregnancy was 1.02% (95%CI 0.84;1.25). Lower prenatal coverage was observed among women in the North region, indigenous women, those with less education, and those who received prenatal care in public health care units. Lower testing coverage was observed among residents of the North, Northeast, and Midwest regions, among younger and non-white women, among those with less education, and among those who received prenatal care in public health care units. An increased prevalence of syphilis was observed among women with less than 8 years of education (1.74%), who self-reported as black (1.8%) or mixed (1.2%), those who did not receive prenatal care (2.5%), and those attending public (1.37%) or mixed (0.93%) health care units. CONCLUSIONS: The estimated prevalence of syphilis in pregnancy was similar to that reported in the last sentinel surveillance study, conducted in 2006. There was an improvement in prenatal care and testing coverage, and the goals suggested by the World Health Organization were achieved in two regions. Regional and social inequalities in access to health care, coupled with other gaps in health assistance, have led to the persistence of congenital syphilis as a major public health problem in Brazil.
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.