942 results for Program analysis techniques
Abstract:
Program slicing is a well-known family of techniques for identifying and isolating code fragments that depend on, or are depended upon by, specific program entities. This is particularly useful in the areas of reverse engineering, program understanding, testing, and software maintenance. Most slicing methods, and the corresponding tools, target either the imperative or the object-oriented paradigm, where program slices are computed with respect to a variable or a program statement. Taking a complementary point of view, this paper focuses on the slicing of higher-order functional programs under a lazy evaluation strategy. A prototype of a Haskell slicer, built as a proof of concept for these ideas, is also introduced.
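To make the idea concrete, the following is a minimal sketch of dependency-based backward slicing over a toy first-order binding language, written in Haskell. All types and names here (Expr, deps, slice) are hypothetical illustrations, not the paper's actual slicer, which handles higher-order programs and lazy evaluation.

-- Minimal sketch: backward slicing as a transitive-dependency fixpoint
-- over a toy language of top-level bindings (hypothetical, simplified).
import qualified Data.Map as M
import qualified Data.Set as S

data Expr = Var String | Lit Int | Add Expr Expr | App String [Expr]

-- Names an expression depends on directly.
deps :: Expr -> S.Set String
deps (Var x)    = S.singleton x
deps (Lit _)    = S.empty
deps (Add a b)  = deps a `S.union` deps b
deps (App f as) = S.insert f (S.unions (map deps as))

-- Backward slice: keep only the bindings the criterion transitively needs.
slice :: M.Map String Expr -> String -> M.Map String Expr
slice prog criterion = M.filterWithKey (\k _ -> k `S.member` reach) prog
  where
    reach = go (S.singleton criterion)
    go seen
      | seen' == seen = seen
      | otherwise     = go seen'
      where
        step x = maybe S.empty deps (M.lookup x prog)
        seen'  = seen `S.union` S.unions (map step (S.toList seen))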
Abstract:
COORDINSPECTOR is a software tool that aims to extract the coordination layer of a software system. Such a reverse engineering process provides a clear view of the services actually invoked, as well as the logic behind those invocations. The analysis process is based on program slicing techniques and on the generation of System Dependence Graphs and Coordination Dependence Graphs. The tool analyzes Common Intermediate Language (CIL), the native language of the Microsoft .Net Framework, thus making it suitable for processing systems developed in any language that compiles to the .Net Framework. COORDINSPECTOR generates graphical representations of the coordination layer, together with business process orchestrations specified in WSBPEL 2.0.
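On a dependence graph, a backward slice reduces to plain reachability along dependence edges; the Haskell sketch below illustrates this over a hypothetical adjacency-list representation, not COORDINSPECTOR's actual data structures.

-- Minimal sketch: backward slicing over a dependence graph is reachability
-- along dependence edges (hypothetical representation, for illustration).
import qualified Data.Map as M
import qualified Data.Set as S

type Node = Int
type DepGraph = M.Map Node [Node]   -- node -> nodes it directly depends on

backwardSlice :: DepGraph -> Node -> S.Set Node
backwardSlice g criterion = go S.empty [criterion]
  where
    go seen [] = seen
    go seen (n:ns)
      | n `S.member` seen = go seen ns
      | otherwise = go (S.insert n seen) (M.findWithDefault [] n g ++ ns)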
Abstract:
Graphical user interfaces (GUIs) are critical components of today's open source software. Given their increased relevance, the correctness and usability of GUIs are becoming essential. This paper describes the latest results in the development of our tool to reverse engineer the GUI layer of open source interactive computing systems. We use static analysis techniques to generate models of user interface behavior from source code. These models help in graphical user interface inspection by allowing designers to concentrate on its more important aspects. One particular type of model the tool is able to generate is state machines. The paper shows how graph theory can be useful when applied to these models. A number of metrics and algorithms are used to analyze aspects of the user interface's quality. The ultimate goal of the tool is to enable the analysis of interactive systems through inspection of their GUIs' source code.
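As one illustration of a graph-theoretic metric over such state machines, the Haskell sketch below computes breadth-first distances from the initial state, a crude proxy for how many user actions are needed to reach each part of the interface. The types are hypothetical, not the tool's actual model format.

-- Minimal sketch: a state-machine model as labelled transitions, and a
-- breadth-first distance metric from the initial state (hypothetical types).
import qualified Data.Map as M

type State = String
type Event = String
type Model = [(State, Event, State)]   -- (source, triggering event, target)

-- Distance (in user actions) from the initial state to each reachable state.
distances :: Model -> State -> M.Map State Int
distances model s0 = go (M.singleton s0 0) [s0]
  where
    succs s = [t | (f, _, t) <- model, f == s]
    go dist [] = dist
    go dist (s:ss) =
      let d     = dist M.! s
          new   = [t | t <- succs s, t `M.notMember` dist]
          dist' = foldr (\t -> M.insert t (d + 1)) dist new
      in go dist' (ss ++ new)   -- queue discipline makes this BFS

-- States absent from the result are unreachable from s0: candidates for
-- dead or inaccessible parts of the interface.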
Abstract:
OBJECTIVE: To identify clusters of major occurrences of leprosy and their associated socioeconomic and demographic factors. METHODS: Cases of leprosy that occurred between 1998 and 2007 in São José do Rio Preto (southeastern Brazil) were geocoded, and incidence rates were calculated by census tract. A socioeconomic classification score was obtained using principal component analysis of socioeconomic variables. Thematic maps visualizing the spatial distribution of leprosy incidence with respect to socioeconomic levels and demographic density were constructed using geostatistics. RESULTS: While the incidence rate for the entire city was 10.4 cases per 100,000 inhabitants annually between 1998 and 2007, the incidence rates of individual census tracts were heterogeneous, with values ranging from 0 to 26.9 cases per 100,000 inhabitants per year. Areas with a high leprosy incidence were associated with lower socioeconomic levels. Clusters of leprosy cases were identified; however, there was no association between disease incidence and demographic density. There was a disparity between the places where the majority of ill people lived and the location of healthcare services. CONCLUSIONS: The spatial analysis techniques used identified the poorer neighborhoods of the city as the areas with the highest risk for the disease. These data show that health departments must prioritize politico-administrative policies that minimize the effects of social inequality and improve the population's standards of living, hygiene, and education in order to reduce the incidence of leprosy.
Abstract:
A number of characteristics are fuelling the drive to extend Ethernet to factory-floor distributed real-time applications: full-duplex links, non-blocking and priority-based switching, and bandwidth availability, to mention just a few. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, the majority of those works are restricted to the analysis of sub-sets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
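The caveat about traditional statistical techniques typically stems from discrete event simulation output being autocorrelated, so its samples are not independent. One standard remedy is the batch-means method, sketched below in Haskell as a generic illustration (not necessarily the exact technique the paper applies): split the output series into batches and build the confidence interval from the batch means.

-- Minimal sketch of the batch-means method for autocorrelated simulation
-- output (generic illustration; hypothetical helper names).
batchMeans :: Int -> [Double] -> [Double]
batchMeans b xs = [ mean chunk | chunk <- chunksOf b xs, length chunk == b ]
  where
    chunksOf _ [] = []
    chunksOf n ys = let (c, rest) = splitAt n ys in c : chunksOf n rest

mean :: [Double] -> Double
mean ys = sum ys / fromIntegral (length ys)

-- Approximate 95% confidence half-width for the steady-state mean,
-- treating batch means as (nearly) independent. The 1.96 normal quantile
-- is used for simplicity; a Student-t quantile is more appropriate when
-- the number of batches is small.
halfWidth95 :: [Double] -> Double
halfWidth95 ms = 1.96 * sqrt (var / fromIntegral k)
  where
    k   = length ms
    m   = mean ms
    var = sum [ (x - m) ^ 2 | x <- ms ] / fromIntegral (k - 1)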
Abstract:
Graphics processors were originally developed for rendering graphics but have recently evolved towards being an architecture for general-purpose computations. They are also expected to become important parts of embedded systems hardware -- not just for graphics. However, this necessitates the development of appropriate timing analysis techniques, because techniques developed for CPU scheduling are not applicable. The reason is that we are not interested in how long it takes for any given GPU thread to complete, but rather in how long it takes for all of them to complete. We therefore develop a simple method for finding an upper bound on the makespan of a group of GPU threads executing the same program and competing for the resources of a single streaming multiprocessor (whose architecture is based on NVIDIA Fermi, with some simplifying assumptions). We then build upon this method to formulate the derivation of the exact worst-case makespan (and the corresponding schedule) as an optimization problem. Addressing the issue of tractability, we also present a technique for efficiently computing a safe estimate of the worst-case makespan with minimal pessimism, which may be used when finding an exact value would take too long.
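As a rough illustration of what a safe (if pessimistic) bound can look like, the Haskell sketch below charges the full compute demand spread over the available lanes, plus a fully serialised worst case on one shared memory unit. This toy model and all of its parameters are hypothetical simplifications, much cruder than the paper's actual analysis.

-- Toy upper bound on the makespan of n identical GPU threads competing
-- for one streaming multiprocessor (hypothetical model, for illustration).
makespanUpperBound
  :: Int    -- n: number of threads
  -> Int    -- m: parallel compute lanes
  -> Int    -- c: compute cycles per thread
  -> Int    -- a: memory accesses per thread
  -> Int    -- l: cycles the shared memory unit takes per access
  -> Int
makespanUpperBound n m c a l = computeBound + memoryBound
  where
    -- Compute demand: n jobs of c cycles each, spread over m lanes.
    computeBound = ((n + m - 1) `div` m) * c
    -- Memory demand: in the worst case every access queues behind all
    -- others on the single shared unit.
    memoryBound  = n * a * l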
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems
Abstract:
Paper presented at the ECKM 2010 – 11th European Conference on Knowledge Management, 2-3 September, 2010, Famalicão, Portugal. URL: http://www.academic-conferences.org/eckm/eckm2010/eckm10-home.htm
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering
Abstract:
Euromicro Conference on Digital System Design (DSD 2015), Funchal, Portugal.
Abstract:
27th Euromicro Conference on Real-Time Systems (ECRTS 2015), Lund, Sweden.
Abstract:
Dissertation submitted to obtain the degree of Doctor in Informatics Engineering
Abstract:
The release of chloroethene compounds into the environment often results in groundwater contamination, which puts people at risk of exposure through drinking contaminated water. The accumulation of cDCE (cis-1,2-dichloroethene) in subsurface environments is a common environmental problem, due to stagnation and partial degradation of other precursor chloroethene species. Polaromonas sp. strain JS666 apparently requires no exotic growth factors to be used as a bioaugmentation agent for aerobic cDCE degradation. Although it is the only suitable microorganism found to be capable of this, further studies are needed to improve the intrinsic bioremediation rates and to fully comprehend the metabolic processes involved. To that end, a metabolic model, iJS666, was reconstructed from genome annotation and available bibliographic data. FVA (Flux Variability Analysis) and FBA (Flux Balance Analysis) techniques were used to satisfactorily validate the predictive capabilities of the iJS666 model. The iJS666 model was able to predict biomass growth for different previously tested conditions, allowed the design of key experiments for further model improvement, and also produced viable predictions for the use of biostimulant metabolites in cDCE biodegradation.
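For reference, FBA in its standard, model-agnostic form is a linear program over the flux vector $v$, with $S$ the stoichiometric matrix; this is the generic textbook formulation, not anything specific to iJS666:

\begin{align*}
  \max_{v} \quad & c^{\top} v && \text{(e.g. a biomass objective)} \\
  \text{s.t.} \quad & S\,v = 0 && \text{(steady-state mass balance)} \\
  & v_{\min} \le v \le v_{\max} && \text{(flux bounds)}
\end{align*}

FVA then maximises and minimises each individual flux $v_i$ under the same constraints, typically with the objective fixed at or near its FBA optimum, yielding the admissible range of every reaction.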
Abstract:
In the trend towards tolerating hardware unreliability, accuracy is exchanged for cost savings. Running on less reliable machines, functionally correct code becomes risky, and one needs to know how risk propagates so as to mitigate it. Risk estimation, however, seems to live outside the average programmer's technical competence and core practice. In this paper we propose that program design by source-to-source transformation be risk-aware, in the sense of making probabilistic faults visible and supporting equational reasoning on the probabilistic behaviour of programs caused by faults. This reasoning is carried out in a linear algebra extension to the standard algebra of programming, à la Bird and de Moor. This paper studies, in particular, the propagation of faults across the standard program transformation techniques known as tupling and fusion, enabling the fault of the whole to be expressed in terms of the faults of its parts.
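For readers unfamiliar with the two transformations, the Haskell sketch below shows tupling and fusion in their ordinary, fault-free form; the paper's contribution, reasoning about how fault probabilities propagate across such laws, is not modelled here.

-- Tupling: compute two mutually needed results in a single traversal.
averageNaive :: [Double] -> Double
averageNaive xs = sum xs / fromIntegral (length xs)   -- two passes

average :: [Double] -> Double                         -- one tupled pass
average xs = s / fromIntegral n
  where
    (s, n) = foldr (\x (a, c) -> (x + a, c + 1)) (0, 0 :: Int) xs

-- Fusion: collapse two traversals into one, here via the map-map law
--   map f . map g = map (f . g)
doubleThenIncr :: [Int] -> [Int]
doubleThenIncr = map (+ 1) . map (* 2)      -- two traversals

doubleThenIncrFused :: [Int] -> [Int]
doubleThenIncrFused = map ((+ 1) . (* 2))   -- fused: one traversal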