984 results for Code-centric development
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many respects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, and an unpredictable deployment environment are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. More research is therefore needed on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture for describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimize the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies support the evaluation. The first, a framework evaluation, assesses the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the overhead introduced by the framework in terms of the application's footprint and executable code size. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
Abstract:
The arrival of the imaging Fourier transform spectrometer SITELLE at the Canada-France-Hawaii Telescope highlights the need for an exposure time calculator that allows users of the instrument to plan their observations and telescope time proposals. A large part of my project is therefore the development of a simulation code capable of reproducing the results of SITELLE and of its predecessor SpIOMM, installed at the Observatoire du Mont-Mégantic. The accuracy of the simulations is confirmed by a comparison with SpIOMM data and with the first observations from SITELLE. The second part of my project consists of a spectral analysis of observational data. Taking advantage of SpIOMM's wide field of view, the characteristics of the ionized gas (radial velocity and intensity) are studied over the whole of the interacting galaxy pair Arp 72. The optical rotation curve and the metallicity gradient of NGC 5996, the main galaxy of Arp 72, are obtained here for the first time. The spiral galaxy NGC 7320 is also studied using observations made with both SpIOMM and SITELLE.
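An exposure time calculator of this kind typically inverts a signal-to-noise relation. The following is a minimal, hypothetical sketch (not the actual SITELLE simulation code): it assumes the standard CCD noise model with source, sky and dark-current rates plus read noise, and numerically solves for the exposure time that reaches a target SNR. All names and values are illustrative only.

import numpy as np
from scipy.optimize import brentq

def snr(t, source_rate, sky_rate, dark_rate, read_noise, n_pix):
    """Point-source SNR after t seconds, assuming the standard CCD noise model."""
    signal = source_rate * t
    noise = np.sqrt(signal + n_pix * (sky_rate + dark_rate) * t + n_pix * read_noise**2)
    return signal / noise

def exposure_time_for_snr(target_snr, **rates):
    """Numerically invert the SNR relation to find the required exposure time."""
    return brentq(lambda t: snr(t, **rates) - target_snr, 1e-3, 1e6)

# Hypothetical rates in electrons per second (illustrative values only).
t_req = exposure_time_for_snr(10.0, source_rate=5.0, sky_rate=2.0,
                              dark_rate=0.01, read_noise=8.0, n_pix=25)
print(f"Exposure time for SNR = 10: {t_req:.1f} s")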
Abstract:
Recent paradigms in wireless communication architectures describe environments where nodes present highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed based on the regular packet-switched, store-and-forward behavior. Although sufficient to compute an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis aims to analyse the impact of node mobility on routing scenarios. It also aims at developing forwarding concepts that help message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work analysed the impact of mobility on routing; we found that node mobility can significantly affect routing performance, depending on link length, distance, and the mobility patterns of nodes. A study of current mobility parameters showed that they capture mobility only partially. The robustness of a routing protocol to node mobility depends on the sensitivity of its routing metric to node mobility. Mobility-aware routing metrics were therefore devised to increase routing robustness to node mobility. The proposed metrics fall into two categories: time-based and spatial correlation-based. To validate the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool with two widely used multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared with the benchmark metric, meaning that more stable nodes were used to route data. The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted variation in the performance of the metrics, including the benchmark metric, under different mobility models, due to differences in the rules governing node mobility in each model.
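As an illustration of what a time-based, mobility-aware link metric can look like (a hypothetical sketch, not the specific metrics proposed in the thesis), a link can be weighted by a smoothed estimate of how long it has remained usable, so that path computation prefers older, more stable links:

import time

class LinkStabilityMetric:
    """Hypothetical time-based link metric: links that have stayed up longer
    (smoothed over repeated observations) receive a lower routing cost."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha          # smoothing factor for the stability estimate
        self.first_seen = {}        # link -> timestamp when first observed up
        self.stability = {}         # link -> smoothed "age" of the link in seconds

    def observe_up(self, link, now=None):
        """Call whenever a hello/beacon confirms the link is still alive."""
        now = time.time() if now is None else now
        self.first_seen.setdefault(link, now)
        age = now - self.first_seen[link]
        prev = self.stability.get(link, 0.0)
        self.stability[link] = (1 - self.alpha) * prev + self.alpha * age

    def observe_down(self, link):
        """A broken link loses its history and becomes expensive again."""
        self.first_seen.pop(link, None)
        self.stability.pop(link, None)

    def cost(self, link):
        """Routing cost: inverse of stability, so stable links are preferred."""
        return 1.0 / (1.0 + self.stability.get(link, 0.0))

# Example: a long-lived link ends up with a lower cost than a freshly seen one.
m = LinkStabilityMetric()
for t in range(0, 100, 10):
    m.observe_up(("A", "B"), now=t)
m.observe_up(("A", "C"), now=95)
print(m.cost(("A", "B")), m.cost(("A", "C")))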
Abstract:
The main objective of this work was to develop an application capable of determining the diffusion times and diffusion coefficients of optical clearing agents and water inside a known type of muscle. Other types of chemical agents, such as medications or metabolic products, can also be used with the implemented method. Since the diffusion times can be calculated, it is possible to describe the dehydration mechanism that occurs in the muscle. Calculating the diffusion time of an optical clearing agent makes it possible to characterize the refractive index matching mechanism of optical clearing. By using the diffusion times and diffusion coefficients of both water and clearing agents, not only are the optical clearing mechanisms characterized, but information about the duration and magnitude of the optical clearing effect is also obtained. Such information is crucial for planning a clinical intervention that relies on optical clearing. The experimental method and the equations implemented in the developed application are described throughout this document, demonstrating its effectiveness. The application was developed in MATLAB, and the method was tailored to better fit the application's needs. This significantly improved processing efficiency and reduced the time needed to obtain the results; multiple validations prevent common errors; and extra functionality was added, such as saving the application's progress and exporting information in different formats. Tests were made using glucose measurements in muscle. For testing purposes, some of the data was also intentionally changed in order to obtain different simulations and results from the application. The entire project was validated by comparing the calculated results with those found in the literature, which are also described in this document.
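A common way to extract a characteristic diffusion time from time-resolved measurements, for instance collimated transmittance recorded while the agent diffuses, is to fit a single-exponential saturation curve. The sketch below only illustrates that idea on synthetic data; it is not the MATLAB application described above.

import numpy as np
from scipy.optimize import curve_fit

def saturation(t, amplitude, tau, offset):
    """Single-exponential rise toward saturation with characteristic time tau."""
    return offset + amplitude * (1.0 - np.exp(-t / tau))

# Hypothetical time-resolved measurements (e.g. normalized transmittance vs. seconds).
t_data = np.linspace(0, 300, 61)
true_tau = 80.0
rng = np.random.default_rng(0)
y_data = saturation(t_data, 0.4, true_tau, 0.1) + rng.normal(0, 0.01, t_data.size)

# Fit the model; popt[1] is the estimated diffusion (characteristic) time in seconds.
popt, _ = curve_fit(saturation, t_data, y_data, p0=(0.5, 50.0, 0.0))
print(f"Estimated diffusion time tau = {popt[1]:.1f} s")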
Abstract:
The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other, and it is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from the specification of the GUI, visible GUI behaviors, and static analysis of the GUI's program-code. GUIs are typically dynamic in nature; their user-visible state is driven by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling the interactions between the visible GUI of a GUI-based software system and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI testcases can be created by examining the interactions between the GUI of a GUI-based application and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed to construct long GUI testcases. These testcases are able to drive the GUI into states that could not be reached using existing models. Implementation and evaluation have been conducted using GUITAR, a fully-automated, open-source GUI testing framework.
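The idea of building longer test cases from event-handler interactions can be pictured with a toy model (a hypothetical sketch, not the model or the GUITAR framework used in the thesis): a directed graph connects events whose handlers write program state that other handlers read, and paths through that graph become candidate test cases.

# Hypothetical map from GUI events to the program variables their handlers touch.
handlers = {
    "open_file":  {"writes": {"document"}, "reads": set()},
    "edit_text":  {"writes": {"document"}, "reads": {"document"}},
    "save_file":  {"writes": set(),        "reads": {"document"}},
    "show_about": {"writes": set(),        "reads": set()},
}

def interacts(e1, e2):
    """Two events interact if the first writes state that the second reads."""
    return bool(handlers[e1]["writes"] & handlers[e2]["reads"])

# Build the event-interaction graph as an adjacency list.
graph = {e: [f for f in handlers if f != e and interacts(e, f)] for e in handlers}

def test_cases(max_length):
    """Enumerate event sequences that follow interaction edges (candidate tests)."""
    cases = []
    def extend(path):
        if len(path) == max_length:
            cases.append(path)
            return
        for nxt in graph[path[-1]]:
            extend(path + [nxt])
    for start in handlers:
        extend([start])
    return cases

for case in test_cases(3):
    print(" -> ".join(case))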
Abstract:
The purpose of this paper is twofold. Firstly, it presents a preliminary and ethnomethodologically-informed analysis of the way in which the growing structure of a particular program's code was ongoingly derived from its earliest stages. This was motivated by an interest in how the detailed structure of a completed program 'emerged from nothing' as a product of the concrete practices of the programmer within the framework afforded by the language. The analysis is broken down into three sections that discuss: the beginnings of the program's structure; the incremental development of structure; and finally the code productions that constitute the structure and the importance of the programmer's stock of knowledge. The discussion attempts to understand and describe the emerging structure of code rather than focus on generating 'requirements' for supporting the production of that structure. Due to time and space constraints, however, only a relatively cursory examination of these features was possible. Secondly, the paper presents some thoughts on the difficulties associated with the analytic (in particular, ethnographic) study of code, drawing on general problems as well as on issues arising from the difficulties and failings encountered in the analysis presented in the first section.
Abstract:
The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both the economy and society, reducing the gap between the physical and digital worlds. Several efforts have therefore been dedicated to the development of networking solutions addressing the diverse challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum and involving both research and industry actors. The massive number of heterogeneous devices, as well as the data they produce, is a significant challenge for wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and for ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The obtained results illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
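The flavour of semantic matching involved can be illustrated with a toy matcher that scores a requested capability against advertised device capabilities using a small concept hierarchy (exact match, subsumption, or no match). This is a hypothetical sketch, not the NDN-based mechanism proposed in the paper.

# Toy concept hierarchy: child concept -> parent concept.
ONTOLOGY = {
    "temperature-sensor": "sensor",
    "humidity-sensor": "sensor",
    "sensor": "device",
}

def ancestors(concept):
    """All ancestors of a concept in the hierarchy, nearest first."""
    out = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        out.append(concept)
    return out

def match_score(requested, advertised):
    """1.0 for an exact match, a lower score when the advertised capability
    is a specialization of the requested concept, 0.0 otherwise."""
    if requested == advertised:
        return 1.0
    if requested in ancestors(advertised):
        return 0.5
    return 0.0

# Hypothetical advertisements, as they might be carried in NDN data names.
services = {
    "/home/livingroom/dev1": "temperature-sensor",
    "/home/kitchen/dev2": "humidity-sensor",
    "/home/garage/dev3": "actuator",
}

request = "sensor"
ranked = sorted(((match_score(request, cap), name) for name, cap in services.items()),
                reverse=True)
for score, name in ranked:
    if score > 0:
        print(f"{name}: {score}")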
Abstract:
The present article reflects the progress of an ongoing master's dissertation on language engineering. The main goal of the work described here is to infer a programmer's profile through the analysis of their source code. After such analysis, the programmer is placed on a scale that characterizes their abilities with the language. There are several potential applications for such profiling, namely the evaluation of a programmer's skills and proficiency in a given language, or the continuous evaluation of a student's progress in a programming course. In the course of this project, and as a proof of concept, a tool that allows the automatic profiling of a Java programmer is under development. This tool is also introduced in the paper and its preliminary outcomes are discussed.
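As a rough illustration of the kind of evidence such profiling can draw on (a hypothetical sketch, not the tool described in the article), one can count occurrences of Java constructs of increasing sophistication in a source file and map the counts to a coarse skill level:

import re

# Hypothetical construct categories, from basic to more advanced Java features.
CONSTRUCTS = {
    "basic":    [r"\bfor\s*\(", r"\bif\s*\(", r"\bSystem\.out\.println"],
    "oo":       [r"\bextends\b", r"\bimplements\b", r"\binterface\b"],
    "advanced": [r"<[A-Z]\w*>", r"->", r"\bstream\s*\(", r"\bsynchronized\b"],
}

def profile(java_source):
    """Count construct occurrences per category and derive a coarse level."""
    counts = {cat: sum(len(re.findall(p, java_source)) for p in patterns)
              for cat, patterns in CONSTRUCTS.items()}
    if counts["advanced"] > 0:
        level = "advanced"
    elif counts["oo"] > 0:
        level = "intermediate"
    else:
        level = "novice"
    return counts, level

sample = """
public class Totals implements Runnable {
    public void run() {
        int[] xs = {1, 2, 3};
        int total = java.util.Arrays.stream(xs).sum();
        System.out.println(total);
    }
}
"""
print(profile(sample))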
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: during a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying the tools; in particular, with web-interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web-interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will then "live" separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different common environments such as a web-interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/Web programming. For example, an output of a tool could be (a structured version of) "highlight line number 10 of file ex.c" and "when the user clicks on line 10, open a dialog box with the text ...". The environment interprets this output and converts it to the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web-interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
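To give a feel for the idea of a declarative, text-based output language, a tool could emit structured commands like the ones below, which each environment (web-interface, Eclipse plugin, console) renders in its own way. The concrete syntax here is invented for illustration and is not EasyInterface's actual format.

import json

# Hypothetical structured output a prototype tool might emit instead of GUI code.
tool_output = json.dumps([
    {"command": "highlight", "file": "ex.c", "line": 10},
    {"command": "on-click", "file": "ex.c", "line": 10,
     "action": {"command": "dialog", "text": "Possible division by zero here."}},
])

def render_plain_text(output):
    """A minimal 'environment': interprets the commands as console messages.
    A web-interface or Eclipse plugin would interpret the same output visually."""
    for cmd in json.loads(output):
        if cmd["command"] == "highlight":
            print(f"[highlight] {cmd['file']}:{cmd['line']}")
        elif cmd["command"] == "on-click":
            print(f"[on-click]  {cmd['file']}:{cmd['line']} -> "
                  f"dialog: {cmd['action']['text']}")

render_plain_text(tool_output)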
Abstract:
The present work aims to allow developers to implement small features for a given Android application in a fast and easy manner, and to allow their users to install them on demand, i.e., users can install only the features they are interested in. These small packages of features are called plugins, and the language chosen for developing them was JavaScript. To achieve this, an Android framework was developed that enables the host application to install, manage and run these plugins at runtime. This framework was designed to have a very clean and almost readable API, which allowed for better code organization and maintainability. The implementation used Google's V8 engine to interpret the JavaScript code and, through a set of JNI calls, allowed that code to invoke Android methods previously registered in the runtime. In order to test the framework, it was integrated with the client's communication application RCS+ using two plugins developed alongside the framework. Although these plugins covered only the more common requirements, they were shown to work as intended. In conclusion, although the framework was successful, it made clear that this kind of development through a non-native API has its difficulties, especially regarding the implementation of complex features.
Abstract:
Thesis (Ph.D., Computing) -- Queen's University, 2016.
Abstract:
Heavy Liquid Metal Cooled Reactors are among the concepts, fostered by the GIF, potentially able to comply with stringent safety, economic, sustainability, proliferation resistance and physical protection requirements. The increasing interest in these innovative systems has highlighted the lack of tools specifically dedicated to their core design stage. The present PhD thesis summarizes a three-year effort to partially close this gap by rationally defining the role of codes in core design, creating a development methodology for core design-oriented codes (DOCs), and applying it to the design areas where it is most needed. The fields covered are, in particular, fuel assembly thermal-hydraulics and fuel pin thermo-mechanics. Regarding the former, following the established methodology, the sub-channel code ANTEO+ was conceived. Initially restricted to the forced convection regime and subsequently extended to the mixed one, ANTEO+ has been demonstrated, via a thorough validation campaign, to be a reliable tool for design applications. Concerning the fuel pin thermo-mechanics, the wish to include safety-related considerations at the outset of the pin dimensioning process gave birth to the safety-informed DOC TEMIDE. The proposed DOC development methodology has also been applied to TEMIDE; given the complex interdependence patterns among the numerous phenomena involved in an irradiated fuel pin, a sensitivity analysis was performed over the anticipated application domain to optimize the code's final structure. The development methodology has also been tested in the verification and validation phases; the latter, due to the scarcity of experiments truly representative of TEMIDE's application domain, has only been a preliminary attempt to test TEMIDE's capabilities in fulfilling the DOC requirements upon which it was built. In general, the capability of the proposed DOC development methodology to deliver tools that help the core designer in preliminary setting of the system configuration has been proven.
Abstract:
The aim of the Ph.D. research project was to explore Dual Fuel combustion and hybridization. Natural gas-diesel Dual Fuel combustion was experimentally investigated on a 4-Stroke, 2.8 L, turbocharged, light-duty Diesel engine, considering four operating points in the range from low to medium-high loads at 3000 rpm. A numerical analysis was then carried out using a customized version of the KIVA-3V code in order to optimize the diesel injection strategy for the highest investigated load. A second KIVA-3V model was used to analyse the interchangeability between natural gas and biogas at an intermediate operating point. Since natural gas-diesel Dual Fuel combustion suffers from poor combustion efficiency at low loads, the effects of hydrogen-enriched natural gas on Dual Fuel combustion were investigated using a validated Ansys Forte model, followed by an optimization of the diesel injection strategy and a sensitivity analysis on the swirl ratio for the lowest investigated load. Since one of the main issues of Low Temperature Combustion engines is their low power density, 2-Stroke engines, thanks to their doubled firing frequency compared with 4-Stroke engines, may be more suitable for operating in Dual Fuel mode. Therefore, the application of gasoline-diesel Dual Fuel combustion to a modern 2-Stroke Diesel engine was analysed, starting from the investigation of gasoline injection and mixture formation. As far as hybridization is concerned, a MATLAB-Simulink model was built to compare a conventional (combustion) powertrain and a parallel-hybrid powertrain applied to a Formula SAE race car.
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications which can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text-based and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were found by manual inspection or by using automatic static and dynamic analyzers; now, this task can be automated using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method level). The data exploited and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
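A simple text-based baseline for programming language identification (purely illustrative, not the scalable models developed in the thesis) treats source files as character n-gram documents and trains a linear classifier on them, for example with scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: snippets labelled with their language.
snippets = [
    ("def add(a, b):\n    return a + b", "Python"),
    ("print('hello')", "Python"),
    ("public static void main(String[] args) {}", "Java"),
    ("System.out.println(\"hello\");", "Java"),
    ("#include <stdio.h>\nint main(void) { return 0; }", "C"),
    ("printf(\"hello\\n\");", "C"),
]
texts, labels = zip(*snippets)

# Character n-grams are robust to identifier names and work across languages.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["for i in range(10): print(i)",
                     "public class Demo { }"]))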
Abstract:
Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the nuclear data library production chain is extremely complex and made up of different steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here focused on the development of a new Python-based software called JADE, whose objective is to significantly increase the level of automation and standardization of these procedures, in order to reduce the time between new library releases while increasing their quality. After an introduction to nuclear fusion (the field where the majority of the V&V effort has been concentrated so far) and to the simulation of particle and radiation transport, the motivations leading to JADE's development are discussed. Subsequently, the general architecture of the code and the implemented benchmarks (both experimental and computational) are described. After that, the results from the main applications of JADE during the research years are presented. Finally, after a discussion of the objectives reached by JADE, possible short-, mid- and long-term developments for the project are discussed.