896 results for software quality metrics


Relevance:

30.00%

Publisher:

Abstract:

During the last decades, we have witnessed what is called the “information explosion”. With the advent of new technologies and new contexts, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Among the players generating such data, we highlight telecommunications operators, which gather, using network monitoring equipment, millions of network event records, the Call Detail Records (CDRs) and the Event Detail Records (EDRs), commonly known as xDRs. These records are stored and later processed to compute network performance and quality of service metrics. With the ever increasing number of collected xDRs, the volume of data that needs to be stored has grown exponentially, making the current solutions based on relational databases no longer suitable. To tackle this problem, the relational data store can be replaced by the Hadoop File System (HDFS). However, HDFS is simply a distributed file system and therefore does not support any aspect of the relational paradigm. To overcome this difficulty, this paper presents a framework that allows systems which currently insert data into relational databases to keep doing so transparently when migrating to Hadoop. As a proof of concept, the developed platform was integrated with Altaia, a performance and QoS management system for telecommunications networks and services.
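To make the idea of transparent insertion concrete, the sketch below routes the same xDR record either to a relational backend or to a file-based backend behind a single call. The class and method names (SqlSink, FileSink, store_xdr) are illustrative assumptions, not the framework or Altaia API, and a real deployment would write to HDFS through an HDFS client rather than to the local disk.

```python
# Minimal sketch of a record-sink abstraction: the caller keeps one insert
# call site regardless of whether the backend is relational or file-based.
import csv
import sqlite3
from pathlib import Path


class SqlSink:
    """Writes xDR records into a relational table (sqlite3 stands in here)."""

    def __init__(self, db_path: str):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS xdr (event_time TEXT, cell_id TEXT, bytes INTEGER)"
        )

    def insert(self, record: dict) -> None:
        self.conn.execute(
            "INSERT INTO xdr VALUES (:event_time, :cell_id, :bytes)", record
        )
        self.conn.commit()


class FileSink:
    """Appends xDR records as CSV rows in a directory; a real deployment would
    write to HDFS through an HDFS client instead of the local disk."""

    def __init__(self, base_dir: str):
        self.base = Path(base_dir)
        self.base.mkdir(parents=True, exist_ok=True)

    def insert(self, record: dict) -> None:
        # Partition by cell_id so downstream jobs can scan only relevant files.
        target = self.base / f"cell={record['cell_id']}.csv"
        with target.open("a", newline="") as f:
            csv.writer(f).writerow([record["event_time"], record["cell_id"], record["bytes"]])


def store_xdr(sink, record: dict) -> None:
    """The caller is unaware of which backend is configured."""
    sink.insert(record)


if __name__ == "__main__":
    record = {"event_time": "2016-01-01T00:00:00", "cell_id": "A17", "bytes": 1024}
    store_xdr(SqlSink("xdr.db"), record)    # legacy relational path
    store_xdr(FileSink("xdr_out"), record)  # file-based path (HDFS stand-in)
```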

Relevance:

30.00%

Publisher:

Abstract:

Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A dynamic system, in the sense of a dynamically changeable system configuration, would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability and increased quality due to its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time, which dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in an EU research project referred to as DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make the best use of the available processing, storage and communication resources, self-diagnostics and ultimately self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users’ Consumer Electronic (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use-case is presented to illustrate the benefits of the envisioned dynamic capabilities.
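As a loose illustration of the plug-and-play and self-optimisation capabilities described above, the sketch below shows a toy run-time registry in which devices announce themselves, disappear, and the best available provider of a service is selected. All names (DeviceRegistry, Device, best_provider) are hypothetical and are not part of the DySCAS architecture.

```python
# Toy run-time registry sketch: devices register and unregister at run time,
# and the system picks the best provider of a service from what is present.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Device:
    name: str
    services: set = field(default_factory=set)  # e.g. {"navigation", "audio"}
    quality: int = 0                             # higher is better


class DeviceRegistry:
    def __init__(self):
        self._devices = []

    def register(self, device: Device) -> None:
        """Called when a new device is discovered at run time."""
        self._devices.append(device)

    def unregister(self, name: str) -> None:
        """Called when a device disappears; the system reconfigures itself."""
        self._devices = [d for d in self._devices if d.name != name]

    def best_provider(self, service: str) -> Optional[Device]:
        """Self-optimisation: pick the highest-quality provider of a service."""
        candidates = [d for d in self._devices if service in d.services]
        return max(candidates, key=lambda d: d.quality, default=None)


if __name__ == "__main__":
    registry = DeviceRegistry()
    registry.register(Device("built-in-head-unit", {"audio"}, quality=1))
    registry.register(Device("passenger-phone", {"audio", "navigation"}, quality=3))
    print(registry.best_provider("navigation").name)  # passenger-phone
    registry.unregister("passenger-phone")            # device leaves the car
    print(registry.best_provider("navigation"))       # None -> degrade gracefully
```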

Relevance:

30.00%

Publisher:

Abstract:

Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, propose techniques, and recommend practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement is introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master’s dissertation presents a novel method, referred to as “Problem-Based SRS”, that aims to improve the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to real customer business issues. In this approach, the knowledge about the software requirements is constructed from the knowledge about the customer’s problems. Problem-Based SRS consists of an organization of activities and outcome objects in a process with five main steps. It aims to support the software requirements engineering team in systematically analyzing the business context and specifying the software requirements, also taking into account a first glance and vision of the software. The quality of the specifications is evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving the software requirements specification.
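As a small illustration of the kind of traceability-based quality check the method relies on, the sketch below verifies that every customer problem is covered by at least one requirement (completeness) and that every requirement traces back to some problem (no unnecessary requirements). The data and function names are illustrative placeholders, not the actual Problem-Based SRS artefacts.

```python
# Minimal traceability check: problems without requirements signal
# incompleteness; requirements without problems signal unnecessary scope.
def trace(problems, requirements):
    """problems: {problem_id: description}
    requirements: {requirement_id: set of problem_ids it addresses}"""
    covered = set().union(*requirements.values()) if requirements else set()
    uncovered_problems = set(problems) - covered
    unjustified_reqs = {r for r, ps in requirements.items() if not ps & set(problems)}
    return uncovered_problems, unjustified_reqs


if __name__ == "__main__":
    problems = {"P1": "orders are lost at peak hours", "P2": "invoices arrive late"}
    requirements = {
        "R1": {"P1"},   # queueing of incoming orders
        "R2": set(),    # a requirement with no traced problem
    }
    missing, extra = trace(problems, requirements)
    print("problems without requirements:", missing)  # {'P2'}
    print("requirements without problems:", extra)    # {'R2'}
```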

Relevance:

30.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram, which motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to choose a sparsifying transform in advance or to ensure that the images strictly satisfy the preconditions of a particular transform (e.g. being strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation).
Moreover, this thesis shows that reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible through the combined use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
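The sketch below illustrates the general flavour of patch-based sparse coding with a learned dictionary, using off-the-shelf scikit-learn routines to denoise a synthetic 2D image. It is only a generic stand-in for one half of the alternating procedure described above and is not the DLET reconstruction algorithm, which additionally enforces consistency with the tilt-series data.

```python
# Patch-based dictionary learning and sparse coding on a synthetic image.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)

# Synthetic "projection-like" image: a bright disc on a dark background plus noise.
yy, xx = np.mgrid[0:64, 0:64]
clean = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Extract overlapping patches and remove their mean (standard practice).
patch_size = (6, 6)
patches = extract_patches_2d(noisy, patch_size)
data = patches.reshape(patches.shape[0], -1)
means = data.mean(axis=1, keepdims=True)
data = data - means

# Learn a dictionary adapted to this image, then sparse-code the patches.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=200, random_state=0)
codes = dico.fit(data).transform(data)

# Reconstruct: sparse codes times dictionary atoms, add the means back, reassemble.
denoised_patches = (codes @ dico.components_ + means).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)

print("mean absolute error before:", np.abs(noisy - clean).mean())
print("mean absolute error after: ", np.abs(denoised - clean).mean())
```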

Relevance:

30.00%

Publisher:

Abstract:

New technologies appear constantly, and their use can bring countless benefits both to those who use them directly and to society as a whole. In this direction, the State can also use information and communication technologies to improve the delivery of services to citizens, raise society's quality of life, and optimize public spending by focusing it on the main needs. To this end, there is extensive research on Electronic Government (e-Gov) policies and their main effects on citizens and society as a whole. This research studies the concept of Electronic Government and seeks to understand the process of implementing Free Software in the agencies of the Direct Administration of Rio Grande do Norte. Moreover, it deepens the analysis to identify whether its adoption reduces costs for the state treasury, and aims to identify the role of Free Software in the Administration and the foundations of the Electronic Government policy in this State. Through qualitative interviews with technology coordinators and managers in three State Secretariats, it was possible to identify the paths being taken by the Government to endow the State with technological capacity. It was found that Rio Grande do Norte is still immature with regard to electronic government (e-Gov) practices and Free Software, with few agencies having concrete and viable initiatives in this area. The State still lacks a strategic definition of the role of Technology and needs more investment in staff and equipment infrastructure. Advances were also observed, such as the creation of the normative body CETIC (State Council of Information and Communication Technology), the Technology Master Plan, which provides a necessary diagnosis of the state of Technology in the State and proposes various goals for the area, the delivery of a postgraduate course for Technology managers, and training in BrOffice (OpenOffice) for 1,120 public servants.

Relevance:

30.00%

Publisher:

Abstract:

Recommendation for Oxygen Measurements from Argo Floats: Implementation of In-Air-Measurement Routine to Assure Highest Long-term Accuracy. As Argo has entered its second decade and chemical/biological sensor technology is improving constantly, the marine biogeochemistry community is starting to embrace the successful Argo float program. An augmentation of the global float observatory, however, has to follow rather stringent constraints regarding sensor characteristics as well as data processing and quality control routines. Owing to the fairly advanced state of oxygen sensor technology and the high scientific value of oceanic oxygen measurements (Gruber et al., 2010), an expansion of the Argo core mission to routine oxygen measurements is perhaps the most mature and promising candidate (Freeland et al., 2010). In this context, SCOR Working Group 142 “Quality Control Procedures for Oxygen and Other Biogeochemical Sensors on Floats and Gliders” (www.scor-int.org/SCOR_WGs_WG142.htm) set out in 2014 to assess the current status of biogeochemical sensor technology with particular emphasis on float-readiness, to develop pre- and post-deployment quality control metrics and procedures for oxygen sensors, and to disseminate these procedures widely to ensure rapid adoption in the community.

Relevance:

30.00%

Publisher:

Abstract:

The study of Quality of Life (QoL) has been conducted on various scales throughout the years, with a focus on assessing the overall quality of living amongst citizens. The main focus in these studies has been on economic factors, with the purpose of creating a Quality of Life Index (QLI). When the focus narrows to the environment and factors such as Urban Green Spaces (UGS) and air quality, the question becomes how well each alternative meets a given set of criteria. With the benefits of UGS and a healthy environment in focus, a new Environmental Quality of Life Index (EQLI) is proposed by combining Multi-Criteria Analysis (MCA) and Geographical Information Systems (GIS). Applying MCA to complex environmental problems and integrating it with GIS is a challenging but rewarding task, and has proven to be an efficient approach among environmental scientists. Background information on three MCA methods is presented: Analytical Hierarchy Process (AHP), Regime Analysis and PROMETHEE. A survey based on a previous study of the status of UGS in European cities was sent to 18 municipalities in the study area. The survey evaluates the current status of UGS as well as the planning and management of UGS within municipalities, in order to obtain criteria material for the selected MCA method. The current situation of UGS is assessed using GIS software, and change detection over a 10-year period is performed using the NDVI index, providing comparison material for one of the criteria in the MCA. To complete the criteria, interpolation of nitrogen dioxide levels was performed with ordinary kriging and the results transformed into indicator values. The final outcome is an EQLI map indicating environmentally attractive municipalities, ranked against the predefined MCA criteria using PROMETHEE I pairwise comparison and the PROMETHEE II complete ranking of alternatives. The proposed methodology is applied to Lisbon’s Metropolitan Area, Portugal.
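To illustrate the final ranking step, the sketch below computes a PROMETHEE II net-flow ranking with the simple "usual" (step) preference function. The municipalities, criteria values and weights are illustrative placeholders, not the data or weights used in the study.

```python
# PROMETHEE II net-flow ranking with the "usual" (step) preference function.
import numpy as np

# Rows: alternatives (municipalities); columns: criteria, oriented so that
# larger is better (e.g. UGS area per capita, NDVI gain, inverted NO2 level).
names = ["Municipality A", "Municipality B", "Municipality C"]
scores = np.array([
    [12.0, 0.05, 0.8],
    [30.0, 0.01, 0.6],
    [18.0, 0.07, 0.4],
])
weights = np.array([0.4, 0.3, 0.3])   # must sum to 1

n = len(names)
pi = np.zeros((n, n))                 # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        # Usual criterion: preference is 1 when a strictly beats b, else 0.
        pref = (scores[a] > scores[b]).astype(float)
        pi[a, b] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (n - 1)   # how strongly an alternative dominates the rest
phi_minus = pi.sum(axis=0) / (n - 1)  # how strongly it is dominated
phi_net = phi_plus - phi_minus        # PROMETHEE II complete-ranking key

for i in np.argsort(-phi_net):
    print(f"{names[i]}: net flow = {phi_net[i]:+.3f}")
```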

Relevance:

30.00%

Publisher:

Abstract:

Gas stations are among the potentially polluting economic activities that compromise groundwater quality. The city of Natal has about 120 gas stations, only some of which hold an environmental operating license. Noncompliant stations were notified by the Public Prosecutor's Office of Rio Grande do Norte to carry out environmental adaptations, among which is the investigation of environmental liabilities. The preliminary and confirmatory stages of this investigation consisted of the evaluation of soil gas surveys together with two confirmatory chemical analyses of BTEX, PAH and TPH. To obtain a sound evaluation and interpretation of the results obtained in the field, a three-dimensional representation of them became necessary. CAD software was used to model the equipment installed in a retail fuel service station in Natal, as well as the plumes of contamination by volatile organic compounds. With this tool, it was concluded that the contamination does not originate from the current underground fuel storage system, but reflects the site's history, in which leaking gasoline and diesel tanks were removed.

Relevance:

30.00%

Publisher:

Abstract:

Context: Over the past 50 years, numerous studies have investigated the possible effect that software engineers' personalities may have upon their individual tasks and teamwork. These have led to an improved understanding of that relationship; however, the analysis of personality traits and their impact on the software development process is still an area under investigation and debate. Furthermore, beyond personality traits, "team climate" is another factor that has been investigated, given its relationship with software teams' performance. Objective: The aim of this paper is to investigate how software professionals' personality is associated with team climate and team performance. Method: We detail a Systematic Literature Review (SLR) of the effect of software engineers' personality traits and team climate on software team performance. Results: Our main findings include 35 primary studies that addressed the relationship between personality and team performance without considering team climate. The findings showed that team climate comprises a wide range of factors that fall within the fields of management and behavioral sciences. Most of the studies used undergraduate students as subjects and as surrogates for software professionals. Conclusions: The findings from this SLR are useful for understanding the personality assessment of software development team members, by revealing the traits of the personality taxonomy, together with the measurement of the software development team's working environment. These measurements can help in examining the likelihood of success or failure of software projects during development. General terms: Human factors, performance.

Relevance:

30.00%

Publisher:

Abstract:

A microgrid (MG) power system plays an important role in providing a reliable and secure energy supply to critical community loads as well as to communities in remote areas. Distributed Generation (DG) sources integrated into an MG provide numerous benefits but, at the same time, lead to power quality issues in the MG power distribution network. Power Quality (PQ) issues arise from the integration of intermittent Renewable Energy (RE) sources through advanced Power Electronics (PE) converter technology. The presence of non-linear and unbalanced loads in the MG also affects the PQ of the energy supply in the distribution network. In this paper, PQ impacts such as power variation, voltage variation, Total Harmonic Distortion (THD), and voltage unbalance are analysed in the Low Voltage (LV) distribution network of a typical MG power system model. The MG model development and the PQ impact analysis were carried out through simulation in the PSS-Sincal software environment. The analysis results from the study can be used as a guideline for developing a real and independent MG power system with improved PQ conditions.
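As a small worked example of one of the PQ indicators mentioned above, the sketch below estimates the Total Harmonic Distortion of a sampled voltage waveform from its FFT. The waveform is synthetic; in the study the waveforms come from the PSS-Sincal simulation instead.

```python
# THD of a sampled waveform: ratio of the RMS of the harmonics to the fundamental.
import numpy as np

f0 = 50.0                      # fundamental frequency in Hz
fs = 10_000.0                  # sampling rate in Hz
t = np.arange(0, 0.2, 1 / fs)  # ten fundamental cycles

# Fundamental plus 5th and 7th harmonics, typical of converter-fed loads.
v = (230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
     + 15 * np.sin(2 * np.pi * 5 * f0 * t)
     + 10 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(len(v), 1 / fs)

def magnitude_at(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

fundamental = magnitude_at(f0)
harmonics = [magnitude_at(k * f0) for k in range(2, 26)]   # up to the 25th harmonic

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD = {100 * thd:.2f} %")   # roughly sqrt(15^2 + 10^2) / (230*sqrt(2)) ~ 5.5 %
```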

Relevance:

30.00%

Publisher:

Abstract:

Assessment processes are essential to guarantee quality and continuous improvement of software in healthcare, as they measure software attributes over the software lifecycle, verify the degree of alignment between the software and its objectives, and identify unpredicted events. This article analyses the use of an assessment model based on software metrics for three healthcare information systems from a public hospital that provides secondary and tertiary care in the region of Ribeirão Preto. Compliance with the metrics was investigated using questionnaires in guided interviews with the system analysts responsible for the applications. The outcomes indicate that most of the procedures specified in the model can be adopted to assess the systems that serve the organization, particularly with respect to the attributes of compatibility, reliability, safety, portability and usability.
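As a minimal illustration of how questionnaire answers can be aggregated per quality attribute, the sketch below computes a weighted compliance score for a few attributes. The attribute names follow those listed above, but the questions, weights and scoring scale are illustrative assumptions rather than the model's actual metrics.

```python
# Weighted per-attribute compliance from questionnaire answers (illustrative data).
answers = {
    # attribute: list of (question weight, answered compliance in [0, 1])
    "compatibility": [(1.0, 1.0), (0.5, 0.5)],
    "reliability":   [(1.0, 1.0), (1.0, 0.0), (0.5, 1.0)],
    "usability":     [(1.0, 0.5)],
}

def compliance(items):
    """Weighted average compliance for one attribute."""
    total_weight = sum(w for w, _ in items)
    return sum(w * score for w, score in items) / total_weight

for attribute, items in answers.items():
    print(f"{attribute:>13}: {100 * compliance(items):.0f} % compliant")
```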

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Custom cranio-orbital implants have been shown to achieve better performance than their hand-shaped counterparts by restoring skull anatomy more accurately and by reducing surgery time. Designing a custom implant involves reconstructing a model of the patient's skull from their computed tomography (CT) scan. The healthy side of the skull model, contralateral to the damaged region, can then be used to design an implant plan. Designing implants for areas of thin bone, such as the orbits, is challenging due to the poor CT resolution of thin bone structures, which makes preoperative design time-intensive since these structures must be manually segmented from the CT data. The objective of this thesis was to research methods to accurately and efficiently design cranio-orbital implant plans, with a focus on the orbits, and to develop software that integrates these methods. Methods: The software consists of modules that use image and surface restoration approaches to enhance both the quality of the CT data and the reconstructed model. It enables users to input CT data and use tools to output a skull model with restored anatomy, which can then be used to design the implant plan. The software was built on 3D Slicer, an open-source medical visualization platform, and was tested on CT data from thirteen patients. Results: The average time to create a skull model with restored anatomy using our software was 0.33 hours ± 0.04 STD, compared with between 3 and 6 hours for the manual segmentation method. To assess the structural accuracy of the reconstructed models, the CT data from the thirteen patients were used to compare the models created with our software against those created with the manual method. When the skull models were registered together, the difference between each pair of skulls was 0.4 mm ± 0.16 STD. Conclusions: We have developed software to design custom cranio-orbital implant plans, with a focus on thin bone structures. The method described decreases design time and is of similar accuracy to the manual method.
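As an illustration of how the agreement between two registered skull models can be quantified, the sketch below computes a symmetric mean nearest-point distance between two point clouds. The point clouds are randomly generated stand-ins; in the thesis the comparison is made between the models produced by the software and by manual segmentation.

```python
# Symmetric mean surface distance between two registered point clouds (in mm).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Vertices of the two registered surface models, in millimetres (synthetic here).
model_auto = rng.uniform(0, 100, size=(5000, 3))
model_manual = model_auto + rng.normal(scale=0.4, size=model_auto.shape)

# For each vertex of one model, find the closest vertex of the other.
dist_a_to_b, _ = cKDTree(model_manual).query(model_auto)
dist_b_to_a, _ = cKDTree(model_auto).query(model_manual)

# Average the two directed distances to get a symmetric measure.
mean_distance = (dist_a_to_b.mean() + dist_b_to_a.mean()) / 2
print(f"mean surface distance: {mean_distance:.2f} mm")
```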

Relevance:

30.00%

Publisher:

Abstract:

Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities persist from one release to the next because they cannot be easily reproduced through testing, and these vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an earlier stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) to develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed using collected software security defects of Mozilla Firefox.
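As a minimal illustration of the prediction step, the sketch below trains one of the classifiers named above (a random forest) to separate easily reproducible from hard-to-reproduce vulnerabilities using a few classical code metrics as features. The feature set and the synthetic labels are placeholders; the thesis uses real Mozilla Firefox security-defect data.

```python
# Random-forest classification of vulnerability reproducibility from code metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400

# Classical code metrics per file: lines of code, cyclomatic complexity, churn.
X = np.column_stack([
    rng.integers(50, 5000, n),   # LOC
    rng.integers(1, 60, n),      # cyclomatic complexity
    rng.integers(0, 40, n),      # recent changes (churn)
])
# Synthetic labels: 1 = hard to reproduce (made loosely dependent on complexity).
y = (X[:, 1] + rng.normal(0, 10, n) > 35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["easily reproducible", "hard to reproduce"]))
```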

Relevance:

30.00%

Publisher:

Abstract:

The telecommunication industry is entering a new era. The increased traffic demands imposed by the huge number of always-on connections require a quantum leap in the field of enabling techniques. Furthermore, subscribers expect an ever-increasing quality of experience, while network operators and service providers aim for cost-efficient networks. These demands call for a revolutionary change in the telecommunications industry, similar to the one brought about by the success of virtualization in the IT industry, which is now driving the deployment and expansion of cloud computing. Telecommunications providers are currently rethinking their network architecture, from one consisting of a multitude of black boxes with specialized network hardware and software to a new architecture consisting of "white box" hardware running a multitude of specialized network software. This network software may be data plane software providing network functions virtualization (NFV), or control plane software providing centralized network management, that is, software defined networking (SDN). It is expected that these architectural changes will permeate networks ranging in size from Internet core networks, to metro networks, to enterprise networks, and ranging in functionality from converged packet-optical networks, to wireless core networks, to wireless radio access networks.