896 results for software quality metrics


Relevance:

30.00%

Publisher:

Abstract:

A novel, simple and accurate fingerprint method was developed using high-performance liquid chromatography with photodiode array detection (HPLC-DAD) for the quality control of Qianghuo, a Tibetan folk and Chinese herbal medicine used as a diaphoretic, an antifebrile and an anodyne. For the first time, the feasibility and advantages of employing chromatographic fingerprinting were investigated and demonstrated for the evaluation of Qianghuo by systematically comparing chromatograms of aqueous extracts using the professional analytical software recommended by the State Food and Drug Administration (SFDA). Our results revealed that chromatographic fingerprinting combined with similarity evaluation could efficiently identify and distinguish raw herbs of Qianghuo from different sources and different species. The effects of collection location and harvesting time on the chromatographic fingerprints of Notopterygium forbesii Boiss (Apiaceae) were also examined. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
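The similarity evaluation mentioned above is essentially a vector comparison between a sample fingerprint and a reference fingerprint recorded on the same retention-time grid; the cosine (congruence) coefficient below is one common way such a comparison is computed and is offered only as an illustrative sketch, not the SFDA software's actual algorithm (the peak-intensity values are hypothetical).

```python
import math

def cosine_similarity(fp_a, fp_b):
    """Cosine similarity between two chromatographic fingerprints sampled on the
    same retention-time grid (illustrative only)."""
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    norm = math.sqrt(sum(a * a for a in fp_a)) * math.sqrt(sum(b * b for b in fp_b))
    return dot / norm if norm else 0.0

reference = [0.0, 2.1, 8.4, 1.2, 0.3, 5.6]   # hypothetical peak intensities
sample    = [0.1, 2.0, 8.1, 1.5, 0.2, 5.3]
print(round(cosine_similarity(reference, sample), 3))  # close to 1 => same source
```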

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Administrative or quality improvement registries may or may not contain the elements needed for investigations by trauma researchers. The International Classification of Diseases Program for Injury Categorisation (ICDPIC), a statistical program available through Stata, is a powerful tool that can extract injury severity scores from ICD-9-CM codes. We conducted a validation study for use of the ICDPIC in trauma research. METHODS: We conducted a retrospective cohort validation study of 40,418 patients with injury using a large regional trauma registry. ICDPIC-generated Abbreviated Injury Scale (AIS) scores for each body region were compared with trauma registry AIS scores (gold standard) in adult and paediatric populations. A separate analysis was conducted among patients with traumatic brain injury (TBI) comparing the ICDPIC tool with ICD-9-CM embedded severity codes. Performance in characterising overall injury severity, by the Injury Severity Score (ISS), was also assessed. RESULTS: The ICDPIC tool generated substantial correlations in thoracic and abdominal trauma (weighted κ 0.87-0.92) and in head and neck trauma (weighted κ 0.76-0.83). The ICDPIC tool captured TBI severity better than ICD-9-CM code embedded severity and offered the advantage of generating a severity value for every patient (rather than having missing data). Its ability to produce an accurate severity score was consistent within each body region as well as overall. CONCLUSIONS: The ICDPIC tool performs well in classifying injury severity and is superior to ICD-9-CM embedded severity for TBI. Use of ICDPIC demonstrates substantial efficiency and may be a preferred tool in determining injury severity for large trauma datasets, provided researchers understand its limitations and take caution when examining smaller trauma datasets.
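For readers unfamiliar with the ISS referenced above: it is derived from per-region AIS scores by summing the squares of the three highest scores from different body regions, with any unsurvivable injury (AIS 6) forcing the maximum value of 75. The sketch below shows that standard calculation; it is not the ICDPIC code itself, and the example region scores are made up.

```python
def injury_severity_score(region_ais):
    """Compute the Injury Severity Score (ISS) from per-body-region AIS values.

    region_ais: dict mapping each of the six ISS body regions to its highest
    AIS severity (0-6). Any AIS of 6 sets the ISS to 75 by convention.
    """
    scores = list(region_ais.values())
    if any(s == 6 for s in scores):
        return 75
    top_three = sorted(scores, reverse=True)[:3]
    return sum(s * s for s in top_three)

# Hypothetical patient: severe head injury, moderate chest and abdominal injuries.
print(injury_severity_score({
    "head_neck": 4, "face": 1, "chest": 3,
    "abdomen": 2, "extremities": 2, "external": 1,
}))  # 4*4 + 3*3 + 2*2 = 29
```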

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem and to a simulated sparse nominal-value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest neighbour methods on sparse case bases.
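One way to picture the idea: standard Shepard interpolation predicts a weighted average of neighbouring solutions using inverse-distance weights, whereas a GSNN-style method selects, from a set of candidate solutions, the one that minimises a distance-weighted error measured in the solution space, which also works when solutions are nominal. The sketch below illustrates that reading of the method under stated assumptions; the function names, the squared-error form and the inverse-distance weighting are mine, not necessarily the authors' exact formulation.

```python
import math

def gsnn_predict(query, cases, problem_dist, solution_dist,
                 candidate_solutions, k=5, p=2):
    """Sketch of a GSNN-style prediction: among candidate solutions, pick the
    one minimising a distance-weighted squared error over the k nearest cases.

    cases: list of (problem, solution) pairs
    problem_dist / solution_dist: metrics on the problem and solution spaces
    candidate_solutions: the (possibly nominal) solution values to choose from
    """
    # Plain k-NN step: the k cases closest to the query in the problem space.
    nearest = sorted(cases, key=lambda c: problem_dist(query, c[0]))[:k]

    def weighted_error(candidate):
        total = 0.0
        for prob, sol in nearest:
            d = problem_dist(query, prob)
            if d == 0:                      # exact problem match dominates
                return solution_dist(candidate, sol) ** 2
            total += (1.0 / d ** p) * solution_dist(candidate, sol) ** 2
        return total

    return min(candidate_solutions, key=weighted_error)

# Toy usage on a nominal solution domain (class labels) with simple metrics.
cases = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((5.0, 5.1), "B")]
euclid = lambda x, y: math.dist(x, y)
label_dist = lambda a, b: 0.0 if a == b else 1.0
print(gsnn_predict((1.1, 1.0), cases, euclid, label_dist, ["A", "B"], k=3))  # A
```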

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a case-base reduction technique which uses a metric defined on the solution space. The technique utilises the Generalised Shepard Nearest Neighbour (GSNN) algorithm to estimate nominal or real-valued solutions in case bases with solution-space metrics. An overview of GSNN and a generalised reduction technique, which subsumes some existing decremental methods such as the Shrink algorithm, are presented. The reduction technique is expressed in terms of a measure of the importance of each case to the predictive power of the case base. A trial test is performed on two case bases of different kinds, with several metrics proposed in the solution space. The tests show that GSNN can outperform standard nearest neighbour methods on this set. Further test results show that a case-removal order based on a GSNN error function can produce a sparse case base with good predictive power.
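A generic decremental reduction scheme of this kind can be pictured as follows: repeatedly remove the case whose deletion degrades leave-one-out predictive error the least, until a target size is reached. This is only an illustration of the general idea; the predict/error callbacks and the greedy loop are my assumptions, not the paper's specific removal order.

```python
def reduce_case_base(cases, predict, error, target_size):
    """Greedy decremental reduction sketch: repeatedly drop the case whose
    removal increases leave-one-out error the least, until target_size is reached.

    predict(case_base, problem) -> estimated solution (e.g. via GSNN)
    error(estimated, actual)    -> non-negative error in the solution space
    """
    def loo_error(base):
        # Leave-one-out error of `base`, measured over all original cases.
        return sum(error(predict([c for c in base if c is not probe], probe[0]),
                         probe[1])
                   for probe in cases)

    kept = list(cases)
    while len(kept) > target_size:
        # Importance of a case = error increase caused by removing it.
        kept.remove(min(kept, key=lambda victim:
                        loo_error([c for c in kept if c is not victim])))
    return kept
```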

Relevance:

30.00%

Publisher:

Abstract:

This paper examines different ways of measuring similarity between software design models for the purpose of software reuse. Current approaches to this problem are discussed, and a set of suitable similarity metrics is proposed and evaluated. Work on the optimisation of weights to increase the competence of a CBR system is presented. A graph matching algorithm and associated metrics capturing the structural similarity between UML class diagrams are presented and demonstrated through an example case.
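As a flavour of what such a similarity metric might look like, the sketch below scores two UML classes by a weighted combination of name match and Jaccard overlap of their attribute and operation names. The weights and field names are hypothetical stand-ins for the kind of values a weight-optimisation step would tune, not the metrics defined in the paper.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def class_similarity(c1, c2, weights=(0.4, 0.3, 0.3)):
    """Illustrative similarity between two UML classes, each described as a dict
    with 'name', 'attributes' and 'operations'. Weights are hypothetical."""
    w_name, w_attr, w_op = weights
    name_sim = 1.0 if c1["name"].lower() == c2["name"].lower() else 0.0
    return (w_name * name_sim
            + w_attr * jaccard(c1["attributes"], c2["attributes"])
            + w_op * jaccard(c1["operations"], c2["operations"]))

print(class_similarity(
    {"name": "Order", "attributes": ["id", "date"], "operations": ["total"]},
    {"name": "Order", "attributes": ["id", "customer"], "operations": ["total", "cancel"]},
))  # 0.4*1.0 + 0.3*(1/3) + 0.3*(1/2) = 0.65
```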

Relevance:

30.00%

Publisher:

Abstract:

The parallelization of real-world, compute-intensive Fortran application codes is generally not a trivial task. If the time to complete the parallelization is to be significantly reduced, an environment is needed that assists the programmer in the various tasks of code parallelization. In this paper the authors present a code parallelization environment in which a number of tools addressing the main tasks, such as code parallelization, debugging and optimization, are available. The ParaWise and CAPO parallelization tools are discussed, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. As user involvement in the parallelization process can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform nearly automatic relative debugging of a program that has been parallelized using the tools. High-quality interprocedural dependence analysis and user-tool interaction are also highlighted; both are vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for benchmark and real-world application codes parallelized with the environment are presented and show its benefits.

Relevance:

30.00%

Publisher:

Abstract:

This paper examines different ways of measuring similarity between software design models for Case-Based Reasoning (CBR) to facilitate reuse of software design and code. The paper considers structural and behavioural aspects of similarity between software design models. Similarity metrics for comparing static class structures are defined and discussed. A graph representation of UML class diagrams and corresponding similarity measures for UML class diagrams are defined. A full-search graph matching algorithm for measuring the structural similarity of class diagrams, based on the identification of the Maximum Common Sub-graph (MCS), is presented. Finally, a simple evaluation of the approach is presented and discussed.

Relevance:

30.00%

Publisher:

Abstract:

This paper is believed to be the first documented account of a full adoption of lean by a software company. Lean techniques were devised by Toyota and other manufacturers over the last 50 years. The techniques are termed lean because they require fewer resources to produce more product at exceptional quality. Lean ideas have also been successful in service industries and product development. Applying lean to software has been advocated for over 10 years. Timberline, Inc. started its lean initiative in Spring 2001, and this paper records the company's journey, results and lessons learned up to Fall 2003. This case study demonstrates that lean thinking can work successfully for software developers. It also indicates that the extensive lean literature is a valuable source of new ideas for software engineering.

Relevance:

30.00%

Publisher:

Abstract:

Changes to software requirements occur during initial development and subsequent to delivery, posing a risk to cost and quality while at the same time providing an opportunity to add value. Provision of a generic change source taxonomy will support requirements change risk visibility, and also facilitate richer recording of both pre- and post-delivery change data. In this paper we present a collaborative study to investigate and classify sources of requirements change, drawing comparison between those pertaining to software development and maintenance. We begin by combining evolution, maintenance and software lifecycle research to derive a definition of software maintenance, which provides the foundation for empirical context and comparison. Previously published change ‘causes’ pertaining to development are elicited from the literature, consolidated using expert knowledge and classified using card sorting. A second study incorporating causes of requirements change during software maintenance results in a taxonomy which accounts for the entire evolutionary progress of applications software. We conclude that the distinction between the terms maintenance and development is imprecise, and that changes to requirements in both scenarios arise due to a combination of factors contributing to requirements uncertainty and events that trigger change. The change trigger taxonomy constructs were initially validated using a small set of requirements change data, and deemed sufficient and practical as a means to collect common requirements change statistics across multiple projects.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or per-application basis. Various session-level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analyzed. The results show that these negotiation schemes can be used for the benefit of both end users and network providers, achieving the highest individual SLA optimization in terms of key Quality of Service (QoS) metrics and price. The inherent characteristics of software agents, such as autonomy, adaptability and social ability, offer many advantages in this dynamic, complex and distributed network environment, especially when performing SLA definition, negotiation and brokering tasks. This article also presents a service broker prototype based on Fujitsu's Phoenix Open Agent Mediator (OAM) agent technology, which was used to demonstrate a range of SLA brokering scenarios.
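The kind of session-level negotiation described above can be pictured as a simple admission check over a bandwidth timeline: accept the requested reservation if capacity is free for the whole interval, otherwise counter-offer a later start time. The sketch below is a toy illustration under those assumptions; the data structures and the counter-offer policy are mine, not the architecture's actual protocol.

```python
def negotiate(request, free_bandwidth):
    """Toy session-level SLA negotiation: accept the reservation if bandwidth is
    free over the whole requested interval, otherwise counter-offer the earliest
    later start time that fits, else reject. Structure is illustrative only.

    request: dict with 'bw' (Mbit/s), 'start' and 'duration' (in time slots)
    free_bandwidth: spare capacity (Mbit/s) per time slot
    """
    def fits(start):
        window = free_bandwidth[start:start + request["duration"]]
        return (len(window) == request["duration"]
                and all(b >= request["bw"] for b in window))

    if fits(request["start"]):
        return {"status": "accept", **request}
    for start in range(request["start"] + 1,
                       len(free_bandwidth) - request["duration"] + 1):
        if fits(start):
            return {"status": "counter-offer", "bw": request["bw"],
                    "start": start, "duration": request["duration"]}
    return {"status": "reject"}

# A 3-slot, 20 Mbit/s session requested at slot 1 is counter-offered at slot 2.
print(negotiate({"bw": 20, "start": 1, "duration": 3},
                [30, 10, 25, 25, 40, 40, 40, 5]))
```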

Relevance:

30.00%

Publisher:

Abstract:

Cloud services are exploding, and organizations are converging their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and high-security networking is of increasing importance. Network operators, and service and product providers, require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. In this article, the question of how to achieve a successful carrier-grade network with software-defined networking is raised. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, with potential solution directions proposed.

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. In particular, a research topic of particular relevance in telecommunications nowadays is the design and implementation of fourth-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the types of multimedia applications (e.g. YouTube and Skype) to be available in the near future. 4G wireless communication systems will therefore be of paramount importance to the development of the information society. As 4G wireless services continue to grow, they will put more and more pressure on spectrum availability. There is worldwide recognition that current methods of spectrum management have reached their limit and are no longer optimal, so new paradigms must be sought. Studies show that most of the assigned spectrum is under-utilized; the problem in most cases is thus inefficient spectrum management rather than spectrum shortage. There are currently trends towards a more liberalized approach to spectrum management, tightly linked to what is commonly termed Cognitive Radio (CR). Furthermore, conventional deployments of 4G wireless systems (one base station per cell, with mobiles deployed around it) are known to have problems in providing fairness (users closer to the base station benefit more than cell-edge users) and in covering zones affected by shadowing, so the use of relays has been proposed as a solution. To evaluate and analyse the performance of 4G wireless systems, software tools are normally used. Such tools have matured considerably in recent years, and their role in providing a high-level evaluation of proposed algorithms and protocols is now more important than ever. System-level simulation (SLS) tools provide a fundamental and flexible way to test all the envisioned algorithms and protocols under realistic conditions, without having to deal with the problems of live networks or reduced-scope prototypes. Furthermore, these tools allow network designers to rapidly collect a wide range of performance metrics that are useful for the analysis and optimization of different algorithms. This dissertation proposes the design and implementation of a conventional system-level simulator (SLS), which is afterwards enhanced for 4G wireless technologies, namely cognitive radio (IEEE 802.22) and relays (IEEE 802.16j). The SLS is then used for the analysis of proposed algorithms and protocols.
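To give a feel for the smallest building block of such a system-level simulator, the sketch below drops users uniformly in a single cell, applies a free-space path-loss model and reports per-user Shannon throughput as a QoS metric. It is only an illustrative toy (single cell, no interference, no relays, and all parameter values are assumed), not the simulator developed in the dissertation.

```python
import math, random

def downlink_snapshot(n_users, cell_radius_m=500.0, tx_power_dbm=46.0,
                      noise_dbm=-104.0, bandwidth_hz=10e6, freq_mhz=2000.0):
    """Minimal SLS snapshot (illustrative): drop users uniformly over one cell,
    apply free-space path loss, and report per-user Shannon throughput."""
    throughputs = []
    for _ in range(n_users):
        d = cell_radius_m * math.sqrt(random.random())   # uniform over the disc
        # Free-space path loss in dB (distance in km, frequency in MHz).
        pl_db = 32.45 + 20 * math.log10(max(d, 1.0) / 1000.0) + 20 * math.log10(freq_mhz)
        snr = 10 ** ((tx_power_dbm - pl_db - noise_dbm) / 10.0)
        throughputs.append(bandwidth_hz * math.log2(1 + snr))
    return throughputs

rates = downlink_snapshot(100)
print(f"mean {sum(rates) / len(rates) / 1e6:.1f} Mbit/s, "
      f"cell edge (~5th pct) {sorted(rates)[5] / 1e6:.1f} Mbit/s")
```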

Relevance:

30.00%

Publisher:

Abstract:

In the current landscape of educational software development, it is important that development processes are appropriate to, and compatible with, the context in which this type of resource will be used. It is therefore important to continuously improve development processes and to carry out evaluation so as to guarantee their quality and economic viability. This study proposes a Hybrid User-Centred Development Methodology (Metodologia Híbrida de Desenvolvimento Centrado no Utilizador, MHDCU) applied to educational software. It is a simple, iterative and incremental development process whose foundations are the principles of User-Centred Design specified in ISO 13407 of the International Organization for Standardization. At its base lies the disciplined structure of development processes, together with practices and values of agile software development methods. The process consists of four main phases: planning (didactic script), design (storyboard), implementation, and maintenance/operation. Prototyping and evaluation are carried out transversally throughout the whole process. The methodology was implemented in a small and medium-sized enterprise that develops educational resources, with the aim of producing educational resources of recognised quality that are at the same time economically viable. The first resource developed with this methodology was the Courseware Sere – "O Ser Humano e os Recursos Naturais" (The Human Being and Natural Resources). The work followed a mixed research and development methodology intended to describe and analyse/evaluate a methodology for developing educational software, i.e., the process as well as the final product. The study is essentially descriptive and exploratory. The software development methodology (first research question) was proposed essentially on the basis of an integrative review of the specialised literature and of the results that emerged from Phases 2 and 3. From an exploratory point of view, the technical and didactic potential of the first version of the software included in the Courseware Sere was evaluated (second research question), and the strengths and weaknesses of the methodology used for its development were analysed (third research question). Data were collected through two questionnaire surveys and direct participant observation (mediated by the Moodle platform), and analysed using descriptive statistics and content analysis. The results indicate that the resource developed has technical and didactic quality. Regarding the analysis of the Hybrid User-Centred Development Methodology, some improvements were proposed concerning user involvement and the introduction of new methods. Although some limitations were identified, this project enabled the company to significantly improve its resource development processes (even those that are not computer-based) and to expand its portfolio with the development of the Courseware Sere.

Relevance:

30.00%

Publisher:

Abstract:

Flexible radio transmitters based on the Software-Defined Radio (SDR) concept are gaining increased research importance due to the unparalleled proliferation of new wireless standards operating at different frequencies, using dissimilar coding and modulation schemes, and targeted at different ends. In this new wireless communications paradigm, the physical layer of the radio transmitter must be able to support the simultaneous transmission of multi-band, multi-rate, multi-standard signals, which in practice is very hard or very inefficient to implement using conventional approaches. Nevertheless, the latest developments in this field include novel all-digital transmitter architectures in which the radio datapath is digital from the baseband up to the RF stage. Such a concept is inherently highly flexible and represents an important step towards the development of SDR-based transmitters. However, implementing such a radio for real-world communications scenarios is a challenging task, and a few key limitations are still preventing a wider adoption of this concept. This thesis aims to address some of these limitations by proposing and implementing innovative all-digital transmitter architectures with inherently higher flexibility and integration, while also improving important figures of merit such as coding efficiency, signal-to-noise ratio, usable bandwidth, and in-band and out-of-band noise. In the first part of this thesis, the concept of transmitting RF data using an entirely digital approach based on pulsed modulation is introduced. A comparison between several implementation technologies is also presented, showing that FPGAs provide an interesting compromise between performance, power efficiency and flexibility, which makes them an attractive enabling technology for pulse-based all-digital transmitters. Following this discussion, the fundamental concepts of pulsed modulators, their key advantages, main limitations and typical enhancements suitable for all-digital transmitters are presented. Recent advances in the two most common classes of pulse-modulated transmitters, namely RF-level and baseband-level, are introduced, along with several examples of state-of-the-art architectures found in the literature. The core of this dissertation, containing the main developments achieved during this PhD work, is then presented and discussed. The first key contribution to the state of the art is the development of a novel ΣΔ-based all-digital transmitter architecture capable of multi-band and multi-standard data transmission in a very flexible and integrated way, where the pulsed RF output, operating in the microwave frequency range, is generated inside a single FPGA device. A further contribution regarding the simultaneous transmission of multiple RF signals is then introduced, with novel all-digital transmitter architectures that take advantage of the multi-gigabit data serializers available on current high-end FPGAs to transmit multiple independent RF carriers in a time-interleaved fashion. Further improvements to this design approach led to a two-stage up-conversion transmitter architecture enabling fine frequency tuning of concurrent multichannel, multi-standard signals.
Finally, two key limitations of current all-digital transmitter approaches are addressed, namely the poor coding efficiency and the combined high quality factor and tunability requirements of the RF output filter. The adopted design approach, based on polyphase multipath circuits, made it possible to create a new FPGA-embedded agile transmitter architecture that significantly improves important figures of merit, such as coding efficiency and SNR, while maintaining the high flexibility required to support multichannel, multimode data transmission.
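At the heart of the ΣΔ-based architectures mentioned above is the idea of turning an oversampled baseband signal into a two-level pulse train whose local average tracks the input while quantisation noise is shaped towards high frequencies. The sketch below shows a textbook first-order sigma-delta modulator to illustrate that principle; it is not the transmitter design developed in the thesis.

```python
def sigma_delta_1bit(samples):
    """First-order sigma-delta modulator sketch: encode an oversampled signal in
    [-1, 1] as a two-level (+1/-1) pulse train of the kind a 2-level all-digital
    RF stage could drive. Illustrative only, not the thesis architecture."""
    integrator, previous, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - previous      # accumulate the quantisation error
        previous = 1.0 if integrator >= 0.0 else -1.0
        bits.append(previous)
    return bits

# The mean of the bitstream tracks the input amplitude.
bits = sigma_delta_1bit([0.25] * 256)
print(sum(bits) / len(bits))   # close to 0.25
```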