999 results for Testes : Software
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This makes it possible to distribute software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained many users but is still sparsely researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. Later, we present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity while maintaining the other qualities at a good level. Finally, we present currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. 
The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
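The state-caching proxy idea described above can be sketched generically. The following is a minimal illustration in Python, deliberately independent of any real D-Bus binding; all class and method names are invented for illustration. The proxy primes a local cache from the remote object and then keeps it fresh via change notifications, so reads never need an IPC round trip:

```python
class RemoteObject:
    """Stand-in for a remote object behind IPC (illustrative only)."""
    def __init__(self):
        self._props = {"Volume": 50}
        self._subscribers = []

    def get_property(self, name):
        # In a real system this would be an asynchronous IPC round trip.
        return self._props[name]

    def set_property(self, name, value):
        self._props[name] = value
        for callback in self._subscribers:
            callback(name, value)    # publish/subscribe change notification

    def subscribe(self, callback):
        self._subscribers.append(callback)


class CachingProxy:
    """Caches remote state locally and refreshes it from change signals,
    so object inspection is a cheap, synchronous local read."""
    def __init__(self, remote, prop_names):
        self._cache = {n: remote.get_property(n) for n in prop_names}
        remote.subscribe(self._on_changed)

    def _on_changed(self, name, value):
        if name in self._cache:
            self._cache[name] = value

    def get(self, name):
        return self._cache[name]     # local read, no IPC blocking
```

For example, after `obj.set_property("Volume", 80)` fires the change signal, `proxy.get("Volume")` returns the updated value from the local cache without contacting the remote object again.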
Abstract:
Open source and open source software development have been interesting phenomena during the past decade. Traditional business models do not apply to open source, where the actual product is free. However, it is possible to do business with open source, even successfully, but the question is: how? The aim of this study is to find the key factors for successfully making a business out of commercial open source software development. The task is achieved by finding the factors that influence open source projects, finding the relations between those factors, and finding out why some factors explain success better than others. The literature review concentrates first on the background of open innovation, open source and open source software. Then business models, critical success factors and success measures are examined. Based on the existing literature, a framework was created. The framework contains categorized success factors that influence software projects in general as well as open source software projects. The main categories of success factors in software business are community management, technology management, project management and market management. In order to find out which of the factors drawn from the existing literature are the most critical, empirical research was done by conducting unstructured personal interviews. The main finding based on the interviews is that the critical success factors in open source software business do not differ from those in traditional software business or, in fact, from those in any other business. Some factors in the framework came up in the interviews that can be considered key factors: establishing and communicating hierarchy (community management), localization (technology management), good license know-how and IPR management (project management), and effective market management (market management). 
Some critical success factors named by the interviewees are not listed in the framework: low price, a good product and good business model development.
Abstract:
This article aims to carry out a narrative review of systematic reviews of diagnostic test accuracy. A search was performed in the Cochrane Methodology Reviews (Cochrane Reviews of Diagnostic Test Accuracy), Medline and LILACS, together with a manual search of the reference lists of the articles included in the review. The search strategies used, combining subject headings and free-text terms, were the following: 1. in the Cochrane Methodology Reviews: accuracy study "Methodology"; 2. in PubMed: "Meta-Analysis"[Publication Type] AND "Evidence-Based Medicine"[Mesh] AND "Sensitivity and Specificity"[Mesh]; 3. in LILACS: (revisao sistematica) or "literatura de REVISAO como assunto" [subject descriptor] and (sistematica) or "SISTEMATICA" [subject descriptor] and (acuracia) or "SENSIBILIDADE e especificidade" [subject descriptor]. In summary, the methodological preparation and planning of systematic reviews of diagnostic tests goes beyond that employed in systematic reviews of therapeutic interventions. There are many sources of heterogeneity in the designs of diagnostic test studies, which greatly hinders the synthesis (meta-analysis) of their results. To address this problem, there are currently reporting standards, required by the main biomedical journals, for the submission of manuscripts on diagnostic tests.
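The accuracy measures pooled in such reviews, sensitivity and specificity, are derived from a 2x2 table of test results against a reference standard. A minimal sketch in Python (the counts in the example are made up purely for illustration):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Basic accuracy measures from a 2x2 table:
    tp/fp/fn/tn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # proportion of diseased correctly detected
    specificity = tn / (tn + fp)   # proportion of healthy correctly cleared
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio, a summary measure
    return sensitivity, specificity, dor

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives.
sens, spec, dor = diagnostic_accuracy(90, 20, 10, 80)
# sens = 0.9, spec = 0.8, dor = 36.0
```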
Abstract:
Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of the materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most filtration theories require experimental work to be performed in order to obtain the critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is practically impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter. 
It is also possible to create experimental designs for cases where the variables are totally user defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value in filtration testing.
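The core workflow the abstract describes, generating an experimental design and fitting a linear regression model to the measured responses, can be sketched briefly. The following Python snippet is a generic illustration (not LTDoE/LTRead code): it builds a two-level full factorial design in coded units and estimates the main-effect coefficients, exploiting the fact that in an orthogonal ±1 design each coefficient is simply the dot product of a factor column with the responses divided by the number of runs:

```python
from itertools import product

def full_factorial(k):
    """2**k full factorial design in coded units (-1 / +1)."""
    return list(product([-1, 1], repeat=k))

def fit_effects(design, responses):
    """Fit y = b0 + b1*x1 + ... + bk*xk for an orthogonal two-level design.
    Each coefficient is <factor column, responses> / n."""
    n = len(responses)
    intercept = sum(responses) / n
    coeffs = [
        sum(run[j] * y for run, y in zip(design, responses)) / n
        for j in range(len(design[0]))
    ]
    return intercept, coeffs
```

For example, with two factors and responses generated by y = 10 + 2*x1 + 3*x2, the fit recovers the intercept 10 and the coefficients 2 and 3 exactly, since the model is linear and the design orthogonal.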
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definite way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis. The dissertation analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done with process modeling during an approximately five-year period. The research follows the classical engineering research discipline where the current situation is analyzed, a potentially better solution is developed and finally its implications are analyzed. 
The research applies a variety of research techniques ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like light-weight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a light-weight modeling technique, which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to compare different software development situations more easily and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work. However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. 
This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These solutions are shown to be feasible through several case studies where the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
EasyLEED is a program designed for the extraction of intensity-energy spectra from low-energy electron diffraction patterns. It can be used to obtain information about the positions of individual atoms on the surface of a substance. The goal of this thesis is to make easyLEED more useful in LEED research. This is achieved by adding new features to the program, i.e. plotting intensity-energy spectra, setting tracking parameters, and exporting and importing settings and spot location data. A detailed description of these added features, how they were implemented and how they affect the usefulness of the program in research is presented in this thesis. Improving the computational part of the program is not discussed.
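The export/import feature mentioned above amounts to persisting tracking parameters and spot locations in a structured file and reading them back. A minimal Python sketch of such a round trip, using a JSON layout invented here for illustration (easyLEED's actual file format is not specified in this abstract):

```python
import json

def export_settings(path, tracking_params, spot_locations):
    """Save tracking parameters and spot locations to a JSON file.
    The layout here is illustrative, not easyLEED's actual format."""
    with open(path, "w") as f:
        json.dump({"tracking": tracking_params,
                   "spots": spot_locations}, f, indent=2)

def import_settings(path):
    """Read back the tracking parameters and spot locations."""
    with open(path) as f:
        data = json.load(f)
    return data["tracking"], data["spots"]
```

Keeping the settings in a plain-text format like this makes analyses repeatable: the same tracking configuration and spot positions can be reloaded for another diffraction pattern series.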
Abstract:
Scrum is an agile project management approach that has been widely practiced in software development projects. It has been shown to increase quality, productivity, customer satisfaction, transparency and team morale, among other benefits of its implementation. The concept of scrum is based on the concepts of incremental innovation strategies, lean manufacturing, kaizen and iterative development, and is usually contrasted with linear development models such as the waterfall method in the software industry. Traditional approaches to project management such as the waterfall method imply intensive upfront planning and approval of the entire project. Such approaches work well in well-defined, stable environments where all the specifications of the project are known at the beginning. However, they do not tend to work well in uncertain environments where a project requires continuous development and the incorporation of new requirements. The scrum framework was inspired by Nonaka's article about new product development and was later adopted by software development practitioners. This research explores the conditions for, and benefits of, applying the scrum framework beyond software development projects. There are currently only a few case studies on scrum implementation in non-software projects, but there is a noticeable trend towards it in the scrum practitioners' community. The research is based on a real-life-context multiple case study analysis of three different non-software projects. The results of the research showed that in order to succeed with scrum, projects need to satisfy certain necessary and sufficient conditions. Among them, the key factors are uncertainty of the project environment, loosely defined outcomes, commitment of the scrum teams and management support. 
The top advantages of scrum implementation identified in the present research include improved transparency, accountability, team morale, communication, cooperation and collaboration. Further research is advised to validate these findings on a larger sample and to focus on more specific areas of scrum project management implementation.
Abstract:
Corporate decisions to scale agile software development methodologies in offshoring environments have been hindered by the perceived challenges of scaling agile, as agile methodologies are often regarded as suitable only for small projects and co-located teams. Although models such as the Agile Scaling Model (ASM) have been developed for scaling agile along different factors, companies' inability to identify and address the challenges leads to project failure rather than to the benefits of using agile methodologies. This failure can be avoided, when scaling agile in an IT offshoring environment, by determining the key challenges associated with scaling agile in that environment and then preparing strategies to address them. These key challenges can be determined by studying the issues related to offshoring and agile individually, while also considering the positive impact of agile methodologies in an offshoring environment. Possible strategies to tackle these key challenges are then developed according to the nature of the individual challenges, utilizing the benefits of different agile methodologies to address each situation. Thus, in this thesis we propose a strategy based on a hybrid agile method, an increasing trend due to the adaptive nature of agile. The determination of the key challenges and the possible strategies for tackling them are supported by a survey conducted in the researched organization.
Abstract:
The starting point of this study is that the prevailing way of considering the Finnish IT industries and industry information often results in a limited and even skewed picture of the sector. The purpose of the study is to contribute to and increase knowledge and understanding of the status, structure and evolution of the Finnish IT industries as well as the Finnish IT vendor field and competition. The focus is on the software product and IT services industries, which form a crucial part of all ICT industries. This study examines the Finnish IT sector from the production (supply) as well as the market (demand) perspective. The study is based on empirical information from multiple sources. Three research questions were formulated for the study. The first concerns the status of the Finnish IT industries, considered by applying theoretical frameworks. The second research question targets the basis for the future evolution of the Finnish IT industries and, finally, the third the ability of the available definitions and indicators to describe the Finnish IT industries and IT markets. Major structural changes like technological changes and related innovations, globalization and new business models are drivers of the evolution of the IT industries. The findings of this study emphasize the significant role of IT services in the Finnish IT sector and, in connection with that, the ability to combine IT service skills, competences and practices with high-level software skills also in the future. According to the study, the Finnish IT enterprises and their customers have become increasingly dependent on global ecosystems and on platforms, applications and IT services provided by global vendors. As a result, more IT decisions are made outside Finland. In addition, IT companies are facing new competition from outside the IT industries, which brings new substitutes to the market. To respond to the new competition, IT firms seek growth by expanding beyond their traditional markets. 
The changing global division of labor accentuates the need for accurate information about the IT sector but, at the same time, also makes it increasingly challenging to acquire the information needed. One of the main contributions of this study is to provide frameworks for describing the Finnish IT sector and its evolution. These frameworks help combine empirical information from various sources and make it easier to concretize the structures, volumes, relationships and interaction of both the production and the market side of the Finnish IT industry. Some frameworks provide tools to analyze the vendor field, competition and the basis for the future evolution of the IT industries. The observations of the study support the argument that static industry definitions and related classifications do not serve the information needs in dynamic industries such as the IT industries. One of the main messages of this study is to emphasize the importance of understanding the definitions and starting points of different information sources. At the same time, in describing the structure and evolution of the Finnish IT industries, the number of employees has become a more valid and reliable measure than revenue-based indicators.
Abstract:
Objectives: to analyze the relationship between pH values at birth, fetal well-being tests and neonatal outcomes. Methods: 1,346 patients with high-risk pregnancies followed at the Fetal Well-being Unit of HCFMUSP were included. Fetal well-being was assessed by cardiotocography, the fetal biophysical profile and the amniotic fluid index. After delivery, the following newborn parameters were obtained: gestational age at delivery, sex and weight of the newborns, 1- and 5-minute Apgar scores, umbilical artery pH at birth and the occurrence of neonatal death. For the analysis of these neonatal outcomes, the cases were divided into four groups: G1 (pH < 7.05), G2 (pH 7.05 to 7.14), G3 (pH 7.15 to 7.19) and G4 (pH >= 7.20). Results: abnormal cardiotocography was associated with pH values below 7.20 (p = 0.001). Abnormal fetal biophysical profile results (<= 4) became more frequent as pH values decreased (p < 0.001). Adverse neonatal outcomes were related to the presence of acidosis at birth and were selected for the fitting of the logistic regression model. This model showed that the odds ratio for each neonatal condition rises significantly as pH at birth decreases. Conclusions: a significant correlation is observed between pH values at birth and neonatal outcomes, and it is possible to estimate the neonatal risk to which the conceptus is exposed by using the pH at birth.
Abstract:
Objectives: to analyze, in high-risk pregnancies with a diagnosis of oligohydramnios, the results of fetal well-being tests and the perinatal outcomes. Methods: 572 high-risk pregnancies with a diagnosis of oligohydramnios, defined as an amniotic fluid index (AFI) of 5.0 cm or less, were selected retrospectively. Of these, 220 had a diagnosis of severe oligohydramnios (AFI <= 3.0 cm). The fetal well-being tests included antepartum nonstress cardiotocography, the fetal biophysical profile (BPP) and Doppler velocimetry of the umbilical and middle cerebral arteries. Multiple pregnancies, fetal anomalies and premature rupture of membranes were excluded. Results: the group with severe oligohydramnios (AFI <= 3 cm) presented suspect or abnormal antepartum cardiotocography in 23.2% of the cases, abnormal BPP in 10.5%, middle cerebral artery Doppler velocimetry with signs of centralization in 54.5%, small-for-gestational-age newborns in 32.7% and meconium-stained amniotic fluid in 27.9%. These values were significantly higher than those of the group with an AFI between 3.1 and 5.0 cm, which presented suspect or abnormal antepartum cardiotocography in 14.9% of the cases, abnormal BPP in 4.3%, middle cerebral artery Doppler velocimetry with signs of centralization in 33.9%, small-for-gestational-age newborns in 21.0% and meconium-stained amniotic fluid in 16.8% of the cases. Conclusions: characterizing the severity of oligohydramnios makes it possible to discriminate, among high-risk pregnancies, the cases associated with a worse perinatal outcome.
Abstract:
Objectives: to evaluate the maternal and perinatal outcomes of patients submitted to a 100 g glucose tolerance curve, according to three different diagnostic criteria. Methods: a cross-sectional study was carried out, including 210 patients followed at the Instituto Materno-Infantil de Pernambuco (IMIP) who underwent the 100 g oral glucose tolerance test (OGTT) during pregnancy, with a singleton pregnancy, no history of diabetes or carbohydrate intolerance prior to pregnancy, and whose delivery took place at IMIP. They were classified into the following groups: controls, patients with mild hyperglycemia, and gestational diabetes (GD) according to the criteria of Bertini, of Carpenter and Coustan, and of the National Diabetes Data Group (NDDG). These groups were analyzed for an association between group classification and the presence of preeclampsia, large-for-gestational-age (LGA) newborns, and the frequency of cesarean sections and stillbirths, and mean birth weights were also compared. Results: the frequency of GD according to the criteria of Bertini, of Carpenter and Coustan, and of the NDDG was 48.1, 18.1 and 9%, respectively, whereas the frequency of mild hyperglycemia was 10.5%. Patient age increased progressively with the degree of carbohydrate intolerance. The groups did not differ in the frequency of LGA newborns, cesarean sections, stillbirths or mean birth weights. A significant increase in the incidence of preeclampsia was found in patients with hyperglycemia and GD by the Carpenter and Coustan criteria, but not in the other groups. Conclusions: the prevalence of gestational diabetes ranged from 9 to 48% according to the different criteria, but no significant differences in maternal and perinatal outcomes were observed between the groups. Very strict diagnostic criteria may lead to overdiagnosis without a subsequent improvement in the perinatal prognosis.
Abstract:
OBJECTIVES: to establish the frequency of acute toxoplasmosis in pregnant women, the vertical transmission rate and the perinatal outcome of infected fetuses. A further aim was to evaluate the relationship between the main maternal-fetal tests for the diagnosis of toxoplasmosis during pregnancy, as well as the relationship between age group and acute Toxoplasma gondii infection. METHODS: prospective longitudinal study of 32,512 pregnant women submitted to prenatal screening by the Programa de Proteção à Gestante of Mato Grosso do Sul, from November 2002 to October 2003. ELISA (IgG and IgM) and the IgG antibody avidity test were used for the diagnosis of maternal toxoplasmosis, and PCR on amniotic fluid for the diagnosis of fetal infection. Variables were evaluated by their means, while correlations between variables were assessed by the chi-square test and the two-tailed Fisher test in two-way contingency tables. RESULTS: a frequency of 0.42% of acute T. gondii infection was found in the population of pregnant women, with 92% of them previously exposed to the infection and 8% susceptible. In pregnant women with IgM-reactive serology, age ranged from 14 to 39 years, with a mean of 23±5.9 years. There was no statistically significant relationship between age group and acute maternal T. gondii infection (p=0.73). A vertical transmission rate of 3.9% was found. There was a statistically significant relationship (p=0.001) between low IgG avidity (<30%) and the presence of fetal infection, and between high avidity (>60%) and the absence of fetal toxoplasmosis. There was a statistically significant association (p=0.001) between fetal infection (PCR on amniotic fluid) and neonatal infection. CONCLUSIONS: the frequency of acute maternal toxoplasmosis was below that observed in other investigations in Brazil, whereas the vertical transmission rate did not differ from that found in other studies. 
The IgG antibody avidity test, when considered together with gestational age and the date of the examination, proved useful for guiding therapy and assessing the risk of vertical transmission, allowing it to be ruled out when avidity was high before 12 weeks. A positive PCR was associated with a worse neonatal prognosis, proving to be a specific method for the intrauterine diagnosis of fetal infection.