947 results for component-based software development
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet requirements of deadline, cost and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and avoids repeated errors and rework. This thesis therefore presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: literature review, systematic review and analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study, conducted in a real case, and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Following the drop in estrogen during menopause, some women begin to lose bone mass at more than 1% per year, reaching the end of five years with a loss greater than 25%. In this regard, factors such as older age, low calcium intake and premature menopause favor the onset of osteoporosis. Preventive measures such as nutritional counseling toward a proper diet, and the support of technology through applications that assess dietary intake, are essential. Thus, this study aimed to develop an application for the Android® platform focused on the evaluation of the nutritional and organic conditions involved in bone health and the risks of developing osteoporosis in postmenopausal women. To achieve this goal, we studied 72 women aged 46-79 years from the physical exercise for bone health program of the Laboratory for Research in Biochemistry and Densitometry of the Federal Technological University of Paraná. Data were collected in the second half of 2014 through bone densitometry and body composition tests, blood tests, anthropometric data and nutritional assessment. The study included postmenopausal women over 45 years of age with a current diagnosis of osteopenia or primary osteoporosis. Bone mineral density and body composition were assessed with a Dual Energy X-ray Absorptiometry (DXA) device, Hologic Discovery TM Model A. The anthropometric assessment included body mass, height, abdominal circumference, waist circumference and hip circumference. Food consumption was assessed with the 24-hour dietary recall (24HR). Energy and nutrient intake was estimated by tabulating the foods consumed in the Diet Pro 4® software. In a subsample of 30 women with osteopenia/osteoporosis, serum calcium and alkaline phosphatase tests were performed. The results for this group of women (n = 30) showed an average calcium intake of 570 mg/day (± 340). The analysis showed mean serum calcium within the normal range (10.20 mg/dl ± 0.32) and slightly increased mean alkaline phosphatase values (105.40 U/L ± 23.70). Furthermore, there was a significant correlation between protein consumption and the optimal daily intake of calcium (0.375; p-value 0.05). Based on these findings, we developed an early-stage application for the Android® platform, Google®'s operating system, called OsteoNutri. We chose Java with Eclipse®, in which the Android® version of the project was built, the application icons were chosen and the visual editor for building the application layouts was configured. DroidDraw® was used for the development of the three application GUIs. For practical tests we used a mobile phone compatible with the version for which the application was created (4.4 or higher). The prototype was developed in conjunction with the Applications and Instrumentation Development Group (GDAI) of the Federal Technological University of Paraná. This application can thus be considered an important tool for dietary control, allowing closer monitoring of calcium and dietary protein consumption.
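The abstract gives no implementation details of OsteoNutri itself (a Java/Android application); purely as an illustrative sketch of the underlying dietary arithmetic, the Python fragment below totals calcium and protein from a 24-hour recall and compares the total with a reference intake. The food values and the 1200 mg/day reference are assumptions made for the example, not data from the study or from Diet Pro 4.

# Illustrative sketch only: tabulating a 24-hour dietary recall (24HR) against an
# assumed reference calcium intake. Food composition values are placeholders.
REFERENCE_CALCIUM_MG = 1200  # assumed reference for postmenopausal women
recall_24h = [
    # (food item, calcium in mg, protein in g) -- hypothetical entries
    ("skim milk, 1 cup", 300, 8.0),
    ("white cheese, 1 slice", 205, 6.5),
    ("black beans, 1 ladle", 45, 7.0),
]
def summarize_recall(recall):
    calcium = sum(item[1] for item in recall)
    protein = sum(item[2] for item in recall)
    coverage = 100.0 * calcium / REFERENCE_CALCIUM_MG
    return calcium, protein, coverage
calcium_mg, protein_g, pct = summarize_recall(recall_24h)
print(f"Calcium: {calcium_mg} mg ({pct:.0f}% of reference), protein: {protein_g} g")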
Abstract:
This thesis, titled Governance and Community Capitals, explores the kinds of practical processes that have made governance work in three faith-based schools in the Western Highlands of Papua New Guinea (PNG). To date, the nation of PNG has been unable to meet its stated educational goals; however, some faith-based primary schools have overcome educational challenges by changing their local governance systems. What constitutes good governance in developing countries, and how it can be achieved in a PNG schooling context, has received very little scholarly attention. In this study, the subject of governance is approached at the nexus between the administrative sciences and asset-based community development. In this space, the researcher provides an understanding of the contribution that community capitals have made to understandings of local forms of governance in the development context. However, by and large, conceptions of governance have a history of being positioned within a Euro-centric frame, and very little, if anything, is known about the naming of capitals by indigenous peoples. In this thesis, six indigenous community capitals are made visible, expanding the repertoire of extant capitals published to date. The capitals identified and named in this thesis are: Story, Wisdom, Action, Blessing, Name and Unity. In-depth insights into these capitals are provided and, through the theoretical idea of performativity, the researcher advances an understanding of how the habitual enactment of the practical components of the capitals made governance work in this unique setting. The study draws from a grounded and appreciative methodology and is based on a case study design incorporating a three-stage cycle of investigation. The first stage tested the application of an assets-based method to documentary sources of data, including most significant change stories, community mapping and visual diaries. In the second stage, a group process method relevant to a PNG context was developed and employed. The third stage involved building theory from case study evidence, using content analysis, language and metaphorical speech acts as guides for complex analysis. The thesis demonstrates the contribution that indigenous community capitals can make to understanding local forms of governance and how PNG faith-based schools meet their local governance challenges.
Abstract:
The primary goals of this study are to: embed sustainable concepts of energy consumption into part of the existing Computer Science curriculum for English schools; investigate how to motivate 7-to-11-year-old kids to learn these concepts; promote responsible ICT (Information and Communications Technology) use by these kids in their daily life; and raise their awareness of today’s ecological challenges. The sustainability-related ICT lessons developed aim to provoke computational thinking and creativity, fostering understanding of the environmental impact of ICT and of the positive environmental impact of small changes in user energy consumption behaviour. The importance of including sustainability in the Computer Science curriculum stems from the fact that ICT is both a solution to and one of the causes of current world ecological problems. This research follows an Agile software development methodology. In order to achieve the aforementioned goals, sustainability requirements, curriculum requirements and technical requirements are first analysed. Secondly, the web-based user interface is designed. In parallel, a set of three online lessons (video, slideshow and game) is created for the website GreenICTKids.com, taking into account several green design patterns. Finally, the evaluation phase involves the collection of adults’ and kids’ feedback on the following: user interface; contents; user interaction; and the impact on the kids’ sustainability awareness and on their behaviour with technologies. In conclusion, the research outcomes are as follows: 92% of the adults learnt more about energy consumption; 80% of the kids are motivated to learn about energy consumption and found the website easy to use; 100% of the kids understood the contents and liked the website’s visual aspect; and 100% of the kids will try to apply in their daily life what they learnt through the online lessons.
Abstract:
Regardless of the methodology adopted for software development, it encompasses both project management or governance activities and the technical activities inherent to the development of the product itself, such as the requirements demanded, analysis, design, implementation, and the tests or trials that precede its materialization. The present work stems from the interest in designing a methodology for managing the testing and trial phase, based on the model of integration of the activities contemplated in the PMBOK guide, which is compatible with the management functions and technical activities of other methodologies, especially in their testing stage. Hence the importance for project managers of obtaining satisfactory results in this phase, given its direct and significant impact on meeting the estimated schedule and costs, which makes it possible to prevent or mitigate additional time or cost overruns due to rework, avoiding their transfer to the client or their absorption by the software manufacturer. Likewise, ensuring a correct execution of the testing and trial phase guarantees that the project meets quality standards, according to its measurement indicators and to user satisfaction.
Abstract:
A picture tells a thousand words. We all know that. Then why are our development tools showing mainly text with so much obstinacy? Even when visualizations do make it into our tools, they typically do not make it past the periphery. Something is deeply wrong. We argue that visualizations must become pervasive in software development, and to accommodate this goal, the integrated development environments must change significantly.
Abstract:
Doctorate in Management
Abstract:
In carrying out this Applied Research Work, we intend to answer the question: What requirements need to be implemented in a relational database of information security controls for Units, Establishments or Bodies of the Portuguese Army? To answer this central question, it was necessary to subdivide it into four derived questions: 1. What are the main information security dimensions at the organizational level? 2. What are the main information security categories at the organizational level? 3. What are the main information security controls to implement in a military organization? 4. What are the functional requirements needed to implement a database of information security controls in a military organization? To answer these research questions, this work is based on applied research, with the aim of developing a practical application of the knowledge acquired, materialized in a database. Regarding its objective, the research is descriptive, explanatory and exploratory, since it aims to describe the main dimensions, categories and controls of information security, as well as to explain which functional requirements need to be implemented in a database of information security controls. Finally, it also aims to carry out an exploratory study to prove the effectiveness of the database. This research follows the inductive method, starting from particular premises to reach general conclusions; that is, from document analysis and interview surveys, the functional requirements to be implemented will be identified and generalized to all Units, Establishments or Bodies of the Portuguese Army. As for the method of procedure, the comparative method will be used, in order to identify which international information security management standard is the most appropriate to record in the database. Finally, as mentioned above, regarding research techniques, interview surveys will be used to identify the requirements to be implemented, and document analysis to identify the main dimensions, categories or controls to be implemented in a database of information security controls. In a first phase of the research, through document analysis, the main information security dimensions, categories and controls to be applied in Units, Establishments or Bodies of the Portuguese Army are identified, in order to contribute to the successful management of military information security. In addition, through interviews with specialists in information security and in Information Systems in military units, the functional requirements to be implemented in a database of information security controls for a military organization will be identified. Finally, in a second phase, using the revised waterfall software development model, we intend to develop a relational database of information security controls in Microsoft Access, to be deployed in Units, Establishments or Bodies of the Portuguese Army.
Subsequently, after the development of the database, we intend to carry out an exploratory study to validate it, in order to verify whether it meets the needs for which it was developed.
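Purely as an illustrative sketch of the dimension/category/control hierarchy described above (the thesis itself targets Microsoft Access; the table and column names here are assumptions, not the actual schema), a minimal relational layout could look like the following, shown with Python's built-in sqlite3 module:

# Minimal sketch of a relational schema for information-security controls,
# mirroring a dimension -> category -> control hierarchy. SQLite is used only
# for illustration; all names are assumptions.
import sqlite3
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dimension (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE category  (id INTEGER PRIMARY KEY,
                        dimension_id INTEGER NOT NULL REFERENCES dimension(id),
                        name TEXT NOT NULL);
CREATE TABLE control   (id INTEGER PRIMARY KEY,
                        category_id INTEGER NOT NULL REFERENCES category(id),
                        description TEXT NOT NULL,
                        implemented INTEGER NOT NULL DEFAULT 0);
""")
# Hypothetical sample rows
conn.execute("INSERT INTO dimension (id, name) VALUES (1, 'Organizational')")
conn.execute("INSERT INTO category (id, dimension_id, name) VALUES (1, 1, 'Access control')")
conn.execute("INSERT INTO control (category_id, description) VALUES (1, 'Review user access rights periodically')")
for row in conn.execute("""SELECT d.name, c.name, k.description
                           FROM control k
                           JOIN category c ON k.category_id = c.id
                           JOIN dimension d ON c.dimension_id = d.id"""):
    print(row)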
Abstract:
As users continually request additional functionality, software systems will continue to grow in their complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable accuracy in predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, the task of predicting across projects is difficult to achieve. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full benefit of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
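As a minimal baseline illustration of the cross-project problem described above (this is not the adaptation approach proposed in the dissertation; the metric values are synthetic placeholders and scikit-learn is assumed to be available), a model trained on one project's module metrics can be applied unchanged to another project, typically with degraded accuracy:

# Illustrative baseline only: train a fault-prediction model on one project's
# module metrics and apply it unchanged to another project.
from sklearn.linear_model import LogisticRegression
# Columns per module: lines of code, cyclomatic complexity, churn (synthetic values)
project_a_metrics = [[120, 4, 10], [560, 18, 45], [90, 2, 3], [800, 25, 60]]
project_a_faulty  = [0, 1, 0, 1]
project_b_metrics = [[300, 9, 20], [40, 1, 2]]
model = LogisticRegression().fit(project_a_metrics, project_a_faulty)
print(model.predict(project_b_metrics))  # cross-project use: often far less reliable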
Abstract:
Scientific research is increasingly data-intensive, relying more and more upon advanced computational resources to be able to answer the questions most pressing to our society at large. This report presents findings from a brief descriptive survey sent to a sample of 342 leading researchers at the University of Washington (UW), Seattle, Washington in 2010 and 2011, as the first stage of the larger National Science Foundation project “Interacting with Cyberinfrastructure in the Face of Changing Science.” The survey assesses these researchers’ use of advanced computational resources, data, and software in their research. We present high-level findings that describe UW researchers’ demographics, interdisciplinarity, research groups, data use, software and computational use (including software development and use, data storage and transfer activities, and collaboration tools), and computing resources. These findings offer insight into the state of computational resources in use during this time period, as well as a look at the data intensiveness of UW research.
Abstract:
In this master's thesis we describe an implementation proposal for an Augmentative and Alternative Communication framework for developers, with the objective of improving productivity and reducing the implementation time for this type of solution. The proposal is based on a structure of widgets that are configurable in code and can be integrated into new applications, following a philosophy of reusing objects and common features, while also standardizing the code structure in the development of this kind of software. The framework is also intended to give programmers flexibility, allowing new functionalities and widgets to be introduced and new approaches to the software to be tested during research. Its implementation in platform-independent open-source technologies further allows the toolkit's objects to be used on several different operating systems.
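The toolkit is described only at a high level; purely as a conceptual sketch of a configurable, reusable communication widget (written in Python for illustration, with hypothetical names rather than the framework's actual API), one might imagine something like this:

# Conceptual sketch of a configurable, reusable AAC widget; names and interfaces
# are invented for the example, not taken from the toolkit described above.
class PictogramButton:
    """A selectable symbol that emits/speaks a message when activated."""
    def __init__(self, label, message, on_activate=None):
        self.label = label
        self.message = message
        self.on_activate = on_activate or (lambda msg: print(f"[speak] {msg}"))
    def activate(self):
        self.on_activate(self.message)
class CommunicationBoard:
    """A grid of pictogram buttons that host applications can embed and configure."""
    def __init__(self, buttons):
        self.buttons = list(buttons)
    def activate(self, index):
        self.buttons[index].activate()
board = CommunicationBoard([
    PictogramButton("water", "I would like some water"),
    PictogramButton("help", "I need help"),
])
board.activate(0)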
Abstract:
Ever stricter limits on pollutant emissions, together with greater attention to fuel consumption, performance gains and drivability, lead to the development of increasingly complicated engine control algorithms. At the same time, the propulsion unit is becoming an increasingly varied collection of subsystems that must work in unison. The calibration engineer is therefore faced with a multitude of variables and algorithms that must be calibrated and tested, and needs tools that help analyze the engine's behaviour by providing concise, easily accessible results. This work reports the development of a combustion analysis system: the objective was to develop software that provides the best solutions for the analysis of an internal combustion engine, in terms of accuracy of the results, variety of available calculations, ease of use, and integration with other systems through the sharing of the calculated results in real time.
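One quantity such combustion-analysis software typically computes is the indicated mean effective pressure, IMEP = (cyclic integral of p dV) / displaced volume. The sketch below uses synthetic pressure and volume traces rather than real engine data and is only meant to illustrate the kind of calculation involved, not the software described in the thesis.

# Illustrative IMEP calculation from sampled in-cylinder pressure [Pa] and
# volume [m^3] over one cycle; traces are synthetic toy data.
import numpy as np
def imep_bar(pressure_pa, volume_m3):
    # cyclic integral of p dV, approximated with the trapezoidal rule
    work_j = float(np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1]) * np.diff(volume_m3)))
    displaced_m3 = float(volume_m3.max() - volume_m3.min())
    return work_j / displaced_m3 / 1e5  # Pa -> bar
theta = np.linspace(0.0, 4.0 * np.pi, 1440)                 # crank angle, one 4-stroke cycle
volume = 3e-4 + 2.5e-4 * (1.0 - np.cos(theta)) / 2.0        # toy volume trace [m^3]
pressure = 1e5 + 4e6 * np.exp(-((theta - 2.2 * np.pi) ** 2))  # toy pressure trace [Pa]
print(f"IMEP ~ {imep_bar(pressure, volume):.2f} bar")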
Abstract:
Internet of Things systems are pervasive systems that have evolved from cyber-physical into large-scale systems. Due to the number of technologies involved, their software development poses several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability issues. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed over several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment in distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each of them, we propose a tool addressing the underlying challenge. The first tool aims to handle heterogeneity at the transport and application protocol level, the second to manage different data formats, and the third to obtain optimal deployments. To realize the tools, we adopted a linguistic approach, i.e., we provide specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases to show that the tools can be used in practice and that they help achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
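As a conceptual illustration of coding in a protocol- and data-agnostic way (this is not the dissertation's linguistic abstractions; class and function names are invented for the example), application logic can be decoupled from transport and format by small adapters:

# Conceptual sketch only: application code stays protocol- and format-agnostic,
# while adapters hide the concrete transport and data format.
import json
class HttpTransport:
    def send(self, destination, payload: bytes):
        print(f"HTTP POST to {destination}: {payload!r}")
class MqttTransport:
    def send(self, destination, payload: bytes):
        print(f"MQTT publish on {destination}: {payload!r}")
class JsonCodec:
    def encode(self, message: dict) -> bytes:
        return json.dumps(message).encode()
def notify(transport, codec, destination, message):
    """The calling code never mentions a specific protocol or format."""
    transport.send(destination, codec.encode(message))
notify(HttpTransport(), JsonCodec(), "https://example.org/telemetry", {"temp": 21.5})
notify(MqttTransport(), JsonCodec(), "sensors/room1", {"temp": 21.5})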
Abstract:
Nowadays the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software to face the application effectively and efficiently. The development of application-based software on real-time/FPGA hardware can be a good answer to these challenges: the FPGA grants parallel, low-level, high-speed calculation and timing, while the real-time processor can handle high-level calculation layers, logging and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures fit perfectly as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test bench and on-vehicle applications. Effort has been put into building a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario are shown; dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) with custom RCP functionalities, such as water injection and the testing of hydraulic brake control.
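The abstract does not detail the control code hosted on the real-time/FPGA architecture; purely as a sketch of the kind of application-specific function an RCP unit might host (a simple discrete PI controller with invented gains, limits and signal names, not the actual water-injection strategy), the loop below illustrates the idea:

# Illustrative discrete PI control step; all gains, limits and signals are
# invented for the example and do not describe the thesis's control strategy.
def pi_step(setpoint, measurement, integral, kp=0.8, ki=0.2, dt=0.01, out_min=0.0, out_max=1.0):
    error = setpoint - measurement
    integral += error * dt
    output = kp * error + ki * integral
    output = max(out_min, min(out_max, output))  # clamp to actuator range (e.g. duty cycle)
    return output, integral
integral = 0.0
measurement = 0.0
for _ in range(5):
    duty, integral = pi_step(setpoint=0.6, measurement=measurement, integral=integral)
    measurement += 0.5 * (duty - measurement)  # toy plant response
    print(f"duty={duty:.3f} measurement={measurement:.3f}")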
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scales, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text-based and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The exploited data and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
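As a toy illustration of programming language identification framed as text classification (not the thesis's actual models, which also cover image-based approaches and large-scale archives; scikit-learn is assumed to be available and the training snippets are invented), character n-grams over source text already carry a useful signal:

# Illustrative PLI sketch: classify source snippets by language using character
# n-gram features; a real system would learn from large archives such as GitHub.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
snippets = [
    "def main():\n    print('hello')",            # Python
    "#include <stdio.h>\nint main(void) {}",      # C
    "public static void main(String[] args) {}",  # Java
    "import numpy as np\nx = np.zeros(3)",        # Python
]
labels = ["Python", "C", "Java", "Python"]
clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), MultinomialNB())
clf.fit(snippets, labels)
print(clf.predict(['System.out.println("hi");']))  # classify an unseen snippet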