987 results for software implementation
Resumo:
Ensuring the correctness of software has been a major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received much attention in recent years, with several methods, techniques, and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures expressed in SAM. For formal verification, the technique applied was model checking, and the model checker of choice was Spin: a SAM model is formally translated into a model in the input language of Spin and verified for correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level.
Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to the testing and verification of SAM models with the aid of the SAM tool.
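The Petri-net semantics underlying SAM's behavioral models can be illustrated with a minimal token-game sketch. All names and the handshake example below are illustrative, not part of the SAM tool or its actual data structures:

```python
# Minimal sketch of Petri net firing semantics (illustrative only).
# A net is a marking (tokens per place) plus transitions, each with
# input and output places.
from typing import Dict, List, Tuple

Marking = Dict[str, int]
Transition = Tuple[str, List[str], List[str]]  # (name, inputs, outputs)

def enabled(m: Marking, t: Transition) -> bool:
    """A transition is enabled when every input place holds a token."""
    _, inputs, _ = t
    return all(m.get(p, 0) >= 1 for p in inputs)

def fire(m: Marking, t: Transition) -> Marking:
    """Firing consumes one token per input place and produces one per output place."""
    _, inputs, outputs = t
    m2 = dict(m)
    for p in inputs:
        m2[p] -= 1
    for p in outputs:
        m2[p] = m2.get(p, 0) + 1
    return m2

# Toy connector behavior: a request/acknowledge handshake.
t_send: Transition = ("send", ["idle"], ["waiting"])
t_ack: Transition = ("ack", ["waiting"], ["idle"])

m0: Marking = {"idle": 1, "waiting": 0}
m1 = fire(m0, t_send)  # {'idle': 0, 'waiting': 1}
```

Exhaustively exploring the markings reachable by `fire` is, in essence, what a model checker like Spin does over the translated model.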
Resumo:
This study examined the motivation of college and university faculty to implement service-learning into their traditional courses. The benefits derived by faculty, as well as issues of maintenance, including supports and obstacles, were also investigated in relation to their impact on motivation. The focus was on generating theory from the emerging data. Data were collected from interviews with 17 faculty teaching courses that included a service-learning component. A maximum variation sampling of participants from six South Florida colleges and universities was utilized. Faculty participants represented a wide range of academic disciplines, faculty ranks, and years of experience in teaching and using service-learning, as well as gender and ethnic diversity. For data triangulation, a focus group with eight additional college faculty was conducted, and documents collected during the interviews, including course syllabi and institutional service-learning handbooks, were examined. The interviews were transcribed and coded using traditional methods as well as with the assistance of the computer-assisted qualitative data analysis software Atlas.ti. The data were organized into five major categories, with themes and sub-themes emerging for each. While intrinsic or personal factors along with extrinsic factors all serve to influence faculty motivation, the study's findings revealed that the primary factors influencing faculty motivation to adopt service-learning were intrinsic or personal in nature. These factors included: (a) past experiences, (b) personal characteristics, including the value of serving, (c) involvement with community service, (d) interactions and relationships with peers, (e) benefits to students, (f) benefits to teaching, and (g) perceived career benefits.
Implications and recommendations from the study encompass suggestions for administrators in higher education institutions to support and encourage faculty adoption of service-learning, including a well-developed infrastructure; incentives, particularly during the initial implementation period; rewards recognizing the academic nature of service-learning; and support for the development of peer relationships among service-learning faculty.
Resumo:
Computing devices have become ubiquitous in our technologically advanced world, serving as vehicles for software applications that provide users with a wide array of functions. Among these applications is electronic learning software, which is increasingly being used to educate and evaluate individuals ranging from grade school students to career professionals. This study will evaluate the design and implementation of user interfaces in such software. Specifically, it will explore how these interfaces can be developed to facilitate the use of electronic learning software by children. To do this, research will be performed in the area of human-computer interaction, focusing on cognitive psychology, user interface design, and software development. This information will be analyzed in order to design a user interface that provides an optimal user experience for children. Children will then test this interface, as well as existing applications, in order to measure its usability. The objective of this study is to design a user interface that makes electronic learning software more usable for children, facilitating their learning process and increasing their academic performance. The study will be conducted using the Adobe Creative Suite to design the user interface and an Integrated Development Environment to implement functionality. These are digital tools available on computing devices such as desktop computers, laptops, and smartphones, which will be used for the development of the software. By using these tools, I hope to create a user interface for electronic learning software that promotes usability while maintaining functionality. This study will address the increasing complexity of computing software seen today, an issue that has arisen due to the progressive implementation of new functionality.
This issue is having a detrimental effect on the usability of electronic learning software, increasing the learning curve for targeted users such as children. As we make electronic learning software an integral part of educational programs in our schools, it is important to address this issue in order to guarantee children a successful learning experience.
Resumo:
Automated acceptance testing is higher-level testing of software, performed by a script separate from the software itself, to check whether the system meets the requirements of the business clients. This project is a study of the feasibility of acceptance tests written according to Behavior Driven Development principles. The project includes an implementation part in which automated acceptance testing was written for the Touch-point web application developed by Dewire (a software consulting company) for Telia (a telecom company), based on the requirements received from the customer (Telia). The automated acceptance testing uses the Cucumber-Selenium framework, which enforces Behavior Driven Development principles. The purpose of the implementation is to verify the practicability of this style of acceptance testing. From the completed implementation, it was concluded that all real-world requirements from the customer can be converted into executable specifications, and that the process was not at all time-consuming or difficult for a low-experience programmer such as the author. The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of some Gherkin examples followed by questions that involve making changes to those examples. The survey had three parts: the first easy, the second medium, and the third most difficult. It also had a linear scale from 1 to 5 to rate the difficulty level of each part, where 1 stood for very easy and 5 for very difficult. The time at which participants began the survey was also recorded, in order to calculate the total time they took to learn the material and answer the questions. The survey was taken by 18 employees of Dewire whose primary working role was programmer, tester, or project manager. In the results, testers and project managers were grouped together as non-programmers.
The survey concluded that Gherkin is very easy and quick to learn; participants rated it as very easy.
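The Given/When/Then skeleton that makes Gherkin easy to learn can be sketched with a toy parser. This is purely illustrative: Cucumber's real Gherkin parser handles much more (tags, tables, scenario outlines, doc strings), and the feature text below is invented, not from the Touch-point project:

```python
# Toy parser for the Given/When/Then skeleton of a Gherkin scenario
# (illustrative only; not Cucumber's actual parser).
STEP_KEYWORDS = ("Given", "When", "Then", "And", "But")

def parse_scenario(text: str) -> dict:
    scenario = {"name": "", "steps": []}
    last_keyword = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Scenario:"):
            scenario["name"] = line[len("Scenario:"):].strip()
            continue
        for kw in STEP_KEYWORDS:
            if line.startswith(kw + " "):
                # 'And'/'But' inherit the previous step's keyword, as in Gherkin.
                keyword = last_keyword if kw in ("And", "But") else kw
                last_keyword = keyword
                scenario["steps"].append((keyword, line[len(kw) + 1:]))
                break
    return scenario

feature = """
Scenario: Customer logs in
  Given a registered customer
  When they submit valid credentials
  Then the dashboard is shown
  And a login event is recorded
"""

s = parse_scenario(feature)
```

In Cucumber, each parsed step is then matched against a step definition, whose body drives the system under test (here, via Selenium).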
Resumo:
First, a short overview is given of the role of document servers in the scientific publication process. Then, institutional repositories are introduced through their key features and the benefits of establishing them as a central repository in the university context. A specific solution was chosen based on the requirements of the University Library of Kassel, Germany. The software DSpace was chosen, but needed to be extended with internationalization and with use of the urn:nbn scheme as persistent identifier. DSpace's features are briefly described, followed by the process of reverse engineering carried out to establish the requirements for implementing the missing functionality. Subsequent tasks implement the needed features, using Sun's Standard Tag Library for internationalization and some modifications in two classes for use of the urn:nbn scheme as persistent identifier. Finally, a short view is taken of the future of institutional repositories, and some local long-term objectives for DSpace are discussed.
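The general shape of an NBN persistent identifier can be sketched with a simple validator. The pattern below is a rough simplification for illustration only (the authoritative syntax is defined by the URN:NBN specification), and the example URN is hypothetical:

```python
import re

# Rough shape of an NBN URN as used by national libraries, e.g.
#   urn:nbn:de:<subnamespace...>-<local identifier>
# Simplified for illustration; not the authoritative grammar.
NBN_RE = re.compile(
    r"^urn:nbn:[a-z]{2}(:[a-z0-9]+)*-[a-z0-9()+,.:=@;$_!*'%/?#-]+$",
    re.IGNORECASE,
)

def is_nbn(urn: str) -> bool:
    """Check whether a string looks like a urn:nbn identifier."""
    return NBN_RE.match(urn) is not None

ok = is_nbn("urn:nbn:de:hebis:34-2006110215137")   # hypothetical example URN
bad = is_nbn("doi:10.1000/182")                     # a DOI, not an NBN
```

A repository assigning such identifiers would typically generate the local part from an internal item ID and register the URN with the responsible national library.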
Resumo:
Variability management is one of the major challenges in software product line adoption, since it needs to be efficiently managed at various levels of the software product line development process (e.g., requirement analysis, design, implementation, etc.). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which in many projects can reach the order of thousands of variability points, along with the dependency relationships that exist among them. These have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped in understanding the major limitations of existing tools. Based on the findings, a novel approach was created for managing variability that employed two main principles for supporting scalability. First, the separation-of-concerns principle was employed by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on various established usability assessment best practices and standards.
The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
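The separation-of-concerns principle described above can be sketched as concern-specific views over one shared model: each view filters the model down to the variability points tagged with a given concern. The model structure, feature names, and concern tags below are hypothetical, invented purely to illustrate the idea:

```python
# Sketch of concern-specific views over a variability model
# (hypothetical structure; not the tool described in the study).
from typing import Dict, List, Optional, Set, Tuple

# feature name -> (parent feature, concerns the feature belongs to)
MODEL: Dict[str, Tuple[Optional[str], Set[str]]] = {
    "Engine":        (None,          {"hardware"}),
    "FuelInjection": ("Engine",      {"hardware"}),
    "Diagnostics":   ("Engine",      {"hardware", "software"}),
    "RemoteUpdate":  ("Diagnostics", {"software"}),
}

def view(model: Dict[str, Tuple[Optional[str], Set[str]]],
         concern: str) -> List[str]:
    """Return only the variability points visible in one concern's view."""
    return sorted(name for name, (_, tags) in model.items()
                  if concern in tags)

software_view = view(MODEL, "software")
hardware_view = view(MODEL, "hardware")
```

A visualization front end would then lay out only the selected subset (e.g., as a hyperbolic tree), keeping each rendered view far smaller than the full model.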
Resumo:
New technologies appear all the time, and their use can bring countless benefits both to those who use them directly and to society as a whole. In this vein, the State can also use information and communication technologies to improve the level of services rendered to citizens, to give society a better quality of life, and to optimize public spending by focusing it on the main necessities. Hence there is much research on Electronic Government (e-Gov) policies and their main effects on citizens and society as a whole. This research studies the concept of Electronic Government and seeks to understand the process of implementing Free Software in the agencies of the Direct Administration of Rio Grande do Norte. Moreover, it deepens the analysis to identify whether this deployment reduces costs for the state treasury, and aims to identify the participation of Free Software in the Administration and the foundations of the Electronic Government policy in this State. Through qualitative interviews with technology coordinators and managers in three State Secretariats, it was possible to map the paths the Government has been taking to endow the State with technological capacity. It was perceived that Rio Grande do Norte is still immature with respect to electronic government (e-Gov) practices and Free Software, with few agencies having concrete and viable initiatives in this area. The State still lacks a strategic definition of the role of Technology, as well as greater investment in staff and equipment infrastructure.
Advances were also observed, such as the creation of a normative agency, the CETIC (State Council of Information and Communication Technology); the Technology Master Plan, which provides a necessary diagnosis of the Technology situation in the State and proposes several goals for the area; the delivery of a postgraduate course for Technology managers; and BrOffice (OpenOffice) training for 1,120 public servants.
Resumo:
Software engineering best practices can significantly improve software development. However, implementing best practices requires skilled professionals, financial investment, and technical support to facilitate implementation and achieve the corresponding improvement. In this paper, we propose a protocol for designing techniques to implement software engineering best practices. The protocol includes the identification and selection of the process to improve, the study of standards and models, and the identification of the best practices associated with the process and of possible implementation techniques. In addition, technique-design activities are defined in order to create or adapt techniques for implementing best practices in software development.
Resumo:
Internship report presented to the Escola Superior de Educação of the Instituto Politécnico de Castelo Branco in fulfillment of the requirements for the Master's degree in Pre-School Education and Teaching of the 1st Cycle of Basic Education.
Resumo:
Internship report presented to the Escola Superior de Educação of the Instituto Politécnico de Castelo Branco in fulfillment of the requirements for the Master's degree in Pre-School Education and Teaching of the 1st Basic Cycle.
Resumo:
The document begins by describing the budget problems of information units and the high cost of commercial software specializing in library automation. It describes the origins of free software and its meaning, and identifies three levels of library automation: catalog automation, generation of repositories, and full automation. It surveys the various free software applications for each level and discusses a number of advantages and disadvantages of using these products. It concludes that an automation project is hard but full of satisfaction, emphasizing that there is no cost-free project: even if free software itself is free of charge, there are other costs related to implementation, training, and keeping the project moving forward.
Resumo:
Introduction: Bone scintigraphy is one of the most frequent examinations in Nuclear Medicine. This medical imaging modality requires an appropriate balance between image quality and radiation dose: the images obtained must contain the minimum number of counts needed for quality considered sufficient for diagnostic purposes. Objective: The main objective of this study is the application of the Enhanced Planar Processing (EPP) software to bone scintigraphy examinations of patients with breast and prostate carcinoma presenting bone metastases. The aim is to evaluate the performance of the EPP algorithm in clinical practice, in terms of quality and diagnostic confidence, when the acquisition time is reduced by 50%. Materials and Methods: This investigation took place in the department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre. Fifty-one patients with suspected bone metastases were administered 500 MBq of technetium-99m-labelled methylene diphosphonate. Each patient underwent two image acquisitions: the first followed the department's standard protocol (scan speed = 8 cm/min), and in the second the acquisition time was halved (scan speed = 16 cm/min). The images acquired with the second protocol were processed with the EPP algorithm. All images underwent objective and subjective evaluation. For the subjective analysis, three Nuclear Medicine physicians evaluated the images in terms of lesion detectability, image quality, diagnostic acceptability, lesion localization, and diagnostic confidence. For the objective evaluation, two regions of interest were selected, one located in the middle third of the femur and the other in the adjacent soft tissue, in order to obtain signal-to-noise ratio, contrast-to-noise ratio, and coefficient of variation values.
Results: The results show that the images processed with the EPP software offer physicians sufficient diagnostic information for detecting metastases, since no statistically significant differences were found (p>0.05). Furthermore, inter-observer agreement between these images and the images acquired with the standard protocol was 95% (k=0.88). Regarding image quality, on the other hand, statistically significant differences were found when the imaging modalities were compared with each other (p≤0.05). Regarding diagnostic acceptability, no statistically significant differences were found between the images acquired with the standard protocol and the images processed with the EPP software (p>0.05), with inter-observer agreement of 70.6%. However, statistically significant differences were found between the images acquired with the standard protocol and the images acquired with the second protocol but not processed with the EPP software (p≤0.05). In addition, no statistically significant differences (p>0.05) were found in terms of signal-to-noise ratio, contrast-to-noise ratio, or coefficient of variation between the images acquired with the standard protocol and the images processed with EPP. Conclusion: From the results of this study, it can be concluded that the EPP algorithm, developed by Siemens, offers the possibility of reducing acquisition time by 50% while maintaining image quality considered sufficient for diagnostic purposes. Besides increasing patient satisfaction, the use of this technology is quite advantageous for the department's workflow.
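The three objective metrics computed from the two regions of interest can be sketched as follows. The abstract does not give the exact formulas used in the study; the definitions below are the common ones, and the count values are invented for illustration:

```python
# Common definitions of the objective image-quality metrics mentioned above
# (signal-to-noise ratio, contrast-to-noise ratio, coefficient of variation).
# The exact formulas used in the study are not stated in the abstract.
from statistics import mean, stdev
from typing import Sequence

def snr(roi: Sequence[float]) -> float:
    """Signal-to-noise ratio: mean counts over their standard deviation."""
    return mean(roi) / stdev(roi)

def cnr(roi: Sequence[float], background: Sequence[float]) -> float:
    """Contrast-to-noise ratio: mean difference over background noise."""
    return abs(mean(roi) - mean(background)) / stdev(background)

def cv(roi: Sequence[float]) -> float:
    """Coefficient of variation: relative dispersion of the counts."""
    return stdev(roi) / mean(roi)

# Hypothetical pixel counts for the two ROIs described in the study:
femur = [120.0, 118.0, 125.0, 122.0]        # middle third of the femur
soft_tissue = [30.0, 28.0, 33.0, 29.0]      # adjacent soft tissue
```

Comparing these values between the standard-protocol images and the EPP-processed half-time images is what the objective evaluation amounts to.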