9 results for small software project
in Digital Commons at Florida International University
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are composed of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive. The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality. The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume. Contrary to theory-based expectations, the results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and triggers the need for building new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
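As a hedged illustration of the analysis design described above, the sketch below fits a quadratic regression of commit counts on closure, bridging, and leader centrality while controlling for group size and conversational volume. The variable names and synthetic data are assumptions made for illustration; they are not the dissertation's measures or data.

```python
# Minimal sketch of a quadratic regression with controls; all data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 143  # matches the study population size reported above

# Hypothetical per-project measures; names and distributions are illustrative.
df = pd.DataFrame({
    "closure": rng.uniform(0, 1, n),           # connectedness within the group
    "bridging": rng.poisson(8, n),             # membership ties outside the group
    "centrality": rng.uniform(0, 1, n),        # leader centrality
    "group_size": rng.integers(3, 50, n),      # control
    "conv_volume": rng.integers(50, 3000, n),  # control
})
# Illustrative outcome: commits loosely related to the predictors plus noise.
df["commits"] = (
    50 - 40 * df["closure"] + 2 * df["bridging"]
    + 1.5 * df["group_size"] + rng.normal(0, 20, n)
).clip(lower=0)

# Squared terms allow curvilinear (e.g., inverted-U) effects to be tested.
model = smf.ols(
    "commits ~ closure + I(closure ** 2) + bridging + I(bridging ** 2)"
    " + centrality + I(centrality ** 2) + group_size + conv_volume",
    data=df,
).fit()
print(model.summary())
```

The quadratic terms are what make it possible to detect "moderate is best" patterns, which is why this design suits propositions about moderate levels of closure and leader centrality.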
Abstract:
The subject of dropout prevention and reduction is deservedly receiving attention as a problem that, if not resolved, could threaten our national future. This study investigates a small segment of the overall dropout problem, one with apparently unique features of program design and population selection. The evidence presented here should add to the knowledge bank on this complicated problem. Project Trio was one of a number of dropout prevention programs and activities conducted in Dade County during the 1984-85 and 1985-86 school years, and it is investigated here longitudinally through the end of the 1987-88 school year. It involved 17 junior and senior high schools and 27 programs (10 in the first year and 17 in the second), with over 1,000 total students who had been selected by the schools from a list of "at risk" students provided by the district. Following the classical research design, students were divided approximately evenly into an experimental group and a control group, the latter of which, following standard procedure, took the regular school curriculum. No school had more than 25 students in either group. Each school modified the basic design of the project to accommodate its individual characteristics and the perceived needs of its students; however, all school projects were to include some form of academic enhancement, counseling, and career awareness study. The conclusion of this study was that the control group had a significantly lower dropout rate than the experimental group. Though it is impossible to determine with certainty the reasons for this unexpected result, the evidence presented suggests that one cause may have been inadequate administration at the local level. This study was also a longitudinal investigation of the "at risk" population as a whole over the three- and four-year period, to determine whether academic factors present in student records could be used to identify dropout proneness. A significant correlation was found between dropping out and various measures, including scores on the Quality of School Life Instrument, attendance, grade point averages, mathematics grades, and being overage for grade, all of which are important identifiers for selection into dropout prevention programs.
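As a purely illustrative sketch of the kind of correlation analysis mentioned above, the example below computes point-biserial correlations between a binary dropout indicator and two hypothetical student-record measures; the variables and values are assumptions, not the study's data.

```python
# Point-biserial correlation between a binary dropout flag and continuous
# student-record measures; all data below are synthetic and illustrative.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
n = 1000  # roughly the size of the "at risk" group described above

gpa = rng.normal(2.3, 0.6, n).clip(0.0, 4.0)
attendance = rng.normal(0.85, 0.10, n).clip(0.0, 1.0)
# Illustrative outcome: lower GPA and attendance raise the chance of dropping out.
p_drop = 1 / (1 + np.exp(2.0 * (gpa - 2.0) + 5.0 * (attendance - 0.8)))
dropped_out = (rng.uniform(size=n) < p_drop).astype(int)

for name, measure in [("grade point average", gpa), ("attendance rate", attendance)]:
    r, p = pointbiserialr(dropped_out, measure)
    print(f"{name}: r = {r:.2f}, p = {p:.4f}")
```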
Abstract:
Since the mid-1990s, the United States has experienced a shortage of scientists and engineers, declining numbers of students choosing these fields as majors, and low student success and retention rates in these disciplines. Learning theorists, educational researchers, and practitioners believe that learning environments can be created so that an improvement in the numbers of students who complete courses successfully can be attained (Astin, 1993; Magolda & Terenzini, n.d.; O'Banion, 1997). Learning communities do this by providing high expectations, academic and social support, feedback during the entire educational process, and involvement with faculty, other students, and the institution (Ketcheson & Levine, 1999). A program evaluation of an existing learning community of science, mathematics, and engineering majors was conducted to determine the extent to which the program met its goals and was effective from faculty and student perspectives. The program provided laptop computers, peer tutors, supplemental instruction with and without computer software, small class size, opportunities for contact with specialists in selected career fields, a resource library, and Peer-Led Team Learning. During the two years the project existed, success, retention, and next-course continuation rates were higher than in traditional courses. Faculty and student interviews indicated there were many affective accomplishments as well. Success and retention rates for one learning community class (n = 27) and one traditional class (n = 61) in chemistry were collected and compared using Pearson chi-square procedures (p = .05). No statistically significant difference was found between the two groups. Data from an open-ended student survey about how specific elements of their course experiences contributed to success and persistence were analyzed by coding the responses and comparing the learning community and traditional classes. Substantial differences were found in their perceptions about the lecture, the lab, other supports used for the course, contact with other students, helping them reach their potential, and their recommendation of the course to others. Because of the limitation of small sample size, these differences are reported in descriptive terms.
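A minimal sketch of the kind of chi-square comparison reported above, assuming hypothetical success counts for the learning community class (n = 27) and the traditional class (n = 61); only the class sizes, not the cell counts, are given above.

```python
# Pearson chi-square test on a 2x2 contingency table; counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: learning community class (n = 27), traditional class (n = 61).
# Columns: successful completions, non-completions.
observed = [
    [20, 7],
    [40, 21],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
# At alpha = .05, p > .05 indicates no statistically significant difference,
# consistent with the finding reported above.
```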
Abstract:
The purpose of this research project was to investigate two distinct research questions, one theoretical and the other empirical: (1) What would justice mean in the context of the international trade regime? (2) Using the small developing states of the Commonwealth Caribbean as a case study, what do Commonwealth Caribbean trade negotiators mean when they appeal to justice? Regarding the first question, Iris Young's framework, which focuses on the achievement of social justice in a domestic context by acknowledging social differences such as those based on race and gender, was adopted, and its relevance was argued in the international context of interstate trade negotiation so as to validate the notion of difference (in size, location, and governance capacity) in this latter context. The point of departure is that while states are typically treated as equals in international law, as individuals are in liberal political theory, there are significant differences between states which warrant different treatment in the international arena. The study found that this reformulation of justice, which takes account of such differences between states, allows for more adequate policy responses than those offered by the presumption of equal treatment. Regarding the second question, this theoretical perspective was used to analyze the understandings of justice from which Commonwealth Caribbean trade negotiators proceed. Interpretive and ethnographic methods, including participant observation, interviews, field notes, and textual analysis, were employed to analyze their understandings of justice. The study found that these negotiators perceive such justice as justice to difference because of the distinct characteristics of small developing states which combine to constrain their participation in the international trading system; that, based on this perception, they seek rules and outcomes in the multilateral trade regime which are sensitive to such different characteristics; and that while these issues were examined in a specific region, the findings are relevant for other regions consisting of small developing states, such as those in the ACP group.
Abstract:
Enterprise Resource Planning (ERP) systems are software programs designed to integrate the functional requirements and operational information needs of a business. Pressures of competition and entry standards for participation in major manufacturing supply chains are creating greater demand for small business ERP systems. The proliferation of new ERP offerings adds complexity to the process of identifying the right ERP business software for a small or medium-sized enterprise (SME). The selection of an ERP system is a process in which a faulty conclusion poses a significant risk of failure to SMEs. The literature reveals that there are still very high failure rates in ERP implementation, and that faulty selection processes contribute to this failure rate. However, the literature lacks a systematic methodology for the ERP selection process for SMEs. This study provides a methodological approach to selecting the right ERP system for a small or medium-sized enterprise. The study employs Thomann's meta-methodology for methodology development; a survey of SMEs was conducted to inform the development of the methodology, and a case study was employed to test and revise the new methodology. The study shows that a rigorous, effective methodology incorporating benchmarking experiences has been developed and successfully employed. It is verified that the methodology may be applied to the domain of users it was developed to serve, and that the test results are validated by expert users and stakeholders. Future research should investigate in greater detail the application of meta-methodologies to supplier selection and evaluation processes for services and software; additional research into the purchasing practices of small firms is also clearly needed.
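As an illustrative aside only, a weighted-criteria scoring matrix is one common way an SME might compare candidate ERP systems during such a selection process; the sketch below uses hypothetical criteria, weights, and scores and is not the methodology developed in this study.

```python
# Hypothetical weighted-criteria comparison of candidate ERP systems.
# Criteria, weights, and 1-5 scores are illustrative only.
criteria_weights = {
    "functional_fit": 0.30,
    "total_cost_of_ownership": 0.25,
    "vendor_support": 0.20,
    "implementation_effort": 0.15,
    "supply_chain_integration": 0.10,
}

candidates = {
    "ERP System A": {"functional_fit": 4, "total_cost_of_ownership": 3,
                     "vendor_support": 4, "implementation_effort": 2,
                     "supply_chain_integration": 5},
    "ERP System B": {"functional_fit": 3, "total_cost_of_ownership": 5,
                     "vendor_support": 3, "implementation_effort": 4,
                     "supply_chain_integration": 3},
}

for name, scores in candidates.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: weighted score = {total:.2f}")
```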
Abstract:
As users continually request additional functionality, software systems will continue to grow in their complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates when predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, the task of predicting across projects is difficult to achieve. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
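As a hedged sketch of cross-project fault prediction in general, and not the specific adaptation approach proposed in this dissertation, the example below trains a classifier on one project's module metrics and applies it to another after a simple per-project standardization step; the metrics, data, and classifier are illustrative assumptions.

```python
# Cross-project fault prediction sketch with synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def synthetic_project(n_modules, base_fault_rate):
    """Generate per-module metrics (LOC, complexity, churn) and fault labels."""
    X = np.column_stack([
        rng.lognormal(5, 1, n_modules),    # lines of code
        rng.lognormal(2, 0.5, n_modules),  # cyclomatic complexity
        rng.poisson(4, n_modules),         # recent changes (churn)
    ])
    # Illustrative labels: more complex modules are more likely to be faulty.
    p_fault = base_fault_rate * X[:, 1] / X[:, 1].mean()
    y = (rng.uniform(size=n_modules) < p_fault).astype(int)
    return X, y

X_src, y_src = synthetic_project(500, 0.15)  # project with a known fault history
X_tgt, y_tgt = synthetic_project(200, 0.10)  # different project to predict on

# Standardizing each project's metrics separately is one simple way to reduce
# context-specific differences in metric distributions across projects.
X_src_std = StandardScaler().fit_transform(X_src)
X_tgt_std = StandardScaler().fit_transform(X_tgt)

clf = LogisticRegression(max_iter=1000).fit(X_src_std, y_src)
print("cross-project accuracy:", clf.score(X_tgt_std, y_tgt))
```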
Abstract:
The primary purpose of this thesis was to design and create an Interactive Audit to conduct Environmental Site Assessments according to the American Society for Testing and Materials (ASTM) Phase I standards at the Wagner Creek study area. ArcPad and ArcIMS were the major software packages used to create the model, and ArcGIS Desktop was used for data analysis and to export shapefile symbology to ArcPad. Geographic Information Systems (GIS) technology is an effective tool for these purposes. It was utilized to carry out data collection and analysis and to display data interactively on the Internet. Electronic forms, customized for mobile devices, were used to survey sites. This is an easy and fast way to collect and modify field data. New data such as land use, recognized environmental conditions, and underground storage tanks can be added to existing datasets. An updated map is then generated and uploaded to the Internet using ArcIMS technology. The field investigator has the option to generate and view the Inspection Form at the end of the on-site survey or to print a hard copy at base. The mobile device also automatically generates preliminary editable Executive Reports for any inspected site.
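As an illustrative aside using open-source tools rather than the ArcPad/ArcIMS workflow described in the thesis, the sketch below shows how a newly surveyed feature such as an underground storage tank might be appended to an existing site dataset; the file names, attribute fields, and coordinates are hypothetical.

```python
# Appending a newly surveyed feature to an existing site dataset with GeoPandas.
# Illustrative only; not the ArcPad/ArcIMS workflow used in the thesis.
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

# Hypothetical existing dataset of surveyed site features.
sites = gpd.read_file("wagner_creek_features.shp")

# One newly surveyed underground storage tank with illustrative attributes.
new_feature = gpd.GeoDataFrame(
    {
        "feat_type": ["underground_storage_tank"],
        "rec_cond": ["suspected petroleum release"],  # recognized environmental condition
        "geometry": [Point(-80.207, 25.789)],         # illustrative coordinates
    },
    crs=sites.crs,
)

# Append the new record and write the updated dataset back out.
updated = gpd.GeoDataFrame(
    pd.concat([sites, new_feature], ignore_index=True), crs=sites.crs
)
updated.to_file("wagner_creek_features_updated.shp")
```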