908 results for service development process


Relevance:

80.00%

Publisher:

Abstract:

Historical Timeline of the planning and development process for the establishment of the Medical Library.

Relevance:

80.00%

Publisher:

Abstract:

This research examines evolving issues in applied computer science and applies economic and business analyses as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal is to devise an efficient pricing, prioritization, and incentivization plan that could realistically be implemented on the existing infrastructure. Criteria include practical and economic efficiency and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and the relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed at all levels of implementation, down to the lowest-level protocol. Practical issues are considered and performance analyses are performed. The second area of research is mass-market software engineering and how it differs from classical software engineering. Software life-cycle revenues are analyzed, and software pricing and timing implications are derived. A profit-maximizing methodology is developed to select or defer the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.
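To make the select-or-defer decision concrete, here is a minimal sketch that frames it as a budgeted selection problem: each candidate feature has an estimated life-cycle revenue contribution and a development cost, and the release ships the subset that maximizes revenue within the available development budget. The feature names and figures are invented for illustration and are not taken from the dissertation.

# Hypothetical illustration: choosing which features to ship in a release
# by maximizing estimated life-cycle revenue under a development-cost budget.
# Feature names and figures are invented; they are not from the dissertation.

from itertools import combinations

features = {
    # name: (estimated_revenue, development_cost)
    "offline_mode": (120.0, 40.0),
    "cloud_sync":   (200.0, 90.0),
    "plugin_api":   (150.0, 70.0),
    "dark_theme":   (30.0,  10.0),
}

budget = 120.0  # development capacity available for this release

def best_release(features, budget):
    """Exhaustively pick the revenue-maximizing feature subset within budget."""
    best_value, best_set = 0.0, ()
    names = list(features)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(features[f][1] for f in subset)
            value = sum(features[f][0] for f in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset
    return best_set, best_value

selected, revenue = best_release(features, budget)
deferred = [f for f in features if f not in selected]
print("ship:", selected, "defer:", deferred, "expected revenue:", revenue)

Features left out of the shipping set are the ones deferred to a later release; a full model would also account for release timing and the revenue impact of delay.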

Relevance:

80.00%

Publisher:

Abstract:

This single-case study provides a description and explanation of selected adult students' perspectives on the impact that the development of an experiential learning portfolio had on their understanding of their professional and personal lives. The conceptual framework that undergirded the study included theoretical and empirical studies on adult learning, experiential learning, and the academic quality of nontraditional degree programs with a portfolio component. The study employed the qualitative data collection techniques of individual interviews, document review, field notes, and a researcher journal. A purposive sample of 8 adult students who completed portfolios as a component of their undergraduate degrees participated in the study. The 4 male and 4 female students who were interviewed represented 4 ethnic/racial groups and ranged in age from 32 to 55 years. Each student's portfolio was read prior to the interview to frame the semi-structured interview questions in light of written portfolio documents.

Students were interviewed twice over a 3-month period. The study lasted 8 months from data collection to final presentation of the findings. The data from interview transcriptions and student portfolios were analyzed, categorized, coded, and sorted into 4 major themes and 2 additional themes, and submitted to interpretive analysis.

Participants' attitudes, perceptions, and opinions of their learning from the portfolio development experience were presented in the findings, which were illustrated through excerpts from interview responses and individual portfolios. The participants displayed a positive reaction to the learning they acquired from the portfolio development process, regardless of their initial concerns about the challenges of creating a portfolio. Concerns were replaced by a greater recognition and understanding of their previous professional and personal accomplishments and their ability to reach future goals. Other key findings included (a) a better understanding of the role work played in their learning and development, (b) a deeper recognition of the impact of mentors and role models throughout their lives, (c) an increase in writing and organizational competencies, and (d) a sense of self-discovery and personal empowerment.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this qualitative study was to gain an understanding of what participation in a first-year residential learning community meant to students 2–3 years after their involvement in the program. Various theories, including environmental, student involvement, psychosocial, and intellectual theories, were used as a framework for this case study. Each of the ten participants was a junior- or senior-level student at the time of the study, but had previously participated in a first-year residential learning community at Florida International University. The researcher held two semi-structured interviews with each participant and collected data sheets from each.

The narrative data produced from the interviews were transcribed, coded, and analyzed to gain insights into the experiences and perspectives of the participants. Member checking was used after the interview process, and a peer reviewer offered feedback during the data analysis. The resulting data were coded into categories, with a final selection of four themes and 15 sub-themes, which captured the essence of the participants' experiences. The four major themes were: (a) community, (b) involvement, (c) identity, and (d) academics. The community theme describes how students perceived their environment. The involvement theme describes the students' participation in campus life and their interaction with other members of the university community. The identity theme describes the students' process of development and the personal growth they underwent as a result of their experiences. The academics theme refers to the intellectual development of students and their interaction around academic issues.

The results of this study showed that the participants greatly valued their involvement in the First Year Residents Succeeding Together (FYRST) program and can articulate how it helped them succeed as students. In describing their experience, they most often recall the sense of community that existed, the personal growth they experienced, the academic development process they went through, and their involvement, both with other people and with activities in their community. Recommendations are provided for practice and research, including several related to enhancing the academic culture, integrating faculty, utilizing peer influence, and providing further opportunities to create a seamless learning environment.

Relevance:

80.00%

Publisher:

Abstract:

This dissertation introduces an integrated algorithm for a new application dedicated to discriminating between electrodes that lead to a seizure onset and those that do not, using interictal subdural EEG data. The significance of this study lies in determining why, among all of these channels, all containing interictal spikes, some electrodes eventually lead to seizure while others do not. A first finding in the development process of the algorithm is that these interictal spikes must be asynchronous and located in different regions of the brain before any consequential interpretations of EEG behavioral patterns are possible. A singular merit of the proposed approach is that even when the EEG data are randomly selected (independent of the onset of seizure), we are able to classify those channels that lead to seizure versus those that do not. It is also revealed that the region of ictal activity does not necessarily evolve from the tissue located at the channels that present interictal activity, as commonly believed.

The study is also significant in terms of correlating clinical features of EEG with the patient's source of ictal activity, which comes from a specific subset of channels that present interictal activity. The contributions of this dissertation emanate from (a) the choice of discriminating parameters used in the implementation, (b) the unique feature space that was used to optimize the delineation process of these two types of electrodes, (c) the development of a back-propagation neural network that automated the decision-making process, and (d) the establishment of mathematical functions that elicited the reasons for this delineation process.
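As a rough illustration of the automated decision step, the sketch below trains a small feed-forward (back-propagation) classifier to separate channels that lead to seizure onset from those that do not, using a per-channel feature vector. The features (spike rate, amplitude, duration variance) and the synthetic data are assumptions made for the example; the dissertation's actual feature space and data are not reproduced here.

# Hypothetical sketch: classifying interictal channels as "leads to seizure onset"
# vs. "does not", from per-channel features. Feature names and the synthetic data
# are invented for illustration; the dissertation's actual feature space differs.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [spike_rate, mean_spike_amplitude, spike_duration_variance]
n_channels = 200
X = rng.normal(size=(n_channels, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_channels) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network trained with back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))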

Relevance:

80.00%

Publisher:

Abstract:

In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups consist of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive.

The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure, or connectedness within the group, (2) bridging ties that extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that the social network structures of successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality.

The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume.

Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process, one that disrupts the team paradigm and triggers the need for building new theories of collaborative development. These new theories could point toward the broader application of open source methods for the creation of knowledge-based products other than software.
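The three structural measures can be illustrated on a toy graph of ties. In this sketch (invented participants and ties, not SourceForge data), closure is approximated by the density of conversational ties among project members, bridging by the count of ties crossing the project boundary, and leader centrality by the highest degree centrality among members; the actual operationalizations in the dissertation may differ.

# Hypothetical sketch: computing closure and leader centrality over a project's
# conversational ties, and counting bridging ties to outside participants.
# The tiny example graph is invented; it is not SourceForge data.

import networkx as nx

members = {"alice", "bob", "carol", "dave"}          # project community

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # internal ties
    ("carol", "dave"),
    ("alice", "erin"), ("dave", "frank"),                     # bridging ties
])

internal = G.subgraph(members)

closure = nx.density(internal)                       # connectedness within the group
bridging = sum(1 for u, v in G.edges()
               if (u in members) != (v in members))  # ties crossing the boundary
centrality = nx.degree_centrality(internal)
leader_centrality = max(centrality.values())         # most central member as "leader"

print(f"closure={closure:.2f}, bridging={bridging}, leader centrality={leader_centrality:.2f}")

In the study design, measures like these would then be regressed against output and activity variables such as commits, releases, downloads, and page views.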

Relevance:

80.00%

Publisher:

Abstract:

This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges, and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts.

Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, thus limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate; estimation errors typically range from 30% to 50%.

This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator can develop a cost estimation model for use in a chosen ETO industry in a more expeditious, systematic, and accurate manner.

The development of the proposed methodology was carried out by following the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates were significantly more accurate than the estimates actually made in practice, with a mean absolute error rate of 17.3%.

This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that it develops a methodology for quickly customizing a cost estimation model for a chosen application domain. In addition to more accurate estimation, its major contributions lie in its transferability to other users and its applicability to different ETO operations.
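As a minimal sketch of the parametric style of cost model such a methodology could produce, the example below fits a linear model on cost drivers from past projects, then estimates a new project and reports its absolute percentage error. The cost drivers, project figures, and the linear form are assumptions made for illustration, not the models developed in the dissertation.

# Hypothetical sketch: fitting a simple parametric cost model on past ETO projects
# and reporting the absolute percentage error on a held-out project.
# Cost drivers and figures are invented for illustration only.

import numpy as np

# Past projects: [floor_area_m2, custom_components, engineering_hours] -> actual cost
X_past = np.array([
    [1200,  8,  400],
    [2500, 15,  900],
    [ 800,  5,  250],
    [3100, 20, 1200],
], dtype=float)
cost_past = np.array([1.1e6, 2.6e6, 0.7e6, 3.4e6])

# Fit a linear parametric model: cost ~ X @ w + b
A = np.hstack([X_past, np.ones((len(X_past), 1))])
coeffs, *_ = np.linalg.lstsq(A, cost_past, rcond=None)

def estimate(drivers):
    """Estimate project cost from its driver values using the fitted model."""
    return np.append(drivers, 1.0) @ coeffs

# Evaluate on a "new" project with a known outcome.
new_project = np.array([1800.0, 12.0, 600.0])
actual_cost = 1.9e6
predicted = estimate(new_project)
error_pct = abs(predicted - actual_cost) / actual_cost * 100
print(f"predicted ${predicted:,.0f}, absolute percentage error {error_pct:.1f}%")

Averaging such per-project errors over a set of real cases gives a mean absolute error rate of the kind reported in the abstract.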

Relevance:

80.00%

Publisher:

Abstract:

Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are in a perennial quest to push for further innovation. This has resulted in novel ways to "squeeze" more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption.

In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power.

In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal.

We address these challenges with a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development. Second, we develop a formal theory to assist administrators in deploying storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
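To illustrate the placement idea behind the third extension, the sketch below tracks per-block access counts and, at rebalance time, keeps hot blocks on the solid state device and moves cold blocks to disk. The threshold, block identifiers, and class are hypothetical and do not reflect the dissertation's actual implementation.

# Hypothetical sketch of hybrid data placement: keep frequently accessed ("hot")
# blocks on the solid state device and place cold blocks on disk.
# Threshold, block identifiers, and structure are invented for illustration.

from collections import Counter

HOT_THRESHOLD = 10          # accesses within the observation window

class TieredPlacer:
    def __init__(self):
        self.access_counts = Counter()
        self.location = {}   # block_id -> "ssd" or "disk"

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        """Place hot blocks on SSD for performance, cold blocks on disk for capacity."""
        for block_id, count in self.access_counts.items():
            self.location[block_id] = "ssd" if count >= HOT_THRESHOLD else "disk"
        self.access_counts.clear()   # start a new observation window

placer = TieredPlacer()
for _ in range(12):
    placer.record_access("block-7")   # hot block
placer.record_access("block-42")      # cold block
placer.rebalance()
print(placer.location)                # {'block-7': 'ssd', 'block-42': 'disk'}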