24 results for Hardware and software


Relevance:

90.00%

Publisher:

Abstract:

Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success of offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently of the threat of opportunistic behavior, challenging the predominant TCE logic of market failure.
Rather, the client extra costs were particularly high in client-specific projects because the effort of managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experiences of the vendor with related client projects were found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor, as well as personnel turnover, were found to increase client extra costs. Slight evidence was also found that the cost-increasing impact of these factors was amplified in projects with a high level of required client-specific knowledge (moderator effect). (This paper was recommended for acceptance by Associate Guest Editor Erran Carmel.)

Relevance:

80.00%

Publisher:

Abstract:

The rapid growth of object-oriented development over the past twenty years has given rise to many object-oriented systems that are large, complex and hard to maintain. Object-Oriented Reengineering Patterns addresses the problem of understanding and reengineering such object-oriented legacy systems. This book collects and distills successful techniques in planning a reengineering project, reverse-engineering, problem detection, migration strategies and software redesign. The material in this book is presented as a set of "reengineering patterns" --- recurring solutions that experts apply while reengineering and maintaining object-oriented systems. The principles and techniques described in this book have been observed and validated in a number of industrial projects, and reflect best practice in object-oriented reengineering.

Relevance:

80.00%

Publisher:

Abstract:

Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as "Who uses this component?" and "Where is this feature implemented?". Software visualisation can be a great aid in understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developers' queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In the paper we motivate the idea of automatic software visualisation, enumerate the challenges and our proposals to address them, and describe some very initial results in our attempts to develop scalable visualisations of open source software corpora.
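As an illustration of challenge (1), matching a query to a suitable visualisation, a minimal keyword-scoring sketch is shown below. The query categories, keyword sets, and visualisation catalogue are invented for this example and are not taken from the paper.

```python
# Hypothetical query-to-visualisation matcher. Categories, keywords, and the
# visualisation catalogue are illustrative assumptions, not the paper's method.

VISUALISATION_FOR = {
    "dependency": "force-directed graph",  # e.g. "who uses this component?"
    "location":   "treemap of packages",   # e.g. "where is this feature implemented?"
    "evolution":  "commit timeline",       # e.g. "when did this change?"
}

KEYWORDS = {
    "dependency": ("uses", "depends", "calls"),
    "location":   ("where", "implemented", "defined"),
    "evolution":  ("when", "changed", "history"),
}

def suggest_visualisation(query: str) -> str:
    """Pick the visualisation whose keyword set best matches the query text."""
    words = query.lower()
    scores = {cat: sum(kw in words for kw in kws) for cat, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return VISUALISATION_FOR[best] if scores[best] > 0 else "generic node-link view"
```

A real system would of course match against software models mined from the corpora rather than bare keywords; the sketch only shows the shape of the query-to-visualisation mapping.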

Relevance:

80.00%

Publisher:

Abstract:

Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easily identifiable. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
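A minimal sketch of the ecosystem-augmented idea: across a corpus, count how often each message is sent to instances of each (unambiguously inferred) type, then predict a variable's type by summing that evidence over the messages sent to it. The selectors, type names, and counts below are invented for illustration; the paper's prototype targets a real corpus, not this toy data.

```python
# Toy sketch of corpus-backed type prediction. The ecosystem statistics
# recorded at the bottom are invented, not measured from a real corpus.
from collections import Counter, defaultdict

# message selector -> Counter of receiver types observed in the ecosystem
ecosystem: defaultdict = defaultdict(Counter)

def record(selector: str, receiver_type: str, count: int = 1) -> None:
    """Accumulate one corpus observation: `selector` sent to `receiver_type`."""
    ecosystem[selector][receiver_type] += count

def predict_type(messages_sent: list) -> str:
    """Sum ecosystem evidence over all messages sent to the variable."""
    votes = Counter()
    for sel in messages_sent:
        votes.update(ecosystem.get(sel, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# Invented ecosystem statistics for two standard-library types:
record("add:", "OrderedCollection", 120)
record("at:put:", "Dictionary", 90)
record("at:put:", "OrderedCollection", 10)
record("keysDo:", "Dictionary", 40)
```

Given a variable that receives `at:put:` and `keysDo:`, the combined counts favor `Dictionary`, matching the intuition that popular standard-library types dominate the corpus evidence.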

Relevance:

80.00%

Publisher:

Abstract:

Spurred by the consumer market, companies increasingly deploy smartphones or tablet computers in their operations. However, unlike private users, companies typically struggle to cover their needs with existing applications, and therefore expand mobile software platforms through customized applications from multiple software vendors. Companies thereby combine the concepts of multi-sourcing and software platform ecosystems in a novel platform-based multi-sourcing setting. This implies, however, the clash of two different approaches to the coordination of the underlying one-to-many inter-organizational relationships. So far, little is known about the impacts of merging coordination approaches. Relying on convention theory, we address this gap by analyzing a platform-based multi-sourcing project between a client and six software vendors that develop twenty-three custom-made applications on a common platform (Android). In doing so, we aim to understand how unequal coordination approaches merge, and whether and for what reason particular coordination mechanisms, design decisions, or practices disappear, while new ones emerge.

Relevance:

80.00%

Publisher:

Abstract:

The important task of observing the global coverage of middle atmospheric trace gases like water vapor or ozone is usually accomplished by satellites. Climate and atmospheric studies rely upon knowledge of trace gas distributions throughout the stratosphere and mesosphere. Many of these gases are currently measured from satellites, but it is not clear whether this capability will be maintained in the future. This could lead to a significant knowledge gap regarding the state of the atmosphere. We explore the possibilities of mapping middle atmospheric water vapor in the Northern Hemisphere by using Lagrangian trajectory calculations and water vapor profile data from a small network of five ground-based microwave radiometers. Four of them are operated within the frame of NDACC (Network for the Detection of Atmospheric Composition Change). Keeping in mind that the instruments are based on different hardware and calibration setups, a height-dependent bias of the retrieved water vapor profiles has to be expected among the microwave radiometers. In order to correct and harmonize the different data sets, the Microwave Limb Sounder (MLS) on the Aura satellite is used to serve as a kind of traveling standard. A domain-averaging TM (trajectory mapping) method is applied, which simplifies the subsequent validation of the quality of the trajectory-mapped water vapor distribution against direct satellite observations. Trajectories are calculated forwards and backwards in time for up to 10 days using 6-hourly meteorological wind analysis fields. Overall, a total of four case studies of trajectory mapping in different meteorological regimes are discussed. One of the case studies takes place during a major sudden stratospheric warming (SSW) accompanied by the polar vortex breakdown; a second takes place after the reformation of a stable circulation system.
TM cases close to the fall equinox and the June solstice of 2012 complete the study, showing the high potential of a network of ground-based remote sensing instruments to synthesize hemispheric maps of water vapor.
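The core Lagrangian step described above, advecting air parcels forwards or backwards in time with 6-hourly wind fields, can be illustrated with a toy sketch. The constant wind components below stand in for gridded analysis fields, so this only shows the time-stepping shape of a trajectory calculation, not the study's actual setup.

```python
# Toy trajectory step: advect a parcel with zonal wind u and meridional wind v
# (m/s) in 6-hour steps; a negative `hours` yields a back-trajectory.
# Real TM calculations interpolate winds from gridded analyses at every step.
import math

def advect(lon: float, lat: float, u_ms: float, v_ms: float,
           hours: float, step_h: float = 6.0):
    """Advance a parcel for `hours` (negative = backwards) and return (lon, lat)."""
    R = 6371e3  # mean Earth radius in metres
    steps = int(abs(hours) / step_h)
    dt = math.copysign(step_h * 3600.0, hours)  # signed step length in seconds
    for _ in range(steps):
        lat += math.degrees(v_ms * dt / R)
        lon += math.degrees(u_ms * dt / (R * math.cos(math.radians(lat))))
    return lon, lat
```

With a steady 10 m/s westerly at 45° N, a 10-day forward trajectory moves roughly 110° eastward, while the corresponding back-trajectory moves the same distance westward.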

Relevance:

80.00%

Publisher:

Abstract:

Any image processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computation effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. This approach yields as a first result a huge amount of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, both the true-alarm and the false-alarm probabilities are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
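The core idea, convolving the image with line-shaped filters of varying orientation and accepting responses above a background-derived threshold, can be sketched as follows. The kernel construction, the four fixed orientations, and the 3-sigma threshold are illustrative choices for this example, not the paper's exact filter design or acceptance criteria.

```python
# Hedged sketch of streak detection by oriented line filters. Kernel shape,
# the angle set, and the mean + 3*sigma threshold are illustrative choices.
import math
import statistics

def line_kernel(length: int, angle_deg: float):
    """Pixel offsets of a one-pixel-wide line of given length and orientation."""
    a = math.radians(angle_deg)
    half = length // 2
    return [(round(i * math.cos(a)), round(i * math.sin(a)))
            for i in range(-half, half + 1)]

def response(img, x: int, y: int, kernel) -> float:
    """Mean intensity under the kernel centred at (x, y), wrapping at edges."""
    h, w = len(img), len(img[0])
    vals = [img[(y + dy) % h][(x + dx) % w] for dx, dy in kernel]
    return sum(vals) / len(vals)

def detect(img, length: int = 5):
    """Accept pixels whose best oriented response exceeds background + 3 sigma."""
    flat = [v for row in img for v in row]
    thresh = statistics.mean(flat) + 3 * statistics.pstdev(flat)
    hits = []
    for y in range(len(img)):
        for x in range(len(img[0])):
            best = max(response(img, x, y, line_kernel(length, a))
                       for a in (0, 45, 90, 135))
            if best > thresh:
                hits.append((x, y))
    return hits
```

As the abstract notes, such a bare threshold also fires on filters partially covering streaks or stars; the paper's additional acceptance criteria, which this sketch omits, are what prune those spurious answers.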
