7 results for Application software

in DigitalCommons@University of Nebraska - Lincoln


Relevance: 30.00%

Abstract:

1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance.

2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use.

3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated.

4. A first step in analysis of distance sampling data is modeling the probability of detection (see the sketch after this list). Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance.

5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap.

6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat modeling, and information about accessing the analysis engines directly from other software.

7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with these theoretical developments, the state-of-the-art software described here makes the methods accessible to practicing ecologists.
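
As a minimal illustration of the conventional-distance-sampling engine mentioned in point 4 (a sketch, not the Distance software itself), the code below fits a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)) to perpendicular line-transect distances and converts it into a density estimate; the simulated data, units, and function names are assumptions for the example.

# Minimal conventional distance sampling sketch (illustrative; not the
# Distance software). Assumes a half-normal detection function and
# certain detection at distance zero.
import numpy as np

def estimate_density(distances, total_line_length):
    """Line-transect density estimate from perpendicular distances."""
    distances = np.asarray(distances, dtype=float)
    n = distances.size
    # Maximum-likelihood estimate of sigma for the (untruncated)
    # half-normal detection function: sigma^2 = sum(x_i^2) / n.
    sigma_hat = np.sqrt(np.sum(distances**2) / n)
    # Effective strip half-width: mu = integral_0^inf g(x) dx
    #                                = sigma * sqrt(pi / 2).
    mu_hat = sigma_hat * np.sqrt(np.pi / 2.0)
    # Standard line-transect estimator: D = n / (2 * L * mu).
    return n / (2.0 * total_line_length * mu_hat)

# Example with simulated sightings (distances and line length in metres):
rng = np.random.default_rng(1)
obs = np.abs(rng.normal(0.0, 25.0, size=60))  # half-normal distances
print(f"Estimated density: {estimate_density(obs, 10_000.0):.6f} per m^2")

In practice Distance also handles truncation, covariates, stratification, and variance estimation; this sketch covers only the simplest estimator to make the detection-function idea concrete.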

Relevance: 30.00%

Abstract:

Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has had and can be expected to have.
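
As a rough, hypothetical illustration of the kind of controlled experiment such infrastructure supports (not the infrastructure itself), the sketch below runs two techniques' test suites against seeded faulty versions and tallies fault detection and cost; every name, suite, and outcome in it is invented.

# Hypothetical controlled-experiment harness sketch: compare testing
# techniques by fault detection and cost over seeded faulty versions.
# All names and outcomes below are illustrative stand-ins.
import time

FAULTY_VERSIONS = ["v1-fault1", "v1-fault2", "v2-fault1"]  # seeded faults
SUITES = {"retest-all": ["t1", "t2", "t3"], "selected": ["t1", "t3"]}

def run_test(test_id, version):
    # Stand-in for running one test against one faulty version; a real
    # harness would execute the subject program and check its oracle.
    # This fake outcome is deterministic and purely illustrative.
    return sum(map(ord, test_id + version)) % 3 == 0

for technique, suite in SUITES.items():
    start = time.perf_counter()
    detected = {v for v in FAULTY_VERSIONS
                if any(run_test(t, v) for t in suite)}
    elapsed = time.perf_counter() - start
    print(f"{technique}: detected {len(detected)}/{len(FAULTY_VERSIONS)} "
          f"faults with {len(suite)} tests in {elapsed:.4f}s")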

Relevance: 30.00%

Abstract:

Many tools and techniques for addressing software maintenance problems rely on code coverage information. Often, this coverage information is gathered for a specific version of a software system, and then used to perform analyses on subsequent versions of that system without being recalculated. As a software system evolves, however, modifications to the software alter the software's behavior on particular inputs, and code coverage information gathered on earlier versions of a program may not accurately reflect the coverage that would be obtained on later versions. This discrepancy may affect the success of analyses dependent on code coverage information. Despite the importance of coverage information to various analyses, our search of the literature found no studies specifically examining the impact of software evolution on code coverage information. Therefore, we conducted empirical studies to examine this impact. The results of our studies suggest that even relatively small modifications can greatly affect code coverage information, and that the degree of impact of change on coverage may be difficult to predict.
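
Since the studies quantify how coverage drifts as software evolves, a minimal sketch of that measurement may help: compare the entities each test covers on one version with those it covers on the next. The coverage data below is invented; real data would come from an instrumenter or coverage profiler.

# Illustrative sketch of measuring coverage drift between versions:
# compare the set of covered entities (e.g., line numbers) per test on
# version N with the set on version N+1. The dictionaries are made-up
# stand-ins for what a coverage tool would produce.

def jaccard(a, b):
    """Similarity of two coverage sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

coverage_v1 = {"t1": {1, 2, 3, 7}, "t2": {1, 4, 5}}
coverage_v2 = {"t1": {1, 2, 3, 9, 10}, "t2": {1, 4, 5}}  # t1 drifted

for test in coverage_v1:
    sim = jaccard(coverage_v1[test], coverage_v2[test])
    print(f"{test}: coverage similarity across versions = {sim:.2f}")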

Relevance: 30.00%

Abstract:

Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults, and stories abound of spreadsheet faults that have led to multi-million dollar losses. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software.

Relevance: 30.00%

Abstract:

Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, emphasizing methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several cost-benefit trade-offs, and many opportunities for future work.
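
As a toy illustration of the dataflow-style analyses mentioned above (not the actual methodology from the talk), the sketch below models a spreadsheet as a cell-dependence graph and computes a backward slice from a suspect cell, which is the basic step behind fault localization; the cells and formulas are invented.

# Toy sketch of dataflow reasoning over a spreadsheet: model cells as a
# dependence graph and compute the backward slice of a cell whose value
# looks wrong, i.e. every cell that could have contributed to it.
# The spreadsheet contents here are invented for illustration.

refs = {  # cell -> cells its formula references
    "total": {"subtotal", "tax"},
    "tax": {"subtotal", "rate"},
    "subtotal": {"price", "qty"},
    "price": set(), "qty": set(), "rate": set(),
}

def backward_slice(cell):
    """All cells reachable from `cell` through formula references."""
    seen, stack = set(), [cell]
    while stack:
        c = stack.pop()
        for dep in refs.get(c, set()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# If 'total' looks faulty, these are the candidate cells to inspect:
print(sorted(backward_slice("total")))
# -> ['price', 'qty', 'rate', 'subtotal', 'tax']

An end-user-facing tool would hide this graph behind the spreadsheet interface, e.g. by highlighting the sliced cells, rather than exposing the analysis itself.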

Relevance: 30.00%

Abstract:

Software product line (SPL) engineering offers several advantages in the development of families of software products, such as reduced cost, higher quality, and shorter time to market. A software product line is a set of software-intensive systems, each of which shares a common core set of functionalities but differs from the other products through customization tailored to the needs of individual groups of customers. The differences between products within the family are well understood and organized into a feature model that represents the variability of the SPL. Products can then be built by generating and composing features described in the feature model. Testing of software product lines has become a bottleneck in the SPL development lifecycle, since many of the techniques used in their testing have been borrowed from traditional software testing and do not directly take advantage of the similarities between products. This limits the overall gains that can be achieved in SPL engineering. Recent work proposed by both industry and the research community for improving SPL testing has begun to consider this problem, but there is still a need for better testing techniques that are tailored to SPL development. In this thesis, I make two primary contributions to software product line testing. First, I propose a new definition for testability of SPLs that is based on the ability to re-use test cases between products without a loss of fault detection effectiveness. I build on this idea to identify elements of the feature model that contribute positively and/or negatively toward SPL testability. Second, I provide a graph-based testing approach called the FIG Basis Path method that selects products and features for testing based on a feature dependency graph. This method should increase our ability to re-use the results of test cases across successive products in the family and reduce testing effort. I report the results of a case study involving several non-trivial SPLs and show that for these objects the FIG Basis Path method is as effective as testing all products, yet requires testing no more than 24% of the products in the SPL.
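
The FIG Basis Path method itself is defined in the thesis; purely as a hypothetical illustration of the underlying idea of selecting products from a feature dependency graph, the sketch below keeps root-to-leaf feature paths until every dependency edge is covered and treats each kept path as one product to test. The feature graph and the greedy selection rule are invented for the example.

# Rough, hypothetical illustration of graph-based product selection for
# SPL testing (a simplification, not the thesis's actual FIG Basis Path
# algorithm): keep root-to-leaf paths through a feature dependency graph
# until every dependency edge appears in some selected product.

deps = {  # feature -> features that depend on it (invented graph)
    "core": ["gui", "net"],
    "gui": ["themes"],
    "net": ["ssl", "proxy"],
    "themes": [], "ssl": [], "proxy": [],
}

def all_paths(node, path=()):
    """Enumerate root-to-leaf paths through the feature graph."""
    path = path + (node,)
    children = deps[node]
    if not children:
        yield path
    for child in children:
        yield from all_paths(child, path)

def select_products(root="core"):
    """Greedily keep paths until every dependency edge is covered."""
    uncovered = {(a, b) for a, kids in deps.items() for b in kids}
    products = []
    for path in all_paths(root):
        edges = set(zip(path, path[1:]))
        if edges & uncovered:
            products.append(path)
            uncovered -= edges
    return products

for p in select_products():
    print("test product with features:", p)

The intuition matches the abstract: test results for shared feature paths can be re-used across products, so covering the dependency graph with a small set of products bounds the testing effort.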

Relevance: 30.00%

Abstract:

A mail survey was conducted to assess current computer hardware use and perceived needs of potential users for software related to crop pest management in Nebraska. Surveys were sent to University of Nebraska-Lincoln agricultural extension agents, agribusiness personnel (including independent crop consultants), and crop producers identified by extension agents as computer users. There were no differences among the groups in several aspects of computer hardware use (percentage using computers, percentage with IBM-compatible computers, amount of RAM, percentage with hard drives, hard drive size, and monitor graphics capability). Responses were similar among the three groups in several areas that are important to crop pest management (pest identification, pest biology, treatment decision making, control options, and pesticide selection), and a majority of each group expressed the need for additional sources of such information about insects, diseases, and weeds. However, agents mentioned vertebrate pest management information as a need more often than the other two groups. Also, majorities of each group expressed an interest in using computer software, if available, to obtain information in these areas. Appropriate software to address these needs should find an audience among all three groups.