9 results for developer
in Aston University Research Archive
Abstract:
The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to rectify. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method such as JSD in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstractions between the two domains need to be bridged. This research explores an alternative approach: developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts in operational specifications, the specifications have the advantage of directly facilitating implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts can help the developer manage the complexity of the problem domain specification, whilst providing the user with a specification that closely reflects the real world, so that the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented, comprising an editor to facilitate the input of specifications, and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
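To make the idea of an executable specification concrete, here is a minimal sketch in Python (the thesis's own notation is graphical and is not given in the abstract): specified objects are modelled as state machines, and "executing" the specification means firing events and observing the behaviours generated, which is what lets the specification act as an early prototype. All class, state, and event names are illustrative.

    # A hypothetical executable object specification: each object has a
    # state and a table of allowed transitions; exercising it generates
    # the behaviours of the specified system.
    class SpecObject:
        def __init__(self, name, initial, transitions):
            self.name = name
            self.state = initial
            self.transitions = transitions  # {(state, event): next_state}

        def fire(self, event):
            key = (self.state, event)
            if key not in self.transitions:
                raise ValueError(f"{self.name}: {event!r} not allowed in state {self.state!r}")
            before, self.state = self.state, self.transitions[key]
            print(f"{self.name}: {before} --{event}--> {self.state}")

    # Exercising the specification serves as an early prototype.
    lift = SpecObject("Lift", "idle", {
        ("idle", "call"): "moving",
        ("moving", "arrive"): "doors_open",
        ("doors_open", "close"): "idle",
    })
    for event in ["call", "arrive", "close"]:
        lift.fire(event)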
Abstract:
Self-adaptation is emerging as an increasingly important capability for many applications, particularly those deployed in dynamically changing environments, such as ecosystem monitoring and disaster management. One key challenge posed by Dynamically Adaptive Systems (DASs) is the need to handle changes to the requirements and corresponding behavior of a DAS in response to varying environmental conditions. Berry et al. previously identified four levels of requirements engineering (RE) that should be performed for a DAS. In this paper, we propose the Levels of RE for Modeling, which reify the original levels to describe the RE modeling work done by DAS developers. Specifically, we identify four types of developers: the system developer, the adaptation scenario developer, the adaptation infrastructure developer, and the DAS research community. Each level corresponds to the work of a different type of developer in constructing the goal model(s) that specify their requirements. We then leverage the Levels of RE for Modeling to propose two complementary processes for performing RE for a DAS. We describe our experiences with applying this approach to GridStix, an adaptive flood warning system deployed to monitor the River Ribble in Yorkshire, England.
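As a rough sketch of the kind of goal model each type of developer might construct, the Python below evaluates an AND/OR goal decomposition bottom-up; the goal names are hypothetical, loosely inspired by a flood-warning system such as GridStix, and the paper's actual models are not reproduced here.

    # Hypothetical AND/OR goal model evaluated bottom-up: an AND goal
    # needs all subgoals satisfied, an OR goal needs at least one.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        decomposition: str = "AND"            # "AND" or "OR"
        subgoals: list = field(default_factory=list)
        satisfied: bool = False               # observed at leaf goals

        def is_satisfied(self):
            if not self.subgoals:
                return self.satisfied
            results = [g.is_satisfied() for g in self.subgoals]
            return all(results) if self.decomposition == "AND" else any(results)

    predict = Goal("Predict flooding", "AND", [
        Goal("Measure river depth", satisfied=True),
        Goal("Transmit readings", "OR", [
            Goal("Use Wi-Fi mesh", satisfied=False),      # node lost
            Goal("Use Bluetooth relay", satisfied=True),  # fallback
        ]),
    ])
    print(predict.is_satisfied())  # True: the OR goal has a live branch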
Abstract:
Pervasive environments are characterised by highly heterogeneous services and mobile devices with dynamic availability. Approaches such as that proposed by the Connect project provide the means to enable such systems to be discovered and composed, through mediation where necessary. As services appear and disappear, the set of feasible compositions changes. In such a pervasive environment, a designer faces two related challenges: which goals it is reasonable to pursue in the current context, and how to use the services presently available to achieve them. This paper proposes an approach to designing service compositions, facilitating an interactive process to find the trade-off between the possible and the desirable. Following our approach, the system finds at runtime, where possible, compositions related to the developer's requirements. This process can realise the intent the developer specifies at design time, taking into account the services available at runtime, without a prohibitive level of pre-specification, which would be inappropriate for such dynamic environments. © 2012 ACM.
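To give a feel for runtime composition discovery, the sketch below searches the currently available services for a chain connecting the developer's starting data to a goal type; the service names and types are hypothetical, and the Connect project's actual discovery and mediation machinery is far richer than this breadth-first search.

    # Hypothetical runtime composition: find a chain of available
    # services turning data of type `start` into data of type `goal`.
    # Re-run as services appear and disappear.
    from collections import deque

    def compose(services, start, goal):
        # services: {name: (input_type, output_type)}
        queue, seen = deque([(start, [])]), {start}
        while queue:
            current, path = queue.popleft()
            if current == goal:
                return path
            for name, (inp, out) in services.items():
                if inp == current and out not in seen:
                    seen.add(out)
                    queue.append((out, path + [name]))
        return None  # goal not achievable with the current services

    available = {
        "gps_reader": ("none", "location"),
        "geocoder": ("location", "address"),
        "printer_finder": ("address", "printer"),
    }
    print(compose(available, "none", "printer"))
    # -> ['gps_reader', 'geocoder', 'printer_finder']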
Abstract:
The behaviour of self-adaptive systems can be emergent. The difficulty in predicting the system's behaviour means that there is scope for the system to surprise its customers and its developers. Because its behaviour is emergent, a self-adaptive system needs to garner its customers' confidence, and it needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, a means needs to be found to explain the current behaviour of the system and the reasons that brought that behaviour about. We propose the use of goal-based models at runtime to offer self-explanation of how a system is meeting its requirements, and why the means of meeting these were chosen. We discuss the results of early experiments in self-explanation, and set out future work. © 2012 C.E.S.A.M.E.S.
Abstract:
The use of spreadsheets has become routine in all aspects of business, with usage growing across a range of functional areas and a continuing trend towards end-user spreadsheet development. However, several studies have raised concerns about the accuracy of spreadsheet models in general, and of end-user developed applications in particular, raising the risk element for users. High error rates have been discovered, even though the users/developers were confident that their spreadsheets were correct. The lack of an easy-to-use, context-sensitive validation methodology has been highlighted as a significant contributor to the problems of accuracy. This paper describes experiences in using a practical, contingency factor-based methodology for the validation of spreadsheet-based decision support systems (DSS). Because the end user is often both the system developer and a stakeholder, the contingency factor-based validation methodology may need to be used in more than one way. The methodology can also be extended to encompass other DSS.
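The abstract does not detail the contingency factor-based methodology itself, so as a generic illustration of automated spreadsheet validation only, the sketch below performs a cross-foot check (row totals against column totals), one of the simple consistency tests a validation regime might include; the figures and layout are made up.

    # Generic cross-foot validation: the row totals and column totals
    # of a table must agree on the grand total. Not the paper's method.
    sales = [  # rows: regions, columns: quarters (made-up figures)
        [120, 135, 150, 160],
        [ 80,  95,  90, 110],
        [200, 210, 190, 220],
    ]
    row_totals = [sum(row) for row in sales]
    col_totals = [sum(col) for col in zip(*sales)]
    assert sum(row_totals) == sum(col_totals), "cross-foot check failed"
    print("grand total:", sum(row_totals))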
Abstract:
The deployment of bioenergy technologies is a key part of UK and European renewable energy policy. A key barrier to the deployment of bioenergy technologies is the management of biomass supply chains, including the evaluation of suppliers and the contracting of biomass. In the undeveloped biomass-for-energy market, buyers of biomass face three major challenges during the development of new bioenergy projects: what characteristics a given supply of biomass will have; how to evaluate biomass suppliers; and which suppliers to contract with in order to provide a portfolio of suppliers that best satisfies the needs of the project and its stakeholder group whilst also satisfying crisp and non-crisp technological constraints. The problem description is taken from the situation faced by the industrial partner in this research, Express Energy Ltd. This research tackles these three areas separately, then combines them into a decision framework to assist biomass buyers with the strategic sourcing of biomass: the BioSS framework. The BioSS framework consists of three modes which mirror the development stages of bioenergy projects: BioSS.2 mode for the early development stage, BioSS.3 mode for the financial close stage, and BioSS.Op mode for the operational phase of the project. BioSS is formed of a fuels library, a supplier evaluation module and an order allocation module; a Monte-Carlo analysis module is also included to evaluate the accuracy of the recommended portfolios. In each mode, BioSS can recommend which suppliers should be contracted with and how much material should be purchased from each. The recommended blend should have chemical characteristics within the technological constraints of the conversion technology and should also best satisfy the stakeholder group. The fuels library is compiled from a wide variety of sources and contains around 100 unique descriptions of potential biomass sources that a developer may encounter. The library takes a wide data-collection approach, with the aim of allowing estimates of biomass characteristics to be made without expensive and time-consuming testing. The supplier evaluation part of BioSS uses a QFD-AHP method to give importance weightings to 27 different evaluating criteria. The evaluating criteria have been compiled from interviews with stakeholders and from policy and position documents, and the weightings have been assigned using a mixture of workshops and expert interviews. The weighted importance scores allow potential suppliers to better tailor their business offering and provide a robust framework for decision makers to better understand the requirements of bioenergy project stakeholder groups. The order allocation part of BioSS uses a chance-constrained programming approach to assign orders of material between potential suppliers based on the chemical characteristics and preference scores of those suppliers. The optimisation program finds the portfolio of orders to allocate to suppliers that gives the highest-performing portfolio in the eyes of the stakeholder group whilst also complying with technological constraints. The technological constraints can be relaxed, if the decision maker requires, by setting them as chance constraints. This allows a wider range of biomass sources to be procured and a greater overall performance to be realised than when considering crisp constraints or using deterministic programming approaches. BioSS is demonstrated against two scenarios faced by UK bioenergy developers: the first is a large-scale combustion power project, the second a small-scale gasification project. BioSS is applied in each mode for both scenarios and is shown to adapt the solution to the stakeholder group's priorities and the different constraints of the different conversion technologies whilst finding a globally optimal portfolio for stakeholder satisfaction.
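The abstract does not give BioSS's formulation, but the flavour of chance-constrained order allocation checked by Monte-Carlo sampling can be sketched as below: two hypothetical suppliers, one quality characteristic (blend moisture), and a constraint that must hold with at least 95% probability rather than with certainty. All figures, names, and the grid search standing in for the optimisation program are illustrative.

    # Hypothetical chance-constrained order allocation: maximise the
    # preference-weighted blend subject to P(moisture <= LIMIT) >= ALPHA,
    # with the probability estimated by Monte-Carlo sampling.
    import random

    random.seed(0)
    suppliers = {
        # name: (preference score, mean moisture %, std dev)
        "A": (0.9, 32.0, 4.0),  # preferred, but wetter material
        "B": (0.6, 24.0, 2.0),  # drier, less preferred
    }
    LIMIT, ALPHA, SAMPLES = 30.0, 0.95, 5000

    def feasible(frac_a):
        """Estimate whether P(blend moisture <= LIMIT) >= ALPHA."""
        ok = 0
        for _ in range(SAMPLES):
            moisture = (frac_a * random.gauss(*suppliers["A"][1:]) +
                        (1 - frac_a) * random.gauss(*suppliers["B"][1:]))
            ok += moisture <= LIMIT
        return ok / SAMPLES >= ALPHA

    # Grid search stands in for the chance-constrained program.
    best = max(
        (f for f in (i / 20 for i in range(21)) if feasible(f)),
        key=lambda f: f * suppliers["A"][0] + (1 - f) * suppliers["B"][0],
    )
    print(f"buy {best:.0%} from A, {1 - best:.0%} from B")

Requiring the constraint to hold with 95% probability, rather than always, lets the buyer take more of the preferred but more variable supplier than a crisp constraint would permit, which is the benefit the abstract claims for the chance-constrained approach.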
Abstract:
Community acceptance has been identified as one of the key requirements for a sustainable bioenergy project. However, less attention has been paid to this aspect from the perspective of developing nations and small projects. This research therefore examines the role of community acceptance in sustainable small-scale bioenergy projects in India. In addressing this aim, the work identifies the influence communities have over bioenergy projects, the major concerns of communities regarding bioenergy projects, and the factors influencing communities' perceptions of bioenergy projects. The empirical research was carried out on four bioenergy companies in India as case studies. It was found that communities have significant influence over bioenergy projects in India. Local air pollution, inappropriate storage of by-products and the credibility of the developer are identified as some of the important concerns. Local energy needs, benefits to the community from bioenergy companies, the level of trust in the company, and the relationship between the company and the community are some of the prime factors which influence a community's perception of bioenergy projects. This research sheds light on important aspects of community acceptance of bioenergy projects, and this information will help practitioners understand community perceptions and take appropriate action to address them. © 2014 Elsevier Ltd.
Abstract:
The behaviour of self-adaptive systems can be emergent, which means that the system's behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to garner its customers' confidence, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of run-time requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
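To make self-explanation concrete, the sketch below has an OR-goal record which alternative was selected and why, so an explanation can later be generated from the runtime model; the goal names and the English output are illustrative, not the domain-specific language the paper describes.

    # Hypothetical self-explanation from a runtime goal model: each
    # OR-goal records the alternative chosen and the reason, and the
    # explanation is generated by reading the model back.
    class ORGoal:
        def __init__(self, name, alternatives):
            self.name = name
            self.alternatives = alternatives  # {alternative: description}
            self.choice = None
            self.reason = None

        def select(self, alternative, reason):
            self.choice, self.reason = alternative, reason

        def explain(self):
            return (f"To satisfy '{self.name}', the system chose "
                    f"'{self.choice}' because {self.reason}.")

    transmit = ORGoal("Transmit readings", {
        "wifi_mesh": "high bandwidth, high power",
        "bluetooth": "low bandwidth, low power",
    })
    # The adaptation layer records its decision as it adapts...
    transmit.select("bluetooth", "battery level dropped below 20%")
    # ...and the behaviour can later be explained to users or developers.
    print(transmit.explain())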
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors.

Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing positively reinforce the advantages of online-questionnaire delivery.

The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6].

For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realised. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high, given the need either to adopt, and acquire training in, questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
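As an aside on the skip-pattern capability mentioned above, the sketch below shows one way an online questionnaire can encode branching so that respondents never see questions that do not apply to them; the questions and routing rules are invented for illustration.

    # Hypothetical encoded skip pattern: the next question is chosen by
    # rule, so the branching is invisible to the respondent.
    QUESTIONS = {
        "q1": ("Do you smoke?", {"yes": "q2", "no": "q3"}),
        "q2": ("How many per day?", {"*": "q3"}),
        "q3": ("How old are you?", {"*": None}),
    }

    def run_survey(answers):
        """answers: scripted responses keyed by question id."""
        current, transcript = "q1", []
        while current is not None:
            text, routing = QUESTIONS[current]
            response = answers[current]
            transcript.append((text, response))
            current = routing.get(response, routing.get("*"))
        return transcript

    # A non-smoker is routed straight past q2 without ever seeing it.
    for question, answer in run_survey({"q1": "no", "q3": "42"}):
        print(question, "->", answer)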