321 results for developer


Relevance: 10.00%

Abstract:

The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to rectify. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method such as JSD in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstractions between the two domains need to be bridged. This research explores an alternative approach: developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts in operational specifications, the specifications have the advantage of directly facilitating implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts can help the developer manage the complexity of the problem domain specification, whilst providing the user with a specification that closely reflects the real world, so that the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented, comprising an editor to facilitate the input of specifications and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
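
As a minimal sketch of the operational idea (not the thesis's graphical notation or tool), consider an object-oriented specification written directly in an executable language, which can be exercised to generate behaviour traces for early validation. The Lift class and its operations are invented for illustration:

```python
# Illustrative sketch only: an executable, object-oriented specification.
# The class, state and operation names are hypothetical.

class Lift:
    """Operational specification of a lift's dynamic behaviour."""

    def __init__(self, floors):
        self.floor = 0              # initial state
        self.floors = floors
        self.pending = []           # requested floors, in arrival order

    def request(self, floor):
        assert 0 <= floor < self.floors, "precondition: floor must exist"
        self.pending.append(floor)

    def step(self):
        """Execute one behaviour step; running the specification
        generates a trace that can be animated and validated."""
        if self.pending:
            self.floor = self.pending.pop(0)
            print(f"lift moves to floor {self.floor}")

# Exercising the specification as a prototype:
lift = Lift(floors=5)
lift.request(3)
lift.step()                         # -> lift moves to floor 3
```

Because such a specification is itself an object-oriented program, it can serve both as an executable prototype and as a direct starting point for implementation.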

Relevance: 10.00%

Abstract:

Self-adaptation is emerging as an increasingly important capability for many applications, particularly those deployed in dynamically changing environments, such as ecosystem monitoring and disaster management. One key challenge posed by Dynamically Adaptive Systems (DASs) is the need to handle changes to the requirements and corresponding behavior of a DAS in response to varying environmental conditions. Berry et al. previously identified four levels of requirements engineering (RE) that should be performed for a DAS. In this paper, we propose the Levels of RE for Modeling, which reify the original levels to describe the RE modeling work done by DAS developers. Specifically, we identify four types of developers: the system developer, the adaptation scenario developer, the adaptation infrastructure developer, and the DAS research community. Each level corresponds to the work of a different type of developer to construct goal model(s) specifying their requirements. We then leverage the Levels of RE for Modeling to propose two complementary processes for performing RE for a DAS. We describe our experiences with applying this approach to GridStix, an adaptive flood warning system deployed to monitor the River Ribble in Yorkshire, England.
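
To make "constructing a goal model" concrete, the following minimal sketch represents a goal tree whose satisfaction can be evaluated. The goal names are invented for illustration and are only loosely inspired by the flood-warning domain; they are not taken from the GridStix models:

```python
# Hypothetical sketch of an AND-refined goal model; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    satisfied: bool = False          # leaf goals are set directly
    subgoals: list = field(default_factory=list)

    def evaluate(self):
        # A refined goal is satisfied when all of its subgoals are.
        if self.subgoals:
            self.satisfied = all(g.evaluate() for g in self.subgoals)
        return self.satisfied

warn = Goal("Provide flood warning", subgoals=[
    Goal("Measure river depth", satisfied=True),
    Goal("Transmit readings upstream", satisfied=True),
])
print(warn.evaluate())               # True
```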

Relevance: 10.00%

Abstract:

Pervasive environments are characterised by highly heterogeneous services and mobile devices with dynamic availability. Approaches such as that proposed by the Connect project provide the means to enable such systems to be discovered and composed, through mediation where necessary. As services appear and disappear, the set of feasible compositions changes. In such a pervasive environment, a designer encounters two related challenges: which goals it is reasonable to pursue in the current context, and how to use the services presently available to achieve those goals. This paper proposes an approach to designing service compositions, facilitating an interactive process to find the trade-off between the possible and the desirable. Following our approach, the system finds at runtime, where possible, compositions that satisfy the developer's requirements. This process can realise the intent the developer specifies at design time, taking into account the services available at runtime, without a prohibitive level of pre-specification, which would be inappropriate for such dynamic environments. © 2012 ACM.
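
The following minimal sketch illustrates the runtime step of such a process: matching the capabilities a developer requires against whatever services are currently available. The service names, the capability vocabulary, and the one-service-per-capability simplification are all assumptions for illustration:

```python
# Hypothetical sketch: which of the developer's goals are achievable
# with the services presently available? Names are invented.

available = {
    "GPSService": "location",
    "MapService": "map",
    "SMSGateway": "notification",
}

intent = ["location", "map"]         # capabilities the developer requires

def feasible_composition(intent, services):
    """Pick one available service per required capability, if possible."""
    plan = {}
    for need in intent:
        candidates = [s for s, cap in services.items() if cap == need]
        if not candidates:
            return None              # the goal is not realisable right now
        plan[need] = candidates[0]
    return plan

print(feasible_composition(intent, available))
# {'location': 'GPSService', 'map': 'MapService'}
```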

Relevance: 10.00%

Abstract:

The behaviour of self-adaptive systems can be emergent. The difficulty in predicting the system's behaviour means that there is scope for the system to surprise its customers and its developers. Because its behaviour is emergent, a self-adaptive system needs to garner confidence in its customers, and it needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, a means needs to be found to explain the current behaviour of the system and the reasons that brought that behaviour about. We propose the use of goal-based models at runtime to offer self-explanation of how a system is meeting its requirements, and why the means of meeting these were chosen. We discuss the results of early experiments in self-explanation, and set out future work. © 2012 C.E.S.A.M.E.S.

Relevance: 10.00%

Abstract:

The use of spreadsheets has become routine in all aspects of business, with usage growing across a range of functional areas and a continuing trend towards end-user spreadsheet development. However, several studies have raised concerns about the accuracy of spreadsheet models in general, and of end-user developed applications in particular, raising the risk element for users. High error rates have been discovered, even though the users/developers were confident that their spreadsheets were correct. The lack of an easy-to-use, context-sensitive validation methodology has been highlighted as a significant contributor to the problems of accuracy. This paper describes experiences in using a practical, contingency-factor-based methodology for the validation of spreadsheet-based DSS. Because the end user is often both the system developer and a stakeholder, the contingency-factor-based validation methodology may need to be used in more than one way. The methodology can also be extended to encompass other DSS.

Relevance: 10.00%

Abstract:

The deployment of bioenergy technologies is a key part of UK and European renewable energy policy. A key barrier to the deployment of bioenergy technologies is the management of biomass supply chains, including the evaluation of suppliers and the contracting of biomass. In the undeveloped biomass-for-energy market, buyers of biomass face three major challenges during the development of new bioenergy projects: what characteristics a given supply of biomass will have; how to evaluate biomass suppliers; and which suppliers to contract with in order to assemble a portfolio of suppliers that best satisfies the needs of the project and its stakeholder group whilst also satisfying crisp and non-crisp technological constraints. The problem description is taken from the situation faced by the industrial partner in this research, Express Energy Ltd. This research tackles these three areas separately, then combines them to form a decision framework, BioSS, to assist biomass buyers with the strategic sourcing of biomass.

The BioSS framework consists of three modes which mirror the development stages of bioenergy projects: BioSS.2 for early-stage development, BioSS.3 for the financial-close stage, and BioSS.Op for the operational phase of the project. BioSS is formed of a fuels library, a supplier evaluation module, and an order allocation module; a Monte-Carlo analysis module is also included to evaluate the accuracy of the recommended portfolios. In each mode BioSS can recommend which suppliers should be contracted with and how much material should be purchased from each. The recommended blend should have chemical characteristics within the technological constraints of the conversion technology and should also best satisfy the stakeholder group.

The fuels library is compiled from a wide variety of sources and contains around 100 unique descriptions of potential biomass sources that a developer may encounter. The library takes a wide data-collection approach, with the aim of allowing estimates to be made of biomass characteristics without expensive and time-consuming testing. The supplier evaluation part of BioSS uses a QFD-AHP method to give importance weightings to 27 different evaluating criteria. The evaluating criteria have been compiled from interviews with stakeholders and from policy and position documents, and the weightings have been assigned using a mixture of workshops and expert interviews. The weighted importance scores allow potential suppliers to better tailor their business offering and provide a robust framework for decision makers to better understand the requirements of the bioenergy project stakeholder groups.

The order allocation part of BioSS uses a chance-constrained programming approach to assign orders of material between potential suppliers based on their chemical characteristics and preference scores. The optimisation program finds the allocation of orders that gives the highest-performing portfolio in the eyes of the stakeholder group whilst also complying with technological constraints. The technological constraints can be breached, if the decision maker requires, by treating them as chance-constraints. This allows a wider range of biomass sources to be procured and a greater overall performance to be realised than is possible with crisp constraints or deterministic programming approaches.

BioSS is demonstrated against two scenarios faced by UK bioenergy developers. The first is a large-scale combustion power project; the second, a small-scale gasification project. BioSS is applied in each mode for both scenarios and is shown to adapt the solution to the stakeholder group's priorities and to the constraints of the different conversion technologies whilst finding a globally optimal portfolio for stakeholder satisfaction.
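
A central technical ingredient above is the chance-constraint: a technological limit that may be breached with at most a chosen probability. As a minimal sketch (not the BioSS optimisation itself), under an assumption of normally distributed blend properties the chance-constraint has the deterministic equivalent mean + z_alpha * std <= limit. The supplier names, moisture figures, and the conservative treatment of the blend's standard deviation are all invented for illustration:

```python
# Hypothetical sketch of a chance-constraint check on a biomass blend.
from statistics import NormalDist

suppliers = {            # supplier -> (moisture mean %, moisture std %)
    "A": (28.0, 2.0),
    "B": (22.0, 1.0),
}
limit, alpha = 30.0, 0.05            # breach allowed with probability <= 5%
z = NormalDist().inv_cdf(1 - alpha)  # ~1.645

def blend_ok(weights):
    """weights: supplier -> fraction of the order (fractions sum to 1)."""
    mean = sum(w * suppliers[s][0] for s, w in weights.items())
    std = sum(w * suppliers[s][1] for s, w in weights.items())  # conservative bound
    return mean + z * std <= limit

print(blend_ok({"A": 0.5, "B": 0.5}))   # True: limit met with 95% confidence
```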

Relevance: 10.00%

Abstract:

Community acceptance has been identified as one of the key requirements for a sustainable bioenergy project. However, less attention has been paid to this aspect from the perspective of developing nations and small projects. This research therefore examines the role of community acceptance in sustainable small-scale bioenergy projects in India. In addressing this aim, the work identifies the influence of communities over bioenergy projects, the major concerns of communities regarding such projects, and the factors shaping community perceptions of them. The empirical research was carried out on four bioenergy companies in India as case studies. It was found that communities have significant influence over bioenergy projects in India. Local air pollution, inappropriate storage of by-products, and the credibility of the developer are identified as important concerns. Local energy needs, benefits to the community from the bioenergy company, the level of trust in the company, and the relationship between the company and the community are among the prime factors influencing community perceptions of bioenergy projects. This research sheds light on important aspects of community acceptance of bioenergy projects, and this information should help practitioners understand community perceptions and take appropriate actions to address them. © 2014 Elsevier Ltd.

Relevance: 10.00%

Abstract:

The behaviour of self-adaptive systems can be emergent, which means that the system’s behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to garner confidence in its customers, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system’s behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of run-time requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
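
As a minimal sketch of the idea (with plain strings standing in for the domain-specific language, and invented goal and trace names), a self-explanation can be generated by recording, for each goal, which alternative the system chose and why:

```python
# Hypothetical sketch: explanation derived from a runtime goal-model trace.

goal_trace = [
    {"goal": "Report river state", "chosen": "Use GSM link",
     "alternatives": ["Use WiFi link"], "reason": "the WiFi node was unreachable"},
]

def explain(trace):
    """Turn each recorded decision into an explanatory sentence."""
    for step in trace:
        yield (f"To satisfy '{step['goal']}', the system chose "
               f"'{step['chosen']}' over {step['alternatives']} "
               f"because {step['reason']}.")

for sentence in explain(goal_trace):
    print(sentence)
```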

Relevance: 10.00%

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents.

Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online questionnaire delivery.

The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6].

For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.

Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt, and acquire training in, questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance: 10.00%

Abstract:

The article presents a new method for estimating the usability of a user interface based on its model. The principal features of the method are: the creation of an expandable knowledge base of usability defects; the detection of defects, based on the interface model, within the design phase; and the provision to the developer not only of information about the existence of defects but also of advice on their elimination.
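
A minimal sketch of how such a knowledge base might be applied to an interface model follows; the model fields, defect rules, thresholds, and advice texts are all invented for illustration and are not the article's actual rule set:

```python
# Hypothetical sketch: usability-defect rules applied to an interface model.

interface_model = {
    "widgets": [
        {"type": "button", "label": ""},
        {"type": "menu", "items": 14},
    ]
}

rules = [   # (defect, predicate, advice) -- an expandable knowledge base
    ("unlabelled control",
     lambda w: w.get("label") == "",
     "give every actionable control a visible label"),
    ("overlong menu",
     lambda w: w.get("type") == "menu" and w.get("items", 0) > 9,
     "split the menu or group related items into submenus"),
]

for widget in interface_model["widgets"]:
    for defect, predicate, advice in rules:
        if predicate(widget):
            print(f"defect: {defect} in {widget['type']!r}; advice: {advice}")
```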

Relevance: 10.00%

Abstract:

Degree programs designed to match labor-market requirements are more competitive than their peers. Restructuring programs accordingly is a central element of the ongoing reform of Hungarian higher education. This justifies a system that examines the competence elements of the Business Informatics BSc program at Corvinus University of Budapest in the light of the labor-market needs expressed in job advertisements. The ontology-based methodology provides a common conceptual framework for unifying and comparing the models developed on the two, differently oriented, sides of the labor market. ____ Tendencies can be observed at international and domestic levels that call for the restructuring of higher education according to the needs of the labor market. This paper presents an information system that can investigate the compliance of education programs with current labor market needs. Competences serve as the basis for this compliance checking, which is built on an ontology-based approach. Having examined the distribution over time and space of the roles (developer, operator, etc.) appearing in IT job offers, a prototype of this system is shown, related to the Business Informatics degree program at Corvinus University of Budapest.
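
A toy sketch of the compliance check at the heart of such a system: how well a program's competences cover those demanded in job advertisements. Plain sets stand in for the ontology's concepts, and the competence names are invented:

```python
# Hypothetical sketch: set overlap as a stand-in for ontology matching.

curriculum = {"database design", "java programming", "process modelling"}
job_ad     = {"java programming", "database design", "cloud operations"}

coverage = len(curriculum & job_ad) / len(job_ad)
print(f"coverage: {coverage:.0%}; missing: {job_ad - curriculum}")
# coverage: 67%; missing: {'cloud operations'}
```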

Relevance: 10.00%

Abstract:

The present article assesses agency-theory-related problems contributing to the failure of shopping centers. The negative effects of the financial and economic downturn that started in 2008 were accentuated in emerging markets like Romania, where several shopping centers were closed or sold through bankruptcy proceedings or forced execution. Ten such failed shopping centers were selected in order to assess the agency-theory problems contributing to their failure; qualitative multiple case studies are used as the research method. Results suggest that in all of the cases the risk-averse behavior of the external investor (principal) led to risk-sharing problems and subsequently to the failure of the shopping centers. In some cases, moral hazard (a lack of know-how and experience on the part of the developer-agent) as well as adverse selection problems could be identified. The novelty of the topic for the shopping center industry and the empirical evidence confer significant academic and practical value on the present article.

Relevance: 10.00%

Abstract:

Because some Web users will be able to design a template to visualize information from scratch, while other users need to visualize information automatically by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, and static or pre-specified visualization through an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge.

We provide a model to narrow the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports.

As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of the data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
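
As a minimal sketch of the SQL-generation idea (not the dissertation's actual interface language), a query builder can accept only schema-valid tables and columns and emit parameter placeholders, so that an inexperienced user cannot produce an invalid or unsafe query. The schema, table, and column names are invented:

```python
# Hypothetical sketch: schema-validated SQL generation with placeholders.

schema = {"orders": {"id", "customer", "total", "placed_on"}}

def build_query(table, columns, where=None):
    """Build a SELECT statement, rejecting anything not in the schema."""
    assert table in schema, f"unknown table: {table}"
    assert set(columns) <= schema[table], "column not in schema"
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    params = []
    if where:
        column, value = where
        assert column in schema[table], f"unknown column: {column}"
        sql += f" WHERE {column} = ?"    # placeholder keeps execution harmless
        params.append(value)
    return sql, params

print(build_query("orders", ["id", "total"], where=("customer", "Smith")))
# ('SELECT id, total FROM orders WHERE customer = ?', ['Smith'])
```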

Relevance: 10.00%

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or of price increases on service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or to the failure of a service. In a multi-cloud scenario it is possible to exchange a failed service, or one with QoS problems, for an equivalent service on another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) continually monitoring the dynamic information related to cloud services (such as response time, availability, and price), in addition to coping with the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements.

This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service is unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration would meet them more efficiently. The work proposes a strategy composed of two phases. The first phase consists of application modeling, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation.

The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement it using several techniques: aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development effort/modularity and performance.
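
A minimal sketch of one MAPE-K iteration for the selection step follows. The provider names, the single QoS metric (availability), the thresholds, and the stubbed monitor are all invented for illustration; the thesis's optimal selection over an extended feature model is far richer than this greedy re-selection:

```python
# Hypothetical sketch of a MAPE-K iteration re-selecting a cloud provider.

configurations = {                   # Knowledge: candidate providers per service
    "storage": [("CloudA", 0.999), ("CloudB", 0.995)],   # (provider, nominal QoS)
}
requirements = {"storage": 0.998}    # user-defined minimum availability
active = {"storage": "CloudB"}

def monitor():                       # stub: a real system polls live QoS feeds
    return {"CloudA": 0.999, "CloudB": 0.990}

def mape_k_iteration():
    measured = monitor()                                       # Monitor
    for service, minimum in requirements.items():              # Analyse
        if measured[active[service]] < minimum:
            candidates = [(p, q) for p, q in configurations[service]
                          if measured[p] >= minimum]            # Plan
            if candidates:
                active[service] = max(candidates, key=lambda c: c[1])[0]
                print(f"adapted {service} -> {active[service]}")  # Execute

mape_k_iteration()                   # adapted storage -> CloudA
```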

Relevance: 10.00%

Abstract:

The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the robot the command corresponding to the gesture. In total, five gestures made by the human hand are interpreted. The software (developed in C++) makes extensive use of computer vision concepts and of the open-source library OpenCV, which directly affect the overall efficiency of the control of mobile robots. The computer vision concepts employed include the use of filters to smooth/blur the image for noise reduction, colour spaces chosen to suit the task at hand, and other information useful for manipulating digital images. The OpenCV library was essential to the project because it provides functions/procedures for the full control of filters, image borders, image area, the geometric centre of borders, changes of colour space, the convex hull and convexity defects, plus all the means necessary for characterising imaged features.

During the development of the software several problems appeared, such as false positives (noise), poor performance caused by the insertion of various filters with oversized masks, and problems arising from the choice of colour space for processing human skin tones. However, over the development of seven versions of the control software, it was possible to minimise the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tested at run time), all supported by a programming logic that was refined across the seven versions. At the end of this development, the software met the established requirements.

On completion of the control software, the overall effectiveness of the various versions was measured; in particular, version V achieved 84.75%, version VI 93.00%, and version VII 94.67%, showing that the final program performed well in interpreting gestures. This proved that mobile robot control through human gestures is possible without the need for external accessories, giving better mobility and cost savings in maintaining such a system. The great merit of the program is its capacity to help demystify the man/machine relationship, since it uses an easy and intuitive interface for the control of mobile robots. Another important feature is that, to control the mobile robot, it is not necessary to be close to it: to control the equipment, the program needs only the address that the Robotino passes to it via the network or Wi-Fi.
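
A minimal sketch of the pipeline described above, using the Python bindings of OpenCV rather than the dissertation's C++ code: blur to reduce noise, convert the colour space for skin segmentation, find contours, then use the convex hull and convexity defects to characterise the hand. The skin-colour thresholds and kernel size are illustrative guesses, and the sketch requires opencv-python and an attached camera:

```python
# Hypothetical sketch of the gesture-detection pipeline (OpenCV, Python).
import cv2

capture = cv2.VideoCapture(0)        # camera attached to the computer
ok, frame = capture.read()
if ok:
    blurred = cv2.GaussianBlur(frame, (7, 7), 0)            # smooth noise
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)          # change colour space
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))    # rough skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)           # largest skin blob
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)          # gaps between fingers
        count = 0 if defects is None else len(defects)
        print(f"convexity defects found: {count}")
capture.release()
```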