34 results for end-to-end testing, javascript, application web, single-page application

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

This paper reports a case study of end-user control in the delivery of Web-based electronic services. The case study concentrates on the adoption of a Web-based electronic system implemented to process students' admission applications on a Web site. The end-user's control interface provides information on the detail existing in the Web-based electronic service. This insight into how end-users develop effective control in a Web service environment relates to ease of use in performing the task. Assessing the leverage of end-user control strictly on the basis of Web service usage would limit understanding. Rather, it is suggested that a better approach is to study the ease of use of the end-user interface in performing the task together with the user's perception of Web-based interactivity.

Relevance:

100.00%

Publisher:

Abstract:

Speciation despite ongoing gene flow can be studied directly in nature in ring species, which comprise two reproductively isolated populations connected by a chain or ring of intergrading populations. We applied three tiers of spatio-temporal analysis (phylogeny/historical biogeography, phylogeography and landscape/population genetics) to the data from mitochondrial and nuclear genomes of eastern Australian parrots of the Crimson Rosella Platycercus elegans complex to understand the history and present genetic structure of the ring they have long been considered to form. A ring speciation hypothesis does not explain the patterns we have observed in our data (e.g. multiple genetic discontinuities, discordance in genotypic and phenotypic assignments where terminal differentiates meet). However, we cannot reject that a continuous circular distribution has been involved in the group's history or indeed that one was formed through secondary contact at the 'ring's' east and west; however, we reject a simple ring-species hypothesis as traditionally applied, with secondary contact only at its east. We discuss alternative models involving historical allopatry of populations. We suggest that population expansion shown by population genetics parameters in one of these isolates was accompanied by geographical range expansion, secondary contact and hybridization on the eastern and western sides of the ring. Pleistocene landscape, sea-level and habitat changes then established the birds' current distributions and range disjunctions. Populations now show idiosyncratic patterns of selection and drift. We suggest that selection and drift now drive evolution in different populations within what has been considered the ring.

Relevance:

100.00%

Publisher:

Abstract:

1. Quantitative tools to describe biological communities are important for conservation and ecological management. The analysis of trophic structure can be used to quantitatively describe communities. Stable isotope analysis is useful to describe trophic organization, but statistical models that allow the identification of general patterns and comparisons between systems/sampling periods have only recently been developed.
2. Here, stable isotope-based Bayesian community-wide metrics are used to investigate patterns in trophic structure in five estuaries that differ in size, sediment yield and catchment vegetation cover (C3/C4): the Zambezi in Mozambique, the Tana in Kenya and the Rianila, the Betsiboka and Pangalanes Canal (sampled at Ambila) in Madagascar.
3. Primary producers, invertebrates and fish of different trophic ecologies were sampled at each estuary before and after the 2010–2011 wet season. Trophic length, estimated based on δ15N, varied between 3.6 (Ambila) and 4.7 levels (Zambezi) and did not vary seasonally for any estuary. Trophic structure differed the most at Ambila, where trophic diversity and trophic redundancy were lower than at the other estuaries. Among the four open estuaries, the Betsiboka and Tana (C4-dominated) had lower trophic diversity than the Zambezi and Rianila (C3-dominated), probably due to the high loads of suspended sediment, which limited the availability of aquatic sources.
4. There was seasonality in trophic structure at Ambila and Betsiboka, as trophic diversity increased and trophic redundancy decreased from the prewet to the postwet season. For Ambila, this probably resulted from the higher variability and availability of sources after the wet season, which allowed diets to diversify. For the Betsiboka, where aquatic productivity is low, this was likely due to a greater input of terrestrial material during the wet season.
5. The comparative analysis of community-wide metrics was useful to detect patterns in trophic structure and identify differences/similarities in trophic organization related to environmental conditions. However, more widespread application of these approaches across different faunal communities in contrasting ecosystems is required to allow identification of robust large-scale patterns in trophic structure. The approach used here may also find application in comparing food web organization before and after impacts or monitoring ecological recovery after rehabilitation.
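
As a purely illustrative aside, trophic position estimates of the kind summarised above are commonly derived from δ15N with a baseline correction. The Python sketch below uses the standard formula TP = λ + (δ15N_consumer − δ15N_baseline) / Δ15N with an assumed trophic enrichment factor of 3.4 per mil; the function, example values and enrichment factor are assumptions for illustration and are not the Bayesian community-wide metrics fitted in the study.

# Illustrative sketch only: baseline-corrected trophic position from delta-15N.
# The enrichment factor (3.4 per mil) and example values are assumptions; the
# study itself used Bayesian community-wide metrics, not this simple formula.

def trophic_position(d15n_consumer, d15n_baseline, baseline_level=2.0, enrichment=3.4):
    """Estimate trophic position relative to a baseline organism (e.g. a primary consumer)."""
    return baseline_level + (d15n_consumer - d15n_baseline) / enrichment

if __name__ == "__main__":
    # Hypothetical example: a fish at 14.2 per mil against a baseline invertebrate
    # at 6.0 per mil sits roughly 2.4 trophic levels above the baseline.
    print(round(trophic_position(14.2, 6.0), 2))  # ~4.41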

Relevance:

100.00%

Publisher:

Abstract:

The advent of the World Wide Web (WWW) and the emergence of Internet commerce have given rise to the web as a medium of information exchange. In recent years, the phenomenon has affected the realm of transaction processing systems, as organizations are moving from designing web pages for marketing purposes to web-based applications that support business-to-business (B2B) and business-to-consumer (B2C) interactions, integrated with databases and other back-end systems (Isakowitz, Bieber et al., 1998). Furthermore, web-enabled applications are increasingly being used to facilitate transactions even between various business units within a single enterprise. Examples of some of the more popular web-enabled applications in use today include airline reservation systems, internet banking, student enrollment systems in universities, and Human Resource (HR) and payroll systems. The prime motives behind the adoption of web-enabled applications are productivity gains due to reduced processing time, a decrease in the use of paper-based documentation and conventional modes of communication (such as letters, fax, or telephone), and improved quality of services to clients. Indeed, web-based solutions are commonly referred to as customer-centric (Li, 2000), which means that they provide user interfaces that do not necessitate a high level of computer proficiency. Thus, organizations implement such systems to streamline routine transactions and gain strategic benefits in the process (Nambisan & Wang, 1999), though the latter are to be expected in the long term. Notwithstanding the benefits of web technology adoption, the web has an ample share of challenges for initiators and developers. Many of these challenges are associated with the unique nature of web-enabled applications. Research in the area of web-enabled information systems has revealed several differences from traditional applications. These differences exist with regard to system development methodology, stakeholder involvement, tasks, and technology (Nazareth, 1998). According to Fraternali (1999), web applications are commonly developed using an evolutionary prototyping approach, whereby a simplified version of the application is deployed as a pilot first, in order to gather user feedback. Thus, web-enabled applications typically undergo continuous refinement and evolution (Ginige, 1998; Nazareth, 1998; Siau, 1998; Standing, 2001). Prototype-based development also gives web-enabled information systems much shorter development life cycles, but, unlike traditional applications, they are regrettably developed in a rather ad hoc fashion (Carstensen & Vogelsang, 2001). However, the principal difference between the two kinds of applications lies in the broad and diverse group of stakeholders associated with web-based information systems (Gordijn, Akkermans, et al., 2000; Russo, 2000; Earl & Khan, 2001; Carter, 2002; Hasselbring, 2002; Standing, 2002; Stevens & Timbrell, 2002). Stakeholders, or organizational members participating in a common business process (Freeman, 1984), vary in their computer competency, business knowledge, language and culture. This diversity is capable of causing conflict between different stakeholder groups with regard to the establishment of system requirements (Pouloudi & Whitley, 1997; Stevens & Timbrell, 2002).
Since web-based systems transcend organizational, departmental, and even national boundaries, the issue of culture poses a significant challenge to the web systems' initiators and developers (Miles & Snow, 1992; Kumar & van Dissel, 1996; Pouloudi & Whitley, 1996; Li & Williams, 1999).

Relevance:

100.00%

Publisher:

Abstract:

End-user experience with information is presumably one of the prominent factors shaping the adoption of web-based electronic services. When users interface with large amounts of information, the rationale is to deduce its effect in the current web-based task environment. Understanding users' perceptions on the basis of their prior experience with information may provide insights into what drives those perceptions and their effect on current and future tasks in web-based electronic services. The paper lays out the theoretical context of end-user experience with information and proceeds to distinguish its role in web-based electronic services.

Relevance:

100.00%

Publisher:

Abstract:

The crucial role of networking in Cloud computing calls for federated management of both computing and networking resources for end-to-end service provisioning. Application of the Service-Oriented Architecture (SOA) in both Cloud computing and networking enables a convergence of network and Cloud service provisioning. One of the key challenges to high-performance converged network-Cloud service provisioning lies in the composition of network and Cloud services with an end-to-end performance guarantee. In this paper, we propose a QoS-aware service composition approach to tackling this challenging issue. We first present a system model for network-Cloud service composition and formulate the service composition problem as a variant of the Multi-Constrained Optimal Path (MCOP) problem. We then propose an approximation algorithm to solve the problem and give theoretical analysis of the properties of the algorithm to show its effectiveness and efficiency for QoS-aware network-Cloud service composition. Performance of the proposed algorithm is evaluated through extensive experiments, and the obtained results indicate that the proposed method achieves better performance in service composition than the best current MCOP approaches.
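
For readers unfamiliar with MCOP, the following minimal Python sketch illustrates one classic heuristic for the problem: aggregate the additive QoS metrics into a single weight, run Dijkstra, then check each individual constraint on the resulting path. It is offered only as background; the graph, metrics, weights and constraints are hypothetical, and this is not the approximation algorithm proposed in the paper.

# Illustrative sketch of one classic MCOP heuristic (linear cost aggregation +
# Dijkstra), NOT the approximation algorithm proposed in the paper. The service
# graph, weights and constraints below are hypothetical.
import heapq

def dijkstra(graph, src, dst, cost):
    """Shortest path under a scalar cost; graph maps node -> {neighbour: metrics dict}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, m in graph[u].items():
            nd = d + cost(m)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def mcop_heuristic(graph, src, dst, constraints, weights):
    """Aggregate additive QoS metrics (e.g. delay, cost) into one weight, then
    verify the resulting path against each individual constraint."""
    agg = lambda m: sum(weights[k] * m[k] for k in weights)
    path = dijkstra(graph, src, dst, agg)
    if path is None:
        return None
    totals = {k: sum(graph[u][v][k] for u, v in zip(path, path[1:])) for k in constraints}
    return path if all(totals[k] <= constraints[k] for k in constraints) else None

# Hypothetical composition graph: edges carry per-hop delay (ms) and monetary cost.
graph = {
    "user": {"net1": {"delay": 10, "cost": 2}, "net2": {"delay": 25, "cost": 1}},
    "net1": {"cloud": {"delay": 15, "cost": 3}},
    "net2": {"cloud": {"delay": 5, "cost": 4}},
    "cloud": {},
}
print(mcop_heuristic(graph, "user", "cloud", {"delay": 40, "cost": 6}, {"delay": 1.0, "cost": 5.0}))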

Relevance:

100.00%

Publisher:

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering have been important steps forward in efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems that need to be overcome in the information gathering research arena to enable the delivery of relevant information required by end users. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, however, sufficient, timely, and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of information available, the diverse formats of information, the uncertainties in information, and the distributed locations of information (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed), whereas overload means that some gathered information is not what users need. Traditional information retrieval has been well developed over the past twenty years, but the introduction of the Web has changed people's perceptions of information retrieval. Usually, the task of information retrieval is considered to be that of leading the user to those documents that are relevant to his/her information needs. A similar function in information retrieval is to filter out irrelevant documents (also called information filtering). Research into traditional information retrieval has provided many retrieval models and techniques to represent documents and queries. Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These factors motivate research in agent-based information gathering, and agent-based information systems have arisen in response. In these kinds of systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information. They can more easily retrieve relevant information from highly distributed, uncertain environments because of their intelligence, autonomy and distribution. Current research on agent-based information gathering systems is divided into single-agent gathering systems and multi-agent gathering systems. In both research areas, there are still open problems to be solved so that agent-based information gathering systems can retrieve uncertain information more effectively from highly distributed environments. The aim of this thesis is to research a theoretical framework for intelligent agents to gather information from the Web. This research integrates the areas of information retrieval and intelligent agents. The specific research areas in this thesis are the development of an information filtering model for single-agent systems, and the development of a dynamic belief model for information fusion for multi-agent systems.
The research results are also supported by the construction of real information gathering agents (e.g., Job Agent) for the Internet to help users gather useful information stored in Web sites. In such a framework, information gathering agents have the ability to describe (or learn) the user's information needs, and act like users to retrieve, filter, and/or fuse information. A rough set based information filtering model is developed to address the problem of overload. The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region, and a negative region. Two experiments are presented to verify this model, and they show that the rough set based model provides an efficient approach to the overload problem. In this research, a dynamic belief model for information fusion in multi-agent environments is also developed. This model has polynomial time complexity, and it has been proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case for this research is where collections may be used by more than one agent. The algorithm uses cooperation between agents to provide a solution to this difficult problem in distributed information retrieval systems. This thesis presents solutions to the theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
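
As a rough illustration of the three-region decision idea described above, the following Python sketch classifies documents into positive, boundary and negative regions from a relevance score. The scoring values and thresholds are hypothetical; the thesis's actual rough set model is defined over user concept spaces and the document space and is not reproduced here.

# Minimal sketch of three-way (rough set style) document classification.
# Scores and thresholds are illustrative assumptions, not the thesis's model,
# which views a user information need as a rough set over the document space.

def classify(score, lower=0.3, upper=0.7):
    """Map a relevance score to one of three regions."""
    if score >= upper:
        return "positive"      # accept: likely relevant
    if score <= lower:
        return "negative"      # reject: likely irrelevant
    return "boundary"          # defer: needs more evidence or user feedback

docs = {"d1": 0.85, "d2": 0.5, "d3": 0.1}   # hypothetical document scores
print({d: classify(s) for d, s in docs.items()})
# {'d1': 'positive', 'd2': 'boundary', 'd3': 'negative'}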

Relevance:

100.00%

Publisher:

Abstract:

Recent empirical studies in the area of mobile application testing indicate the need for specific testing techniques and methods for mobile applications. This is because mobile applications are significantly different from traditional web and desktop applications, particularly in terms of the physical constraints of mobile devices and the very different features of their operating systems. In this paper, we present a multiple case study involving four software development companies in the area of mobile and smartphone applications. We aimed to identify the testing techniques currently being applied by developers and the challenges that they are facing. Our principal results are that many industrial teams seem to lack sufficient knowledge of how to test mobile applications, particularly in the areas of mobile application life-cycle conformance, context awareness, and integration testing. We also found that there is no formal testing approach or methodology that can help a development team systematically test a critical mobile application.

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: Data on the use of targeted therapies at the end of life are scarce. This study reviews the pattern of use of targeted and potentially futile, toxic, or costly therapies at an Australian cancer centre. METHODS: This retrospective single-centre review of data from patients who died within 3 months of having targeted therapy examined demographic characteristics, types of cancers, types of therapy, age, and lines of prior therapy. RESULTS: Over 24 months, two groups were analysed. Firstly, of 889 patients who died, 107 had been prescribed targeted therapy. Secondly, 457 patients were treated with targeted therapies, with 52 patients (11 %) dying within 3 months. Focusing on the 52 patients: median age was 69 years, 65 % were men and 35 % were women, 50 % had haematologic cancers and 50 % had solid tumours. Ten therapeutic agents were represented, with a higher total number of deaths among those prescribed erlotinib, bevacizumab, and rituximab. There were no deaths within 3 months of treatment with trastuzumab, ipilimumab, or vemurafenib. The targeted therapy was the first-line treatment in 54 %, second in 15 %, and third and beyond in 15 %. The patient's sex and type of cancer had no statistically significant influence on death within 3 months of targeted treatment. CONCLUSIONS: The use of targeted therapy at the end of life in this single-centre descriptive study was lower than documented in other studies. There is a need to prospectively document the factors leading to this prescribing behaviour to guide future protocols.

Relevance:

100.00%

Publisher:

Abstract:

Designing a successful web project requires not only an understanding of its owner's business and technological needs and substantial management and development experience, but also a thorough knowledge of the system's application domain and of other existing systems in that domain. In order to gather such domain knowledge, it is necessary to identify the nature of the proposed web services venture with regard to other similar services offered in the domain, the business setting of enterprises that initiate such ventures, the various types of customers involved, and how these factors translate into requirements. In this paper, we present an approach to studying the domain of web-enabled Human Resource and payroll services with the aim of attaining design knowledge that would ensure customer satisfaction and could eventually pave the way to the successful implementation of web-enabled services.

Relevance:

100.00%

Publisher:

Abstract:

Systematic usability testing of the library website was unheard of at Deakin University Library three years ago. However, over the last two years, a large scale usability testing program has evolved and various methodologies have been trialled and tested by the team. This paper will discuss the methodologies used by the team, and the changes that were made to the Library’s search interfaces as a result of the studies. The paper will provide useful insights on what we did right, and on what we need to do differently in future usability studies.

Relevance:

100.00%

Publisher:

Abstract:

To reduce weight and improve passenger safety there is an increased need in the automotive industry to use Ultra High Strength Steel (UHSS) for structural and crash components. However, the application of UHSS is restricted by its limited formability and the difficulty of forming it in conventional stamping. An alternative method of manufacturing structural auto body parts from UHSS is the flexible roll forming process, which allows metal sheet with high strength and limited ductility to be formed into complex and weight-optimized components. One major problem in the flexible roll forming of UHSS is the web-warping defect, which is the deviation in height of the web area over the length of the profile. It has been shown that web-warping is strongly dependent on the permanent longitudinal strain formed in the flange of the part. Flexible roll forming is a continuous process with many roll stands, which makes numerical analysis extremely time-intensive and computationally expensive. An analytical model of web-warping is therefore critical to improve design efficiency during the early process design stage before FEA is applied. This paper establishes for the first time an analytical model for the prediction of web-warping in the flexible roll forming of a section with variable width. The model is based on evaluating the longitudinal edge strain in the flange of the part. This information is then used in combination with a simple geometrical model to investigate the relationship between web-warping and longitudinal strain with respect to process parameters.
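
As a purely illustrative aid, and not the analytical model derived in the paper, the Python sketch below shows the kind of geometric reasoning involved: in a variable-width transition zone the flange edge follows a longer path than the web, and the longitudinal engineering strain at the edge can be approximated by comparing the two path lengths. The profile shape and dimensions are hypothetical.

# Illustrative geometry only: estimate longitudinal edge strain in the flange of
# a variable-width flexible roll formed profile by comparing the curved edge
# path length with the straight web length. This is NOT the analytical
# web-warping model of the paper; the profile and dimensions are hypothetical.
import math

def edge_strain(width_profile, length, n=1000):
    """Engineering strain of the flange edge over a transition of given length.

    width_profile(x) returns the half-width of the section at position x along
    the profile (0 <= x <= length).
    """
    dx = length / n
    edge_len = 0.0
    for i in range(n):
        w0 = width_profile(i * dx)
        w1 = width_profile((i + 1) * dx)
        edge_len += math.hypot(dx, w1 - w0)   # arc length of the edge line
    return (edge_len - length) / length       # strain relative to the straight web

# Hypothetical transition: half-width widens linearly from 40 mm to 70 mm over 200 mm.
profile = lambda x: 40.0 + (70.0 - 40.0) * x / 200.0
print(f"longitudinal edge strain ~ {edge_strain(profile, 200.0):.4f}")  # ~0.0112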

Relevance:

100.00%

Publisher:

Abstract:

Present operating systems are not built to support parallel computing: they do not provide services to manage parallelism, i.e., to globally manage parallel processes and computational resources. The cluster operating environments that are used to assist the execution of parallel applications do not provide support for both programming paradigms, message passing (MP) and distributed shared memory (DSM); they are mainly offered as separate components implemented at the user level as libraries and independent server processes. Because of these poor operating systems, users must deal with clusters as a set of independent computers rather than seeing the cluster as a single powerful computer. A single system image (SSI) of the cluster is not offered to users. There is a need for an operating system for clusters. We claim and demonstrate in this paper that it is possible to develop a cluster operating system that is able to efficiently manage parallelism; use cluster resources efficiently; support MP in the form of standard MP and PVM, and DSM; offer SSI; and be easy to use. We show that to achieve these aims this operating system should inherit many features of a distributed operating system and provide new services which address the needs of parallel processes, cluster resources, and application developers. To substantiate this claim, the first version of a cluster operating system managing parallelism and offering SSI, called GENESIS, has been developed.
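
To illustrate the two programming paradigms the abstract contrasts, message passing (MP) and distributed shared memory (DSM), here is a minimal single-machine Python sketch using the standard multiprocessing module. It is only an analogy for the cluster-level services discussed; it is not code from GENESIS.

# Toy illustration of the two paradigms contrasted in the abstract, using
# Python's multiprocessing on one machine as a stand-in for cluster-level
# services. This is an analogy only, not GENESIS code.
from multiprocessing import Process, Queue, Value

def mp_worker(q):
    q.put("result via message passing")      # explicit send to another process

def dsm_worker(shared):
    with shared.get_lock():
        shared.value += 42                    # implicit communication via shared state

if __name__ == "__main__":
    # Message passing: processes exchange data through explicit messages.
    q = Queue()
    p1 = Process(target=mp_worker, args=(q,))
    p1.start(); print(q.get()); p1.join()

    # Shared memory style: processes read/write a common variable.
    shared = Value("i", 0)
    p2 = Process(target=dsm_worker, args=(shared,))
    p2.start(); p2.join(); print("shared value:", shared.value)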