885 results for Data centres


Relevance:

60.00%

Publisher:

Abstract:

The firm is faced with a decision concerning the nature of intra-organizational exchange relationships with internal human resources and the nature of inter-organizational exchange relationships with market firms. In both situations, the firm can develop an exchange that ranges from a discrete exchange to a relational exchange. Transaction Cost Economics (TCE) and the Resource Dependency View (RDV) represent alternative efficiency-based explanations for the nature of the exchange relationship. The aim of the paper is to test these two theories in respect of air conditioning maintenance in retail centres. Multiple sources of information are generated from case studies of Australian retail centres to test these theories in respect of internalized operations management (concerning strategic aspects of air conditioning maintenance) and externalized planned routine air conditioning maintenance. The analysis of the data centres on pattern matching. It is concluded that the data supports TCE - on the basis of a development in TCE's contractual schema. Further research is suggested towards taking a pluralistic stance and developing a combined efficiency and power hypothesis - upon which Williamson has speculated. For practice, the conclusions also offer a timely cautionary note concerning the adoption of one approach in all exchange relationships.

Relevance:

60.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which applications, data and IT services are provided over the Internet. Cloud computing has become a main medium for Software as a Service (SaaS) providers to host their SaaS, as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process arise from several factors, including the large size of the Cloud network, the SaaS's competing resource requirements, the interactions between its components and the interactions with its data components. However, existing application placement methods for data centres are not concerned with the placement of a component's data. In addition, a Cloud network is much larger than the data centre networks discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt to address the placement of a SaaS together with its data on a Cloud provider's servers. Experimental results demonstrate the feasibility and the scalability of the GA.
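To make the penalty-based idea concrete, the following is a minimal sketch, assuming illustrative component demands, server capacities, communication volumes and a hand-picked penalty weight (none of which come from the paper), of a fitness function that scores a candidate placement by its inter-server communication cost plus a penalty for overloaded servers. A full GA would evolve such placements with selection, crossover and mutation; the random search at the end merely stands in for that loop.

```python
# Sketch of a penalty-based fitness function for composite SaaS placement.
# All data structures, weights and names are illustrative assumptions, not the
# paper's implementation.
import random

N_COMPONENTS = 6
N_SERVERS = 3
CAPACITY = [10, 8, 12]                      # assumed CPU capacity per server
DEMAND = [3, 4, 2, 5, 3, 4]                 # assumed CPU demand per SaaS component
# assumed pairwise communication volume between components (symmetric)
COMM = [[0, 2, 0, 1, 0, 0],
        [2, 0, 3, 0, 0, 0],
        [0, 3, 0, 0, 2, 0],
        [1, 0, 0, 0, 1, 2],
        [0, 0, 2, 1, 0, 3],
        [0, 0, 0, 2, 3, 0]]
PENALTY_WEIGHT = 100                        # assumed penalty coefficient

def fitness(placement):
    """Lower is better: inter-server traffic plus a penalty for overloaded servers."""
    # communication cost: traffic between components placed on different servers
    comm_cost = sum(COMM[i][j]
                    for i in range(N_COMPONENTS)
                    for j in range(i + 1, N_COMPONENTS)
                    if placement[i] != placement[j])
    # capacity violation: total demand exceeding each server's capacity
    load = [0] * N_SERVERS
    for comp, server in enumerate(placement):
        load[server] += DEMAND[comp]
    violation = sum(max(0, load[s] - CAPACITY[s]) for s in range(N_SERVERS))
    return comm_cost + PENALTY_WEIGHT * violation

# tiny random-search stand-in for the GA's evolutionary loop
best = min((tuple(random.randrange(N_SERVERS) for _ in range(N_COMPONENTS))
            for _ in range(1000)), key=fitness)
print(best, fitness(best))
```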

Relevance:

60.00%

Publisher:

Abstract:

Recently, Software as a Service (SaaS) in Cloud computing has become increasingly significant among software users and providers. To offer a SaaS with flexible functions at a low cost, SaaS providers have focused on the decomposition of SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources that were initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components in their implementation. In a composite SaaS, it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for clustering the components of multiple composite SaaS applications in Cloud computing to address this gap. To the best of our knowledge, this is the first attempt to handle the reconfiguration placement of multiple composite SaaS in a dynamic Cloud environment. The experimental results demonstrate the feasibility and the scalability of the GGA.
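As a rough illustration of the grouping-style encoding a GGA operates on, the sketch below (with hypothetical component names, demands, capacities and weights, not taken from the paper) represents a chromosome as a list of groups, one per VM, and scores it by the number of VMs used plus the cross-VM traffic between dependent components; infeasible groups are simply rejected here, where a real GGA would apply a repair or penalty step.

```python
# Sketch of a grouping-style encoding for clustering SaaS components onto VMs:
# a chromosome is a list of groups (one group per VM) rather than a flat
# component-to-VM vector. Names, demands and capacities are assumptions.
from typing import List

VM_CAPACITY = 8                              # assumed uniform VM capacity
DEMAND = {"ui": 3, "auth": 2, "orders": 4, "billing": 3, "reports": 5}
# assumed communication volume between dependent components
DEPENDENCY = {("ui", "auth"): 2, ("ui", "orders"): 3, ("orders", "billing"): 4}

Group = List[str]                            # the components hosted on one VM

def evaluate(groups: List[Group]) -> float:
    """Lower is better: VMs used plus cross-VM traffic; infeasible groups rejected."""
    for g in groups:
        if sum(DEMAND[c] for c in g) > VM_CAPACITY:
            return float("inf")              # a repair or penalty step would handle this
    host = {c: i for i, g in enumerate(groups) for c in g}
    cross_traffic = sum(v for (a, b), v in DEPENDENCY.items() if host[a] != host[b])
    return len(groups) + 0.1 * cross_traffic # assumed weighting of the two objectives

print(evaluate([["ui", "auth"], ["orders", "billing"], ["reports"]]))
```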

Relevance:

60.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms also predict a positive future market for it. This raises new challenges for SaaS providers in managing SaaS, especially in large-scale data centres such as the Cloud. One of the challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
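As one hedged illustration of the cooperative co-evolutionary idea mentioned for the placement problem, the sketch below splits the decision into two subproblems (application components and their data components), keeps a best-known representative for each, and accepts a candidate for one subproblem only if it improves the joint cost against the partner's representative. The cost function, server count and the random-candidate inner loop are illustrative assumptions, not the thesis's algorithms, which use proper genetic operators.

```python
# Sketch of cooperative co-evolution for placement: two subproblems evolved
# separately but evaluated jointly. Cost model and parameters are assumptions.
import random

SERVERS = range(4)
N_COMPONENTS = 3                     # one data component per application component

def joint_cost(app_place, data_place):
    """Assumed objective: spread applications while co-locating each with its data."""
    crowding = N_COMPONENTS - len(set(app_place))
    separation = sum(a != d for a, d in zip(app_place, data_place))
    return crowding + separation

def random_individual():
    return [random.choice(SERVERS) for _ in range(N_COMPONENTS)]

app_best, data_best = random_individual(), random_individual()
for _ in range(200):
    # evolve each subproblem in turn against the partner's best-known individual
    cand = random_individual()
    if joint_cost(cand, data_best) < joint_cost(app_best, data_best):
        app_best = cand
    cand = random_individual()
    if joint_cost(app_best, cand) < joint_cost(app_best, data_best):
        data_best = cand

print(app_best, data_best, joint_cost(app_best, data_best))
```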

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a cautious argument for re-thinking both the nature and the centrality of the one-to-one teacher/student relationship in contemporary pedagogy. A case is made that learning in and for our times requires us to broaden our understanding of pedagogical relations beyond the singularity of the teacher/student binary and to promote the connected teacher as better placed to lead learning for these times. The argument proceeds in three parts: first, a characterization of our times as defined increasingly by the digital knowledge explosion of Big Data; second, a re-thinking of the nature of pedagogical relationships in the context of Big Data; and third, an account of the ways in which leaders can support their teachers to become more effective in leading learning by being more closely connected to their professional colleagues.

Relevance:

30.00%

Publisher:

Abstract:

Most online assessment systems now incorporate social networking features, and recent developments in social media spaces include protocols that allow the synchronisation and aggregation of data across multiple user profiles. In light of these advances, and the concomitant fear of data sharing in secondary school education, this paper provides important research findings about generic features of online social networking which educators can use to make sound and efficient assessments in collaboration with their students and colleagues. This paper reports on a design experiment in flexible educational settings that challenges the dichotomous legacy of success and failure evident in many assessment activities for at-risk youth. Combining social networking practices with the sociology of education, the paper proposes that assessment activities are best understood as a negotiable field of exchange. In this design experiment, students, peers and educators engage in explicit, "front-end" assessment (Wyatt-Smith, 2008) to translate digital artefacts into institutional, and potentially economic, capital without continually referring to paper-based pre-set criteria. This approach invites students and educators to use social networking functions to assess "work in progress" and final submissions in collaboration; in doing so, assessors refine their evaluative expertise and negotiate the value of students' work, from which new criteria can emerge. The mobile advantages of web-based technologies aggregate, externalise and democratise this transparent assessment model for most, if not all, student work that can be digitally represented.

Relevance:

30.00%

Publisher:

Abstract:

Patients with chest discomfort or other symptoms suggestive of acute coronary syndrome (ACS) are one of the most common categories seen in many Emergency Departments (EDs). While the recognition of patients at high risk of ACS has improved steadily, identifying the majority of chest pain presentations who fall into the low-risk group remains a challenge. Research in this area needs to be transparent, robust and applicable to all hospitals, from large tertiary centres to rural and remote sites, and to allow direct comparison between different studies with minimal patient spectrum bias. A standardised approach to the research framework, using a common language for data definitions, must be adopted to achieve this. The aim was to create a common framework for a standardised data definitions set that would allow maximum value when extrapolating research findings both within Australasian ED practice and across similar populations worldwide. Therefore, a comprehensive data definitions set for the investigation of non-traumatic chest pain patients with possible ACS was developed, specifically for use in the ED setting. This standardised data definitions set will facilitate 'knowledge translation' by allowing extrapolation of useful findings into the real-life practice of emergency medicine.

Relevance:

30.00%

Publisher:

Abstract:

Emergency departments (EDs) are often the first point of contact with an abused child. Despite a legal mandate, the reporting of definite or suspected abusive injury to child safety authorities by ED clinicians varies due to a number of factors, including training, access to child safety professionals, departmental culture and a fear of 'getting it wrong'. This study examined the quality of documentation and coding of child abuse captured by ED-based injury surveillance data and ED medical records in the state of Queensland, and the concordance of these data with child welfare records. A retrospective medical record review was used to examine the clinical documentation of almost 1000 injured children included in the Queensland Injury Surveillance Unit database (QISU) from 10 hospitals in urban and rural centres. Independent experts re-coded the records based on their review of the notes. A data linkage methodology was then used to link these records with records in the state government's child welfare database. Cases were sampled from three sub-groups according to the surveillance intent codes: maltreatment by parent, undetermined and unintentional injury. Only 0.1% of cases coded as unintentional injury were recoded to maltreatment by parent, while 1.2% of cases coded as maltreatment by parent were reclassified as unintentional, and 5% of cases where the intent was undetermined by the triage nurse were recoded as maltreatment by parent. The quality of documentation varied across hospital type (tertiary referral centre, children's, urban, regional and remote). Concordance of health data with child welfare data varied across patient subgroups. Outcomes from this research will guide initiatives to improve the quality of intentional child injury surveillance systems.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. Key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
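For a feel of variational-Bayes GMM fitting, here is a minimal sketch using scikit-learn's BayesianGaussianMixture as a stand-in: it prunes redundant components through a Dirichlet-process prior rather than the component-splitting extension developed in the thesis, and the synthetic "spiky" location data are an assumption for illustration only.

```python
# Sketch of VB fitting of a GMM to spiky, spatially concentrated data using
# scikit-learn's variational implementation (not the thesis's extended algorithm).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# simulate locations concentrated around a few frequently visited cell towers
home = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(400, 2))
work = rng.normal(loc=[2.0, 1.0], scale=0.05, size=(300, 2))
roaming = rng.uniform(low=-1.0, high=3.0, size=(50, 2))
X = np.vstack([home, work, roaming])

# over-specify the number of components and let the variational prior switch
# unneeded ones off (their mixture weights shrink towards zero)
vb_gmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

active = vb_gmm.weights_ > 0.01
print("effective number of components:", active.sum())
print("component weights:", np.round(vb_gmm.weights_[active], 3))
```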

Relevance:

30.00%

Publisher:

Abstract:

The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated to unknown geographic regions that could compromise their security. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme involves unnecessary delay when utilising typical POS schemes due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computation overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
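The sketch below illustrates only the distance-bounding intuition that GeoProof builds on: the verifier times a challenge-response round trip and converts it into an upper bound on the responding server's distance. The HMAC response is a stand-in for a proof-of-storage response, and all keys, thresholds and function names are assumptions rather than the Albeshri et al. construction.

```python
# Sketch of the distance-bounding idea: bound the server's distance from the
# round-trip time of a challenge-response. Not the GeoProof protocol itself.
import hmac, hashlib, os, time

SPEED_OF_LIGHT_KM_PER_S = 299_792.458
SHARED_KEY = os.urandom(32)                  # assumed pre-shared key with the prover

def prover_respond(challenge: bytes) -> bytes:
    """Server side: a fast response keeps the timing bound tight."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def verify_location(max_distance_km: float) -> bool:
    challenge = os.urandom(16)
    start = time.perf_counter()
    response = prover_respond(challenge)     # in practice this crosses the network
    rtt = time.perf_counter() - start
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(response, expected):
        return False                         # wrong response: not the genuine server
    # one-way distance bound: light travels at most c * rtt / 2 towards the server
    distance_bound_km = SPEED_OF_LIGHT_KM_PER_S * rtt / 2
    return distance_bound_km <= max_distance_km

print(verify_location(max_distance_km=100))
```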

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, highly motorised countries such as Australia have witnessed significant improvements in population health through reductions in fatalities and injuries from road traffic crashes. In Australia, concerted efforts have been made to reduce the road trauma burden since road fatalities reached their highest level in the early 1970s. Since that time, many improvements have been made drawing on various disciplines to reduce the trauma burden (e.g., road and vehicle design, road user education, traffic law enforcement practices and enforcement technologies). While road fatalities have declined significantly since the mid-1970s, road trauma remains a serious public health concern in Australia. China has recently become the largest car market in the world (Ma, Li, Zhou, Duan, & Bishai, 2012). This rapid motorisation has been accompanied by substantial expansion of the road network as well as a large road trauma burden. Road traffic injuries are a major cause of death in China, reported as accounting for one third of all injury deaths between 2002 and 2006 (Ma et al., 2012). In common with Australia, China has experienced a reported decline in fatalities since 2002 (see Hu, Wen & Baker, 2008). However, there remains a strong need for action in this area as rates of motorisation continue to climb in China. In Australia, a wide range of organisations have contributed to the improvements in road safety, including government agencies, professional organisations, advocacy groups and research centres. In particular, Australia has several highly regarded, multi-disciplinary, university-based research centres that work across a range of road safety fields, including engineering, intelligent transportation systems, the psychology of road user behaviour, and traffic law enforcement. Besides conducting high-quality research, these centres fulfil an important advocacy role in promoting safer road use and facilitating collaborations with government and other agencies at both the national and international level. To illustrate the role of these centres, an overview will be provided of the Centre for Accident Research and Road Safety-Queensland (CARRS-Q), which was established in 1996 and has gone on to become a recognised world leader in road safety and injury prevention research. The Centre's research findings are used to provide evidence-based recommendations to government and have directly contributed to promoting safer road use in Australia. Since 2006, CARRS-Q has also developed strong collaborative links with various universities and organisations in China to assist in building understanding, connections and capacity to reduce the road trauma burden.

References

Hu, G., Wen, M., Baker, T. D., & Baker, S. P. (2008). Road-traffic deaths in China, 1985–2005: threat and opportunity. Injury Prevention, 14, 149-153.

Ma, S., Li, Q., Zhou, M., Duan, L., & Bishai, D. (2012). Road Traffic Injury in China: A Review of National Data Sources. Traffic Injury Prevention, 13(S1), 57-63.

Relevance:

30.00%

Publisher:

Abstract:

QUT Library Research Support has simplified and streamlined the process of research data management planning, storage, discovery and reuse through collaboration, the use of integrated and tailored online tools, and a simplification of the metadata schema. This poster presents the integrated data management services at QUT, including QUT's Data Management Planning Tool, Research Data Finder, Spatial Data Finder and Software Finder, and information on the simplified Registry Interchange Format – Collections and Services (RIF-CS) schema. The QUT Data Management Planning (DMP) Tool was built using the Digital Curation Centre's DMP Online Tool and modified to QUT's needs and policies. The tool allows researchers and Higher Degree Research students to plan how to handle research data throughout the active phase of their research. The plan is promoted as a 'live' document and researchers are encouraged to update it as required. The information entered into the plan can be kept private or shared with supervisors, project members and external examiners. A plan is mandatory when requesting storage space on the QUT Research Data Storage Service. QUT's Research Data Finder is integrated with QUT's Academic Profiles and the Data Management Planning Tool to create a seamless data management process. This process aims to encourage the creation of high-quality, rich records which facilitate the discovery and reuse of quality data. The Registry Interchange Format – Collections and Services (RIF-CS) schema used in the QUT Research Data Finder was simplified to "RIF-CS lite" to reflect mandatory and optional metadata requirements. RIF-CS lite removed schema fields that were underused or beyond the needs of users and the system. This has reduced the number of metadata fields required from users and made the integration of systems far simpler: field content is easily shared across services, making the process of collecting metadata as transparent as possible.

Relevance:

30.00%

Publisher:

Abstract:

Volcanic eruption centres of the mostly 4.5 Ma to 5000 BP Newer Volcanics Province in the Hamilton area of southeastern Australia were examined in detail using a multifaceted approach, including ground truthing and analysis of ArcGIS Total Magnetic Intensity and seamless geology data, NASA Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation models, and Google Earth satellite image interpretation. Sixteen eruption centres were recognised in the Hamilton area, including three previously unrecorded volcanoes, one of which, the Cas Maar, constitutes the northernmost maar-cone volcanic complex in the Western Plains subprovince. Seven previously allocated eruption centres were called into question based on field and laboratory observations. Three phases of volcanic activity have been suggested by other authors and are interpreted to correlate with ages of >4 Ma, ca 2 Ma and <0.5 Ma, which may be further subdivided based on the preservation of outcrop. Geochemical compositions of the dominantly basaltic products become increasingly alkaline and enriched in incompatible elements from Phase 1 to Phase 2, with Phase 3 eruptions both covering the entire geochemical range and extending into increasingly enriched compositions. This research highlights the importance of a multifaceted approach to landform mapping and demonstrates that additional volcanic centres may yet be discovered in the Newer Volcanics Province.