969 results for Point cloud
Abstract:
Energy prices are highly volatile and often feature unexpected spikes. The aim of this paper is to examine whether the occurrence of these extreme price events displays any regularities that can be captured using an econometric model. We treat these price events as point processes and apply Hawkes and Poisson autoregressive models to capture the dynamics of the intensity of this process, using load and meteorological information to model the time variation in that intensity. The models are applied to data from the Australian wholesale electricity market, and a forecasting exercise illustrates both the usefulness of these models and their limitations when attempting to forecast the occurrence of extreme price events.
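As a rough illustration of the point-process approach described above, the following is a minimal sketch of an exponential-kernel Hawkes conditional intensity, a common form of the Hawkes model; the parameter values and event times are illustrative assumptions, not estimates from the paper.

import numpy as np

# Conditional intensity lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)).
# mu is the baseline rate of extreme price events; alpha and beta control the size and decay of the
# self-excitation after each event. All values below are hypothetical.
def hawkes_intensity(t, event_times, mu=0.05, alpha=0.3, beta=1.2):
    past = event_times[event_times < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

# Example: intensity one day after a cluster of three spike events (times in days, hypothetical)
events = np.array([10.0, 10.5, 11.0])
print(hawkes_intensity(12.0, events))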
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms also predict a positive future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is providing management of Cloud resources for SaaS that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response-time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, the Cloud environment may require the current placement to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise resource use while maintaining SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the group structure of a composite SaaS. The first GGA used a repair-based method while the second used a penalty-based method to handle the problem constraints. The experimental results confirmed that the GGAs always produced a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises resource use while maintaining SaaS performance is a critical task; additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various evolutionary algorithms have been developed to address these problems, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud that result in a low total cost of ownership for users while guaranteeing SaaS performance.
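To make the penalty-based approach mentioned above concrete, the following is a minimal, illustrative sketch of a penalty-based genetic algorithm that assigns components to servers; the component demands, server capacities, fitness form and GA parameters are hypothetical and not taken from the thesis.

import random

random.seed(0)
DEMAND = [2, 3, 1, 4, 2]    # resource demand of each SaaS component (hypothetical)
CAPACITY = [5, 5, 5]        # capacity of each server/VM (hypothetical)
PENALTY = 100               # penalty per unit of capacity violation

def fitness(plan):
    # plan[i] = index of the server hosting component i; lower fitness is better
    used = [0] * len(CAPACITY)
    for comp, server in enumerate(plan):
        used[server] += DEMAND[comp]
    servers_used = sum(1 for u in used if u > 0)
    overload = sum(max(0, u - c) for u, c in zip(used, CAPACITY))
    return servers_used + PENALTY * overload   # penalty-based constraint handling

def evolve(generations=200, pop_size=30):
    pop = [[random.randrange(len(CAPACITY)) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]              # simple truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DEMAND))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                # mutation: move one component
                child[random.randrange(len(DEMAND))] = random.randrange(len(CAPACITY))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())   # prints a placement plan, one server index per component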
Abstract:
In this research, we suggest appropriate information technology (IT) governance structures to manage cloud computing resources. The interest in acquiring IT resources as a utility is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to ensure their effective management and their fit with existing business processes in order to leverage the promised opportunities. Using a mixed-method design, we identified four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. These governance structures ensure appropriate direction of cloud computing resources from acquisition to their fit within the organization's business processes.
Abstract:
This research suggests information technology (IT) governance structures to manage cloud computing resources. The interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to manage cloud resources. The subsequent decisions from these governance structures will ensure effective management of cloud resources, and this management will facilitate a better fit of cloud resources into organizations' existing processes to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures relate directly to organizations' cloud-related business objectives and indirectly to cloud-related financial objectives. Perceptive field survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.
Abstract:
Many grid-connected PV installations consist of a single series string of PV modules and a single DC-AC inverter. The efficiency of this topology can be enhanced with additional low-power, low-cost per-panel converter modules. Most current flows directly in the series string, which ensures high efficiency. However, parallel Ćuk or buck-boost DC-DC converters connected across each adjacent pair of modules can support any desired current difference between series-connected PV modules. Each converter "shuffles" the desired difference in PV module currents between two modules, and so on up the string. SPICE simulations show that even with poor efficiency, these modules can make a significant improvement to the overall power that can be recovered from partially shaded PV strings.
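The following back-of-envelope calculation (illustrative only, not from the paper) shows the effect the abstract describes for a two-module string in which one module is partially shaded; all voltages, currents and the converter efficiency are hypothetical values.

V_MODULE = 30.0      # module voltage near its maximum power point, volts (hypothetical)
I_FULL = 8.0         # unshaded module current at its maximum power point, amps (hypothetical)
I_SHADED = 4.0       # shaded module current at its maximum power point, amps (hypothetical)
ETA = 0.85           # efficiency of the shuffling converter (hypothetical)

# Without shuffling, the series string is limited to the shaded module's current.
p_without = 2 * V_MODULE * I_SHADED

# With shuffling, each module runs at its own current and the converter carries the
# difference (I_FULL - I_SHADED) between the two modules, losing (1 - ETA) of that power.
p_with = V_MODULE * (I_FULL + I_SHADED) - (1 - ETA) * V_MODULE * (I_FULL - I_SHADED)

print(f"without shuffling: {p_without:.0f} W, with shuffling: {p_with:.0f} W")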
Abstract:
The purpose of this paper is to provide an evolutionary perspective on cloud computing (CC) by integrating two previously disparate literatures: CC and information technology outsourcing (ITO). We review the literature and develop a framework that highlights the demand for the CC service, its benefits and risks, as well as the risk mitigation strategies that are likely to influence the success of the service. CC success in organisations, and as a technology overall, is a function of (i) the outsourcing decision and supplier selection, (ii) contractual and relational governance, and (iii) industry standards and the legal framework. Whereas CC clients have little control over standards and/or the legal framework, they are able to influence other factors to maximise the benefits while limiting the risks. This paper provides guidelines for (potential) cloud computing users with respect to the outsourcing decision, vendor selection, service-level agreements, and other issues that need to be addressed when opting for CC services. We contribute to the literature by providing an evolutionary and holistic view of CC that draws on the extensive literature and theory of ITO. We conclude the paper with a number of research paths that future researchers can follow to advance knowledge in this field.
Abstract:
Several fringing coral reefs in Moreton Bay, Southeast Queensland, some 300 km south of the Great Barrier Reef (GBR), are set in a relatively high-latitude, estuarine environment that is considered marginal for coral growth. Previous work indicated that these marginal reefs, as with many fringing reefs of the inner GBR, ceased accreting in the mid-Holocene. This research presents, for the first time, data from the subsurface profile of the mid-Holocene fossil reef at Wellington Point, comprising U/Th dates of in situ and framework corals, and trace element analyses of the age-constrained carbonate fragments. Based on trace element proxies, the palaeo-water quality during reef accretion was reconstructed. Results demonstrate that the reef initiated more than 7,000 yr BP during the post-glacial transgression, and that initiation progressed to the west as sea level rose. In situ micro-atolls indicate that sea level was at least 1 m above present mean sea level by 6,680 years ago. The reef remained in "catch-up" mode, with a seaward-sloping upper surface, until it stopped aggrading abruptly at ca 6,000 yr BP; no lateral progradation occurred. Changes in sediment composition encountered in the cores suggest that after the laterite substrate was covered by the reef, most of the sediment was produced by the carbonate factory with minimal terrigenous influence. Rare earth element, Y and Ba proxies indicate that water quality during reef accretion was similar to that of oceanic waters, considered suitable for coral growth. A slight decline in water quality, on the basis of increased Ba in the later stages of growth, may be related to increased riverine input and partial closing of the bay due to tidal delta progradation, climatic change and/or a slight sea-level fall. The age data suggest that the termination of reef growth coincided with a slight lowering of sea level, activation of ENSO and a consequent increase in seasonality, lowering of temperatures and constrictions to oceanic flushing. At the cessation of reef accretion, the environmental conditions in western Moreton Bay were changing from open marine to estuarine. The living coral community appears to be similar to the fossil community, but without the branching Acropora spp. that were more common in the fossil reef. In this marginal setting, coral growth periods do not always correspond to periods of reef accretion due to insufficient coral abundance. Owing to several environmental constraints, modern coral growth is insufficient for reef growth. Based on these findings, Moreton Bay may be unsuitable as a long-term coral refuge for most species currently living in the GBR.
Abstract:
The mammalian target of rapamycin (mTOR) is a highly conserved atypical serine-threonine kinase that controls numerous functions essential for cell homeostasis and adaptation in mammalian cells via two distinct protein complexes. Moreover, mTOR is a key regulatory protein in the insulin signalling cascade and has also been characterized as an insulin-independent nutrient sensor that may represent a critical mediator in obesity-related impairments of insulin action in skeletal muscle. Exercise represents a remedial modality that enhances mTOR activity and subsequently promotes beneficial metabolic adaptation in skeletal muscle. Thus, the metabolic effects of nutrients and exercise have the capacity to converge at the mTOR protein complexes and subsequently modify mTOR function. Accordingly, the aim of the present review is to highlight the role of mTOR in the regulation of insulin action in response to overnutrition and the capacity of exercise to enhance mTOR activity in skeletal muscle.
Abstract:
Purpose: The use of intravascular devices is associated with a number of potential complications, and despite a number of evidence-based clinical guidelines in this area, nursing practice discrepancies persist. This study aims to examine nursing practice in a cancer care setting to identify current practice and areas for improvement relative to the best available evidence. Methods: A point prevalence survey was undertaken in a tertiary cancer care centre in Queensland, Australia. On a randomly selected day, four nurses assessed intravascular device-related nursing practices and collected data using a standardized survey tool. Results: Fifty-eight inpatients (100%) were assessed. Forty-eight (83%) had a device in situ, comprising 14 Peripheral Intravenous Catheters (29.2%), 14 Peripherally Inserted Central Catheters (29.2%), 14 Hickman catheters (29.2%) and six Port-a-Caths (12.4%). Suboptimal outcomes such as local site complications, incorrect or inadequate documentation, lack of flushing orders, and unclean or non-intact dressings were observed. Conclusions: This study has highlighted a number of intravascular device-related nursing practice discrepancies compared with current hospital policy. Education and other implementation strategies can be applied to improve nursing practice. Following these education strategies, it will be valuable to repeat this survey on a regular basis to provide feedback to nursing staff and to implement strategies to improve practice. More research is required to provide evidence for clinical practice with regard to intravascular device-related consumables, flushing techniques and protocols.
Abstract:
The topic of "the cloud" has attracted significant attention over the past few years (Cherry 2009; Sterling and Stark 2009) and, as a result, academics and trade journals have created several competing definitions of "cloud computing" (e.g., Motahari-Nezhad et al. 2009). Underpinning this article is the definition put forward by the US National Institute of Standards and Technology, which describes cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Garfinkel 2011, p. 3). Despite the lack of consensus about definitions, however, there is broad agreement on the growing demand for cloud computing. Some estimates suggest that spending on cloud-related technologies and services in the next few years may climb as high as USD 42 billion/year (Buyya et al. 2009).
Abstract:
Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach motivated by faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in a time that is independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
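A minimal sketch of the idea described above is given below (not the authors' implementation): the neighbours of a 3D point are looked up directly in the organised depth image via the camera's projection geometry, instead of querying a generic search structure such as FLANN; the pinhole intrinsics and thresholds are placeholder values.

import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # placeholder pinhole intrinsics

def project(point):
    # Project a 3D point (metres, camera frame) to integer pixel coordinates.
    x, y, z = point
    return int(round(FX * x / z + CX)), int(round(FY * y / z + CY))

def pixel_neighbours(point, depth, window=2, max_depth_gap=0.05):
    # Gather 3D neighbours of `point` from a fixed pixel window of the organised depth
    # image `depth` (H x W, metres), rejecting pixels across a depth discontinuity larger
    # than `max_depth_gap`. The window size, not the number of points in the cloud,
    # determines the cost of each lookup.
    u, v = project(point)
    h, w = depth.shape
    neighbours = []
    for dv in range(-window, window + 1):
        for du in range(-window, window + 1):
            uu, vv = u + du, v + dv
            if 0 <= uu < w and 0 <= vv < h:
                z = depth[vv, uu]
                if z > 0 and abs(z - point[2]) < max_depth_gap:
                    # back-project the pixel to a 3D point
                    neighbours.append(((uu - CX) * z / FX, (vv - CY) * z / FY, z))
    return neighbours

# Example on a synthetic flat depth image one metre from the camera
depth = np.full((480, 640), 1.0)
print(len(pixel_neighbours((0.0, 0.0, 1.0), depth)))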
Abstract:
Point-to-point speed cameras are a relatively new and innovative technological approach to speed enforcement that is increasingly being used in a number of highly motorised countries. Previous research has provided evidence of the positive impact of this approach on vehicle speeds and crash rates, as well as on additional traffic-related outcomes such as vehicle emissions and traffic flow. This paper reports on the conclusions and recommendations of a large-scale project involving extensive consultation with international and domestic (Australian) stakeholders to explore the technological, operational, and legislative characteristics associated with the technology. More specifically, this paper provides a number of recommendations for better practice regarding the implementation of point-to-point speed enforcement in the Australian and New Zealand context. The broader implications of the research, as well as directions for future research, are also discussed.
Abstract:
The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated into unknown geographic regions that may compromise their security. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme involves unnecessary delay when utilising typical POS schemes, due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computation overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
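For readers unfamiliar with distance bounding, the sketch below (illustrative, not the GeoProof protocol itself) shows the underlying idea: the round-trip time of a short challenge-response exchange upper-bounds the server's distance, which is why extra server-side computation inflates the apparent distance. The query_server callable is a hypothetical placeholder for the verifier's network call to the storage server.

import os
import time

SPEED_OF_LIGHT_KM_PER_S = 299_792.458

def distance_upper_bound(query_server, rounds=10):
    # Take the best of several round trips to reduce the effect of network jitter.
    best_rtt = float("inf")
    for _ in range(rounds):
        challenge = os.urandom(16)
        start = time.perf_counter()
        _response = query_server(challenge)   # fast reply required; heavy POS work adds delay
        best_rtt = min(best_rtt, time.perf_counter() - start)
    # One-way travel time is at most half the round trip; distance <= c * t.
    return SPEED_OF_LIGHT_KM_PER_S * best_rtt / 2.0

# Example with a dummy in-process "server" that simply echoes the challenge
print(distance_upper_bound(lambda challenge: challenge))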
Abstract:
Current GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In such a mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction lies in how the user receiver software deals with corrections from the reference station solutions and with the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
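As one hypothetical illustration of how a user receiver might use station-based solutions (this is not the paper's algorithm), the sketch below interpolates a correction such as a zenith tropospheric delay from several nearby reference stations by inverse-distance weighting; the station positions and delay values are made up for the example.

def interpolate_correction(user_xy, stations):
    # stations: list of ((x_km, y_km), correction_m) pairs from reference-station solutions
    weights, weighted = [], []
    for (x, y), corr in stations:
        d = ((x - user_xy[0]) ** 2 + (y - user_xy[1]) ** 2) ** 0.5
        w = 1.0 / max(d, 1e-6)          # inverse-distance weight
        weights.append(w)
        weighted.append(w * corr)
    return sum(weighted) / sum(weights)

# Example: three reference stations at hypothetical planar coordinates with
# hypothetical zenith tropospheric delays (metres)
stations = [((22.0, 0.0), 2.41), ((60.0, 30.0), 2.38), ((100.0, -20.0), 2.35)]
print(interpolate_correction((10.0, 5.0), stations))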
Abstract:
The music industry is going through a period of immense change brought about in part by the digital revolution. What is the role of music in the age of computers and the Internet? How has the music industry been transformed by the economic and technological upheavals of recent years, and how is it likely to change in the future? This thoroughly revised and updated new edition provides an international overview of the music industry and its future prospects in the world of global entertainment. Patrik Wikström illuminates the workings of the music industry, and captures the dynamics at work in the production of musical culture between the transnational media conglomerates, the independent music companies and the public. New to this second edition are expanded sections on the structure of the music industry, online business models and the links between social media and music. Engaging and comprehensive, The Music Industry will be a must-read for students and scholars of media and communication studies, cultural studies, popular music, sociology and economics.