868 results for Elasticità Coordinazione Cloud Respect SYBL
Abstract:
Cloud computing, based on early virtual computer concepts and technologies, is now itself a maturing technology in the marketplace. It has revolutionized the IT industry and become the platform onto which many businesses are choosing to migrate their on-premises IT services. Cloud solutions have the potential to reduce the capital and operational expenses associated with deploying IT services in-house. In this study, we implemented our own private cloud solution, infrastructure as a service (IaaS), using the OpenStack platform with high availability and a dynamic resource allocation mechanism. In addition, we hosted unified communication as a service (UCaaS) on the underlying IaaS and successfully tested voice over IP (VoIP), video conferencing, voice mail and instant messaging (IM) with clients located at a remote site. The proposed solution was developed to guide businesses that want to build their own cloud environment (IaaS) and host cloud services and applications in it. This paper also aims to provide service providers with an alternative to proprietary cloud solutions.
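For readers who want to experiment with a comparable setup, the sketch below shows how a single instance could be provisioned on an OpenStack IaaS with the official openstacksdk Python client; the cloud name, image, flavor and network IDs are placeholders, not values from the study.

```python
# Minimal provisioning sketch, not the study's actual deployment.
import openstack

conn = openstack.connect(cloud="my-private-cloud")  # credentials come from clouds.yaml

server = conn.compute.create_server(
    name="ucaas-node-1",
    image_id="IMAGE_UUID",                # e.g. an image carrying the UC stack
    flavor_id="FLAVOR_UUID",              # sized for VoIP/video workloads
    networks=[{"uuid": "NETWORK_UUID"}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.status)
```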
Abstract:
Young novice drivers are at considerable risk of injury on the road. Their behaviour appears vulnerable to the social influence of their parents and friends. The nature and mechanisms of parent and peer influence on young novice driver (16–25 years) behaviour were explored via small group interviews (n = 21) and two surveys (n1 = 1170, n2 = 390) to inform more effective young driver countermeasures. Parental and peer influence occurred in the pre-Licence, Learner, and Provisional (intermediate) periods. Pre-Licence and unsupervised Learner drivers reported their parents were less likely to punish risky driving (e.g., speeding). These drivers were more likely to imitate their parents and reported their parents were also risky drivers. Young novice drivers who experienced or expected social punishments from peers, including ‘being told off’ for risky driving, reported less riskiness. Conversely, drivers who experienced or expected social rewards such as being ‘cheered on’ by friends – who were also more risky drivers – reported more risky driving, including crashes and offences. Interventions enhancing positive influence and curtailing negative influence may improve road safety outcomes not only for young novice drivers, but for all persons who share the road with them. Parent-specific interventions warrant further development and evaluation, including: modelling of safe driving behaviour by parents; active monitoring of driving during novice licensure; and sharing of the family vehicle during the intermediate phase. Peer-targeted interventions, including modelling of safe driving behaviour and attitudes, minimisation of social reinforcement, and promotion of social sanctions for risky driving, also need further development and evaluation.
Abstract:
For the past few years, research on the secure outsourcing of cryptographic computations has drawn significant attention from academics in the security and cryptology disciplines, as well as from information security practitioners. One main reason for this interest is its application to resource-constrained devices such as RFID tags. While there has been significant progress in this domain since Hohenberger and Lysyanskaya provided formal security notions for secure computation delegation, some interesting challenges remain whose solutions would support a wider deployment of cryptographic protocols that enable secure outsourcing of cryptographic computations. This position paper brings out these challenging problems, with RFID technology as the use case, together with our ideas, where applicable, that can provide a direction towards solving them.
Abstract:
Aim: To quantify the consequences of major threats to biodiversity, such as climate and land-use change, it is important to use explicit measures of species persistence, such as extinction risk. The extinction risk of metapopulations can be approximated through simple models, providing a regional snapshot of the extinction probability of a species. We evaluated the extinction risk of three species under different climate change scenarios in three different regions of the Mexican cloud forest, a highly fragmented habitat that is particularly vulnerable to climate change. Location: Cloud forests in Mexico. Methods: Using Maxent, we estimated the potential distribution of cloud forest for three different time horizons (2030, 2050 and 2080) and their overlap with protected areas. Then, we calculated the extinction risk of three contrasting vertebrate species for two scenarios: (1) climate change only (all suitable areas of cloud forest through time) and (2) climate and land-use change (only suitable areas within a currently protected area), using an explicit patch-occupancy approximation model and calculating the joint probability of all populations becoming extinct when the number of remaining patches was less than five. Results: Our results show that the extent of environmentally suitable areas for cloud forest in Mexico will sharply decline in the next 70 years. We discovered that if all habitat outside protected areas is transformed, then only species with small area requirements are likely to persist. With habitat loss through climate change only, high dispersal rates are sufficient for persistence, but this requires protection of all remaining cloud forest areas. Main conclusions: Even if high dispersal rates mitigate the extinction risk of species due to climate change, the synergistic impacts of changing climate and land use further threaten the persistence of species with higher area requirements. Our approach for assessing the impacts of threats on biodiversity is particularly useful when there is little time or data for detailed population viability analyses. © 2013 John Wiley & Sons Ltd.
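As a rough illustration of the patch-occupancy calculation described above (not the authors' exact model), if each remaining population is assumed to go extinct independently, the joint extinction probability is simply the product of the per-patch probabilities; the figures below are invented.

```python
import math

def joint_extinction_probability(patch_extinction_probs):
    """P(all populations extinct), assuming independent patches."""
    return math.prod(patch_extinction_probs)

# Hypothetical example: four remaining patches (fewer than five), each with
# its own extinction probability over the modelled time horizon.
print(joint_extinction_probability([0.6, 0.7, 0.5, 0.8]))  # 0.168
```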
Abstract:
As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. Initially, these needs were satisfied using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while the performance of virtual CPUs (VCPUs) is satisfactory, network throughput may lead to difficulties.
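The cost/time trade-off such a study weighs can be sketched with Amdahl's law: adding VCPUs shortens the run, but the serial fraction and the per-VCPU-hour price limit the payoff. All figures in this sketch (speedup model, price, base runtime) are illustrative assumptions, not results from the paper.

```python
# Hedged sketch of the cloud cost/time trade-off; all numbers are invented.
def runtime_hours(base_hours, n_vcpus, parallel_fraction=0.95):
    # Amdahl's law: the serial part of the simulation does not speed up.
    return base_hours * ((1 - parallel_fraction) + parallel_fraction / n_vcpus)

def cost_dollars(n_vcpus, price_per_vcpu_hour, base_hours):
    return n_vcpus * price_per_vcpu_hour * runtime_hours(base_hours, n_vcpus)

for n in (8, 16, 32, 64):
    print(f"{n:3d} VCPUs: {runtime_hours(100.0, n):6.2f} h, "
          f"${cost_dollars(n, 0.05, 100.0):7.2f}")
# Runtime falls with n while total cost rises: the trade-off being evaluated.
```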
Abstract:
In contrast to single robotic agents, multi-robot systems are highly dependent on reliable communication. Robots have to synchronize tasks and share poses and sensor readings with other agents, especially in cooperative mapping tasks, where local sensor readings are incorporated into a global map. The drawback of existing communication frameworks is that most are based on a central component which has to be constantly within reach. Additionally, they do not prevent data loss between robots if a failure occurs in the communication link. During a distributed mapping task, loss of data is critical because it will corrupt the global map. In this work, we propose a cloud-based publish/subscribe mechanism that enables reliable communication between agents during a cooperative mission, using the Data Distribution Service (DDS) as a transport layer. The usability of our approach is verified by several experiments that take complete temporary communication loss into account.
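The loss-prevention idea can be pictured with a toy publisher that buffers samples while the link is down and replays them in order on reconnect; real DDS implementations provide this through reliable/durable QoS, and the transport interface below is a hypothetical stand-in, not the DDS API.

```python
from collections import deque

class BufferedPublisher:
    """Conceptual sketch only; no real DDS bindings are used here."""

    def __init__(self, transport):
        self.transport = transport  # assumed to expose send(sample) and is_up()
        self.backlog = deque()

    def publish(self, sample):
        self.backlog.append(sample)
        self.flush()

    def flush(self):
        # Replay in order, so a temporary outage loses no map updates.
        while self.backlog and self.transport.is_up():
            self.transport.send(self.backlog.popleft())
```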
Abstract:
Cloud computing has significantly impacted a broad range of industries, but these technologies and services have been absorbed throughout the marketplace unevenly. Some industries have moved aggressively towards cloud computing, while others have moved much more slowly. For the most part, the energy sector has approached cloud computing in a measured and cautious way, with progress often taking the form of private cloud solutions rather than public ones, or hybridized information technology systems that combine cloud and existing non-cloud architectures. By moving towards cloud computing so slowly and tentatively, however, the energy industry may deny itself the full benefits that a more complete migration to the public cloud has brought to several other industries. This short communication accordingly offers a high-level overview of cloud computing and argues that the energy sector should make a more complete migration to the public cloud in order to unlock the major system-wide efficiencies that cloud computing can provide. In addition, assets within the energy sector should be designed with as much modularity and flexibility as possible so that they are not locked out of cloud-friendly options in the future.
Abstract:
Guaranteeing quality of service (QoS) at minimum computation cost is the most important objective of cloud-based MapReduce computations, and the total computation cost is minimised through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous and heterogeneous. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem, in which the problem is transformed into a constrained combinatorial optimization problem and solved by an innovative constructive algorithm. Experimental results show that the running cost of a cloud-based MapReduce computation platform using this new approach is 24.3%–44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%–36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources from existing MapReduce computations. The experimental results also demonstrate the good scalability of this new approach.
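The paper's constructive algorithm is not reproduced here, but a toy greedy placement conveys the flavour of the "spare resources" idea: each task goes to the tightest-fitting machine that still has free capacity, and a new VM is rented only when nothing fits. Names and capacities are invented.

```python
def place(tasks, machines):
    """tasks: list of (name, cpu_demand); machines: dict name -> free CPUs."""
    placement = {}
    for name, demand in sorted(tasks, key=lambda t: -t[1]):  # largest tasks first
        candidates = [m for m, free in machines.items() if free >= demand]
        if candidates:
            chosen = min(candidates, key=machines.get)   # tightest fit on spare capacity
        else:
            chosen = f"new-vm-{name}"                    # rent a fresh VM sized to the task
            machines[chosen] = demand
        machines[chosen] -= demand
        placement[name] = chosen
    return placement

print(place([("job-a", 4), ("job-b", 2)], {"m1": 5}))
# -> {'job-a': 'm1', 'job-b': 'new-vm-job-b'}
```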
Abstract:
This report describes the Year One Pilot Study processes and articulates findings from the major project components designed to address the challenges noted above (see Figure 1). Specifically, the pilot study tested the campaign research and development process, which involved participatory design with young people and sector partners, and the efficacy and practicality of conducting a longitudinal, randomised controlled trial online with minors, including ways of linking survey data to campaign data. Each sub-study comprehensively considered the ethical requirements of conducting online research with minors in school settings. The theoretical and methodological framework for measuring campaign engagement and efficacy (Sub-studies 3, 4 and 5) drew on the Model of Goal-Directed Behaviour (MGB) (Perugini & Bagozzi, 2001) and Nudge Theory (Thaler & Sunstein, 2008).
Abstract:
Purpose – The purpose of this paper is to investigate the extent to which directors breach the reporting requirements of the Australian Stock Exchange (ASX) and the Corporations Act in Australia. Further, it seeks to assess whether directors in Australia achieve abnormal returns from trades in their own companies. Design/methodology/approach – Using an event study approach on an Australian sample, abnormal returns were estimated for a range of situations. Findings – A total of 13 (7) per cent of directors' own-company trades do not meet the ASX (Corporations Act) requirement of reporting within five (14) business days. Directors do achieve abnormal returns through trading in shares of their own companies. Ignoring transaction costs, outsiders can achieve abnormal returns by imitating directors' trades. Analysis of returns to directors after they trade but before they announce the trade to the market shows that directors make small but statistically significant returns that are not available to the market. Analysis of returns to directors from the ASX reporting deadline up to the day the trade is actually reported shows that directors make small but statistically significant returns that should be available to the market. Research limitations/implications – Future research should investigate the linkages between late reporting by directors and disadvantages to outside shareholders, as well as the internal policies implemented to mitigate insider trading. Practical implications – Market participants should remain vigilant regarding the potential for late or non-reporting of directors' trades. Originality/value – Uncovering breaches of reporting regulations is particularly important given that directors tend to purchase (sell) shares when the price is low (high), thereby achieving abnormal returns.
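The abnormal-return measure used in event studies of this kind is the stock's actual return minus the return a market model predicts for it. A minimal sketch, with invented data rather than the paper's sample:

```python
import numpy as np

def abnormal_returns(stock, market, alpha, beta):
    """AR_t = R_t - (alpha + beta * R_m,t), the market-model residual."""
    return np.asarray(stock) - (alpha + beta * np.asarray(market))

# Fit alpha/beta on a pre-event estimation window, then score the event window.
est_stock, est_market = np.random.default_rng(0).normal(0.0, 0.01, (2, 250))
beta, alpha = np.polyfit(est_market, est_stock, 1)
print(abnormal_returns([0.021, -0.004], [0.010, 0.002], alpha, beta))
```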
Abstract:
Fair Use Week has celebrated the evolution and development of the defence of fair use under copyright law in the United States. As Krista Cox noted, ‘As a flexible doctrine, fair use can adapt to evolving technologies and new situations that may arise, and its long history demonstrates its importance in promoting access to information, future innovation, and creativity.’ While the defence of fair use has flourished in the United States, the adoption of the defence of fair use in other jurisdictions has often been stymied. Professor Peter Jaszi has reflected: ‘We can only wonder (with some bemusement) why some of our most important foreign competitors, like the European Union, haven’t figured out that fair use is, to a great extent, the “secret sauce” of U.S. cultural competitiveness.’ Jurisdictions such as Australia have been at a dismal disadvantage, because they lack the freedoms and flexibilities of the defence of fair use.
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, these algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the migration cost it incurs. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption and the total inter-VM traffic flow of the new placement. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known algorithms for the VM placement problem.
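A plausible shape for the penalty-based fitness such a GA might minimise (not the paper's exact formulation) scores a candidate placement on energy and inter-VM traffic, then adds a penalty per migrated VM, so a slightly less optimal placement can win if it avoids costly moves. The weights and cost functions below are illustrative assumptions.

```python
def fitness(candidate, current, energy_cost, traffic_cost, migration_penalty=1.0):
    """Lower is better; each VM moved off its current host adds a penalty."""
    migrations = sum(1 for vm, host in candidate.items() if current.get(vm) != host)
    return energy_cost(candidate) + traffic_cost(candidate) + migration_penalty * migrations

current = {"vm1": "hostA", "vm2": "hostA"}
candidate = {"vm1": "hostA", "vm2": "hostB"}
print(fitness(candidate, current,
              energy_cost=lambda p: 2.0 * len(set(p.values())),             # hosts powered on
              traffic_cost=lambda p: 0.5 if p["vm1"] != p["vm2"] else 0.0))  # cross-host traffic
# -> 5.5 (energy 4.0 + traffic 0.5 + one migration)
```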
Abstract:
Purpose – While many studies have predominantly looked at the benefits and risks of cloud computing, little is known about whether, and to what extent, institutional forces play a role in cloud computing adoption. The purpose of this paper is to explore the role of institutional factors in the top management team's (TMT's) decision to adopt cloud computing services. Design/methodology/approach – A model is developed and tested with data from an Australian survey using the partial least squares modeling technique. Findings – The results suggest that mimetic and coercive pressures influence the TMT's beliefs in the benefits of cloud computing. The results also show that the TMT's beliefs drive its participation, which in turn affects the intention to increase the adoption of cloud computing solutions. Research limitations/implications – Future studies could incorporate the influence of local actors who might also press for innovation. Practical implications – Given the influence of institutional forces and the plethora of cloud-based solutions on the market, it is recommended that TMTs exercise a high degree of caution when deciding which types of applications to outsource, as organizational requirements in terms of performance and security will differ. Originality/value – The paper contributes to the growing empirical literature on cloud computing adoption and offers the institutional framework as an alternative lens with which to interpret cloud-based information technology outsourcing.
Abstract:
This study proposes that the adoption process of complex enterprise-wide systems (e.g. cloud ERP) should be observed as a multi-stage process. Two theoretical lenses were utilised: critical adoption factors were identified through the theory of planned behaviour, and the progression of each adoption factor was observed through Ettlie's (1980) multi-stage adoption model. Using a survey method, this study employed data gathered from 162 decision-makers in small and medium-sized enterprises (SMEs). Using both linear and non-linear approaches to the data analysis, the findings show that the importance of adoption factors changes across the different adoption stages.
Abstract:
The climate in the Arctic is changing faster than anywhere else on earth. Poorly understood feedback processes relating to Arctic clouds and aerosol–cloud interactions contribute to a poor understanding of the present changes in the Arctic climate system, and also to a large spread in projections of future climate in the Arctic. The problem is exacerbated by the paucity of research-quality observations in the central Arctic. Improved formulations in climate models require such observations, which can only come from measurements in situ in this difficult-to-reach region with logistically demanding environmental conditions. The Arctic Summer Cloud Ocean Study (ASCOS) was the most extensive central Arctic Ocean expedition with an atmospheric focus during the International Polar Year (IPY) 2007–2008. ASCOS focused on the study of the formation and life cycle of low-level Arctic clouds. ASCOS departed from Longyearbyen on Svalbard on 2 August and returned on 9 September 2008. In transit into and out of the pack ice, four short research stations were undertaken in the Fram Strait: two in open water and two in the marginal ice zone. After traversing the pack ice northward, an ice camp was set up on 12 August at 87°21' N, 01°29' W and remained in operation through 1 September, drifting with the ice. During this time, extensive measurements were taken of atmospheric gas and particle chemistry and physics, mesoscale and boundary-layer meteorology, marine biology and chemistry, and upper ocean physics. ASCOS provides a unique interdisciplinary data set for the development and testing of new hypotheses on cloud processes, their interactions with the sea ice and ocean, and the associated physical, chemical, and biological processes and interactions. For example, the first-ever quantitative observation of bubbles in Arctic leads, combined with the unique discovery of marine organic material, polymer gels with an origin in the ocean, inside cloud droplets, suggests the possibility of primary marine organically derived cloud condensation nuclei in Arctic stratocumulus clouds. Direct observations of surface fluxes of aerosols could, however, not explain the observed variability in aerosol concentrations, and the balance between local and remote aerosol sources remains an open question. A lack of cloud condensation nuclei (CCN) was at times a controlling factor in low-level cloud formation, and hence for the impact of clouds on the surface energy budget. ASCOS provided detailed measurements of the surface energy balance from the late summer melt into the initial autumn freeze-up, and documented the effects of clouds and storms on the surface energy balance during this transition. In addition to such process-level studies, the unique, independent ASCOS data set can be, and is being, used for the validation of satellite retrievals, operational models, and reanalysis data sets.