972 results for Microsoft Azure
Abstract:
The Kasparov-World match was initiated by Microsoft with sponsorship from the bank First USA. The concept was that Garry Kasparov as White would play the rest of the world on the Web: one ply would be played per day and the World Team was to vote for its move. The Kasparov-World game was a success from many points of view. It certainly gave thousands the feeling of facing the world’s best player across the board and did much for the future of the game. Described by Kasparov as “phenomenal ... the most complex in chess history”, it is probably a worthy ‘Greatest Game’ candidate. Computer technology has given chess a new mode of play and taken it to new heights: the experiment deserves to be repeated. We look forward to another game and experience of this quality although it will be difficult to surpass the event we have just enjoyed. We salute and thank all those who contributed - sponsors, moderator, coaches, unofficial analysts, organisers, technologists, voters and our new friends.
Abstract:
Goal modelling is a well-known rigorous method for analysing problem rationale and developing requirements. Under the pressures typical of time-constrained projects, its benefits are not accessible. This is because of the effort and time needed to create the graph, and because reading the results can be difficult owing to the effects of crosscutting concerns. Here we introduce an adaptation of KAOS to meet the needs of rapid turnaround and clarity. The main aim is to help the stakeholders gain an insight into the larger issues that might be overlooked if they make a premature start on implementation. The method emphasises the use of obstacles, accepts under-refined goals and has new methods for managing crosscutting concerns and strategic decision making. It is expected to be of value to agile as well as traditional processes.
Abstract:
Modern Portfolio Theory (MPT) has been advocated as a more rational approach to the construction of real estate portfolios. The application of MPT can now be achieved with relative ease using the powerful facilities of modern spreadsheets, and does not necessarily need specialist software. This capability is found in an add-in tool, now available in several spreadsheets, called an Optimiser or Solver. The value of using this kind of more sophisticated analysis feature of spreadsheets is increasingly difficult to ignore. This paper examines the use of the spreadsheet Optimiser in handling asset allocation problems. Using the Markowitz Mean-Variance approach, the paper introduces the necessary calculations and shows, by means of an elementary example implemented in Microsoft Excel, how the Optimiser may be used. Emphasis is placed on understanding the inputs and outputs of the portfolio optimisation process, and the danger of treating the Optimiser as a black box is discussed.
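As a rough illustration of the Markowitz mean-variance calculation that a spreadsheet Optimiser/Solver performs, the Python sketch below minimises portfolio variance subject to a target expected return and fully-invested, long-only weights. The expected returns, covariance matrix, and target return are hypothetical placeholders, and scipy's general-purpose solver stands in for the spreadsheet add-in; this is not the paper's Excel implementation.

```python
# Minimal sketch of mean-variance optimisation: minimise w' * Cov * w subject to
# sum(w) = 1, w >= 0, and a target expected return. All figures are illustrative.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.10, 0.06])             # hypothetical expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])          # hypothetical covariance matrix
target_return = 0.085

def variance(w):
    return w @ cov @ w                        # portfolio variance

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit target return
]
bounds = [(0.0, 1.0)] * len(mu)               # long-only weights

result = minimize(variance, x0=np.full(len(mu), 1 / 3),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("weights:", result.x.round(3), "variance:", round(variance(result.x), 5))
```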
Abstract:
Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, but recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors that are available to obtain it. In particular, the Microsoft Kinect has made high-resolution real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic, and pointing out promising future research directions.
Abstract:
Objective: To describe the training undertaken by pharmacists employed in a pharmacist-led information technology-based intervention study to reduce medication errors in primary care (PINCER Trial), evaluate pharmacists’ assessment of the training, and the time implications of undertaking the training. Methods: Six pharmacists received training, which included training on root cause analysis and educational outreach, to enable them to deliver the PINCER Trial intervention. This was evaluated using self-report questionnaires at the end of each training session. The time taken to complete each session was recorded. Data from the evaluation forms were entered onto a Microsoft Excel spreadsheet, independently checked and the summary of results further verified. Frequencies were calculated for responses to the three-point Likert scale questions. Free-text comments from the evaluation forms and pharmacists’ diaries were analysed thematically. Key findings: All six pharmacists received 22 hours of training over five sessions. In four out of the five sessions, the pharmacists who completed an evaluation form (27 out of 30 were completed) stated they were satisfied or very satisfied with the various elements of the training package. Analysis of free-text comments and the pharmacists’ diaries showed that the principles of root cause analysis and educational outreach were viewed as useful tools to help pharmacists conduct pharmaceutical interventions in both the study and other pharmacy roles that they undertook. The opportunity to undertake role play was a valuable part of the training received. Conclusions: Findings presented in this paper suggest that providing the PINCER pharmacists with training in root cause analysis and educational outreach contributed to the successful delivery of PINCER interventions and could potentially be utilised by other pharmacists based in general practice to deliver pharmaceutical interventions to improve patient safety.
The impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than that of its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG emissions. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage, using the methods described in this research.
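The three-stage accounting described in this abstract can be sketched in a few lines of Python: total energy for an activity is the sum of data-center, network, and end-user-device consumption, and GHG emissions follow from a grid emission factor. All class names, usage figures, and the emission factor below are placeholders for illustration, not the confidential values used in the study.

```python
# Minimal sketch: energy per activity = data-center + network + user-device energy;
# GHG = total energy * emission factor. Figures are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ActivityEnergy:
    data_center_kwh: float
    network_kwh: float
    user_device_kwh: float

    def total_kwh(self) -> float:
        return self.data_center_kwh + self.network_kwh + self.user_device_kwh

    def ghg_kg(self, emission_factor_kg_per_kwh: float) -> float:
        return self.total_kwh() * emission_factor_kg_per_kwh

# Hypothetical comparison of a cloud vs. standalone version of one activity
cloud = ActivityEnergy(data_center_kwh=0.010, network_kwh=0.005, user_device_kwh=0.020)
standalone = ActivityEnergy(data_center_kwh=0.0, network_kwh=0.0, user_device_kwh=0.030)

factor = 0.5  # hypothetical kg CO2e per kWh
print("cloud GHG (kg):", cloud.ghg_kg(factor))
print("standalone GHG (kg):", standalone.ghg_kg(factor))
```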
Abstract:
Kasparov-World, initiated by Microsoft and also sponsored by First USA, was a novel correspondence game played on the World Wide Web at one ply per day. This was the first time that any group had attempted to form on the Web and then solve shared problems against fixed, short-term deadlines. The first author became involved in his role as a Web consultant, observing the dynamics and effectiveness of the group. These are fully described, together with observations on the technological contribution and the second author's post-hoc computation of some relevant Endgame Tables.
Abstract:
For general home monitoring, a system should automatically interpret people’s actions. The system should be non-intrusive and able to deal with a cluttered background and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions, for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
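For readers unfamiliar with Bag-of-Words video representations, the Python sketch below shows the generic pipeline with temporal binning that this abstract refers to: local spatio-temporal descriptors are quantised against a k-means visual vocabulary, a word histogram is built per temporal bin, and the bins are concatenated. The random descriptors stand in for real features extracted from intensity and depth frames, and this is not the authors' dynamic time alignment implementation.

```python
# Minimal sketch of a BoW video descriptor with temporal binning, using placeholder data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(2000, 64))    # placeholder training descriptors
vocab = KMeans(n_clusters=100, n_init=10, random_state=0).fit(train_descriptors)

def bow_with_temporal_bins(descriptors, timestamps, n_bins=4):
    """Concatenate one visual-word histogram per temporal bin of the video."""
    words = vocab.predict(descriptors)
    edges = np.linspace(timestamps.min(), timestamps.max() + 1e-9, n_bins + 1)
    bins = np.digitize(timestamps, edges[1:-1])     # temporal bin index per descriptor
    hist = np.zeros((n_bins, vocab.n_clusters))
    for w, b in zip(words, bins):
        hist[b, w] += 1
    hist /= max(hist.sum(), 1.0)                    # normalise over the whole video
    return hist.ravel()

video_desc = rng.normal(size=(300, 64))             # placeholder video descriptors
video_time = np.sort(rng.uniform(0, 10, size=300))  # placeholder timestamps
feature = bow_with_temporal_bins(video_desc, video_time)
print(feature.shape)                                # (n_bins * vocabulary size,)
```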
Abstract:
Mobile devices can enhance undergraduate research projects and students’ research capabilities. The use of mobile devices such as tablet computers will not automatically make undergraduates better researchers, but their use should make investigations, writing, and publishing more effective and may even save students time. We have explored some of the possibilities of using “tablets” and “smartphones” to aid the research and inquiry process in geography and bioscience fieldwork. We provide two case studies as illustrations of how students working in small research groups use mobile devices to gather and analyze primary data in field-based inquiry. Since April 2010, Apple’s iPad has changed the way people behave in the digital world and how they access their music, watch videos, or read their email, much as the entrepreneurs Steve Jobs and Jonathan Ive intended. Now with “apps” and “the cloud” and the ubiquitous references to them appearing in the press and on TV, academics’ use of tablets is also having an impact on education and research. In our discussion we will refer to the use of smartphones such as the iPhone, iPod, and Android devices under the term “tablet”. Android and Microsoft devices may not offer the same facilities as the iPad/iPhone, but many app producers now provide versions for several operating systems. Smartphones are becoming more affordable and ubiquitous (Melhuish and Falloon 2010), but a recent study of undergraduate students (Woodcock et al. 2012, 1) found that many students who own smartphones are “largely unaware of their potential to support learning”. Importantly, however, students were found to be “interested in and open to the potential as they become familiar with the possibilities” (Woodcock et al. 2012). Smartphones and iPads could be better utilized than laptops when conducting research in the field because of their portability (Welsh and France 2012). It is imperative for faculty to provide their students with opportunities to discover and employ the potential uses of mobile devices in their learning. However, it is not only the convenience of the iPad, tablet devices, or smartphones that we wish to promote, but also a way of thinking and behaving digitally. We essentially suggest that making a tablet the center of research increases the connections between related research activities.
Abstract:
A new methodology was created to measure the energy consumption and related greenhouse gas (GHG) emissions of a computer operating system (OS) across different device platforms. The methodology involved the direct power measurement of devices under different activity states. In order to include all aspects of an OS, the methodology included measurements in various OS modes whilst, uniquely, also incorporating measurements when running an array of defined software activities, so as to include OS application management features. The methodology was demonstrated on a laptop and a phone that could each run multiple OSs, and the results confirmed that the OS can significantly impact the energy consumption of devices. In particular, new versions of the Microsoft Windows OS were tested, and the results highlighted significant differences between OS versions on the same hardware. The developed methodology could enable a greater awareness of energy consumption during both the software development and software marketing processes.
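The accounting this methodology implies can be illustrated with a short Python sketch: energy for one OS/device combination is the sum over measured activity states of average power multiplied by time in that state, then converted to GHG with a grid emission factor. The state names, power draws, usage profile, and emission factor below are hypothetical placeholders, not measured values from the study.

```python
# Minimal sketch: annual energy = sum over states of (power in W * hours) / 1000 * 365;
# GHG = energy * emission factor. All figures are hypothetical placeholders.
states = {                      # average power draw in watts per activity state
    "idle": 4.0,
    "web_browsing": 7.5,
    "word_processing": 6.0,
    "video_playback": 9.0,
}
hours_per_day = {               # assumed daily usage profile per state
    "idle": 6.0,
    "web_browsing": 2.0,
    "word_processing": 1.5,
    "video_playback": 0.5,
}

daily_kwh = sum(states[s] * hours_per_day[s] for s in states) / 1000.0
annual_kwh = daily_kwh * 365
emission_factor = 0.5           # hypothetical kg CO2e per kWh
print(f"annual energy: {annual_kwh:.1f} kWh, GHG: {annual_kwh * emission_factor:.1f} kg CO2e")
```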
Abstract:
We have used coalescent analysis of mtDNA cytochrome b (cyt b) sequences to estimate times of divergence of three species of Alouatta (A. caraya, A. belzebul, and A. guariba) that are in close geographic proximity. A. caraya is inferred to have diverged from the A. guariba/A. belzebul clade approximately 3.83 million years ago (MYA), with the latter pair diverging approximately 1.55 MYA. These dates are much more recent than previous dates based on molecular-clock methods. In addition, analyses of new sequences from the Atlantic Coastal Forest species A. guariba indicate the presence of two distinct haplogroups corresponding to northern and southern populations, with both haplogroups occurring in sympatry within São Paulo state. The time of divergence of these two haplogroups is estimated to be 1.2 MYA and so follows quite closely after the divergence of A. guariba and A. belzebul. These more recent dates point to Pleistocene environmental events as important factors in the diversification of A. belzebul and A. guariba. We discuss the diversification of the three Alouatta species in the context of recent models of climatic change and with regard to recent molecular phylogeographic analyses of other animal groups distributed in Brazil.
Abstract:
Coleodactylus amazonicus, a small leaf-litter diurnal gecko widely distributed in the Amazon Basin, has been considered a single species with no significant morphological differences between populations along its range. A recent molecular study, however, detected large genetic differences between populations of central Amazonia and those in the easternmost part of the Amazon Basin, suggesting the presence of taxonomically unrecognised diversity. In this study, DNA sequences of three mitochondrial (16S, cyt b, and ND4) and two nuclear genes (RAG-1, c-mos) were used to investigate whether the species currently identified as C. amazonicus contains morphologically cryptic species lineages. The present phylogenetic analysis reveals further genetic subdivision, including at least five potential species lineages restricted to the northeastern (lineage A), southeastern (lineage B), central-northern (lineage E), and central-southern (lineages C and D) parts of the Amazon Basin. All clades are characterized by exclusive groups of alleles for both nuclear genes and highly divergent mitochondrial haplotype clades, with corrected pairwise net sequence divergence between sister lineages ranging from 9.1% to 20.7% for the entire mtDNA dataset. Results of this study suggest that the real diversity of "C. amazonicus" has been underestimated due to its apparent cryptic diversification.
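The "net" sequence divergence cited above is conventionally computed in the sense of Nei (1987): the mean between-lineage distance corrected by the mean within-lineage distances. The Python sketch below illustrates that calculation with uncorrected p-distances and toy aligned haplotypes as placeholders; the study's actual distance correction model may differ.

```python
# Minimal sketch of net between-lineage divergence: d_net = d_XY - (d_X + d_Y) / 2.
# Toy sequences and uncorrected p-distances are placeholders for illustration only.
from itertools import combinations, product

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def mean_within(seqs):
    pairs = list(combinations(seqs, 2))
    return sum(p_distance(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def net_divergence(group_x, group_y):
    d_xy = sum(p_distance(a, b) for a, b in product(group_x, group_y)) / (len(group_x) * len(group_y))
    return d_xy - (mean_within(group_x) + mean_within(group_y)) / 2

# Toy aligned haplotypes for two hypothetical lineages
lineage_a = ["ACGTACGTAC", "ACGTACGTAT"]
lineage_b = ["ACGTTCGGAC", "ACGTTCGGAT"]
print(f"net divergence: {net_divergence(lineage_a, lineage_b):.3f}")
```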
Abstract:
The generic identity of Odontophrynus moratoi has been controversial since the original description, owing to the presence of intermediate morphological features between the genera Odontophrynus and Proceratophrys. Herein we performed molecular analyses of three genes (16S, cyt b, and RAG-1) and recovered O. moratoi deeply embedded within a clade containing only Proceratophrys species, appearing as the sister group of Proceratophrys concavitympanum. Therefore, this study formally transfers the species O. moratoi to the genus Proceratophrys [Proceratophrys moratoi (Jim & Caramaschi, 1980) comb. nov.].
The genus Coleodactylus (Sphaerodactylinae, Gekkota) revisited: A molecular phylogenetic perspective
Abstract:
Nucleotide sequence data from a mitochondrial gene (16S) and two nuclear genes (c-mos, RAG-1) were used to evaluate the monophyly of the genus Coleodactylus, to provide the first phylogenetic hypothesis of relationships among its species in a cladistic framework, and to estimate the relative timing of species divergences. Maximum Parsimony, Maximum Likelihood and Bayesian analyses of the combined data sets retrieved Coleodactylus as a monophyletic genus, although weakly supported. Species were recovered as two genetically and morphologically distinct clades, with C. amazonicus populations forming the sister taxon to the meridionalis group (C. brachystoma, C. meridionalis, C. natalensis, and C. septentrionalis). Within this group, C. septentrionalis was placed as the sister taxon to a clade comprising the rest of the species, C. meridionalis was recovered as the sister species to C. brachystoma, and C. natalensis was found nested within C. meridionalis. Divergence time estimates based on penalized likelihood and Bayesian dating methods do not support the previous hypothesis based on the Quaternary rain forest fragmentation model proposed to explain the diversification of the genus. The basal cladogenic event between major lineages of Coleodactylus was estimated to have occurred in the late Cretaceous (72.6 ± 1.77 Mya), approximately at the same time as the other genera of Sphaerodactylinae diverged from each other. Within the meridionalis group, the split between C. septentrionalis and C. brachystoma + C. meridionalis was placed in the Eocene (46.4 ± 4.22 Mya), and the divergence between C. brachystoma and C. meridionalis was estimated to have occurred in the Oligocene (29.3 ± 4.33 Mya). Most intraspecific cladogenesis occurred from the Miocene to the Pliocene, and only for two conspecific samples and for C. natalensis could a Quaternary differentiation be assumed (1.9 ± 1.3 Mya).
Abstract:
This paper presents an approach for assisting low-literacy readers in accessing online Web information. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models regarding accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling for improving Web accessibility. We report on the results obtained from a pilot study on usability analysis carried out with low-literacy users. The preliminary results show that "Educational FACILITA" improves the comprehension of text elements, although the assistance mechanisms might also confuse users when word sense ambiguity is introduced by gathering, for a complex word, a list of synonyms with multiple meanings. This fact points to a future solution in which the correct sense for a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also identified that experienced computer users find the tool to be more useful than novice computer users do.
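The lexical elaboration idea described in this abstract can be sketched in a few lines of Python: words considered complex are annotated inline with simpler synonyms drawn from a dictionary. The complexity list and synonym dictionary below are hypothetical placeholders rather than the tool's actual resources, and no word-sense disambiguation is attempted, which is precisely the limitation with ambiguous words that the pilot study identified.

```python
# Minimal sketch of lexical elaboration: annotate complex words with candidate simpler
# synonyms. The dictionary is a hypothetical placeholder; no sense disambiguation is done.
import re

COMPLEX_SYNONYMS = {            # hypothetical simplification dictionary
    "utilize": ["use"],
    "commence": ["start", "begin"],
    "terminate": ["end", "stop"],
}

def elaborate(text: str) -> str:
    """Append candidate simpler synonyms in brackets after each complex word."""
    def annotate(match: re.Match) -> str:
        word = match.group(0)
        synonyms = COMPLEX_SYNONYMS.get(word.lower())
        return f"{word} [{', '.join(synonyms)}]" if synonyms else word
    return re.sub(r"[A-Za-z]+", annotate, text)

print(elaborate("Please commence the upload and terminate idle sessions."))
# -> Please commence [start, begin] the upload and terminate [end, stop] idle sessions.
```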