Abstract:
We are sympathetic to Bentley et al.'s attempt to encompass the wisdom of crowds in a generative model, but posit that success at using Big Data will require more sensitive measurements and more, and more varied, sources of information, as well as building on the indirect information available through technology, from ancillary technical features to data from brain-computer interfaces.
Abstract:
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflow and to support the curation of data products in the STFC Centre for Environmental Data Archival (CEDA). Initial JASMIN configuration and first experiences are reported here. Useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.
Abstract:
Owing to continuous advances in the computational power of handheld devices such as smartphones and tablet computers, it has become possible to perform Big Data operations, including modern data mining processes, onboard these small devices. A decade of research has proved the feasibility of what has been termed Mobile Data Mining, with a focus on a single mobile device running data mining processes. However, it was not until 2010 that the authors of this book initiated the Pocket Data Mining (PDM) project, exploiting seamless communication among handheld devices to perform data analysis tasks that were infeasible until recently. PDM is the process of collaboratively extracting knowledge from distributed data streams in a mobile computing environment. This book provides the reader with an in-depth treatment of this emerging area of research. Details of the techniques used and thorough experimental studies are given. More importantly, and exclusive to this book, the authors provide a detailed practical guide to the deployment of PDM in the mobile environment. An important extension to the basic implementation of PDM, dealing with concept drift, is also reported. In the era of Big Data, potential applications of paramount importance offered by PDM in a variety of domains, including security, business and telemedicine, are discussed.
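As a rough illustration of the collaborative stream-mining idea summarised in this abstract, the sketch below trains one incremental learner per simulated "device" and combines their predictions by majority vote. It is not the book's implementation: the device simulation, the late-stream drift injection and the voting scheme are illustrative assumptions.

```python
# Minimal sketch of collaborative data-stream mining across several "devices":
# each device trains an incremental classifier on its own stream, and queries
# are answered by majority vote over the distributed learners.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
N_DEVICES, N_FEATURES, CLASSES = 3, 10, np.array([0, 1])

# One incremental learner per handheld device (hypothetical setup).
devices = [SGDClassifier() for _ in range(N_DEVICES)]

def stream_batch(n=50, drifted=False):
    """Simulate a small batch from a local data stream, optionally after concept drift."""
    X = rng.normal(size=(n, N_FEATURES))
    w = -np.ones(N_FEATURES) if drifted else np.ones(N_FEATURES)  # drift flips the concept
    y = (X @ w > 0).astype(int)
    return X, y

# Each device learns incrementally from its own stream; drift appears late in the stream.
for step in range(20):
    for clf in devices:
        X, y = stream_batch(drifted=step > 15)
        clf.partial_fit(X, y, classes=CLASSES)

# Collaborative classification: majority vote over the distributed learners.
X_query, y_true = stream_batch(n=200, drifted=True)
votes = np.stack([clf.predict(X_query) for clf in devices])
y_vote = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy after drift:", (y_vote == y_true).mean())
```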
Abstract:
The term 'big data' has recently emerged to describe a range of technological and commercial trends enabling the storage and analysis of huge amounts of customer data, such as that generated by social networks and mobile devices. Much of the commercial promise of big data is in the ability to generate valuable insights from collecting new types and volumes of data in ways that were not previously economically viable. At the same time a number of questions have been raised about the implications for individual privacy. This paper explores key perspectives underlying the emergence of big data, and considers both the opportunities and ethical challenges raised for market research.
Abstract:
This article reflects on a decade of British counterinsurgency operations. Questioning the idea that lessons have been learnt, the paper challenges the assumptions that are being used to frame future strategic choice. Suggesting that defence engagement is primarily focused on optimising overseas interventions while avoiding a deeper strategic reassessment about whether the UK should be undertaking these sorts of activities, the article calls for a proper debate on Britain's national security interests.
Abstract:
As we enter an era of ‘big data’, asset information is becoming a deliverable of complex projects. Prior research suggests digital technologies enable rapid, flexible forms of project organizing. This research analyses practices of managing change in Airbus, CERN and Crossrail through desk-based review, interviews, visits and a cross-case workshop. These organizations deliver complex projects, rely on digital technologies to manage large data-sets, and use configuration management, a systems engineering approach with mid-20th century origins, to establish and maintain integrity. In these organizations, configuration management has become more, rather than less, important. Asset information is structured, with change managed through digital systems using relatively hierarchical, asynchronous and sequential processes. The paper contributes by uncovering limits to flexibility in complex projects where integrity is important. Challenges of managing change are discussed, considering the evolving nature of configuration management, the potential use of analytics on complex projects, and implications for research and practice.
Abstract:
Pervasive healthcare aims to deliver deinstitutionalised healthcare services to patients anytime and anywhere. It involves remote data collection through mobile devices and sensor networks, where the data are typically large in volume, varied in format and high in frequency. The nature of big data, characterised by volume, variety, velocity and veracity, together with its analytical capabilities, complements the delivery of pervasive healthcare. However, there is limited research intertwining these two domains. Most research focuses mainly on the technical context of big data application in the healthcare sector; little attention has been paid to the strategic role of big data, which impacts the quality of healthcare service provision at the organisational level. Therefore, this paper delivers a conceptual view of big data architecture for pervasive healthcare via an intensive literature review to address the aforementioned research problems. This paper provides three major contributions: 1) it identifies the research themes of big data and pervasive healthcare, 2) it establishes the relationships between these themes, which together compose the big data architecture for pervasive healthcare, and 3) it sheds light on future research, such as semiosis and sense-making, and enables practitioners to implement big data in pervasive healthcare through the proposed architecture.
Abstract:
Widespread commercial use of the internet has significantly increased the volume and scope of data being collected by organisations. ‘Big data’ has emerged as a term to encapsulate both the technical and commercial aspects of this growing data collection activity. To date, much of the discussion of big data has centred upon its transformational potential for innovation and efficiency, yet there has been less reflection on its wider implications beyond commercial value creation. This paper builds upon normal accident theory (NAT) to analyse the broader ethical implications of big data. It argues that the strategies behind big data require organisational systems that leave them vulnerable to normal accidents, that is to say some form of accident or disaster that is both unanticipated and inevitable. Whilst NAT has previously focused on the consequences of physical accidents, this paper suggests a new form of system accident that we label data accidents. These have distinct, less tangible and more complex characteristics and raise significant questions over the role of individual privacy in a ‘data society’. The paper concludes by considering the ways in which the risks of such data accidents might be managed or mitigated.
Abstract:
The size and complexity of data sets generated within ecosystem-level programmes merit their capture, curation, storage, analysis, synthesis and visualisation using Big Data approaches. This review looks at previous attempts to organise and analyse such data through the International Biological Programme and draws on the mistakes made and the lessons learned for effective Big Data approaches to current Research Councils United Kingdom (RCUK) ecosystem-level programmes, using Biodiversity and Ecosystem Service Sustainability (BESS) and Environmental Virtual Observatory Pilot (EVOp) as exemplars. The challenges raised by such data are identified and explored, and suggestions are made for the two major issues of extending analyses across different spatio-temporal scales and effectively integrating quantitative and qualitative data.
Abstract:
The General Election for the 56th United Kingdom Parliament was held on 7 May 2015. Tweets related to UK politics, not only those with the specific hashtag "#GE2015", were collected between 1 March and 31 May 2015. The resulting dataset contains over 28 million tweets, totalling 118 GB uncompressed or 15 GB compressed. This study describes the method used to collect the tweets, presents some analysis, including a political sentiment index, and outlines interesting research directions on Big Social Data based on Twitter microblogging.
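To make the idea of a political sentiment index concrete, a minimal sketch is given below for a tweet dump stored as JSON Lines (one tweet per line). The field names, the tiny word lists and the per-day averaging are illustrative assumptions, not the study's actual method or lexicon.

```python
# Hypothetical sketch: a daily sentiment index from a collected tweet dump.
import json
from collections import defaultdict

POSITIVE = {"good", "great", "hope", "win", "support"}   # toy lexicon (assumption)
NEGATIVE = {"bad", "fail", "crisis", "lose", "lies"}

def tweet_score(text):
    """Count positive minus negative lexicon hits in a tweet."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_sentiment_index(path):
    """Average tweet score per calendar day over the whole dump."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            day = tweet["created_at"][:10]          # assumes an ISO-like date prefix
            totals[day] += tweet_score(tweet["text"])
            counts[day] += 1
    return {day: totals[day] / counts[day] for day in sorted(totals)}

# Usage (hypothetical file name):
# print(daily_sentiment_index("ge2015_tweets.jsonl"))
```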
Abstract:
Many institutions worldwide have developed ocean reanalysis systems (ORAs) utilizing a variety of ocean models and assimilation techniques. However, the quality of salinity reanalyses arising from the various ORAs has not yet been comprehensively assessed. In this study, we assess the upper ocean salinity content (depth-averaged over 0–700 m) from 14 ORAs and 3 objective ocean analysis systems (OOAs) as part of the Ocean Reanalyses Intercomparison Project. Our results show that the best agreement between estimates of salinity from different ORAs is obtained in the tropical Pacific, likely due to relatively abundant atmospheric and oceanic observations in this region. The largest disagreement in salinity reanalyses is in the Southern Ocean along the Antarctic Circumpolar Current, as a consequence of the sparseness of both atmospheric and oceanic observations there. The West Pacific warm pool is the largest region where the signal-to-noise ratio of reanalysed salinity anomalies is greater than 1. Therefore, the current salinity reanalyses in the tropical Pacific Ocean may be more reliable than those in the Southern Ocean and in regions along the western boundary currents. Moreover, we found that the assimilation of salinity in ocean regions with relatively strong ocean fronts is still a common problem in most ORAs. The impact of Argo data on the salinity reanalyses is visible, especially within the upper 500 m, where the interannual variability is large. An increasing trend in global-averaged salinity anomalies is found only within the top 0–300 m layer, but with quite large diversity among different ORAs. Beneath 300 m depth, the global-averaged salinity anomalies from most ORAs switch from a slightly increasing trend before 2002 to a decreasing trend after 2002. This rapid switch in trend is most likely an artefact of the dramatic change in the observing system due to the implementation of Argo.
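The signal-to-noise diagnostic referred to here can be sketched as the ratio of the ensemble-mean anomaly to the spread across products. The code below is a minimal illustration under that assumption; it is not the ORA-IP diagnostic code, and the array layout, spread definition and time averaging are assumptions.

```python
# Minimal sketch of an ensemble signal-to-noise diagnostic across reanalysis products,
# assuming salinity anomalies are stacked as (product, time, lat, lon).
import numpy as np

def signal_to_noise(anomalies):
    """|ensemble-mean anomaly| divided by the inter-product spread, per grid point."""
    mean = anomalies.mean(axis=0)                    # ensemble mean, shape (time, lat, lon)
    spread = anomalies.std(axis=0, ddof=1)           # spread across products
    snr = np.abs(mean) / np.where(spread > 0, spread, np.nan)
    return snr.mean(axis=0)                          # time-averaged SNR map

# Toy example: 14 products, 120 months, a coarse global grid.
rng = np.random.default_rng(1)
anoms = rng.normal(size=(14, 120, 90, 180))
snr_map = signal_to_noise(anoms)
print("fraction of grid points with SNR > 1:", np.nanmean(snr_map > 1))
```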
Abstract:
Uncertainty in ocean analysis methods and deficiencies in the observing system are major obstacles for the reliable reconstruction of the past ocean climate. The variety of existing ocean reanalyses is exploited in a multi-reanalysis ensemble to improve the ocean state estimation and to gauge uncertainty levels. The ensemble-based analysis of signal-to-noise ratio allows the identification of ocean characteristics for which the estimation is robust (such as tropical mixed-layer depth and upper-ocean heat content) and those where large uncertainty exists (deep ocean, Southern Ocean, sea ice thickness, salinity), providing guidance for future enhancement of the observing and data assimilation systems.
Abstract:
Ocean–sea ice reanalyses are crucial for assessing the variability and recent trends in the Arctic sea ice cover. This is especially true for sea ice volume, as long-term, large-scale sea ice thickness observations are nonexistent. Results from the Ocean ReAnalyses Intercomparison Project (ORA-IP) are presented, with a focus on Arctic sea ice fields reconstructed by state-of-the-art global ocean reanalyses. Differences between the various reanalyses are explored in terms of the effects of data assimilation, model physics and atmospheric forcing on properties of the sea ice cover, including concentration, thickness, velocity and snow. Amongst the 14 reanalyses studied here, 9 assimilate sea ice concentration, and none assimilate sea ice thickness data. The comparison reveals an overall agreement in the reconstructed concentration fields, mainly because of the constraints on surface temperature imposed by direct assimilation of ocean observations, prescribed or assimilated atmospheric forcing, and assimilation of sea ice concentration. However, some spread still exists amongst the reanalyses, due to a variety of factors. In particular, a large spread in sea ice thickness is found within the ensemble of reanalyses, partially caused by biases inherited from their sea ice model components. Biases are also affected by the assimilation of sea ice concentration and the treatment of sea ice thickness in the data assimilation process. An important outcome of this study is that the spatial distribution of ice volume varies widely between products, with no reanalysis standing out as clearly superior when compared to altimetry estimates. The ice thickness from systems without assimilation of sea ice concentration is not worse than that from systems constrained with sea ice observations. An evaluation of the sea ice velocity fields reveals that ice drifts too fast in most systems. As an ensemble, the ORA-IP reanalyses capture trends in Arctic sea ice area and extent relatively well. However, the ensemble cannot be used to obtain a robust estimate of recent trends in Arctic sea ice volume. Biases in the reanalyses certainly impact the simulated air–sea fluxes in the polar regions and call into question the suitability of current sea ice reanalyses for initializing seasonal forecasts.
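For readers unfamiliar with the ice-volume comparison mentioned above, the sketch below shows one way such an ensemble diagnostic could be set up, assuming each product provides gridded sea ice concentration and thickness on a common grid with known cell areas. It is not the ORA-IP code; the grid, cell areas and toy fields are assumptions.

```python
# Minimal sketch of an ice-volume comparison across reanalysis products.
# Volume per product = sum over grid cells of concentration (0-1) * thickness (m) * cell area (m^2).
import numpy as np

def sea_ice_volume(concentration, thickness, cell_area):
    """Total sea ice volume in m^3 for one product."""
    return np.nansum(concentration * thickness * cell_area)

# Toy ensemble: 14 products on a small polar grid with ~50 km x 50 km cells.
rng = np.random.default_rng(2)
area = np.full((40, 360), 2.5e9)                      # cell area in m^2
volumes = []
for _ in range(14):
    conc = np.clip(rng.normal(0.7, 0.2, size=area.shape), 0, 1)
    thick = np.clip(rng.normal(1.5, 0.8, size=area.shape), 0, None)
    volumes.append(sea_ice_volume(conc, thick, area))

volumes = np.array(volumes)
print("ensemble mean volume (10^3 km^3):", volumes.mean() / 1e12)
print("inter-product spread (10^3 km^3):", volumes.std(ddof=1) / 1e12)
```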