781 results for big data storage
Abstract:
This dissertation offers a critical international political economy (IPE) analysis of the ways in which consumer information has been governed throughout the formal history of consumer finance (1840–present). Drawing primarily on the United States, it problematizes the notion of consumer financial big data as a ‘new era’ by tracing its roots from the late nineteenth century through to the present. Using a qualitative case study approach, the project applies a unique theoretical framework to three instances of governance in consumer credit big data. Throughout, the historically specific means used to govern consumer credit data are shown to be rooted in dominant ideas, institutions and material factors.
Abstract:
Advocates of Big Data assert that we are in the midst of an epistemological revolution, promising the displacement of the modernist methodological hegemony of causal analysis and theory generation. It is alleged that the growing ‘deluge’ of digitally generated data, and the development of computational algorithms to analyse them, have enabled new inductive ways of accessing everyday relational interactions through their ‘datafication’. This paper critically engages with these discourses of Big Data and complexity, particularly as they operate in the discipline of International Relations, where it is alleged that Big Data approaches have the potential to develop self-governing societal capacities for resilience and adaptation through the real-time reflexive awareness and management of risks and problems as they arise. The epistemological and ontological assumptions underpinning Big Data are then analysed to suggest that critical and posthumanist approaches have come of age through these discourses, enabling process-based and relational understandings to be translated into policy and governance practices. The paper thus raises some questions for the development of critical approaches to new posthuman forms of governance and knowledge production.
How the World Learned to Stop Worrying and Love Failure: Big Data, Resilience and Emergent Causality
Abstract:
In modernity, failure was the discourse of critique; today, it is increasingly the discourse of power: failure has changed its allegiances. Over the last two decades, failure has been enfolded into discourses of power, facilitating the development of new policy approaches. Foremost among governing approaches that seek to include failure, and to govern through it, is that of resilience. This article reflects upon how the understanding of failure has been transformed in this process, linking this transformation to the radical appreciation of contingency and of the limits to instrumental cause-and-effect approaches to rule. Whereas modernity was shaped by a contestation over failure as an epistemological boundary, under conditions of contingency and complexity there appears to be a new consensus on failure as an ontological necessity. This problematic ‘ontological turn’ is illustrated using examples of changing approaches to risks, especially anthropogenic understandings of environmental threats formerly seen as ‘natural’.
Abstract:
This paper synthesizes and discusses the spatial and temporal patterns of archaeological sites in Ireland, spanning the Neolithic period and the Bronze Age transition (4300–1900 cal BC), in order to explore the timing and implications of the main changes that occurred in the archaeological record of that period. Large amounts of new data are sourced from unpublished developer-led excavations and combined with national archives, published excavations and online databases. Bayesian radiocarbon models and context- and sample-sensitive summed radiocarbon probabilities are used to examine the dataset. The study captures the scale and timing of the initial expansion of Early Neolithic settlement and the ensuing attenuation of all such activity, an apparent boom-and-bust cycle. The Late Neolithic and Chalcolithic periods are characterised by a resurgence and diversification of activity. Contextualisation and spatial analysis of the radiocarbon data reveal finer-scale patterning than is usually possible with summed-probability approaches: the boom-and-bust models of prehistoric populations may, in fact, be a misinterpretation of more subtle demographic changes occurring at the same time as cultural change and attendant differences in the archaeological record.
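As a concrete illustration of the summed-probability technique mentioned above, the sketch below sums normal-approximated calibrated dates on a shared timeline. It is a minimal sketch only: the dates and errors are invented, and a real analysis would first calibrate the radiocarbon ages against a curve such as IntCal20 (e.g. with OxCal or the rcarbon package) rather than assume normal densities.

```python
import numpy as np

# Invented calibrated date estimates (cal BC) as (mean, 1-sigma) pairs.
# A real analysis would first calibrate the 14C ages against IntCal20.
dates = [(3900, 40), (3850, 55), (3700, 30), (2400, 60), (2350, 45)]

grid = np.arange(4300, 1899, -1)  # study window, 1-year steps in cal BC

def density(mean, sd, t):
    """Normal density approximating one calibrated date."""
    return np.exp(-0.5 * ((t - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Summed probability distribution: add the per-date densities, then
# normalise so the curve sums to 1 over the study window.
spd = sum(density(m, s, grid) for m, s in dates)
spd /= spd.sum()

print(f"SPD peaks around {grid[spd.argmax()]} cal BC")
```

Clusters of dates produce peaks in the summed curve; the paper's point is that such peaks can reflect changes in deposition and context as much as changes in population, which is why it pairs them with contextual and spatial analysis.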
Abstract:
Much has been written about Big Data from technical, economic, juridical and ethical perspectives. Still, very little empirical and comparative data is available on how Big Data is approached and regulated in Europe and beyond. This contribution makes a first effort to fill that gap by presenting the reactions to a survey on Big Data from the Data Protection Authorities of fourteen European countries, together with comparative legal research covering eleven countries. It presents those results and addresses ten challenges for the regulation of Big Data.
Abstract:
Market research is often conducted through conventional methods such as surveys, focus groups and interviews. The drawbacks of these methods are that they can be costly and time-consuming. This study develops a new method, based on a combination of standard techniques like sentiment analysis and normalisation, to conduct market research in a manner that is free and quick. The method can be used in many application areas, but this study focuses mainly on the veganism market to identify vegan food preferences in the form of a profile. Several food words are identified, along with their distribution between positive and negative sentiments in the profile. Surprisingly, non-vegan foods such as cheese, cake, milk, pizza and chicken dominate the profile, indicating that there is a significant market for vegan-suitable alternatives to such foods. Meanwhile, vegan-suitable foods such as coconut, potato, blueberries, kale and tofu also make strong appearances in the profile. Validation is performed by applying the method to Volkswagen vehicle data to identify positive and negative sentiment across five car models. Some results were found to be consistent with sales figures and expert reviews, while others were inconsistent. The reliability of the method is therefore questionable, and the results should be used with caution.
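The abstract does not publish the implementation, but the core combination it names, off-the-shelf sentiment scoring plus normalisation over term counts, can be sketched in a few lines. The sketch below uses NLTK's VADER analyser; the posts and the food lexicon are invented stand-ins for the scraped social-media text such a study would use.

```python
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

# Invented sample posts standing in for scraped social-media text.
posts = [
    "This coconut curry is amazing, so creamy",
    "I really miss cheese, vegan cheese just isn't the same",
    "Crispy tofu with kale, absolutely delicious",
    "The milk in my coffee tasted awful today",
]
food_terms = {"coconut", "cheese", "tofu", "kale", "milk", "pizza"}

sia = SentimentIntensityAnalyzer()
pos, neg = Counter(), Counter()

for post in posts:
    score = sia.polarity_scores(post)["compound"]  # -1 (negative) to +1 (positive)
    for term in food_terms:
        if term in post.lower():
            (pos if score >= 0 else neg)[term] += 1

# Normalised profile: share of positive mentions per food term.
for term in sorted(pos | neg):
    total = pos[term] + neg[term]
    print(f"{term}: {pos[term] / total:.0%} positive of {total} mention(s)")
```

Normalising each term's positive share by its total mentions is what separates frequent but divisive foods (such as cheese in the study) from genuinely popular ones.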
Abstract:
The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. These specifically concern data processing with semantic harmonisation, low-level fusion, analytics, and knowledge modelling with high-level fusion and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project's results on critical oil and gas industry drilling operations, and of the ongoing large-scale eVacuate project on critical crowd behaviour detection in confined spaces.
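The TRIDEC and eVacuate pipelines themselves are not spelled out in the abstract, but the two lowest layers it names, semantic harmonisation and low-level fusion, can be illustrated generically. The sketch below is an assumption-laden toy: two invented sensor streams reporting the same quantity are resampled onto a shared time grid (harmonisation) and combined with fixed precision weights (low-level fusion).

```python
import pandas as pd

# Invented readings from two heterogeneous sensors measuring the same
# quantity at different rates (timestamps in seconds from stream start).
sensor_a = pd.Series([10.2, 10.8, 11.5], index=pd.to_datetime([0, 10, 20], unit="s"))
sensor_b = pd.Series([10.5, 11.0], index=pd.to_datetime([5, 15], unit="s"))

# Harmonisation (schematic): resample both streams onto a shared
# 5-second grid so their readings become directly comparable.
grid = pd.date_range(sensor_a.index[0], sensor_a.index[-1], freq="5s")
a = sensor_a.reindex(grid).interpolate()
b = sensor_b.reindex(grid).interpolate().bfill()

# Low-level fusion: fixed precision weighting, with sensor_a assumed
# (arbitrarily, for the sketch) to have half the noise variance of sensor_b.
fused = (2 * a + b) / 3
print(fused)
```

High-level fusion and reasoning, the upper layers the abstract mentions, would then operate on such fused streams rather than on raw sensor readings.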
Abstract:
Hotel chains have access to a treasure trove of “big data” on individual hotels’ monthly electricity and water consumption. Benchmarked comparisons of hotels within a specific chain create the opportunity to cost-effectively improve the environmental performance of specific hotels. This paper describes a simple approach for using such data to achieve the joint goals of reducing operating expenditure and achieving broad sustainability goals. In recent years, energy economists have used such “big data” to generate insights about the energy consumption of the residential, commercial, and industrial sectors. Lessons from these studies are directly applicable to the hotel sector. A hotel’s administrative data provide a “laboratory” for conducting randomized controlled trials to establish what works in enhancing hotel energy efficiency.
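The paper's own method is not reproduced here, but a common way to operationalise such benchmarking is to regress each hotel's monthly consumption on occupancy and weather chain-wide and then rank hotels by their residuals. The sketch below does exactly that on invented numbers; the column names and the degree-day control are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd

# Invented monthly panel for three hotels in one chain.
df = pd.DataFrame({
    "hotel":       ["A", "A", "B", "B", "C", "C"],
    "kwh":         [82000, 91000, 78000, 85000, 99000, 104000],
    "room_nights": [2400, 2700, 2300, 2600, 2500, 2750],
    "cooling_dd":  [120, 210, 115, 205, 125, 215],  # cooling degree-days
})

# Chain-wide benchmark: expected kWh given occupancy and weather.
X = np.column_stack([np.ones(len(df)), df["room_nights"], df["cooling_dd"]])
coef, *_ = np.linalg.lstsq(X, df["kwh"], rcond=None)
df["residual"] = df["kwh"] - X @ coef

# Hotels with large positive mean residuals use more energy than peers
# facing similar occupancy and weather: candidates for an efficiency audit.
print(df.groupby("hotel")["residual"].mean().sort_values(ascending=False))
```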
Abstract:
The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data, ranging from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. 19th-century science sought to capture phenomena and to show their appearance through causal mechanisms, while 20th-century science attempted to save the appearances and relinquish causal explanations. Now 21st-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed model of reality and away from a consideration of the internal nature of rules in relation to various practices. In effect Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.
Abstract:
Relational databases have been the dominant approach in large database systems since the 1980s. Over the last decade, almost all industrial and personal information exchange has moved into the electronic world, causing enormous growth in data volumes, and that growth continues exponentially. At the same time, non-relational databases, that is, NoSQL databases, have risen to prominence. Many organisations handle large amounts of unstructured data, for which a traditional relational database alone may not be the best, or even a sufficient, option. The shift in internet culture behind the term Web 2.0 favours more adaptable and scalable NoSQL systems. Internet users, especially on social media, produce vast amounts of unstructured data. The information collected is no longer formatted according to a fixed model: a single record may include, for example, images, videos, references to instances created by other users, or address data. This thesis examines the architecture of NoSQL systems and their role, particularly in large information systems, and compares their advantages and disadvantages relative to relational databases.
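To make the schema-less records the thesis describes concrete, the sketch below shows a single social-media post as a nested document of the kind a NoSQL document store (e.g. MongoDB or CouchDB) accepts as-is. The field names and values are invented; the point is that one record can freely mix media, references to other users and address data without a predeclared schema.

```python
import json

# One social-media record of the kind the thesis describes: no fixed
# schema, nested media and cross-references. A document store keeps it
# as-is; a relational design would split it into several joined tables
# (posts, media, mentions, addresses).
post = {
    "user": "alice",
    "text": "Lunch by the harbour",
    "media": [
        {"type": "image", "url": "https://example.com/p/1.jpg"},
        {"type": "video", "url": "https://example.com/v/1.mp4"},
    ],
    "mentions": ["bob"],  # references to instances created by other users
    "location": {"city": "Helsinki", "street": "Satamakatu 3"},
}

print(json.dumps(post, indent=2))
```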
Abstract:
The speed with which data has moved from being scarce, expensive and valuable, justifying detailed and careful verification and analysis, to a situation where streams of detailed data are almost too large to handle has caused a series of shifts. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. Until recently, Big Data applications were driven by the capacity to harness massive quantities of existing data. Now real-time data flows are rising swiftly, becoming more invasive and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often remarkably intrusive personal data need to be resolved, and rapidly, but are likely to encounter rising resistance from industrial and commercial bodies who see this data flow as ‘theirs’. Many changes in ICT have strained the resolution of conflicts between IP exploiters and their customers, but this one is of a different scale, owing to the wide potential for individual customisation of pricing and identification and the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift in the current confusion over who owns what data into alignments that better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits that are clearly available to all if these issues can be positively resolved. Customers need some means of making these data flows contestable. These Big Data flows are only going to grow and become ever more intrusive, and a better balance is necessary. For the first time these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown. Yet the data gathered is not available to the subjects; this is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. This Web extra is the audio part of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences" and discusses these themes.