1000 results for Data Archives
Abstract:
History as a discipline has been accused of being atheoretical. Business historians working at business schools, however, need to better explicate their historical methodology, rather than their theory, in order to communicate the value of archival research to social scientists and to train future doctoral students outside history departments. This paper outlines an important aspect of historical methodology: data collection from archives. In this area, postcolonialism and archival ethnography have made significant methodological contributions that reach beyond non-Western history, emphasising the importance of considering how archives were created and how one can legitimately use them despite their limitations. I argue that these approaches offer new insights into the particularities of researching business archives.
Abstract:
A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span the years 1997 to 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data came from multi-project archives, acquired via open internet services, and from individual projects, acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for the validation of satellite-derived ocean-colour products, available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. Making the metadata available also allows each set of data to be analysed separately. The compiled data are available at doi: 10.1594/PANGAEA.854832 (Valente et al., 2015).
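The "averaging of observations that were close in time and space" step can be sketched as follows; the thresholds, field names and toy records below are invented for illustration and are not the OC-CCI matchup criteria:

```python
from math import hypot

# Hypothetical closeness thresholds (NOT the OC-CCI values).
TIME_H = 1.0      # hours
DIST_DEG = 0.01   # degrees

def merge_close(obs):
    """Average observations that fall within the time/space thresholds.

    obs: list of dicts with keys t (hours), lat, lon, chl (mg m-3), source.
    Provenance-style fields are kept from the first member of each group,
    mirroring how the compilation preserves original-source metadata.
    """
    merged = []
    used = [False] * len(obs)
    for i, o in enumerate(obs):
        if used[i]:
            continue
        group = [o]
        used[i] = True
        for j in range(i + 1, len(obs)):
            p = obs[j]
            if (not used[j]
                    and abs(o["t"] - p["t"]) <= TIME_H
                    and hypot(o["lat"] - p["lat"], o["lon"] - p["lon"]) <= DIST_DEG):
                group.append(p)
                used[j] = True
        merged.append({
            "source": o["source"],  # metadata preserved, not averaged
            "t": sum(g["t"] for g in group) / len(group),
            "lat": sum(g["lat"] for g in group) / len(group),
            "lon": sum(g["lon"] for g in group) / len(group),
            "chl": sum(g["chl"] for g in group) / len(group),
        })
    return merged

obs = [
    {"source": "MOBY", "t": 0.0,  "lat": 20.80, "lon": -157.19, "chl": 0.10},
    {"source": "MOBY", "t": 0.5,  "lat": 20.80, "lon": -157.19, "chl": 0.12},
    {"source": "HOT",  "t": 30.0, "lat": 22.75, "lon": -158.00, "chl": 0.08},
]
print(len(merge_close(obs)))  # → 2 (the two nearby MOBY points collapse into one)
```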
Abstract:
This datafile presents chemical, physical, and age-dating information from the Store Mosse peat bog in southern Sweden. The record dates back to 8900 cal yr BP. The aim of the research was to reconstruct mineral dust deposition over time; accordingly, only the lithogenic element data (Al, Ga, Rb, Sc, Ti, Y, Zr, Th and the REE) are presented, as the sample preparation method was tailored to these elements. These data are supported by parameters describing the deposit, including bulk density, humification, ash content and net peat accumulation rates.
Abstract:
Compilation of figure recipes for all figures of Chapter 5 of the IPCC Working Group I Fifth Assessment Report. In addition to figure captions, the figure recipes are intended to provide detailed information on how each figure was created. Where not publicly available elsewhere, the processed data underlying the respective figures are also provided here.
Abstract:
We describe the contemporary hydrography of the pan-Arctic land area draining into the Arctic Ocean, northern Bering Sea, and Hudson Bay on the basis of observational records of river discharge and computed runoff. The Regional Arctic Hydrographic Network data set, R-ArcticNET, is presented, which is based on 3754 recording stations drawn from Russian, Canadian, European, and U.S. archives. R-ArcticNET represents the single largest data compendium of observed discharge in the Arctic. Approximately 73% of the nonglaciated area of the pan-Arctic is monitored by at least one river discharge gage, giving a mean gage density of 168 gages per 10⁶ km². Average annual runoff is 212 mm yr⁻¹, with approximately 60% of the river discharge occurring from April to July. Gridded runoff surfaces are generated for the gaged portion of the pan-Arctic region to investigate global change signals. Siberia and Alaska showed increases in winter runoff during the 1980s relative to the 1960s and 1970s, in both annual and seasonal analyses. These changes are consistent with observations of change in the climatology of the region. Western Canada experienced decreased spring and summer runoff.
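As a rough arithmetic illustration (not part of the original analysis), the monitored area implied by the quoted gage count and gage density can be back-computed:

```python
# Consistency check on the figures quoted above: 3754 gages at a mean
# density of 168 gages per 10^6 km^2 imply the monitored area.
n_gages = 3754
density_per_1e6_km2 = 168
monitored_area_km2 = n_gages / density_per_1e6_km2 * 1e6
print(round(monitored_area_km2 / 1e6, 1))  # → 22.3 (million km^2 monitored)
```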
Abstract:
Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition, and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations has been developed. This technique has allowed for identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real time acquisition of large data sets is prohibitive.
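The use of PCA to expose redundancy across excitation frequencies can be sketched on synthetic data; the channel count, mixing coefficients and noise level below are invented stand-ins, not the thesis's measurement set-up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-frequency eddy current data: n scans, each
# with real/imaginary coil responses at 4 excitation frequencies (8 channels).
# Channels are deliberately correlated to mimic skin-depth-driven redundancy.
n = 200
gap = rng.uniform(0.0, 1.0, n)    # hidden PT-to-CT gap driver
lift = rng.uniform(0.0, 0.2, n)   # a second, weaker factor (e.g. lift-off)
X = np.column_stack([a * gap + b * lift for a, b in
                     [(1.0, 0.1), (0.9, 0.2), (0.7, 0.4), (0.5, 0.5),
                      (1.1, 0.0), (0.8, 0.3), (0.6, 0.5), (0.4, 0.6)]])
X += rng.normal(0.0, 0.01, X.shape)  # measurement noise

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
print(var_ratio[:2])  # nearly all variance in 1-2 components => redundancy
```

Because the eight channels are driven by only two underlying factors, the leading principal components absorb almost all the variance, which is the kind of redundancy the thesis reports finding across frequencies.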
Abstract:
Smart cities, cities that are supported by an extensive digital infrastructure of sensors, databases and intelligent applications, have become a major area of academic, governmental and public interest. Simultaneously, there has been a growing interest in open data, the unrestricted use of organizational data for public viewing and use. Drawing on Science and Technology Studies (STS), Urban Studies and Political Economy, this thesis examines how digital processes, open data and the physical world can be combined in smart city development, through a qualitative interview-based case study of a Southern Ontario municipality, Anytown. The thesis asks: what are the challenges associated with smart city development and open data proliferation; is open data complementary to smart urban development; and how is expertise constructed in these fields? The thesis concludes that smart city development in Anytown is a complex process, involving a variety of visions, programs and components. Although smart city and open data initiatives exist in Anytown, and some are even overlapping and complementary, smart city development is in its infancy. However, expert informants remained optimistic, faithful to a technologically sublime vision of what a smart city would bring. The thesis also questions the notion of expertise within the context of smart city and open data projects, concluding that assertions of expertise need to be treated with caution and scepticism when considering how knowledge is received, generated, interpreted and circulated within organizations.
Abstract:
This dissertation offers a critical international political economy (IPE) analysis of the ways in which consumer information has been governed throughout the formal history of consumer finance (1840 – present). Drawing primarily on the United States, this project problematizes the notion of consumer financial big data as a ‘new era’ by tracing its roots historically from late nineteenth century through to the present. Using a qualitative case study approach, this project applies a unique theoretical framework to three instances of governance in consumer credit big data. Throughout, the historically specific means used to govern consumer credit data are rooted in dominant ideas, institutions and material factors.
Abstract:
This paper synthesizes and discusses the spatial and temporal patterns of archaeological sites in Ireland, spanning the Neolithic period and the Bronze Age transition (4300–1900 cal BC), in order to explore the timing and implications of the main changes that occurred in the archaeological record of that period. Large amounts of new data are sourced from unpublished developer-led excavations and combined with national archives, published excavations and online databases. Bayesian radiocarbon models and context- and sample-sensitive summed radiocarbon probabilities are used to examine the dataset. The study captures the scale and timing of the initial expansion of Early Neolithic settlement and the ensuing attenuation of all such activity—an apparent boom-and-bust cycle. The Late Neolithic and Chalcolithic periods are characterised by a resurgence and diversification of activity. Contextualisation and spatial analysis of radiocarbon data reveals finer-scale patterning than is usually possible with summed-probability approaches: the boom-and-bust models of prehistoric populations may, in fact, be a misinterpretation of more subtle demographic changes occurring at the same time as cultural change and attendant differences in the archaeological record.
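The summation step behind summed radiocarbon probabilities can be sketched as follows; this toy version represents each date as a Gaussian on the calendar axis and skips the ¹⁴C calibration (e.g. against a calibration curve) that real analyses require, and the dates themselves are invented:

```python
import numpy as np

# Calendar grid spanning the study period (cal BC, descending).
years = np.arange(4300, 1900 - 1, -1)

# Invented dates: (mean cal BC, standard deviation). A real SPD would use
# calibrated probability distributions, not Gaussians.
dates = [(4000, 50), (3950, 40), (3900, 60), (2500, 30)]

spd = np.zeros_like(years, dtype=float)
for mu, sd in dates:
    p = np.exp(-0.5 * ((years - mu) / sd) ** 2)
    spd += p / p.sum()   # each date contributes unit probability mass
spd /= len(dates)        # normalise so the SPD sums to 1

peak = years[np.argmax(spd)]
print(peak)  # the SPD peaks where the three early dates overlap
```

The paper's point about context- and sample-sensitive approaches is precisely that such a summed curve can smear distinct processes together, so peaks like this one need contextual and spatial scrutiny before being read as population booms.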
Abstract:
Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering of QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question-space and answer-space similarities in such a way that the space with the higher match is allowed to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) can be localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
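The composition idea can be sketched as below; the max() rule is one illustrative reading of "the space on which the match is higher is allowed to dominate" and may differ from the paper's actual MixKMeans formulation, and the toy vectors are invented:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def qa_similarity(q1, a1, q2, a2):
    """Compose question- and answer-space similarities so that the
    better-matching space dominates (illustrative max() composition)."""
    sim_q = cosine(q1, q2)   # similarity between the two questions
    sim_a = cosine(a1, a2)   # similarity between the two answers
    return max(sim_q, sim_a)

# Toy vectors: the questions barely match, but the answers match well,
# so the pair is still judged similar - similarity localized in one space.
q1, a1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])
q2, a2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.9])
print(qa_similarity(q1, a1, q2, a2))  # close to 1.0 despite dissimilar questions
```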
Abstract:
This study used multivariate and mathematical methods to integrate chemical and ecotoxicological data obtained for the Santos Estuarine System and for the region near the discharge zone of the Santos submarine outfall, with the aim of establishing environmental risks more accurately and thereby identifying priority areas and guiding control programmes and public policy. For both datasets, violations of numerical sediment-quality values tended to be associated with the occurrence of toxicity. For the estuary, this trend was corroborated by correlations between toxicity and the concentrations of PAHs and Cu, while for the outfall region it was corroborated by the correlation between toxicity and the mercury content of the sediment. Mean-normalized values were calculated for each sample, allowing the samples to be ranked according to toxicity and contamination. Cluster analyses confirmed the results of these rankings. For the estuarine system data, the samples separated into three categories: stations SSV-2, SSV-3 and SSV-4 are at greatest risk, followed by station SSV-6. Stations SSV-1 and SSV-5 showed better conditions. For the outfall region, samples 1 and 2 showed better conditions, while station 5 appeared to be at greater risk, followed by stations 3 and 4, which showed only some signs of alteration.
Abstract:
In recent years, 380V DC and 48V DC distribution systems have been extensively studied for the latest data centers. It is widely believed that the 380V DC system is a very promising candidate because of its lower cable cost compared to the 48V DC system. However, previous studies have not adequately addressed the low reliability of 380V DC systems caused by the large number of series-connected batteries. In this thesis, a quantitative comparison of the two systems is presented in terms of efficiency, reliability and cost. A new multi-port DC UPS with both a high-voltage output and a low-voltage output is proposed. When utility AC is available, it delivers power to the load through its high-voltage output and charges the battery through its low-voltage output. When utility AC is off, it boosts the low battery voltage and delivers power to the load from the battery. Thus, the advantages of both systems are combined and their disadvantages are avoided. High efficiency is also achieved, as only one converter is working in either situation. Details of the design and analysis of the new UPS are presented. For the main AC-DC part of the new UPS, a novel bridgeless three-level single-stage AC-DC converter is proposed. It eliminates the auxiliary circuit for balancing the capacitor voltages and the two bridge rectifier diodes of the previous topology. Zero-voltage switching, high power factor, and low component stresses are achieved with this topology. Compared to previous topologies, the proposed converter has lower cost, higher reliability, and higher efficiency. The steady-state operation of the converter is analyzed and a decoupled model is proposed for the converter. For the battery-side converter, as part of the new UPS, a ZVS bidirectional DC-DC converter based on self-sustained oscillation control is proposed.
Frequency control is used to ensure the ZVS operation of all four switches and phase shift control is employed to regulate the converter output power. Detailed analysis of the steady state operation and design of the converter are presented. Theoretical, simulation, and experimental results are presented to verify the effectiveness of the proposed concepts.
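The series-battery reliability concern raised for the 380V DC system can be illustrated with a back-of-envelope calculation; the per-cell reliability and string lengths below are invented figures, not values from the thesis:

```python
# A series string works only if every cell works, so string reliability is
# the product of the per-cell reliabilities. A 380V string needs far more
# series-connected cells than a 48V string, hence lower reliability.
r = 0.999                 # hypothetical per-cell reliability over mission time
n_48, n_380 = 4, 32       # hypothetical cell counts per string
R_48 = r ** n_48
R_380 = r ** n_380
print(R_48 > R_380)       # → True: the longer series string is less reliable
```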