47 results for HISTORICAL DATA-ANALYSIS
Abstract:
A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data presented comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centred log ratios (clr) to meet the requirements of compositional data analysis and avoid closure issues. Compositional multivariate techniques, including compositional principal component analysis (PCA) and minimum/maximum autocorrelation factor (MAF) analysis, were then used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was captured by the first four principal components (PCs), implying "significant" structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To improve on PCA by exploiting the spatial relationships in the data, a classification based on MAF analysis was undertaken using the first six dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the six most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat composition and the effect of this on the classification analysis.
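For readers unfamiliar with the clr step, the sketch below shows the general pattern on synthetic data. It is a minimal illustration, not the Tellus workflow; the array shapes and the NumPy/scikit-learn choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def clr(X):
    """Centred log-ratio transform: log of each part minus the row mean log,
    i.e. each part expressed relative to the sample's geometric mean."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Stand-in for the samples x elements matrix of imputed, strictly positive
# concentrations (6862 x 46 in the study; smaller synthetic data here)
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 46))

Z = clr(X)                                       # closure constraint removed
pca = PCA().fit(Z)
print(pca.explained_variance_ratio_[:4].sum())   # cf. ~72% in the first four PCs
scores = PCA(n_components=10).fit_transform(Z)   # inputs to a classifier
```

The 10-component scores would then feed a classifier such as scikit-learn's LinearDiscriminantAnalysis against the lithology labels.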
Abstract:
Tide gauge data are identified as legacy data, given the radical transition in observation methods and required output formats associated with tide gauges over the 20th century. Observed water level variation through tide-gauge records is regarded as the only significant basis for determining recent historical variation (decade to century) in mean sea level and storm surge. Few tide gauge records cover the full 20th century, so the Belfast (UK) Harbour tide gauge would be a strategic long-term (110-year) record if the full paper-based records (marigrams) were digitally restructured to allow consistent data analysis. This paper presents the methodology for extracting a consistent time series of observed water levels from the five different Belfast Harbour tide-gauge positions/machine types, starting in late 1901. Tide-gauge data were digitally retrieved from the original analogue (daily) records by scanning the marigrams and then extracting the sequential tidal elevations with graph-line-seeking software (Ungraph™). This automation of signal extraction allowed the full Belfast series to be retrieved quickly, relative to any manual x–y digitisation of the signal. Restructuring the variable-length tidal data sets into a consistent daily, monthly and annual file format was undertaken with project-developed software: Merge&Convert and MergeHYD allow consistent water-level sampling at both 60 min (the past standard) and 10 min intervals, the latter enhancing surge measurement. The Belfast tide-gauge data have been rectified, validated and quality controlled (IOC 2006 standards). The result is a consistent annual-based legacy data series for Belfast Harbour that includes over 2 million tidal-level observations.
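As a rough illustration of the restructuring step (not the project's Merge&Convert or MergeHYD code, whose internals are not described here), irregularly spaced digitised levels can be placed onto fixed 60 min and 10 min grids with pandas; the sample values below are invented.

```python
import pandas as pd

# Irregularly spaced water levels (metres) as digitised from a scanned marigram
levels = pd.Series(
    [1.82, 1.95, 2.10, 2.31, 2.28],
    index=pd.to_datetime(["1901-11-01 00:03", "1901-11-01 00:41",
                          "1901-11-01 00:58", "1901-11-01 01:24",
                          "1901-11-01 01:57"]),
)

# Resample onto the two target grids: 60 min (the past standard) and
# 10 min (fine enough to resolve surge events)
hourly  = levels.resample("60min").mean().interpolate("time")
ten_min = levels.resample("10min").mean().interpolate("time")
print(ten_min.head())
```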
Abstract:
Purpose – The purpose of this paper is to present an analysis of media representation of business ethics within 62 international newspapers, exploring the longitudinal and contextual evolution of business ethics and associated terminology. Levels of coverage and contextual analysis of the content of the articles are used as surrogate measures of the penetration of business ethics concepts into society.
Design/methodology/approach – The paper uses a text mining application based on two samples of data: analysis of 62 national newspapers in 21 countries from 1990 to 2008, and analysis of the content of two samples of articles containing the term business ethics (comprising 100 newspaper articles spread over an 18-year period, drawn from a sample of US and UK newspapers).
Findings – The paper demonstrates increased media coverage of sustainability topics over the last 18 years, associated with events such as the Rio Summit. Whilst some peaks are associated with business ethics scandals, the overall coverage remains steady. There is little apparent use in the media of concepts such as corporate citizenship. The academic community and company ethical codes appear to adopt a wider definition of business ethics, more akin to that associated with sustainability, than the focus taken by the media, especially in the USA. Coverage demonstrates clear regional bias, and contextual analysis of the articles in the UK and USA also shows interesting parallels and divergences in the media representation of business ethics.
Originality/value – The paper offers a promising avenue for exploring how the evolution of sustainability issues, including business ethics, can be tracked within a societal context.
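A coverage count of this kind reduces to grouping matching articles by year. The toy sketch below (pandas, invented rows) shows the shape of that computation rather than the paper's actual text mining application.

```python
import pandas as pd

# One row per newspaper article (a stand-in for the licensed corpora used in the paper)
articles = pd.DataFrame({
    "date": pd.to_datetime(["1995-03-02", "2002-07-19", "2002-11-30", "2008-01-15"]),
    "text": ["... business ethics ...", "... corporate citizenship ...",
             "... business ethics scandal ...", "... business ethics codes ..."],
})

# Annual coverage of the term "business ethics" as a surrogate for penetration
hits = articles[articles["text"].str.contains("business ethics", case=False)]
print(hits.groupby(hits["date"].dt.year).size())
```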
Abstract:
Mycobacterium avium subsp. paratuberculosis causes paratuberculosis (Johne's disease) in ruminants in most countries. Historical data suggest substantial differences in culturability of M. avium subsp. paratuberculosis isolates from small ruminants and cattle; however, a systematic comparison of culture media and isolates from different countries and hosts has not been undertaken. Here, 35 field isolates from the United States, Spain, Northern Ireland, and Australia were propagated in Bactec 12B medium and on Middlebrook 7H10 agar, genomically characterized, and subcultured to Lowenstein-Jensen (LJ), Herrold's egg yolk (HEY), modified Middlebrook 7H10, Middlebrook 7H11, and Watson-Reid (WR) agars, all with and without mycobactin J and some with sodium pyruvate. Fourteen genotypes of M. avium subsp. paratuberculosis were represented, as determined by BstEII IS900 and IS1311 restriction fragment length polymorphism analysis. There was no correlation between genotype and overall culturability, although most S strains tended to grow poorly on HEY agar. Pyruvate was inhibitory to some isolates. All strains grew on modified Middlebrook 7H10 agar but more slowly and less prolifically on LJ agar. Mycobactin J was required for growth on all media except 7H11 agar, although its addition improved growth on 7H11 agar as well. WR agar supported the growth of few isolates. The differences in growth of M. avium subsp. paratuberculosis that have historically been reported in diverse settings have been strongly influenced by the type of culture medium used. When an optimal culture medium, such as modified Middlebrook 7H10 agar, was used, very little difference between the growth phenotypes of diverse strains of M. avium subsp. paratuberculosis was observed. This optimal medium is recommended to remove bias in the isolation and cultivation of M. avium subsp. paratuberculosis.
Abstract:
This paper challenges the recent suggestion that a new financial elite has evolved which is able to capture substantial profit shares for itself. Specifically, it questions the assumption that new groups of financial intermediaries have increased in significance, primarily because there is evidence that various types of financial speculators have played a similarly extensive role at several junctures of economic development. The paper then develops the alternative hypothesis that, rather than being a recent development, the rise of these financial intermediaries is a cyclical phenomenon which is linked to specific regimes of capital accumulation. The hypothesis is underpinned by historical data from the US National Income and Product Accounts for the period from 1930 to 2000, which suggest that the activities of 'mainstream' financial intermediaries have been accompanied by the frequently countercyclical activities of a 'speculative' sector of security and commodity brokers. Based on the combination of this qualitative and quantitative evidence, the paper concludes that the rise of a speculative financial sector is a potentially recurrent phenomenon which is linked to periods of economic restructuring and turmoil.
Abstract:
In this article we review recent work on the history of French negation in relation to three key issues in socio-historical linguistics: identifying appropriate sources, interpreting scant or anomalous data, and interpreting generational differences in historical data. We then turn to a new case study, that of verbal agreement with la plupart, to see whether this can shed fresh light on these issues. We argue that organising data according to the author's date of birth is methodologically sounder than organising it according to date of publication. We explore the extent to which different genres and text types reflect changing patterns of usage, and suggest that additional, different case studies are required in order to make more secure generalisations about the reliability of different sources.
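The recommendation to organise tokens by the author's date of birth (an apparent-time view) rather than by publication date amounts to choosing a different grouping key. A minimal sketch with invented data:

```python
import pandas as pd

# One row per attestation of verbal agreement with "la plupart" (invented data)
tokens = pd.DataFrame({
    "author_born": [1652, 1688, 1715, 1740, 1762],
    "published":   [1690, 1730, 1752, 1789, 1801],
    "plural_verb": [0, 1, 0, 1, 1],          # 1 = plural agreement attested
})

# Group by the author's date of birth rather than the date of publication
births = pd.cut(tokens["author_born"], bins=[1650, 1700, 1750, 1800])
print(tokens.groupby(births, observed=True)["plural_verb"].mean())
```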
Abstract:
Context. Comet 67P/Churyumov-Gerasimenko is the target of the European Space Agency Rosetta spacecraft rendezvous mission. Detailed physical characterisation of the comet before arrival is important for mission planning as well as providing a test bed for ground-based observing and data-analysis methods. Aims: To conduct a long-term observational programme to characterize the physical properties of the nucleus of the comet, via ground-based optical photometry, and to combine our new data with all available nucleus data from the literature. Methods: We applied aperture photometry techniques to our imaging data and combined the extracted rotational lightcurves with data from the literature. Optical lightcurve inversion techniques were applied to constrain the spin state of the nucleus and its broad shape. We performed a detailed surface thermal analysis with the shape model and optical photometry by incorporating both into the new Advanced Thermophysical Model (ATPM), along with all available Spitzer 8-24 μm thermal-IR flux measurements from the literature. Results: A convex triangular-facet shape model was determined with axial ratios b/a = 1.239 and c/a = 0.819. These values can vary by as much as 7% in each axis and still result in a statistically significant fit to the observational data. Our best spin state solution has P_sid = 12.76137 ± 0.00006 h, and a rotational pole orientated at Ecliptic coordinates λ = 78°(±10°), β = +58°(±10°). The nucleus phase darkening behaviour was measured and best characterized using the IAU HG system. Best-fit parameters are: G = 0.11 ± 0.12 and H_R(1,1,0) = 15.31 ± 0.07. Our shape model combined with the ATPM can satisfactorily reconcile all optical and thermal-IR data, with the fit to the Spitzer 24 μm data taken in February 2004 being exceptionally good. We derive a range of mutually consistent physical parameters for each thermal-IR data set, including effective radius, geometric albedo, surface thermal inertia and roughness fraction. Conclusions: The overall nucleus dimensions are well constrained and strongly imply a broad nucleus shape more akin to comet 9P/Tempel 1, rather than the highly elongated or "bi-lobed" nuclei seen for comets 103P/Hartley 2 or 8P/Tuttle. The derived low thermal inertia of
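The IAU H,G phase law quoted in the results is a closed-form expression, so it can be evaluated directly at the best-fit parameters. A small sketch using the standard two-parameter form (Bowell et al.); this is illustrative code, not code from the paper:

```python
import numpy as np

def hg_reduced_magnitude(alpha_deg, H, G):
    """IAU H,G phase law: reduced magnitude at solar phase angle alpha (degrees)."""
    half = np.radians(alpha_deg) / 2.0
    phi1 = np.exp(-3.33 * np.tan(half) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(half) ** 1.22)
    return H - 2.5 * np.log10((1.0 - G) * phi1 + G * phi2)

# Best-fit nucleus parameters from the abstract: G = 0.11, H_R(1,1,0) = 15.31
print(hg_reduced_magnitude(alpha_deg=10.0, H=15.31, G=0.11))
```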
Abstract:
Recent technological advances have increased the quantity of movement data being recorded. While valuable knowledge can be gained by analysing such data, its sheer volume creates challenges. Geovisual analytics, which supports human cognition through tools for reasoning about data, offers powerful techniques for resolving these challenges. This paper introduces such a geovisual analytics environment for exploring movement trajectories, which provides visualisation interfaces based on the classic space-time cube. Additionally, a new approach, using the mathematical description of motion within a space-time cube, is used to determine the similarity of trajectories and forms the basis for clustering them. These techniques were used to analyse pedestrian movement. The results reveal interesting and useful spatiotemporal patterns and clusters of pedestrians exhibiting similar behaviour.
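The paper's exact motion-based similarity measure is not reproduced here; the sketch below uses a simple stand-in, the mean separation between time-aligned (x, y, t) points, to show how pairwise trajectory distances can feed a clustering step.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def trajectory_distance(a, b):
    """Mean 3D separation between time-aligned (x, y, t) points (a stand-in
    for the paper's motion-based similarity measure)."""
    return float(np.linalg.norm(a - b, axis=1).mean())

# Synthetic trajectories, each resampled to 50 common epochs
rng = np.random.default_rng(1)
trajs = [np.cumsum(rng.normal(size=(50, 3)), axis=0) for _ in range(20)]

# Condensed pairwise distance vector -> hierarchical clustering into 3 groups
n = len(trajs)
dists = [trajectory_distance(trajs[i], trajs[j])
         for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(dists, method="average"), t=3, criterion="maxclust")
print(labels)
```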
Abstract:
Research over the past two decades on the Holocene sediments from the tide-dominated west side of the lower Ganges delta has focussed on constraining the sedimentary environment through grain-size distributions (GSDs). GSDs have traditionally been assessed using probability density function (PDF) methods (e.g. log-normal and log skew-Laplace functions), but these approaches do not acknowledge the compositional nature of the data, which may compromise lithofacies interpretations. The use of PDF approaches in GSD analysis poses a series of challenges for the development of lithofacies models, such as equifinal distribution coefficients and the obscuring of empirical data variability. In this study a methodological framework for characterising GSDs is presented, based on compositional data analysis (CODA) combined with a multivariate statistical framework. This provides a statistically robust analysis of the fine tidal estuary sediments of the West Bengal Sundarbans, relative to alternative PDF approaches.
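To make the closure problem concrete: because grain-size fractions sum to one, the raw parts carry built-in dependence, which is what the CODA route addresses. A tiny NumPy demonstration with invented fractions, not the Sundarbans data:

```python
import numpy as np

# Sand / silt / clay fractions: each row sums to 1 (the closure constraint)
parts = np.array([[0.60, 0.30, 0.10],
                  [0.55, 0.33, 0.12],
                  [0.20, 0.55, 0.25],
                  [0.35, 0.45, 0.20]])

print(np.corrcoef(parts, rowvar=False))   # correlations distorted by closure

# clr: each part expressed against the sample's geometric mean, the usual
# starting point for CODA multivariate statistics
logp = np.log(parts)
clr_scores = logp - logp.mean(axis=1, keepdims=True)
```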
Abstract:
Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January–April and August–December inclusive, and typically has allocations of 10 nights per month.
Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1).
Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13–18 Å between 3345–9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935–2.53 μm, resolutions 23–33 Å) and imaging with broadband JHKs filters.
Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012–April 2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 supersede those previously released spectra: they have more reliable and quantifiable flux calibrations, are corrected for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ∼15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with the accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this.
Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification, and access to reliable pipeline-processed data products, has enabled early science papers within the first few months of the survey.
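The synthetic-photometry check mentioned in the results is, in outline, a folding of the flux-calibrated spectrum through a filter transmission curve. A generic sketch (not PESSTO pipeline code; the zero-point handling is an assumption):

```python
import numpy as np

def synthetic_mag(wave, flux, filt_wave, filt_trans, zp_flux):
    """Photon-weighted mean flux through a filter, converted to a magnitude
    against a supplied zero-point flux."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    f_mean = np.trapz(flux * trans * wave, wave) / np.trapz(trans * wave, wave)
    return -2.5 * np.log10(f_mean / zp_flux)
```

Comparing such a magnitude with aperture photometry on the accompanying JHKs image gives a scale factor with which the absolute level of the spectrum can be refined.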
Abstract:
This paper outlines a forensic method for analysing the energy, environmental and comfort performance of a building. The method has been applied to a recently developed event space in an Irish public building, which was evaluated using on-site field studies, data analysis, building simulation and occupant surveying. The method allows for consideration of both the technological and anthropological aspects of the building in use and for the identification of unsustainable operational practice and emerging problems. The forensic analysis identified energy savings of up to 50%, enabling a more sustainable, lower-energy operational future for the building. The building forensic analysis method presented in this paper is now planned for use in other public and commercial buildings.
Abstract:
We consider the problem of regulating the rate of harvesting a natural resource, taking account of the wider system represented by a set of ecological and economic indicators, given differing stakeholder priorities. This requires objective and transparent decision making to show how indicators impinge on the resulting regulation decision. We offer a new scheme for combining indicators, derived from assessing the suitability of lowering versus not lowering the harvest rate based on indicator values relative to their predefined reference levels. Using the practical example of fisheries management under an “ecosystem approach,” we demonstrate how different stakeholder views can be quantitatively represented by weighting sets applied to these comparisons. Using the scheme in an analysis of historical data from the Celtic Sea fisheries, we find great scope for negotiating agreement among disparate stakeholders.
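One way to read the scheme: each indicator is compared with its predefined reference level, and stakeholder-specific weight sets turn those comparisons into a single signal for or against lowering the harvest rate. The sketch below illustrates that idea with invented numbers; it is not the authors' published formulation.

```python
import numpy as np

indicators = np.array([0.90, 1.40, 0.70])   # observed indicator values
refs       = np.array([1.00, 1.00, 1.00])   # predefined reference levels
direction  = np.array([+1, +1, -1])         # +1: exceeding the reference
                                            # argues for lowering the rate

# One weighting set per stakeholder group (rows sum to 1)
weights = np.array([[0.6, 0.2, 0.2],        # e.g. conservation-led priorities
                    [0.2, 0.2, 0.6]])       # e.g. industry-led priorities

signal = direction * np.sign(indicators - refs)
scores = weights @ signal                   # > 0 favours lowering the harvest rate
print(scores)
```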
Abstract:
Perfect information is seldom available to man or machines due to uncertainties inherent in real world problems. Uncertainties in geographic information systems (GIS) stem from either vague/ambiguous or imprecise/inaccurate/incomplete information, and it is necessary for GIS to develop tools and techniques to manage these uncertainties. There is widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by a lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.
Abstract:
A substantial proportion of the aetiological risk for many cancers and chronic diseases remains unexplained. Using geochemical soil and stream water samples collected as part of the Tellus Project studies, current research is investigating naturally occurring background levels of potentially toxic elements (PTEs) in soils and stream sediments and their possible relationship with progressive chronic kidney disease (CKD). The Tellus geological mapping project, Geological Survey of Northern Ireland, collected soil sediment and stream water samples on a grid of one sample site every 2 km² across the rural areas of Northern Ireland, resulting in more than 6800 soil sampling locations and more than 5800 stream water sampling locations. Accumulation of several PTEs, including arsenic, cadmium, chromium, lead and mercury, has been linked with adverse human health effects and implicated in renal function decline. The hypothesis is that long-term exposure will result in cumulative exposure to PTEs and act as a risk factor for cancer- and diabetes-related CKD and its progression. The 'bioavailable' fraction of the total PTE soil concentration depends on the 'bioaccessible' proportion through an exposure pathway. Recent work has explored this bioaccessible fraction for a range of PTEs across Northern Ireland. In this study the compositional nature of the multivariate geochemical PTE variables and bioaccessibility data is explored to augment the investigation into the potential relationship between PTEs, bioaccessibility and disease data.