78 results for data types and operators
Abstract:
The majority of vegetation reconstructions from the Neotropics are derived from fossil pollen records extracted from lake sediments. However, the interpretation of these records is restricted by limited knowledge of the contemporary relationships between the vegetation and pollen rain of Neotropical ecosystems, especially for more open vegetation such as savannas. This research aims to improve the interpretation of these records by investigating the vegetation and modern pollen rain of different savanna ecosystems in Bolivia using vegetation inventories, artificial pollen traps and surface lake sediments. Two types of savanna were studied: upland savannas (cerrado), occurring on well-drained soils, and seasonally-inundated savannas, occurring on seasonally water-logged soils. Quantitative vegetation data are used to identify taxa that are floristically important in the different savanna types and to allow modern pollen/vegetation ratios to be calculated. Artificial pollen traps from the upland savanna site are dominated by Moraceae (35%), Poaceae (30%), Alchornea (6%) and Cecropia (4%). The two seasonally-inundated savanna sites are dominated by Moraceae (37%), Poaceae (20%), Alchornea (8%) and Cecropia (7%), and Moraceae (25%), Cyperaceae (22%), Poaceae (19%) and Cecropia (9%), respectively. The modern pollen rain of seasonally-inundated savannas from surface lake sediments is dominated by Cyperaceae (35%), Poaceae (33%), Moraceae (9%) and Asteraceae (5%). Upland and seasonally-inundated savannas were found to be only subtly distinct from each other palynologically. All sites have a high proportion of Moraceae pollen due to effective wind dispersal of this pollen type from areas of evergreen forest close to the study sites. Modern pollen/vegetation ratios show that many key woody plant taxa are absent from or under-represented in the modern pollen rain (e.g., Caryocar and Tabebuia). The lower-than-expected percentages of Poaceae pollen, and the scarcity of savanna indicators, in the modern pollen rain of these ecosystems mean that savannas could potentially be overlooked in fossil pollen records without consideration of the full pollen spectrum available.
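As a minimal sketch of how pollen/vegetation ratios of the kind mentioned above are typically computed (all numbers below are invented placeholders, not values from this study), one divides each taxon's percentage in the pollen rain by its percentage in the vegetation inventory; ratios well above 1 indicate over-representation, and ratios near 0 indicate under-representation or absence.

```python
# Hypothetical illustration of pollen/vegetation representation ratios.
# All percentages are placeholders, not data from the Bolivian sites.

pollen_pct = {"Poaceae": 30.0, "Moraceae": 35.0, "Caryocar": 0.0, "Tabebuia": 0.1}
vegetation_pct = {"Poaceae": 55.0, "Moraceae": 2.0, "Caryocar": 6.0, "Tabebuia": 4.0}

def representation_ratio(taxon):
    """Pollen % divided by vegetation %, or None if the taxon is absent from the vegetation."""
    veg = vegetation_pct.get(taxon, 0.0)
    if veg == 0.0:
        return None
    return pollen_pct.get(taxon, 0.0) / veg

for taxon in vegetation_pct:
    r = representation_ratio(taxon)
    status = "over-represented" if r is not None and r > 1 else "under-represented or absent"
    print(f"{taxon}: ratio = {r}, {status}")
```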
Abstract:
In this article, we review state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams need to be processed on small ubiquitous devices such as smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches that address resource scarcity and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt their working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
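To illustrate the resource-adaptive behaviour described above, here is a toy sketch (not any specific published algorithm, and all thresholds are assumptions) of a stream processor that coarsens its sampling rate and sheds part of its synopsis when its memory budget runs low:

```python
import random

class AdaptiveStreamMiner:
    """Toy resource-aware stream processor: monitors a simulated memory budget
    and coarsens its sampling rate when resources run low, in the spirit of the
    adaptive mobile/ubiquitous stream-mining algorithms described above."""

    def __init__(self, memory_budget=1000):
        self.memory_budget = memory_budget
        self.sample_rate = 1.0   # fraction of incoming records kept
        self.summary = []        # lightweight synopsis of the stream

    def available_memory(self):
        # Placeholder: a real deployment would query the device's resources.
        return self.memory_budget - len(self.summary)

    def process(self, record):
        # Adapt: halve the sampling rate and shed half the synopsis when
        # memory is nearly exhausted.
        if self.available_memory() < 0.1 * self.memory_budget:
            self.sample_rate = max(self.sample_rate / 2, 0.01)
            self.summary = self.summary[::2]
        if random.random() < self.sample_rate:
            self.summary.append(record)

miner = AdaptiveStreamMiner()
for value in range(5000):
    miner.process(value)
print(len(miner.summary), miner.sample_rate)
```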
Abstract:
The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing, and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data sharing culture.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predict both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons’ excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that, with the memory-based foraging strategy including a flocking mechanism, our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.
Abstract:
Using five climate model simulations of the response to an abrupt quadrupling of CO2, the authors perform the first simultaneous model intercomparison of cloud feedbacks and rapid radiative adjustments with cloud masking effects removed, partitioned among changes in cloud types and gross cloud properties. Upon CO2 quadrupling, clouds exhibit a rapid reduction in fractional coverage, cloud-top pressure, and optical depth, with each contributing equally to a 1.1 W m−2 net cloud radiative adjustment, primarily from shortwave radiation. Rapid reductions in midlevel clouds and optically thick clouds are important in reducing planetary albedo in every model. As the planet warms, clouds become fewer, higher, and thicker, and global mean net cloud feedback is positive in all but one model and results primarily from increased trapping of longwave radiation. As was true for earlier models, high cloud changes are the largest contributor to intermodel spread in longwave and shortwave cloud feedbacks, but low cloud changes are the largest contributor to the mean and spread in net cloud feedback. The importance of the negative optical depth feedback relative to the amount feedback at high latitudes is even more marked than in earlier models. The authors show that the negative longwave cloud adjustment inferred in previous studies is primarily caused by a 1.3 W m−2 cloud masking of CO2 forcing. Properly accounting for cloud masking increases net cloud feedback by 0.3 W m−2 K−1, whereas accounting for rapid adjustments reduces the ensemble mean net cloud feedback by 0.14 W m−2 K−1, through a combination of smaller positive cloud amount and altitude feedbacks and larger negative optical depth feedbacks.
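A worked-arithmetic sketch of the accounting described above, using the corrections quoted in the abstract together with assumed placeholder values (an apparent feedback of 0.4 W m−2 K−1 and 4 K of global warming for the CO2 quadrupling, neither of which is stated in the abstract):

```python
# Illustrative accounting only: the 0.3 and -0.14 W m-2 K-1 corrections come
# from the abstract; all other numbers are assumed placeholders.

delta_T = 4.0                  # assumed global-mean warming for 4xCO2, in K
apparent_feedback = 0.4        # placeholder: change in cloud radiative effect per K (W m-2 K-1)

masking_correction = +0.30     # removing cloud masking of the forcing/non-cloud feedbacks (abstract)
adjustment_correction = -0.14  # removing rapid cloud adjustments from the feedback (abstract)

net_cloud_feedback = apparent_feedback + masking_correction + adjustment_correction
print(f"Corrected net cloud feedback: {net_cloud_feedback:.2f} W m-2 K-1")

# The 1.3 W m-2 cloud masking of the CO2 forcing, spread over delta_T,
# is roughly the size of the per-kelvin masking correction:
print(f"1.3 W m-2 / {delta_T} K = {1.3 / delta_T:.2f} W m-2 K-1")
```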
Abstract:
In 1984 and 1985 a series of experiments was undertaken in which dayside ionospheric flows were measured by the EISCAT “Polar” experiment, while observations of the solar wind and interplanetary magnetic field (IMF) were made by the AMPTE UKS and IRM spacecraft upstream from the Earth's bow shock. As a result, 40 h of simultaneous data were acquired, which are analysed in this paper to investigate the relationship between the ionospheric flow and the North-South (Bz) component of the IMF. The ionospheric flow data have 2.5 min resolution, and cover the dayside local time sector from ∼ 09:30 to ∼ 18:30 M.L.T. and the latitude range from 70.8° to 74.3°. Using cross-correlation analysis it is shown that clear relationships do exist between the ionospheric flow and IMF Bz, but that the form of the relations depends strongly on latitude and local time. These dependencies are readily interpreted in terms of a twin-vortex flow pattern in which the magnitude and latitudinal extent of the flows become successively larger as Bz becomes successively more negative. Detailed maps of the flow are derived for a range of Bz values (between ± 4 nT) which clearly demonstrate the presence of these effects in the data. The data also suggest that the morning reversal in the East-West component of flow moves to earlier local times as Bz declines in value and becomes negative. The correlation analysis also provides information on the ionospheric response time to changes in IMF Bz, it being found that the response is very rapid indeed. The most rapid response occurs in the noon to mid-afternoon sector, where the westward flows of the dusk cell respond with a delay of 3.9 ± 2.2 min to changes in the North-South field at the subsolar magnetopause. The flows appear to evolve in form over the subsequent ~ 5 min interval, however, as indicated by the longer response times found for the northward component of flow in this sector (6.7 ± 2.2 min), and in data from earlier and later local times. No evidence is found for a latitudinal gradient in response time; changes in flow take place coherently in time across the entire radar field-of-view.
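A minimal sketch of the lagged cross-correlation approach described above, with synthetic series standing in for IMF Bz and one ionospheric flow component sampled at the radar's 2.5 min resolution; the lag that maximises the magnitude of the correlation gives an estimate of the response time. The numbers and series here are illustrative, not the paper's data.

```python
import numpy as np

dt = 2.5                       # sample spacing in minutes, as in the radar data
rng = np.random.default_rng(0)

# Synthetic IMF Bz and a flow component that follows it two samples (~5 min) later.
bz = rng.normal(0.0, 2.0, 500)
true_lag = 2
flow = -1.5 * np.roll(bz, true_lag) + rng.normal(0.0, 1.0, 500)

def lagged_correlation(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag samples)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = range(0, 10)
corrs = [lagged_correlation(bz, flow, k) for k in lags]
best = max(lags, key=lambda k: abs(corrs[k]))
print(f"Best-fit response time: {best * dt:.1f} min")
```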
Abstract:
As we enter an era of ‘big data’, asset information is becoming a deliverable of complex projects. Prior research suggests digital technologies enable rapid, flexible forms of project organizing. This research analyses practices of managing change in Airbus, CERN and Crossrail, through desk-based review, interviews, visits and a cross-case workshop. These organizations deliver complex projects, rely on digital technologies to manage large data-sets, and use configuration management, a systems engineering approach with mid-20th century origins, to establish and maintain integrity. In them, configuration management has become more, rather than less, important. Asset information is structured, with change managed through digital systems, using relatively hierarchical, asynchronous and sequential processes. The paper contributes by uncovering limits to flexibility in complex projects where integrity is important. Challenges of managing change are discussed, considering the evolving nature of configuration management; potential use of analytics on complex projects; and implications for research and practice.
Abstract:
Recent years have seen enormous advances in sequencing and array-based technologies, producing supplementary or alternative views of the genome stored in various formats and databases. Their sheer volume and differing scope make it challenging to jointly visualize and integrate diverse data types. We present AmalgamScope, a new interactive software tool focusing on assisting scientists with the annotation of the human genome and, particularly, the integration of annotation files from multiple data types, using gene identifiers and genomic coordinates. Supported platforms include next-generation sequencing and microarray technologies. The available features of AmalgamScope range from the annotation of diverse data types across the human genome to integration of the data based on the annotation information and visualization of the merged files within chromosomal regions or the whole genome. Additionally, users can define custom transcriptome library files for any species and use the tool's options for exchanging files with remote servers.
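The following is a hypothetical, pandas-based sketch of the core idea of integrating annotation files from different platforms by gene identifier while retaining genomic coordinates; it is not AmalgamScope's actual code, and the tables, gene coordinates and values are illustrative.

```python
import pandas as pd

# Hypothetical annotation tables from two platforms, keyed by gene identifier.
ngs = pd.DataFrame({
    "gene_id": ["BRCA1", "TP53", "EGFR"],
    "chrom": ["chr17", "chr17", "chr7"],
    "start": [43044295, 7668402, 55019017],
    "end": [43125483, 7687550, 55211628],
    "ngs_value": [12.4, 3.1, 8.9],
})
microarray = pd.DataFrame({
    "gene_id": ["TP53", "EGFR", "MYC"],
    "array_value": [0.7, 1.9, 2.3],
})

# Integrate by gene identifier; keeping coordinates lets the merged table be
# viewed per chromosomal region or genome-wide.
merged = ngs.merge(microarray, on="gene_id", how="outer")
region = merged[merged["chrom"] == "chr17"]   # example: restrict to one chromosome
print(merged)
print(region)
```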
Abstract:
Environment monitoring applications using Wireless Sensor Networks (WSNs) have attracted much attention in recent years. In much of this research, tasks such as sensor data processing, decision making about environmental states and events, and emergency message sending are performed by a remote server. This paper presents simulation results for a proposed cross-layer protocol for two different applications in which the reliability of delivered data, delay and the lifetime of the network need to be considered. A WSN designed for the proposed applications needs efficient MAC and routing protocols to guarantee the reliability of the data delivered from source nodes to the sink. A cross-layer design based on the one given in [1] has been extended and simulated for the proposed applications, with new features such as route discovery algorithms added. Simulation results show that the proposed cross-layer protocol can conserve energy for nodes and provide the required performance in terms of network lifetime, delay and reliability.
Abstract:
This paper presents preliminary results from an ethnoarchaeological study of animal husbandry in the modern village of Bestansur, situated in the lower Zagros Mountains of Iraqi Kurdistan. This research explores how modern families use and manage their livestock within the local landscape and identifies traces of this use. The aim is to provide the groundwork for future archaeological investigations focusing on the nearby Neolithic site of Bestansur. This is based on the premise that modern behaviours can suggest testable patterns for past practices within the same functional and ecological domains. Semi-structured interviews conducted with villagers from several households provided large amounts of information on modern behaviours that helped direct data collection, and which also illustrate notable shifts in practices and use of the local landscape over time. Strontium isotope analysis of modern plant material demonstrates that a measurable variation exists between the alluvial floodplain and the lower foothills, while analysis of modern dung samples shows clear variation between sheep/goat and cow dung, in terms of numbers of faecal spherulites. These results are specific to the local environment of Bestansur and can be used for evaluating and contextualising archaeological evidence as well as providing modern reference material for comparative purposes.
Abstract:
The Land surface Processes and eXchanges (LPX) model is a fire-enabled dynamic global vegetation model that performs well globally but has problems representing fire regimes and vegetative mix in savannas. Here we focus on improving the fire module. To improve the representation of ignitions, we introduced a treatment of lightning that allows the fraction of ground strikes to vary spatially and seasonally, realistically partitions strike distribution between wet and dry days, and varies the number of dry days with strikes. Fuel availability and moisture content were improved by implementing decomposition rates specific to individual plant functional types and litter classes, and litter drying rates driven by atmospheric water content. To improve water extraction by grasses, we use realistic plant-specific treatments of deep roots. To improve fire responses, we introduced adaptive bark thickness and post-fire resprouting for tropical and temperate broadleaf trees. All improvements are based on extensive analyses of relevant observational data sets. We test model performance for Australia, first evaluating parameterisations separately and then measuring overall behaviour against standard benchmarks. Changes to the lightning parameterisation produce a more realistic simulation of fires in southeastern and central Australia. Implementation of PFT-specific decomposition rates enhances performance in central Australia. Changes in fuel drying improve fire in northern Australia, while changes in rooting depth produce a more realistic simulation of fuel availability and structure in central and northern Australia. The introduction of adaptive bark thickness and resprouting produces more realistic fire regimes in Australian savannas. We also show that the model simulates biomass recovery rates consistent with observations from several different regions of the world characterised by resprouting vegetation. The new model (LPX-Mv1) produces an improved simulation of observed vegetation composition and mean annual burnt area, by 33% and 18% respectively, compared to LPX.
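As a rough, generic illustration of the ignition logic described above (a sketch under assumed numbers and an assumed functional form, not LPX-Mv1's actual parameterisation), a fire model might count only cloud-to-ground strikes on dry days as potential ignitions, with dry days receiving a disproportionate share of the strikes:

```python
# Toy sketch: partition lightning into ground strikes and weight them towards
# dry days. All parameter values and the weighting form are placeholders.

def expected_ignitions(total_strikes, ground_fraction, dry_day_fraction, dry_strike_bias=2.0):
    """Expected ignition-capable strikes per grid cell and month.

    dry_strike_bias > 1 means dry days receive proportionally more strikes
    than their share of days (placeholder formulation)."""
    ground_strikes = total_strikes * ground_fraction
    weight_dry = dry_strike_bias * dry_day_fraction
    weight_wet = 1.0 - dry_day_fraction
    dry_share = weight_dry / (weight_dry + weight_wet)
    return ground_strikes * dry_share

# Example: 100 strikes/month, 25% reach the ground, 40% of days are dry.
print(expected_ignitions(100, 0.25, 0.4))
```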
Abstract:
Peak residential electricity demand takes place when people conduct simultaneous activities at specific times of the day. Social practices generate patterns of demand and can help explain why, where, with whom and when energy services are used at peak times. The aim of this work is to make use of recent UK time use and locational data to better understand: (i) how a set of component indices on synchronisation, variation, sharing and mobility indicate flexibility to shift demand; and (ii) the links between people’s activities and peaks in greenhouse gas intensities. The analysis is based on a recent UK time use dataset, providing 1 minute interval data from GPS devices and 10 minute data from diaries and questionnaires for 175 data days from 153 respondents. Findings show how greenhouse gas intensities and flexibility to shift activities vary throughout the day. Morning peaks are characterised by high levels of synchronisation, shared activities and occupancy, with low variation of activities. Evening peaks feature low synchronisation, and high spatial mobility and variation of activities. From a network operator perspective, the results indicate that periods with lower flexibility may be prone to more significant local network loads due to the synchronisation of electricity-demanding activities.
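A hypothetical sketch of how a synchronisation index of the kind mentioned above could be computed from 10-minute diary data; the index definition here (the share of respondents performing the single most common activity in each time slot) is an assumption for illustration, not the paper's exact formulation, and the diary entries are invented.

```python
import pandas as pd

# Toy 10-minute diary slice: rows are respondents, columns are time slots.
diary = pd.DataFrame(
    {
        "07:00": ["sleep", "cook", "cook", "wash"],
        "07:10": ["cook", "cook", "cook", "cook"],
        "18:00": ["cook", "travel", "tv", "wash"],
    },
    index=["r1", "r2", "r3", "r4"],
)

def synchronisation_index(column):
    """Share of respondents doing the most common activity in a slot (0-1)."""
    return column.value_counts(normalize=True).max()

sync = diary.apply(synchronisation_index)
print(sync)   # higher in the morning slots, lower in the evening slot
```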
Abstract:
For users of climate services, the ability to quickly determine the datasets that best fit one's needs would be invaluable. The volume, variety and complexity of climate data make this judgment difficult. The ambition of CHARMe ("Characterization of metadata to enable high-quality climate services") is to give a wider interdisciplinary community access to a range of supporting information, such as journal articles, technical reports or feedback on previous applications of the data. The capture and discovery of this "commentary" information, often created by data users rather than data providers, and currently not linked to the data themselves, has not been significantly addressed previously. CHARMe applies the principles of Linked Data and open web standards to associate, record, search and publish user-derived annotations in a way that can be read both by users and automated systems. Tools have been developed within the CHARMe project that enable annotation capability for data delivery systems already in wide use for discovering climate data. In addition, the project has developed advanced tools for exploring data and commentary in innovative ways, including an interactive data explorer and comparator ("CHARMe Maps") and a tool for correlating climate time series with external "significant events" (e.g. instrument failures or large volcanic eruptions) that affect the data quality. Although the project focuses on climate science, the concepts are general and could be applied to other fields. All CHARMe system software is open-source, released under a liberal licence, permitting future projects to re-use the source code as they wish.
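As a rough sketch of the Linked Data approach described above, a user-derived commentary annotation attaching a journal article to a dataset might be serialised as JSON-LD in the general W3C/Open Annotation pattern; the structure, URIs and field values below are illustrative placeholders rather than CHARMe's exact schema.

```python
import json

# Hypothetical commentary annotation: a published paper attached as supporting
# information ("body") to a climate dataset ("target"). All URIs are placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "linking",
    "body": {
        "id": "https://doi.org/10.xxxx/example-paper",   # placeholder DOI
        "type": "ScholarlyArticle",
    },
    "target": "https://catalogue.example.org/dataset/sst-reanalysis-v2",
    "creator": "https://orcid.org/0000-0000-0000-0000",  # placeholder ORCID
}

print(json.dumps(annotation, indent=2))
```

Because the annotation is expressed with open web standards, it can be harvested and queried by automated systems as well as read by people, which is the property the project relies on for linking commentary to data.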