93 results for "Mesh generation from image data"
Abstract:
A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work obtained via C-slow retiming techniques and can be clocked at a 65% higher frequency than previous arrays. The new arrays can be exploited for higher throughput, particularly when dual-data-rate sampling techniques are used to operate on single streams of data from image sensors. In this way, the new cell operates on a p-bit data bus, which is more convenient for interfacing to camera sensors or to microprocessors in consumer digital cameras.
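The computation being accelerated is a streaming histogram update over p-bit samples. As a point of reference only, here is a minimal sequential software sketch of that operation (the bus width and masking are illustrative assumptions; the paper's contribution is a pipelined hardware cell array, not this loop):

```python
# Software model of the histogram computation the cell array performs in
# hardware. Sequential and illustrative only; the paper's array pipelines
# these updates across cells to sustain camera-sensor data rates.
P = 8                      # assumed bus width: 8-bit samples -> 256 bins
NUM_BINS = 1 << P

def histogram(stream):
    """Accumulate a histogram from a stream of p-bit samples."""
    bins = [0] * NUM_BINS
    for sample in stream:
        bins[sample & (NUM_BINS - 1)] += 1   # mask keeps indices in range
    return bins

# Example: a short synthetic pixel stream
h = histogram([0, 255, 128, 128, 7])
assert h[128] == 2
```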
Exploring socioeconomic impacts of forest based mitigation projects: Lessons from Brazil and Bolivia
Abstract:
This paper aims to contribute new insights, globally and regionally, on how carbon forest mitigation contributes to sustainable development in South America. Carbon finance has emerged as a potential policy option for tackling global climate change, forest degradation, and social development in poor countries. This paper focuses on evaluating the socioeconomic impacts of a set of forest-based mitigation pilot projects that emerged under the United Nations Framework Convention on Climate Change. The paper reviews research conducted in 2001–2002, drawing on empirical data from four pilot projects derived from qualitative stakeholder interviews and complemented by policy documents and the literature. Of the four projects studied, three are located in frontier areas, where there are considerable pressures to convert standing forest to agriculture. In this sense, forest mitigation projects have a substantial role to play in the region. Findings suggest, however, that all four projects experienced cumbersome implementation processes, specifically due to weak social objectives, poor communication, and time constraints. In three out of four cases, stakeholders highlighted limited local acceptance at the implementation stage. In the light of these findings, we discuss opportunities for the implementation of future forest-based mitigation projects in the land-use sector.
Abstract:
Palaeodata in synthesis form are needed as benchmarks for the Palaeoclimate Modelling Intercomparison Project (PMIP). Advances since the last synthesis of terrestrial palaeodata from the last glacial maximum (LGM) call for a new evaluation, especially of data from the tropics. Here pollen, plant-macrofossil, lake-level, noble gas (from groundwater) and δ¹⁸O (from speleothem) data are compiled for 18±2 ka (¹⁴C), 32°N–33°S. The reliability of the data was evaluated using explicit criteria, and some types of data were re-analysed using consistent methods in order to derive a set of mutually consistent palaeoclimate estimates of mean temperature of the coldest month (MTCO), mean annual temperature (MAT), plant-available moisture (PAM) and runoff (P−E). Cold-month temperature (MTCO) anomalies from plant data range from −1 to −2 K near sea level in Indonesia and the S Pacific, through −6 to −8 K at many high-elevation sites, to −8 to −15 K in S China and the SE USA. MAT anomalies from groundwater or speleothems seem more uniform (−4 to −6 K), but the data are as yet sparse; a clear divergence between MAT and cold-month estimates from the same region is seen only in the SE USA, where cold-air advection is expected to have enhanced cooling in winter. Regression of all cold-month anomalies against site elevation yielded an estimated average cooling of −2.5 to −3 K at modern sea level, increasing to ≈−6 K by 3000 m. However, Neotropical sites showed larger-than-average sea-level cooling (−5 to −6 K) and a non-significant elevation effect, whereas W and S Pacific sites showed much less sea-level cooling (−1 K) and a stronger elevation effect. These findings support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates, but they limit the plausible average tropical sea-surface cooling, and they support the existence of CLIMAP-like geographic patterns in SST anomalies. Trends of PAM and lake levels indicate wet LGM conditions in the W USA and at the highest elevations, with generally dry conditions elsewhere. These results suggest a colder-than-present ocean surface producing a weaker hydrological cycle, more arid continents, and arguably steeper-than-present terrestrial lapse rates. Such linkages are supported by recent observations on freezing-level height and tropical SSTs; moreover, simulations of "greenhouse" and LGM climates point to several possible feedback processes by which low-level temperature anomalies might be amplified aloft.
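The elevation regression mentioned above is an ordinary least-squares fit of cooling anomaly against site elevation, whose intercept estimates sea-level cooling. A minimal sketch of such a fit (the data values below are invented for illustration and are not the compiled palaeodata):

```python
# Least-squares fit of cold-month temperature anomaly (K) against site
# elevation (m): the intercept estimates sea-level cooling and the slope
# its change with altitude. Data here are invented to make this runnable.
import numpy as np

elevation = np.array([0, 500, 1200, 2000, 3000])        # m, hypothetical sites
anomaly = np.array([-2.7, -3.2, -4.1, -5.0, -6.1])      # K, hypothetical

slope, intercept = np.polyfit(elevation, anomaly, 1)
print(f"sea-level cooling ~ {intercept:.1f} K")
print(f"cooling at 3000 m ~ {intercept + slope * 3000:.1f} K")
```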
Abstract:
Owing to continuous advances in the computational power of handheld devices such as smartphones and tablet computers, it has become possible to perform Big Data operations, including modern data mining processes, onboard these small devices. A decade of research has proved the feasibility of what has been termed Mobile Data Mining, with a focus on a single mobile device running data mining processes. However, it was not until 2010 that the authors of this book initiated the Pocket Data Mining (PDM) project, exploiting seamless communication among handheld devices to perform data analysis tasks that were until recently infeasible. PDM is the process of collaboratively extracting knowledge from distributed data streams in a mobile computing environment. This book provides the reader with an in-depth treatment of this emerging area of research. Details of the techniques used and thorough experimental studies are given. More importantly, and exclusively to this book, the authors provide a detailed practical guide to deploying PDM in the mobile environment. An important extension of the basic PDM implementation, dealing with concept drift, is also reported. In the era of Big Data, potential applications of paramount importance offered by PDM in a variety of domains, including security, business and telemedicine, are discussed.
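The core idea, collaborative mining of distributed streams, can be conveyed with a toy sketch: each "device" trains a lightweight model on its own local stream and a query is answered by pooling votes. This is the concept only, not PDM's actual agent architecture; the classifier and voting rule are illustrative assumptions:

```python
# Toy illustration of collaborative stream mining across devices.
from collections import Counter

class DeviceMiner:
    """Keeps per-class feature sums over a local stream (a crude classifier)."""
    def __init__(self):
        self.sums, self.counts = {}, Counter()

    def observe(self, x, label):                 # consume one stream record
        s = self.sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        self.counts[label] += 1

    def predict(self, x):                        # nearest class mean
        def dist(label):
            mean = [v / self.counts[label] for v in self.sums[label]]
            return sum((a - b) ** 2 for a, b in zip(x, mean))
        return min(self.counts, key=dist)

def collaborative_predict(devices, x):
    """Majority vote over all participating devices."""
    votes = Counter(d.predict(x) for d in devices)
    return votes.most_common(1)[0][0]

# Two devices see different parts of the data, then answer jointly.
d1, d2 = DeviceMiner(), DeviceMiner()
d1.observe([1.0, 1.0], "A"); d1.observe([5.0, 5.0], "B")
d2.observe([0.8, 1.2], "A"); d2.observe([5.2, 4.8], "B")
print(collaborative_predict([d1, d2], [0.9, 1.1]))   # -> "A"
```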
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability, specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross-validation. The algorithms operate in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model, with analysis-of-variance decomposition, from the input data; second, joint weighted least-squares parameter estimation and rule selection are carried out using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the curve of the receiver operating characteristic, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
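The selection criterion pairs leave-one-out prediction with a metric suited to class imbalance. A generic sketch of that criterion (using scikit-learn, with a logistic model standing in for the paper's neurofuzzy rule base):

```python
# Leave-one-out (LOO) estimates of AUC and F-measure: the style of
# criterion described above. The stand-in classifier is illustrative;
# the LOO-scored metrics are what the sketch demonstrates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, f1_score

# A small imbalanced problem (90% / 10% classes).
X, y = make_classification(n_samples=80, weights=[0.9, 0.1], random_state=0)

model = LogisticRegression(max_iter=1000)

# Each sample is predicted by a model trained on all the others.
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]

print("LOO AUC      :", roc_auc_score(y, proba))      # criterion 1
print("LOO F-measure:", f1_score(y, proba > 0.5))     # criterion 2
```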
Abstract:
Reliable evidence of trends in the illegal ivory trade is important for informing decision making for elephants, but it is difficult to obtain owing to the covert nature of the trade. The Elephant Trade Information System, a global database of reported seizures of illegal ivory, holds the only extensive information on the illicit trade available. However, inherent biases in seizure data make it difficult to infer trends; countries differ in their ability to make and report seizures, and these differences cannot be directly measured. We developed a new modelling framework to provide quantitative evidence on trends in the illegal ivory trade from seizure data. The framework used Bayesian hierarchical latent variable models to reduce bias in seizure data by identifying proxy variables that describe the variability in seizure and reporting rates between countries and over time. The models produced bias-adjusted smoothed estimates of relative trends in illegal ivory activity for raw and worked ivory in three weight classes. Activity is represented by two indicators describing the number of illegal ivory transactions (the Transactions Index) and the total weight of illegal ivory transactions (the Weights Index) at global, regional or national levels. Globally, activity was found to be increasing rapidly and to be at its highest level in 16 years, more than doubling from 2007 to 2011 and tripling from 1998 to 2011. Over 70% of the Transactions Index comes from shipments of worked ivory weighing less than 10 kg, and the rapid increase since 2007 is mainly due to increased consumption in China. Over 70% of the Weights Index comes from shipments of raw ivory weighing at least 100 kg, mainly moving from Central and East Africa to Southeast and East Asia. The results tie together recent findings on trends in poaching rates, declining populations and consumption, and provide detailed evidence to inform international decision making on elephants.
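The shape of such a model, latent trade activity observed through country-specific seizure and reporting rates driven by proxy covariates, can be sketched in PyMC. Everything below (data, covariate, priors, dimensions) is invented for illustration; it shows the model structure, not the paper's actual specification:

```python
# Toy Bayesian hierarchical latent-variable model: observed seizure counts
# are latent illegal-trade activity thinned by a country-specific seizure/
# reporting rate tied to a proxy covariate. Illustrative shape sketch only.
import numpy as np
import pymc as pm

n_countries, n_years = 3, 5
seizures = np.random.default_rng(0).poisson(4, size=(n_countries, n_years))
proxy = np.random.default_rng(1).normal(size=n_countries)  # e.g. enforcement proxy

with pm.Model() as model:
    # Latent relative trade activity per year (log scale, shared trend).
    log_activity = pm.Normal("log_activity", 0.0, 1.0, shape=n_years)
    # Country seizure/reporting rate on the logit scale, tied to the proxy.
    alpha = pm.Normal("alpha", 0.0, 1.0)
    beta = pm.Normal("beta", 0.0, 1.0)
    p_seize = pm.math.invlogit(alpha + beta * proxy)          # (n_countries,)
    # Expected seizures = activity thinned by the seizure rate.
    mu = pm.math.exp(log_activity)[None, :] * p_seize[:, None]
    pm.Poisson("obs", mu=mu, observed=seizures)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```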
Abstract:
Sea surface temperature (SST) datasets have been generated from satellite observations for the period 1991–2010, intended for use in climate science applications. Attributes of the datasets specifically relevant to climate applications are: first, independence from in situ observations; second, effort to ensure homogeneity and stability through the time series; third, context-specific uncertainty estimates attached to each SST value; and, fourth, provision of estimates of both skin SST (the fundamental measurement, relevant to air-sea fluxes) and SST at standard depth and local time (partly model-mediated, enabling comparison with historical in situ datasets). These attributes in part reflect requirements solicited from climate data users prior to and during the project. Datasets consisting of SSTs on satellite swaths are derived from the Along-Track Scanning Radiometers (ATSRs) and Advanced Very High Resolution Radiometers (AVHRRs). These are then used as the sole SST inputs to a daily, spatially complete, analysis SST product, with a latitude-longitude resolution of 0.05° and good discrimination of ocean surface thermal features. A product user guide is available, linking to reports describing the datasets' algorithmic basis, validation results, format, uncertainty information and experimental use in trial climate applications. Future versions of the datasets will span at least 1982–2015, better addressing the need in many climate applications for stable records of global SST that are at least 30 years in length.
Abstract:
The prospective European Supergrid would consist of an integrated power system network in which electricity demand in one country could be met by generation from another. This paper uses a bilinear fixed-effects model to analyse the determinants of cross-border electricity trade among 34 countries connected by the European Supergrid. The key question that this paper aims to address is the extent to which the privatisation of European electricity markets has brought about higher cross-border trade in electricity. The analysis uses distance, price ratios, gate closure times, size of peaks and aggregate demand as standard determinants. Controlling for these standard determinants, it is concluded that privatisation in most cases led to higher power exchange, and that the benefits are more significant where privatisation measures have been in place for a longer period.
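A fixed-effects specification of this kind can be estimated as a panel regression with entity and time dummies. A minimal sketch using statsmodels, with invented variable names and random data standing in for the paper's dataset:

```python
# Toy fixed-effects regression for bilateral electricity trade, in the
# style of the model described above. All names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pair": rng.choice(["AT-DE", "FR-ES", "NO-SE", "PL-DE"], n),  # country pair
    "year": rng.choice(range(2000, 2011), n),
    "log_trade": rng.normal(size=n),          # log cross-border flow (made up)
    "log_dist": rng.normal(size=n),           # log distance between markets
    "price_ratio": rng.normal(1.0, 0.1, n),   # import/export price ratio
    "privatised": rng.integers(0, 2, n),      # 1 if both markets privatised
})

# C(pair) and C(year) absorb pair and time fixed effects via dummies.
fit = smf.ols(
    "log_trade ~ log_dist + price_ratio + privatised + C(pair) + C(year)",
    data=df,
).fit()
print(fit.params[["log_dist", "price_ratio", "privatised"]])
```

(In real panel data, time-invariant regressors such as distance would be absorbed by the pair fixed effects; the random toy data sidesteps that collinearity.)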
Abstract:
The REgents PARk and Tower Environmental Experiment (REPARTEE) comprised two campaigns in London in October 2006 and October/November 2007. The experiment design involved measurements at a heavily trafficked roadside site, two urban background sites and an elevated site at 160–190 m above ground on the BT Tower, supplemented in the second campaign by Doppler lidar measurements of atmospheric vertical structure. A wide range of measurements of airborne particle physical metrics and chemical composition were made as well as measurements of a considerable range of gas phase species and the fluxes of both particulate and gas phase substances. Significant findings include (a) demonstration of the evaporation of traffic-generated nanoparticles during both horizontal and vertical atmospheric transport; (b) generation of a large base of information on the fluxes of nanoparticles, accumulation mode particles and specific chemical components of the aerosol and a range of gas phase species, as well as the elucidation of key processes and comparison with emissions inventories; (c) quantification of vertical gradients in selected aerosol and trace gas species which has demonstrated the important role of regional transport in influencing concentrations of sulphate, nitrate and secondary organic compounds within the atmosphere of London; (d) generation of new data on the atmospheric structure and turbulence above London, including the estimation of mixed layer depths; (e) provision of new data on trace gas dispersion in the urban atmosphere through the release of purposeful tracers; (f) the determination of spatial differences in aerosol particle size distributions and their interpretation in terms of sources and physico-chemical transformations; (g) studies of the nocturnal oxidation of nitrogen oxides and of the diurnal behaviour of nitrate aerosol in the urban atmosphere, and (h) new information on the chemical composition and source apportionment of particulate matter size fractions in the atmosphere of London derived both from bulk chemical analysis and aerosol mass spectrometry with two instrument types.
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to 2.7 × 10^8. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
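The coupling pattern described, model and assimilation code as separate processes exchanging state through MPI, can be sketched with mpi4py. The rank layout, tags and toy increment are illustrative assumptions, not the paper's standard:

```python
# Minimal sketch of model/assimilation coupling through MPI: the "model"
# process sends its state vector to a "data assimilation" process, receives
# an increment back, and continues. Run with: mpiexec -n 2 python couple.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
STATE_DIM = 100                       # toy size; the paper reaches 2.7e8

if rank == 0:                         # "model" process
    state = np.zeros(STATE_DIM)
    for step in range(3):
        state += 1.0                  # stand-in for a model timestep
        comm.Send(state, dest=1, tag=step)        # hand state to the DA code
        increment = np.empty(STATE_DIM)
        comm.Recv(increment, source=1, tag=step)  # receive the analysis update
        state += increment
else:                                 # "data assimilation" process
    for step in range(3):
        state = np.empty(STATE_DIM)
        comm.Recv(state, source=0, tag=step)
        increment = -0.1 * state      # stand-in for computing an increment
        comm.Send(increment, dest=0, tag=step)
```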
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionality available in the KNIME workbench.
Abstract:
Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four-dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single-column atmosphere-ocean model. The system can run strongly and weakly coupled assimilations as well as uncoupled atmosphere-only or ocean-only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical-twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single-observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect. We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
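For reference, incremental 4D-Var minimises the standard quadratic cost function over an increment to the initial conditions; in common textbook notation (not reproduced from the paper):

```latex
J(\delta x_0) = \tfrac{1}{2}\, \delta x_0^{\mathrm{T}} \mathbf{B}^{-1} \delta x_0
  + \tfrac{1}{2} \sum_{i=0}^{N}
    \left( \mathbf{H}_i \mathbf{M}_{0 \to i}\, \delta x_0 - d_i \right)^{\mathrm{T}}
    \mathbf{R}_i^{-1}
    \left( \mathbf{H}_i \mathbf{M}_{0 \to i}\, \delta x_0 - d_i \right)
```

where B and R_i are the background- and observation-error covariances, M_{0→i} is the tangent-linear model propagating the increment to time i, H_i is the linearised observation operator, and d_i = y_i − H_i(x_i^b) are the innovations. In the coupled setting the state, covariances and operators span both the atmosphere and ocean components.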
Abstract:
We utilized an ecosystem process model (SIPNET, the simplified photosynthesis and evapotranspiration model) to estimate the carbon fluxes of gross primary productivity and total ecosystem respiration of a high-elevation coniferous forest. The data assimilation routine incorporated aggregated twice-daily measurements of the net ecosystem exchange of CO2 (NEE) and satellite-based reflectance measurements of the fraction of absorbed photosynthetically active radiation (fAPAR) on an eight-day timescale. From these data we conducted a data assimilation experiment with fifteen different combinations of available data, using twice-daily NEE, aggregated annual NEE, eight-day fAPAR, and average annual fAPAR. Model parameters were conditioned on three years of NEE and fAPAR data, and the results were evaluated to determine the information content of the different combinations of data streams. Across the data assimilation experiments conducted, model selection metrics such as the Bayesian Information Criterion (BIC) and Deviance Information Criterion (DIC) reached their minimum values when assimilating average annual fAPAR and twice-daily NEE data. Wavelet coherence analyses showed higher correlations between measured and modeled fAPAR on longer timescales, ranging from 9 to 12 months. There were strong correlations between measured and modeled NEE (coefficient of determination R² = 0.86), but correlations between measured and modeled eight-day fAPAR were quite poor (R² = −0.94). We conclude that the ability to determine fAPAR on an eight-day timescale would improve with consideration of radiative transfer through the plant canopy. Modeled fluxes when assimilating average annual fAPAR and annual NEE were comparable to the corresponding results when assimilating twice-daily NEE, albeit with greater uncertainty. Our results support the conclusion that, for this coniferous forest, twice-daily NEE data are a critical measurement stream for the data assimilation. The results from this modeling exercise indicate that, for this coniferous forest, annual averages of satellite-based fAPAR measurements paired with annual NEE estimates may provide spatial detail for components of ecosystem carbon fluxes in the proximity of eddy covariance towers. Inclusion of other independent data streams in the assimilation would also reduce uncertainty on modeled values.
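The two model-selection metrics mentioned are standard; in their textbook forms (not taken from the paper):

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}, \qquad
\mathrm{DIC} = \overline{D(\theta)} + p_D,
\quad \text{with } D(\theta) = -2 \ln p(y \mid \theta),\;
p_D = \overline{D(\theta)} - D(\bar{\theta}),
```

where k is the number of parameters, n the number of observations, and L̂ the maximised likelihood; lower values indicate a better fit-complexity trade-off, which is why the minimum identifies the preferred combination of data streams.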
Abstract:
In this invited article the authors present an evaluative report on the development of the MESHGuides project (http://www.meshguides.org/). MESHGuides' objective is to provide education with an international knowledge management system. MESHGuides were conceived as research summaries to support teachers in developing evidence-based practice. Their aim is to enhance teachers' capacity to engage actively with research in their own classrooms. The original thinking for MESH arose from the work of the UK-based academics Professor Marilyn Leask and Dr Sarah Younie in response to a desire, which has recently gathered momentum in the UK, for the development of a more research-informed teaching profession and for the establishment of an online platform to support evidence-based practice (DfE, 2015; Leask and Younie 2001; OECD 2009). The focus of this article is on how the MESHGuides project was conceived and structured, the technical systems supporting it, and the practical reality for academics and teachers of composing and using MESHGuides. The project and the guides are in the early stages of development, and the discussion indicates future possibilities for more global engagement with this knowledge management system.
Abstract:
Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using the iterative closest point (ICP) algorithm was applied for 3D periocular region matching; it proved robust to relatively unconstrained eye-region capture and requires no training. Local binary patterns (LBP) were applied for 2D image-based periocular matching. The two modalities were combined at the score level. The approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expression, head pose and occlusion. The rank-1 accuracy achieved with the 3D data (80%) was better than that with the 2D data (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
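As a reference for the 2D half of such a pipeline, LBP matching and score-level fusion can be sketched as follows (scikit-image's LBP with a chi-square histogram distance and a weighted-sum fusion rule; the distance, normalisation and weights are illustrative assumptions, not the paper's choices):

```python
# Sketch of 2D periocular matching with local binary patterns (LBP) plus
# score-level fusion with a 3D matcher's score.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP histogram of a grayscale periocular crop."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def match_2d(img_a, img_b):
    """Similarity in [0, 1] from chi-square distance between LBP histograms."""
    ha, hb = lbp_histogram(img_a), lbp_histogram(img_b)
    chi2 = 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-12))
    return 1.0 / (1.0 + chi2)

def fuse(score_3d, score_2d, w=0.6):
    """Weighted-sum score-level fusion (weight w on the 3D score)."""
    return w * score_3d + (1 - w) * score_2d

rng = np.random.default_rng(0)
a = (rng.random((64, 96)) * 255).astype("uint8")   # stand-in eye-region crops
b = (rng.random((64, 96)) * 255).astype("uint8")
print(fuse(score_3d=0.8, score_2d=match_2d(a, b)))
```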