963 results for Link variables method


Relevance: 30.00%

Publisher:

Abstract:

ENVISAT ASAR WSM images with a pixel size of 150 × 150 m, acquired under different meteorological, oceanographic and sea ice conditions, were used to detect icebergs in the Amundsen Sea (Antarctica). An object-based method for automatic iceberg detection from SAR data has been developed and applied. The object identification is based on spectral and spatial parameters at 5 scale levels and was verified against manual classification in four polygon areas chosen to represent varying environmental conditions. The algorithm works comparatively well in the freezing temperatures and strong wind conditions that prevail in the Amundsen Sea throughout the year. The detection rate was 96%, corresponding to 94% of the iceberg area (counting icebergs larger than 0.03 km²), across all seasons. The algorithm tends to generate errors in the form of false alarms, mainly caused by the presence of ice floes, rather than misses; this affects reliability, since false alarms were corrected manually in post-analysis.
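
As a rough illustration of the object-based screening summarized above, the sketch below thresholds a SAR backscatter scene, labels connected objects and discards those smaller than the 0.03 km² limit at the 150 m pixel spacing given in the abstract. The threshold value and array names are assumptions for illustration; the published algorithm is multi-scale and uses further spectral and spatial parameters as well as manual verification.

```python
import numpy as np
from scipy import ndimage

PIXEL_AREA_M2 = 150 * 150        # ENVISAT ASAR WSM pixel size (from the abstract)
MIN_ICEBERG_AREA_M2 = 0.03e6     # 0.03 km² minimum iceberg size (from the abstract)

def detect_icebergs(backscatter_db, threshold_db=-6.0):
    """Simplified object-based detection: threshold, label, filter by area.

    `backscatter_db` is a 2-D sigma0 array in dB; `threshold_db` is a
    hypothetical brightness threshold, not a value from the paper.
    """
    mask = backscatter_db > threshold_db                   # bright targets
    labels, n_objects = ndimage.label(mask)                # connected components
    sizes = ndimage.sum(mask, labels, range(1, n_objects + 1))
    keep = np.flatnonzero(sizes * PIXEL_AREA_M2 >= MIN_ICEBERG_AREA_M2) + 1
    return np.isin(labels, keep), len(keep)                # iceberg mask, count
```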

Relevance: 30.00%

Publisher:

Abstract:

The Weddell Gyre plays a crucial role in the regulation of climate by transferring heat into the deep ocean through deep and bottom water mass formation. However, our understanding of Weddell Gyre water mass properties is limited to regions of data availability, primarily along the Prime Meridian. The aim here is to provide a dataset of the upper water column properties of the entire Weddell Gyre. Objective mapping was applied to Argo float data in order to produce spatially gridded, time-composite maps of temperature and salinity for fixed pressure levels ranging from 50 to 2000 dbar, as well as temperature, salinity and pressure at the level of the sub-surface temperature maximum. While the data are currently too limited to incorporate time into the gridded structure, they are extensive enough to produce maps of the entire region across three time-composite periods (2002-2005, 2006-2009 and 2010-2013), which can be used to assess how representative, at the gyre scale, conclusions drawn from data collected along regular RV transect lines are. The time-composite data sets are provided as netCDF files, one for each time period. Mapped fields of conservative temperature, absolute salinity and potential density are provided for 41 vertical pressure levels. The same variables, as well as pressure, are provided at the level of the sub-surface temperature maximum. The corresponding mapping errors are also included in the netCDF files. Further details, such as the variable units and the structure of the corresponding data arrays (i.e. latitude × longitude × vertical pressure level), are provided in the global attributes. In addition, all files ending in "_potTpSal" provide mapped fields of potential temperature and practical salinity.
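
For users of the dataset, a minimal access sketch is given below. The file name, variable names and coordinate name are assumptions for illustration only; the actual naming and array structure (latitude × longitude × vertical pressure level) are documented in the netCDF global attributes.

```python
import xarray as xr

# Hypothetical file and variable names; check the global attributes of the
# distributed netCDF files for the real ones.
ds = xr.open_dataset("weddell_gyre_2006_2009.nc")

# Conservative temperature on one of the 41 fixed pressure levels (e.g. 500 dbar)
# and the corresponding objective-mapping error field.
ct_500 = ds["conservative_temperature"].sel(pressure=500)
ct_500_error = ds["conservative_temperature_mapping_error"].sel(pressure=500)

print(ct_500.dims, float(ct_500.mean()))
```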

Relevance: 30.00%

Publisher:

Abstract:

Studies on the impact of historical, current and future global change require very high-resolution climate data (≤ 1 km) as a basis for modelled responses, meaning that data from digital climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, the study of glacial refugia or early Holocene plant and animal migration is incomplete, if not impossible. Sea level at the Last Glacial Maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce the development of a novel gridded climate dataset for the LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to the areas between the current and LGM coastlines. The absolute interpolation error is less than 1 °C and 0.5 °C for 98.9% and 87.8% of all pixels, respectively, within the first two 1-arc-degree distance zones. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 'bioclimatic' variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during the Holocene warming that followed the LGM.
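
The change factor method named above can be sketched as follows: the coarse GCM anomaly between the LGM and the present-day simulation is interpolated to the 1 km grid and then added to (temperature) or multiplied with (precipitation) the high-resolution baseline. The function below is a minimal sketch of that idea under these common conventions; the regridding routine and array names are assumptions, not details from the abstract.

```python
import numpy as np

def change_factor_downscale(baseline_hires, gcm_lgm, gcm_present,
                            regrid, multiplicative=False):
    """Downscale a coarse GCM field using the change factor (delta) method.

    `regrid` is any callable that interpolates a coarse field onto the
    high-resolution grid (e.g. bilinear interpolation); it is assumed here.
    """
    if multiplicative:
        # ratio change factor, commonly used for precipitation
        factor = regrid(gcm_lgm / np.maximum(gcm_present, 1e-6))
        return baseline_hires * factor
    # additive change factor, commonly used for temperature
    delta = regrid(gcm_lgm - gcm_present)
    return baseline_hires + delta
```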

Relevance: 30.00%

Publisher:

Abstract:

The geometry of a catchment constitutes the basis for distributed, physically based numerical modelling in different geoscientific disciplines. In this paper, results from ground-penetrating radar (GPR) measurements, in the form of a 3D model of total sediment thickness and active layer thickness in a periglacial catchment in western Greenland, are presented. Using the topography, the thickness and distribution of sediments are calculated. Vegetation classification and GPR measurements are used to scale active layer thickness from local measurements to catchment-scale models. Annual maximum active layer thickness varies from 0.3 m in wetlands to 2.0 m in barren areas and areas of exposed bedrock. Maximum sediment thickness is estimated to be 12.3 m in the major valleys of the catchment. A method to correlate surface vegetation with active layer thickness is also presented. By using relatively simple methods, such as probing and vegetation classification, it is possible to upscale local point measurements to catchment-scale models in areas where the upper subsurface is relatively homogeneous. The resulting spatial model of active layer thickness can be used, in combination with the sediment model, as geometrical input to further studies of subsurface mass transport and hydrological flow paths in the periglacial catchment through numerical modelling.
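
The upscaling step described above, in which local probe measurements grouped by vegetation class are spread over a classified raster, can be sketched as follows. The class codes and lookup values are illustrative placeholders (only the 0.3-2.0 m span follows the abstract), not the paper's calibration.

```python
import numpy as np

# Hypothetical mean annual maximum active layer thickness (m) per vegetation
# class, derived from local probing; class codes are placeholders.
THICKNESS_BY_CLASS = {
    0: 0.3,   # wetland
    1: 1.1,   # intermediate vegetation cover
    2: 2.0,   # barren ground / exposed bedrock
}

def upscale_active_layer(vegetation_map):
    """Map a classified vegetation raster (2-D integer array) to thickness (m)."""
    thickness = np.full(vegetation_map.shape, np.nan)
    for cls, value in THICKNESS_BY_CLASS.items():
        thickness[vegetation_map == cls] = value
    return thickness
```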

Relevance: 30.00%

Publisher:

Abstract:

The estimation of carbon dioxide (CO2) fluxes above the open ocean plays an important role in determining the global carbon cycle. A frequently used method for this is the eddy-covariance technique, which is based on the theory of the Prandtl layer with height-constant fluxes in the atmospheric boundary layer. To test the constant-flux-layer assumption, measurements of turbulent heat and CO2 fluxes were started in 2008 at the research platform FINO2 within the project Surface Ocean Processes in the Anthropocene (SOPRAN). The FINO2 platform is situated in the south-west Baltic Sea, in the tri-border region between Germany, Denmark, and Sweden. Within the SOPRAN research project, the platform was equipped with additional sensors in June 2008. A combination of 3-component sonic anemometers (USA-1) and open-path infrared gas analyzers for absolute humidity (H2O) and CO2 (LICOR 7500) was installed on a 9 m long boom extending southward from the platform at two heights, 6.8 and 13.8 m above the sea surface. Additionally, slow-response temperature and humidity sensors were installed at each height. The gas analyzer systems were calibrated before installation and worked permanently without re-calibration during the first measurement period of one and a half years. Comparison with the slow-sensor measurements showed no significant long-term drift in H2O or CO2 for either instrument. Drifts on shorter time scales (of the order of days), caused by contamination with sea salt, were removed naturally by rain. The drift of both quantities had no influence on the fluctuations, which, in contrast to the mean values, are what matter for the flux estimation. All data were filtered for spikes, rain, and the influence of the mast. The data set includes the measurements of all sensors as 30-minute averages, covering one and a half years from June 2008 to December 2009 and 10 months from November 2011 to August 2012. Derived quantities for each 30-minute interval, such as the variances of the fast-sensor variables as well as the momentum, sensible heat, latent heat, and CO2 fluxes, are also provided.
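
For each 30-minute interval, the eddy-covariance flux named above reduces to the covariance of the vertical wind with the scalar of interest. The sketch below uses simple block averaging and omits coordinate rotation, density (WPL) corrections and quality flagging, which a full processing chain would include.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic scalar flux w'c' for one 30-minute averaging block.

    `w`: vertical wind samples from the sonic anemometer (m/s)
    `c`: simultaneous scalar samples, e.g. CO2 density from the open-path analyzer
    """
    w_prime = w - np.mean(w)          # fluctuations about the block mean
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)
```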

Relevance: 30.00%

Publisher:

Abstract:

The research presented in this article covers the design of detached breakwaters, a type of coastal defence work used to combat many of the erosion problems found on beaches in a stable, sustainable fashion. The main aim of this work is to formulate a functional and environmental (but not structural) design method that enables the fundamental characteristics of a detached breakwater to be defined as a function of the effect it is intended to induce on the coast, taking into account variables of different natures (climate, geomorphology and geometry) that influence the changes the shoreline undergoes after construction. This article presents the final result of the investigation by applying the detached breakwater design method developed to a practical case. It thereby shows how the method enables the geometric pre-sizing of a detached breakwater to be tackled at a coastal site with given climate, geomorphology and littoral-dynamics characteristics, once the final state of equilibrium to be obtained after construction has been set.
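
As background to the functional response mentioned above (whether a tombolo, a salient or little change forms behind the structure), coastal engineers often pre-screen designs with empirical ratios of breakwater length Ls to offshore distance X. The thresholds below are rounded rules of thumb that vary between authors and are not the criteria of the design method presented in this article.

```python
def expected_shoreline_response(length_ls, distance_x):
    """Indicative shoreline response behind a detached breakwater.

    Rounded empirical thresholds on Ls/X, for illustration only; published
    criteria differ between authors and this is not the article's method.
    """
    ratio = length_ls / distance_x
    if ratio >= 1.0:
        return "tombolo likely"
    if ratio >= 0.5:
        return "salient likely"
    return "limited shoreline response"
```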

Relevance: 30.00%

Publisher:

Abstract:

The fuzzy min–max neural network classifier is a supervised learning method that follows the hybrid neural-network and fuzzy-systems approach. All input variables of the network are required to be continuously valued, which can be a significant constraint in many real-world situations where data are not only quantitative but also categorical. The usual way of dealing with this type of variable is to replace the categorical values by numerical ones and treat them as if they were continuously valued; however, this implicitly defines a possibly unsuitable metric on the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture, providing greater flexibility and wider applicability. The proposed method is then applied to missing-data imputation in voting intention polls. The micro data of this type of poll (the set of the respondents' individual answers to the questions) are especially suited for evaluating the method, since they include a large number of numerical and categorical attributes.
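
For the continuous case on which the extension builds, the classical fuzzy min-max membership of an input point in a hyperbox can be sketched as below (inputs scaled to [0, 1]); the sensitivity value is arbitrary, and the categorical extension itself (new fuzzy sets, operation and architecture) is not reproduced here.

```python
import numpy as np

def hyperbox_membership(x, v_min, w_max, gamma=4.0):
    """Fuzzy min-max membership of point `x` in the hyperbox [v_min, w_max].

    Classical formulation for continuous inputs in [0, 1]; `gamma` controls how
    fast membership decays outside the box (the value here is arbitrary).
    """
    x, v_min, w_max = map(np.asarray, (x, v_min, w_max))
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w_max)))
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v_min - x)))
    return float(np.mean((above + below) / 2.0))
```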

Relevance: 30.00%

Publisher:

Abstract:

This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure and Brier score, for the evaluation of feature subsets and as the objectives of the problem. One characteristic of these objective functions is the noise in their values, which must be handled appropriately during optimization. Our proposed algorithm consists of two major techniques specially designed for the feature subset selection problem. The first is a solution-ranking method based on interval values to handle the noise in the objectives of this problem. The second is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. In particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm obtains comparable or better performance on the tested datasets.
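
The l1-regularized pre-selection step mentioned above can be illustrated as follows; the use of scikit-learn, of a logistic model as the l1-penalized regressor, and of the regularization strength are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def l1_preselect(X, y, C=0.1):
    """Indices of features retained by an l1-penalized logistic model.

    Features whose coefficients are driven to zero by the l1 penalty are
    dropped before the joint probabilistic model of objectives and variables
    is learned; `C` (inverse regularization strength) is an arbitrary choice.
    """
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    return np.flatnonzero(np.any(model.coef_ != 0, axis=0))
```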

Relevance: 30.00%

Publisher:

Abstract:

Service compositions put together loosely coupled component services to perform more complex, higher-level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated from the structure of the composition and the properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is infeasible, while satisfaction gives values for the constrained variables (start/end times of activities, or numbers of loop iterations) that make the scenario possible. These results can be used to perform optimized service matching or to trigger preventive adaptation or healing.
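
A minimal sketch of the idea is given below: mid-execution, it checks whether the end-to-end SLA bound of a simple sequential composition can still be met and, if so, produces feasible start/end times. The use of the z3 solver and all numeric values are assumptions for illustration, not the constraint-generation rules of the proposed method.

```python
from z3 import Real, Solver, sat

# Sequential composition A -> B -> C with an end-to-end SLA of 10 time units.
# A has already finished at t = 4; B and C have (illustrative) measured
# execution-time bounds.
end_b, end_c = Real("end_b"), Real("end_c")

s = Solver()
s.add(end_b >= 4 + 2, end_b <= 4 + 3)            # B takes between 2 and 3 units
s.add(end_c >= end_b + 1, end_c <= end_b + 4)    # C takes between 1 and 4 units
s.add(end_c <= 10)                               # SLA conformance constraint

if s.check() == sat:
    print("SLA still attainable, e.g.:", s.model())
else:
    print("Constraints unsatisfiable: this scenario necessarily violates the SLA")
```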

Relevance: 30.00%

Publisher:

Abstract:

In this article, research into the uniaxial tensile strength of monolithic Al2O3 ceramic is presented. The experimental procedure of spalling of long bars is investigated from different approaches. This method is used to obtain the tensile strength at high strain rates under uniaxial conditions. Different methodologies proposed by several authors are used to obtain the tensile strength. The hypotheses underlying the experimental set-up are checked, and the requirements of the set-up and the relevant variables are studied by means of numerical simulations. The research shows that the shape of the projectile is crucial to achieving successful test results. An experimental campaign was carried out, including high-speed video and a digital image correlation system, to obtain the tensile strength of alumina. Finally, a comparison of the test results provided by three different methods proposed by different authors is presented. The tensile strengths obtained from these three methods on the same specimens give contrasting results. Mean values vary from one method to another, but the trends are similar for two of the methods. The third method gives less scatter, though the mean values obtained are lower and do not follow the same trend as the other methods for the different specimens.
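
As background on how spalling tests are commonly evaluated, the classical acoustic approximation estimates the dynamic tensile (spall) strength from the velocity pull-back measured at the free surface, sigma_sp = 1/2 · rho · c0 · Δu. The abstract does not state which of the three compared methods, if any, use this expression, so the sketch and the placeholder material values below are purely illustrative.

```python
def spall_strength(density, wave_speed, pullback_velocity):
    """Classical acoustic estimate of spall strength: 0.5 * rho * c0 * du.

    density            bulk density (kg/m^3)
    wave_speed         elastic wave speed c0 (m/s)
    pullback_velocity  drop in free-surface velocity at spallation (m/s)
    """
    return 0.5 * density * wave_speed * pullback_velocity

# Placeholder values of roughly the right order for alumina, illustration only:
sigma = spall_strength(density=3900.0, wave_speed=10000.0, pullback_velocity=10.0)
print(f"spall strength ~ {sigma / 1e6:.0f} MPa")
```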

Relevance: 30.00%

Publisher:

Abstract:

In this paper, a method based mainly on data fusion and artificial neural networks is proposed to classify concentrations of one of the most important pollutants, particulate matter less than 10 micrometres in diameter (PM10). The main objective is to classify the pollutant concentration into two pollution levels (Non-Contingency and Contingency). Pollutant concentrations and meteorological variables are considered in order to build a Representative Vector (RV) of pollution. The RV is used to train an artificial neural network to classify pollution events determined by meteorological variables. In the experiments, real time series gathered from the Automatic Environmental Monitoring Network (AEMN) in Salamanca, Guanajuato, Mexico were used. The method can help to establish a better air-quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies and for facilitating pollutant reduction.
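
A minimal sketch of the classification stage is given below: meteorological readings are fused into a representative feature vector and a small feed-forward network is trained on the two classes. The use of scikit-learn's MLPClassifier, the file names, the feature set and the contingency threshold are assumptions for illustration, not details of the proposed method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical arrays: hourly meteorological variables (wind speed, wind
# direction, temperature, relative humidity, ...) and PM10 concentrations.
met = np.load("aemn_salamanca_met.npy")      # placeholder file name
pm10 = np.load("aemn_salamanca_pm10.npy")    # placeholder file name

y = (pm10 > 120.0).astype(int)               # 1 = Contingency; threshold is illustrative
X = StandardScaler().fit_transform(met)      # representative vectors

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```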

Relevance: 30.00%

Publisher:

Abstract:

Background: Analysis of exhaled volatile organic compounds (VOCs) in breath is an emerging approach to cancer diagnosis, but little is known about its potential use as a biomarker for colorectal cancer (CRC). We investigated whether a combination of VOCs could distinguish CRC patients from healthy volunteers. Methods: In a pilot study, we prospectively analyzed breath exhalations of 38 CRC patients and 43 healthy controls, all scheduled for colonoscopy, older than 50, and in the average-risk category. The samples were ionized and analyzed using a secondary electrospray ionization (SESI) source coupled with a time-of-flight mass spectrometer (SESI-MS). After a minimum of 2 hours of fasting, volunteers deeply exhaled into the system. Each test requires three soft exhalations and takes less than ten minutes. No breath condensate or collection is required, and VOC masses are detected in real time, also allowing a spirometric profile to be analyzed along with the VOCs. A new sampling system precludes ambient air from entering the system, so background contamination is reduced by an overall factor of ten. Potential confounding variables from the patient or the environment that could interfere with the results were analyzed. Results: 255 VOCs with masses ranging from 30 to 431 Da were identified in the exhaled breath. Using a classification technique based on the ROC curve for each VOC, a set of 9 biomarkers discriminating CRC patients from healthy volunteers was obtained, showing an average recognition rate of 81.94%, a sensitivity of 87.04% and a specificity of 76.85%. Conclusions: A combination of qualitative and quantitative analysis of VOCs in the exhaled breath could be a powerful diagnostic tool for the average-risk CRC population. These results should be taken with caution, as many endogenous or exogenous contaminants could interfere as confounding variables. On-line analysis with SESI-MS is less time-consuming and does not require sample preparation. We are recruiting for a new pilot study that includes breath-cleaning procedures and spirometric analysis incorporated into the post-processing algorithms, to better control for confounding variables.
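
The per-VOC ROC screening described in the Results can be sketched as follows: each VOC intensity is scored by its area under the ROC curve against the CRC/control labels, and the most discriminative candidates are kept (9, as in the abstract). The use of scikit-learn and everything else in the sketch is illustrative, not the study's actual classification pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rank_voc_biomarkers(intensities, labels, n_keep=9):
    """Rank VOCs by per-feature ROC AUC against CRC (1) / control (0) labels.

    `intensities` is an (n_subjects, n_vocs) array of exhaled-breath signal
    intensities; returns the indices of the `n_keep` best VOCs and all AUCs.
    """
    aucs = np.array([roc_auc_score(labels, intensities[:, j])
                     for j in range(intensities.shape[1])])
    aucs = np.maximum(aucs, 1.0 - aucs)   # treat under- and over-expression alike
    return np.argsort(aucs)[::-1][:n_keep], aucs
```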

Relevance: 30.00%

Publisher:

Abstract:

At present, photovoltaic energy is one of the most important renewable energy sources. The demand for solar panels has been growing continuously, both in the industrial electric sector and in the private sector. In both cases, analysis of solar panel efficiency is extremely important in order to maximize energy production. To make a photovoltaic system more efficient, an accurate understanding of the system is required. However, in most cases the available information is limited, and experimental testing of the photovoltaic device is out of the question, normally for budget reasons. Several methods, usually based on an equivalent circuit model, have been developed to extract the I-V curve of a photovoltaic device from the small amount of data provided by the manufacturer. The aim of this paper is to present a fast, easy, and accurate analytical method for calculating the equivalent circuit parameters of a solar panel from the data that manufacturers usually provide. The calculated circuit accurately reproduces the solar panel behaviour, that is, the I-V curve. This is extremely important for practical purposes such as selecting the best solar panel on the market for a particular application, or maximizing energy extraction with MPPT (Maximum Power Point Tracking) methods.
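
The equivalent circuit referred to above is usually the single-diode model, whose implicit I-V relation is I = Iph − I0·(exp((V + I·Rs)/(n·Ns·VT)) − 1) − (V + I·Rs)/Rsh. The sketch below only evaluates this standard equation numerically for given parameters; the parameter values are placeholders, and the paper's analytical extraction procedure is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def single_diode_current(v, i_ph, i_0, r_s, r_sh, n, n_cells, t_cell=298.15):
    """Solve the implicit single-diode equation for the current at voltage `v`."""
    v_t = 1.380649e-23 * t_cell / 1.602176634e-19   # thermal voltage kT/q (V)

    def residual(i):
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * n_cells * v_t)) - 1.0)
                - (v + i * r_s) / r_sh - i)

    return brentq(residual, -2.0 * i_ph - 1.0, 2.0 * i_ph + 1.0)  # bracketed root

# Placeholder parameters loosely resembling a 60-cell panel, illustration only:
i_out = single_diode_current(v=30.0, i_ph=9.0, i_0=1e-9,
                             r_s=0.35, r_sh=300.0, n=1.3, n_cells=60)
print(f"I(30 V) ~ {i_out:.2f} A")
```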