861 results for "multiple data sources"


Relevance: 40.00%

Abstract:

Determining the variability of carbon dioxide emission from soils is an important task, as soils are among the largest sources of carbon in the biosphere. In this work, the temporal variability of bare soil CO2 emission was measured over a 3-week period. Temporal changes in soil CO2 emission were modelled in terms of the changes in solar radiation (SR), air temperature (T-air), air humidity (AR), evaporation (EVAP) and atmospheric pressure (ATM) registered during the experiment. Multiple regression analysis (backward elimination procedure) retained almost all the meteorological variables and their interactions in the final model (R² = 0.98), with solar radiation proving to be one of the most relevant variables. The present study indicates that meteorological data can be taken as the main forces driving the temporal variability of carbon dioxide emission from bare soils, where microbial activity is the sole source of the carbon dioxide emitted. (C) 2003 Elsevier B.V. All rights reserved.
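The backward-elimination procedure described above can be sketched as follows. The data, the |t| ≥ 2 retention threshold, and the synthetic flux model are illustrative assumptions, not the study's actual implementation; only the variable names (SR, T_air, AR, EVAP, ATM) come from the abstract.

```python
import numpy as np

def backward_elimination(X, y, names, t_min=2.0):
    """Iteratively drop the predictor with the smallest |t| statistic
    until every remaining predictor has |t| >= t_min."""
    names = list(names)
    while names:
        Xc = np.column_stack([np.ones(len(y))] + [X[n] for n in names])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        dof = len(y) - Xc.shape[1]
        s2 = resid @ resid / dof
        cov = s2 * np.linalg.inv(Xc.T @ Xc)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])   # skip the intercept
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_min:
            break                                  # all predictors significant
        names.pop(worst)                           # drop the weakest predictor
    return names

rng = np.random.default_rng(0)
n = 120
X = {"SR": rng.normal(size=n), "T_air": rng.normal(size=n),
     "AR": rng.normal(size=n), "EVAP": rng.normal(size=n),
     "ATM": rng.normal(size=n)}
# Synthetic CO2 flux driven mainly by solar radiation and air temperature.
y = 3.0 * X["SR"] + 1.5 * X["T_air"] + rng.normal(scale=0.5, size=n)
kept = backward_elimination(X, y, X.keys())
print(kept)  # predictors surviving elimination
```

With strong simulated effects, SR and T_air always survive the elimination; pure-noise predictors are usually removed.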

Relevance: 40.00%

Abstract:

This paper presents an intelligent search strategy, based on the tabu search meta-heuristic, for identifying conforming bad data errors in generalized power system state estimation. The main objective is to detect critical errors involving both analog and topology errors. These are conforming errors, whose nature contaminates measurements that do not themselves contain bad data and thereby defeats conventional bad data identification strategies based on normalized residual methods. ©2005 IEEE.
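A minimal tabu search skeleton in the spirit of the strategy above. The binary error-hypothesis encoding and the toy Hamming-distance cost are hypothetical stand-ins for the residual-based objective a state estimator would actually use; only the tabu-list and aspiration mechanics are standard.

```python
def tabu_search(cost, n_bits, iters=200, tenure=7):
    """Generic tabu search over binary vectors: flip one bit per move,
    forbid re-flipping recently changed bits, keep the best solution seen."""
    x = [0] * n_bits
    best, best_cost = x[:], cost(x)
    tabu = {}  # bit index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # aspiration: a tabu move is allowed if it beats the best so far
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, y))
        c, i, x = min(candidates)   # best admissible neighbour (may be uphill)
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# Toy "error identification": the true suspect set is bits {2, 5}; the cost
# is the Hamming distance to it (a stand-in for a residual-based cost).
truth = [0, 0, 1, 0, 0, 1, 0, 0]
cost = lambda x: sum(a != b for a, b in zip(x, truth))
best, c = tabu_search(cost, len(truth))
print(best, c)  # → [0, 0, 1, 0, 0, 1, 0, 0] 0
```

The tabu tenure lets the search climb out of local minima, which is the property the paper relies on for conforming errors that defeat purely greedy residual tests.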

Relevance: 40.00%

Abstract:

The use of markers distributed all along the genome may increase the accuracy of the predicted additive genetic value of young animals that are candidates for selection as reproducers. In commercial herds, due to the cost of genotyping, only some animals are genotyped, and two- or three-step procedures are used to include these genomic data in the genetic evaluation. However, genomic evaluation may instead be calculated in a single unified step that combines phenotypic data, pedigree and genomics. The aim of this study was to compare a multiple-trait model using only pedigree information with one using both pedigree and genomic data. In this study, 9,318 lactations from 3,061 buffaloes were used; 384 buffaloes were genotyped using an Illumina bovine chip (Illumina Infinium BovineHD BeadChip). Seven traits were analyzed: milk yield (MY), fat yield (FY), protein yield (PY), lactose yield (LY), fat percentage (F%), protein percentage (P%) and somatic cell score (SCS). Two analyses were carried out: one using phenotypic and pedigree information (matrix A) and another using a matrix based on pedigree and genomic information (single step, matrix H). The (co)variance components were estimated by multiple-trait Bayesian inference, applying an animal model through Gibbs sampling. The model included the fixed effects of contemporary group (herd-year-calving season) and number of milkings (2 levels), and age of buffalo at calving as a covariable (linear and quadratic effects). Additive genetic, permanent environmental, and residual effects were included as random effects. The heritability estimates using matrix A were 0.25, 0.22, 0.26, 0.17, 0.37, 0.42 and 0.26, and using matrix H were 0.25, 0.24, 0.26, 0.18, 0.38, 0.46 and 0.26, for MY, FY, PY, LY, F%, P% and SCS, respectively.
The estimates of the additive genetic effects were similar in both analyses, but accuracy was higher using matrix H (by more than 15% for the traits studied). The heritability estimates were moderate, indicating potential genetic gain under selection. The use of genomic information increased the accuracy of the analyses, permitting a better estimation of the additive genetic value of the animals.
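The single-step combination of pedigree and genomic information can be sketched with the standard H⁻¹ construction (the pedigree inverse plus a genomic correction on the genotyped block). The matrices below are toy values for a four-animal pedigree, not data from the study, and G is a made-up genomic relationship matrix.

```python
import numpy as np

def h_inverse(A, G, genotyped):
    """Single-step H inverse: pedigree relationship inverse plus the
    correction G^-1 - A22^-1 applied only to the genotyped block."""
    Ainv = np.linalg.inv(A)
    A22 = A[np.ix_(genotyped, genotyped)]          # pedigree block of genotyped animals
    Hinv = Ainv.copy()
    Hinv[np.ix_(genotyped, genotyped)] += np.linalg.inv(G) - np.linalg.inv(A22)
    return Hinv

# Toy pedigree relationship matrix: two unrelated parents (0, 1) and two
# full-sib offspring (2, 3); the offspring are the genotyped animals.
A = np.array([[1.0, 0.0, 0.5, 0.5],
              [0.0, 1.0, 0.5, 0.5],
              [0.5, 0.5, 1.0, 0.5],
              [0.5, 0.5, 0.5, 1.0]])
G = np.array([[1.05, 0.60],
              [0.60, 1.02]])   # hypothetical genomic relationships
Hinv = h_inverse(A, G, [2, 3])
print(Hinv.shape)  # → (4, 4)
```

When G equals A22 the correction vanishes and H⁻¹ reduces to the ordinary pedigree inverse, which is why the two analyses in the study differ only through the genomic information.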

Relevance: 40.00%

Abstract:

Sugarcane-breeding programs take at least 12 years to develop new commercial cultivars. Molecular markers offer a way to study the genetic architecture of quantitative traits in sugarcane, and they may be used in marker-assisted selection to speed up artificial selection. Although the performance of sugarcane progenies in breeding programs is commonly evaluated across a range of locations and harvest years, many QTL detection methods ignore two- and three-way interactions between QTL, harvest, and location. In this work, a strategy for QTL detection in multi-harvest-location trial data, based on interval mapping and mixed models, is proposed and applied to map QTL effects in a segregating progeny from a biparental cross of pre-commercial Brazilian cultivars, evaluated at two locations over three consecutive harvest years for cane yield (tonnes per hectare), sugar yield (tonnes per hectare), fiber percent, and sucrose content. In the mixed model, we included appropriate (co)variance structures to model heterogeneity and correlation of genetic effects and non-genetic residual effects. Forty-six QTLs were found: 13 for cane yield, 14 for sugar yield, 11 for fiber percent, and 8 for sucrose content. In addition, QTL-by-harvest, QTL-by-location, and QTL-by-harvest-by-location interaction effects were significant for all evaluated traits (30 QTLs showed some interaction, 16 none). Our results contribute to a better understanding of the genetic architecture of complex traits related to biomass production and sucrose content in sugarcane.
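A drastically simplified marker-regression scan illustrates the core idea behind QTL detection. The simulated genotypes and the residual-sum-of-squares LOD formula are illustrative only; the study's actual method uses interval mapping within a mixed model with structured (co)variances, which this sketch omits.

```python
import numpy as np

def lod_scan(genotypes, phenotype):
    """For each marker, compare the residual sum of squares of a model with
    and without the marker, converted to a LOD score."""
    n = len(phenotype)
    rss0 = np.sum((phenotype - phenotype.mean()) ** 2)  # null model: mean only
    lods = []
    for g in genotypes.T:
        X = np.column_stack([np.ones(n), g])
        beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
        rss1 = np.sum((phenotype - X @ beta) ** 2)
        lods.append((n / 2) * np.log10(rss0 / rss1))
    return np.array(lods)

rng = np.random.default_rng(3)
n_ind, n_mark = 200, 20
geno = rng.integers(0, 2, size=(n_ind, n_mark)).astype(float)
# Simulated phenotype with a single true QTL at marker 7.
pheno = 2.0 * geno[:, 7] + rng.normal(size=n_ind)
lods = lod_scan(geno, pheno)
print(int(np.argmax(lods)))  # the LOD peak falls at the simulated QTL
```

Extending this to the study's setting would mean scanning positions between markers and fitting QTL-by-harvest and QTL-by-location terms in the mixed model rather than a single fixed marker effect.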

Relevance: 40.00%

Abstract:

We present an analysis of observations made with the Arcminute Microkelvin Imager (AMI) and the Canada-France-Hawaii Telescope (CFHT) of six galaxy clusters in the redshift range 0.16-0.41. The cluster gas is modelled using the Sunyaev-Zel'dovich (SZ) data provided by AMI, while the total mass is modelled using the lensing data from the CFHT. In this paper, we (i) find very good agreement between SZ measurements (assuming large-scale virialization and a gas-fraction prior) and lensing measurements of the total cluster masses out to r200; (ii) perform the first multiple-component weak-lensing analysis of A115; (iii) confirm the unusual separation between the gas and mass components in A1914; and (iv) jointly analyse the SZ and lensing data for the relaxed cluster A611, confirming our use of a simulation-derived mass-temperature relation for parametrizing measurements of the SZ effect.

Relevance: 40.00%

Abstract:

Context. The ESO public survey VISTA Variables in the Via Lactea (VVV) started in 2010. VVV targets 562 sq. deg in the Galactic bulge and an adjacent plane region and is expected to run for about five years. Aims. We describe the progress of the survey observations in the first observing season, the observing strategy, and the quality of the data obtained. Methods. The observations are carried out on the 4-m VISTA telescope in the ZYJHKs filters. In addition to the multi-band imaging, the variability monitoring campaign in the Ks filter has started. Data reduction is carried out using the pipeline at the Cambridge Astronomical Survey Unit. The photometric and astrometric calibration is performed via the numerous 2MASS sources observed in each pointing. Results. The first data release contains the aperture photometry and astrometric catalogues for 348 individual pointings in the ZYJHKs filters taken in the 2010 observing season. The typical image quality is ~0.9-1.0 arcsec. The stringent photometric and image quality requirements of the survey are satisfied in 100% of the JHKs images in the disk area and 90% of the JHKs images in the bulge area. The completeness in the Z and Y images is 84% in the disk and 40% in the bulge. The first-season catalogues contain 1.28 × 10^8 stellar sources in the bulge and 1.68 × 10^8 in the disk area detected in at least one of the photometric bands. The combined multi-band catalogues contain more than 1.63 × 10^8 stellar sources. About 10% of these are double detections because of overlapping adjacent pointings; these overlapping multiple detections are used to characterise the quality of the data. The images in the JHKs bands typically extend ~4 mag deeper than 2MASS. The magnitude limit and photometric quality depend strongly on crowding in the inner Galactic regions. The astrometry for Ks = 15-18 mag has an rms of ~35-175 mas. Conclusions. The VVV Survey data products offer a unique dataset to map the stellar populations in the Galactic bulge and the adjacent plane, and provide an exciting new tool for the study of the structure, content, and star-formation history of our Galaxy, as well as for investigations of newly discovered star clusters, star-forming regions in the disk, high proper motion stars, asteroids, planetary nebulae, and other interesting objects.
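The overlap-based quality check mentioned above can be illustrated with a minimal positional cross-match between two catalogues. The coordinates, scatter, and 1-arcsec matching tolerance are made up for illustration and are not VVV values; a real pipeline would use spherical geometry rather than this flat-sky approximation.

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, tol_arcsec=1.0):
    """Match each source in catalogue 1 to the nearest source in catalogue 2
    within a small angular tolerance (flat-sky approximation, small fields)."""
    tol_deg = tol_arcsec / 3600.0
    pairs = []
    for i in range(len(ra1)):
        d_ra = (ra2 - ra1[i]) * np.cos(np.radians(dec1[i]))  # shrink RA by cos(dec)
        d = np.hypot(d_ra, dec2 - dec1[i])
        j = int(np.argmin(d))
        if d[j] < tol_deg:
            pairs.append((i, j))
    return pairs

# Hypothetical overlapping pointings: catalogue 2 repeats catalogue 1's
# sources with ~0.1" positional scatter, plus one unrelated source.
rng = np.random.default_rng(5)
ra1 = np.array([266.40, 266.41, 266.42])
dec1 = np.array([-29.00, -29.01, -29.02])
ra2 = np.append(ra1 + rng.normal(scale=0.1 / 3600, size=3), 266.50)
dec2 = np.append(dec1 + rng.normal(scale=0.1 / 3600, size=3), -29.10)
print(crossmatch(ra1, dec1, ra2, dec2))  # → [(0, 0), (1, 1), (2, 2)]
```

Comparing the magnitudes of matched pairs then yields the internal photometric scatter used to characterise data quality.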

Relevance: 40.00%

Abstract:

Images of a scene, static or dynamic, are generally acquired at different epochs and from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on pattern recognition techniques that match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate information from the different sources into a wider, or even augmented, representation of the scene. Accordingly, for their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are attracting increasing interest from the scientific and industrial communities. Depending on the application domain, accuracy, robustness, and the computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee direct interaction of the vision device with human actors or control systems.
This thesis follows a general research approach to cope with these issues, largely independently of the scene content, under the constraint of rigid motions. This approach is motivated by portability to very different domains, a very desirable property to achieve. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
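Under the rigid-motion constraint mentioned above, the mapping between matched feature points in two views reduces to a rotation plus a translation, which has a closed-form least-squares solution (the Kabsch/Procrustes method). The point sets below are synthetic; this is a sketch of one building block of registration, not the thesis's full pipeline.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (rotation R, translation t) mapping
    point set P onto Q, via the SVD-based Kabsch/Procrustes solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical matched feature points related by a 30° rotation + shift.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
P = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [2, 0.5]])
Q = P @ R_true.T + t_true
R, t = rigid_registration(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

In practice the matched pairs come from a feature detector and contain outliers, so a robust wrapper (e.g. RANSAC-style sampling) would be placed around this closed-form solver.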

Relevance: 40.00%

Abstract:

Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher-quality electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load consists of the electrical utilities of a cheese factory. The ESS is composed of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized and connected to the wind turbine, the photovoltaic plant and the switchboard. Different electrochemical storage technologies were then studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature salt Na-NiCl2 battery technology. The data acquired from all the electrical utilities provided a detailed load analysis, indicating an optimal storage size of a 30 kW battery system. Moreover, a container was designed and realized to house the BESS and PCS, meeting all the requirements and safety conditions. Furthermore, a smart control system was implemented to handle the different applications of the ESS, such as peak shaving or load levelling.
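The peak-shaving application mentioned above can be sketched as a simple greedy dispatch rule. The hourly load profile, the 30 kW grid limit, and the battery figures below are illustrative assumptions (not the prototype's measured data), and the sketch ignores conversion losses and converter power limits.

```python
def peak_shave(load_kw, limit_kw, capacity_kwh, dt_h=1.0):
    """Greedy peak shaving: discharge the battery whenever the load exceeds
    the grid limit, recharge with spare grid headroom when it does not."""
    soc = capacity_kwh / 2        # start half charged (assumption)
    grid = []
    for load in load_kw:
        if load > limit_kw:       # discharge to clip the peak
            discharge = min(load - limit_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(load - discharge)
        else:                     # recharge, staying under the grid limit
            charge = min(limit_kw - load, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(load + charge)
    return grid

# Hypothetical hourly load profile (kW) for a small dairy plant.
load = [10, 12, 35, 40, 15, 8]
grid = peak_shave(load, limit_kw=30, capacity_kwh=30)
print(grid)  # the 35 and 40 kW peaks are clipped to the 30 kW limit
```

A real controller would add forecasting and state-of-charge limits, but the rule shows how the battery flattens the grid exchange profile around the contracted limit.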

Relevance: 40.00%

Abstract:

In this paper, we focus on a model for two types of tumors. Tumor development can be described by four death rates and four tumor transition rates. We present a general semi-parametric model to estimate the tumor transition rates from survival/sacrifice experiment data. In the model, we make a proportionality assumption for the tumor transition rates relative to a common parametric function, but no assumption about the death rates from any state. We derive the likelihood function of the data observed in such an experiment and an EM algorithm that simplifies the estimation procedure. This article extends work on semi-parametric models for one type of tumor (see Portier and Dinse, and Dinse) to two types of tumors.