3 results for LHC,CMS,Big Data
in Scielo Saúde Pública - SP
Abstract:
In this study, we concentrate on modelling gross primary productivity using two simple approaches to simulate canopy photosynthesis: "big leaf" and "sun/shade" models. Two approaches for calibration are used: scaling up canopy photosynthetic parameters from the leaf to the canopy level, and fitting canopy biochemistry to eddy covariance fluxes. The models are validated against eddy covariance data from the LBA site C14. Comparing the performance of both models, we conclude that both numerically (in terms of goodness of fit) and qualitatively (in terms of residual response to different environmental variables), the sun/shade model performs better. Compared to the sun/shade model, the big leaf model shows a lower goodness of fit, fails to respond to variations in the diffuse fraction, and has skewed responses to temperature and VPD. The separate treatment of sun and shade leaves, combined with the separation of incoming light into direct beam and diffuse components, makes sun/shade a strong modelling tool that captures more of the observed variability in canopy fluxes as measured by eddy covariance. In conclusion, the sun/shade approach is a relatively simple and effective tool for modelling photosynthetic carbon uptake that could easily be included in many terrestrial carbon models.
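Purely as an illustration of the mechanism the abstract credits for the sun/shade model's advantage, the Python sketch below partitions the canopy into sunlit and shaded leaf area and lets only the sunlit class receive the direct beam. This is not the study's model: the extinction coefficient, the toy light-use efficiencies, and all function names are assumptions made for illustration only.

```python
import numpy as np

def sunlit_shaded_lai(lai, solar_zenith_deg, kb_coeff=0.5):
    """Split total leaf area index into sunlit and shaded parts,
    assuming exponential extinction of the direct beam through the
    canopy (a common sun/shade-scheme assumption)."""
    kb = kb_coeff / np.cos(np.radians(solar_zenith_deg))  # beam extinction coefficient
    lai_sun = (1.0 - np.exp(-kb * lai)) / kb              # sunlit leaf area
    lai_shade = lai - lai_sun                             # remainder is shaded
    return lai_sun, lai_shade

def canopy_assimilation(par_direct, par_diffuse, lai, solar_zenith_deg,
                        lue_sun=0.05, lue_shade=0.06):
    """Toy canopy photosynthesis: sunlit leaves receive direct + diffuse
    PAR, shaded leaves receive only diffuse PAR; the light-use
    efficiencies are purely illustrative values."""
    lai_sun, lai_shade = sunlit_shaded_lai(lai, solar_zenith_deg)
    apar_sun = par_direct + par_diffuse * lai_sun / lai
    apar_shade = par_diffuse * lai_shade / lai
    return lue_sun * apar_sun * lai_sun + lue_shade * apar_shade * lai_shade

# Same total PAR, different diffuse fraction: the sun/shade split makes
# canopy uptake respond to the diffuse fraction, which is the sensitivity
# the abstract says the big leaf model misses.
print(canopy_assimilation(par_direct=800, par_diffuse=200, lai=5.0, solar_zenith_deg=30))
print(canopy_assimilation(par_direct=200, par_diffuse=800, lai=5.0, solar_zenith_deg=30))
```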
Abstract:
The constant scientific production in universities and research centers makes these organizations produce and acquire a great amount of data in a short period of time. Because of this large quantity of data, research organizations become potentially vulnerable to the impacts of information overload, which can cause chaos in information management. In this context, the development of data catalogues emerges as a possible solution to problems such as (i) data organization and (ii) data management. In the scientific domain, data catalogues are implemented with the standard for digital and geospatial metadata and are broadly used in the process of cataloguing scientific information. The aim of this work is to present the characteristics of access and storage of metadata in database systems in order to improve the description and dissemination of scientific data. Relevant aspects that should be analyzed during the planning stage are discussed, since they can determine the success of the implementation. The use of data catalogues by research organizations may be a way to promote and facilitate the dissemination of scientific data, avoid duplication of effort, and encourage the use of data that have already been collected, processed, and stored.
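As a purely illustrative sketch of the kind of metadata storage and access the abstract discusses (not the authors' implementation), the snippet below stores a minimal catalogue record in a SQLite table and retrieves it by keyword. The schema and every field name are assumptions loosely inspired by digital/geospatial metadata standards.

```python
import sqlite3

# Minimal, illustrative catalogue schema; not the schema of the work above.
conn = sqlite3.connect("catalogue.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS dataset_metadata (
        id          INTEGER PRIMARY KEY,
        title       TEXT NOT NULL,
        abstract    TEXT,
        originator  TEXT,   -- producing institution or researcher
        keywords    TEXT,   -- comma-separated theme keywords
        west_bound  REAL,   -- geographic bounding box (decimal degrees)
        east_bound  REAL,
        south_bound REAL,
        north_bound REAL,
        begin_date  TEXT,   -- temporal coverage, ISO 8601
        end_date    TEXT
    )
""")

record = ("Eddy covariance fluxes, example site",
          "Half-hourly CO2 flux measurements (illustrative entry).",
          "Example Research Centre", "carbon flux,eddy covariance",
          -60.2, -60.1, -3.0, -2.9, "2001-01-01", "2003-12-31")
conn.execute(
    "INSERT INTO dataset_metadata (title, abstract, originator, keywords, "
    "west_bound, east_bound, south_bound, north_bound, begin_date, end_date) "
    "VALUES (?,?,?,?,?,?,?,?,?,?)", record)
conn.commit()

# Catalogue discovery then reduces to simple queries, e.g. by keyword:
for (title,) in conn.execute(
        "SELECT title FROM dataset_metadata WHERE keywords LIKE ?", ("%eddy%",)):
    print(title)
```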
Abstract:
Data on corn ear production (kg/ha) of 196 half-sib progenies (HSP) of the maize population CMS-39, obtained from experiments carried out in four environments, were used to adapt and assess the BLP (best linear predictor) method in comparison with selection among and within half-sib progenies (SAWHSP). The 196 HSP of the CMS-39 population, developed by the National Center for Maize and Sorghum Research (CNPMS-EMBRAPA), were related through their pedigree with the recombined progenies of the previous selection cycle. The two methodologies used for the selection of the twenty best half-sib progenies, BLP and SAWHSP, led to similar expected genetic gains. The BLP methodology tended to select a greater number of progenies related through the previous generation (pedigree) than the other method, which implies that greater care with the effective population size must be taken with this method. The SAWHSP methodology was efficient in isolating the additive genetic variance component from the phenotypic component. The pedigree system, although unnecessary for the routine use of the SAWHSP methodology, allowed the prediction of an increase in population inbreeding under long-term SAWHSP selection when recombination occurs simultaneously with the creation of new progenies.
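To make the selection step concrete, here is a minimal Python sketch of a best-linear-predictor-style ranking of half-sib families, in which each family mean is shrunk toward the overall mean according to assumed variance components. The variance values, the simulated family means, and the function names are illustrative assumptions, not the CMS-39 data or the authors' exact procedure, which also exploits pedigree covariances among families.

```python
import numpy as np

def blp_family_values(family_means, n_reps, sigma2_family, sigma2_residual):
    """Shrink each half-sib family mean toward the overall mean using
    assumed (known) variance components: a simple best linear predictor
    for a balanced design without pedigree relationships."""
    family_means = np.asarray(family_means, dtype=float)
    mu = family_means.mean()
    # regression of the true family value on the observed family mean
    b = sigma2_family / (sigma2_family + sigma2_residual / n_reps)
    return mu + b * (family_means - mu)

# Illustrative numbers only: 196 simulated family means (kg/ha),
# 4 environments treated as replicates, assumed variance components.
rng = np.random.default_rng(0)
means = rng.normal(loc=6000.0, scale=400.0, size=196)
predicted = blp_family_values(means, n_reps=4,
                              sigma2_family=40_000.0, sigma2_residual=160_000.0)

# Rank families by predicted value and keep the twenty best, mirroring
# the selection of the twenty best half-sib progenies described above.
best20 = np.argsort(predicted)[::-1][:20]
print(best20)
```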