982 results for estimate
Abstract:
The assessment of glacier thickness is one of the most widespread applications of radioglaciology, and is the basis for estimating glacier volume. The accuracy of the ice-thickness measurements, the distribution of profiles over the glacier and the accuracy of the glacier boundary delineation are the most important factors determining the error in the evaluation of the glacier volume. The aim of this study is to obtain an accurate estimate of the error incurred when estimating glacier volume from GPR-retrieved ice-thickness data.
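The first-order error propagation involved in turning thickness and boundary errors into a volume error can be sketched as follows. This is a minimal illustration with invented numbers, not the study's actual error model:

```python
import math

def volume_error(area_km2, mean_thickness_m, sigma_h_m, sigma_area_km2):
    """Propagate independent thickness and boundary-delineation errors
    into a glacier-volume error (first-order, uncorrelated terms)."""
    thickness_km = mean_thickness_m / 1000.0
    sigma_h_km = sigma_h_m / 1000.0
    volume_km3 = area_km2 * thickness_km
    rel_err = math.sqrt((sigma_h_km / thickness_km) ** 2
                        + (sigma_area_km2 / area_km2) ** 2)
    return volume_km3, volume_km3 * rel_err

# Hypothetical glacier: 10 km2, 100 m mean thickness, 5 m thickness error,
# 0.3 km2 delineation error
v, sv = volume_error(area_km2=10.0, mean_thickness_m=100.0,
                     sigma_h_m=5.0, sigma_area_km2=0.3)
```

In this toy case the relative volume error is dominated by the 5% thickness error rather than the 3% area error.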
Abstract:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm that is applied to the estimation of modal parameters from system input and output data. The effectiveness of this structural identification method is evaluated through numerical simulation. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the simulated structure are estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates the modal parameters accurately even in the presence of 20% measurement noise. Finally, the advantages and disadvantages of the method are discussed.
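To illustrate what the modal parameters are, the sketch below recovers the eigenfrequency and damping ratio of a single simulated mode from its noise-free free-decay response. This is a far simpler procedure than the ML/EM identification proposed in the paper, and all signal parameters (2 Hz, 2% damping, 100 Hz sampling) are assumed:

```python
import numpy as np

# Simulate the free decay of one mode: y(t) = exp(-zeta*wn*t) * sin(wd*t)
fs, T = 100.0, 5.0            # sampling rate (Hz) and duration (s), assumed
fn, zeta = 2.0, 0.02          # true eigenfrequency (Hz) and damping ratio
t = np.arange(0, T, 1.0 / fs)
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta ** 2)
y = np.exp(-zeta * wn * t) * np.sin(wd * t)

# Eigenfrequency: peak of the amplitude spectrum
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
f_est = freqs[np.argmax(spec)]

# Damping ratio: logarithmic decrement between successive response peaks
peaks = [i for i in range(1, len(y) - 1) if y[i] > y[i - 1] and y[i] > y[i + 1]]
delta = np.log(y[peaks[0]] / y[peaks[1]])
zeta_est = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
```

With measurement noise, as in the paper's 100 simulated cases, such naive peak-based estimates degrade quickly, which is precisely the motivation for statistical identification methods.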
Abstract:
The simulation of design basis accidents in a containment building is usually conducted with a lumped-parameter model. The codes normally used by Westinghouse Electric Company (WEC) for these licensing analyses are WGOTHIC or COCO, which are suitable for providing an adequate estimation of the overall peak temperature and pressure of the containment. However, for a detailed study of the thermal-hydraulic behavior in every room and compartment of the containment building, it can be more convenient to model the containment with a more detailed 3D representation of the geometry of the whole building. The main objective of this project is to obtain both a standard Westinghouse PWR and an AP1000® containment model for a CFD code, in order to analyze the detailed thermal-hydraulic behavior during a design basis accident. In this paper the development and testing of both containment models are presented.
Abstract:
The objective of this project is the evaluation of the STAR-CCM+ code, as well as the establishment of guidelines and standardized procedures for the discretization of the domain of study and the selection of physical models suitable for the simulation of BWR fuel. For this purpose, several BFBT experiments, which provide a database of void-fraction distribution measurements under power changes, have been simulated in order to identify the most appropriate models for this problem.
Abstract:
Adjusting N fertilizer application to crop requirements is a key issue for improving fertilizer efficiency, reducing unnecessary input costs for farmers and the environmental impact of N. Among the multiple soil and crop tests developed, optical sensors that detect crop N nutritional status may have a large potential to adjust N fertilizer recommendation (Samborski et al. 2009). Optical readings are rapid to take and non-destructive; they can be efficiently processed and combined to obtain indexes or indicators of crop status. However, other physiological stress conditions may interfere with the readings, and identifying the best indicators of crop nutritional status is not always an easy task. Comparison of different equipment and technologies might help to identify strengths and weaknesses of the application of optical sensors for N fertilizer recommendation. The aim of this study was to evaluate the potential of various ground-level optical sensors and narrow-band indices obtained from airborne hyperspectral images as tools for maize N fertilizer recommendations. Specific objectives were i) to determine which indices could detect differences in maize plants treated with different N fertilizer rates, and ii) to evaluate their ability to distinguish N-responsive from non-responsive sites.
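The kind of spectral index referred to can be sketched as follows. NDVI and the red-edge NDRE are standard formulations computed from band reflectances; the reflectance values used here are purely hypothetical:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectances (0-1)."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Narrow-band red-edge index, often related to canopy N status."""
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical reflectances for a well-fertilized maize plot
vi_plot = ndvi(nir=0.45, red=0.05)
re_plot = ndre(nir=0.45, red_edge=0.25)
```

Both indices are bounded in [-1, 1]; comparing their values across plots under different N rates is the basic operation behind the study's first objective.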
Abstract:
The Iberian pig makes use of the natural resources of the pasture when fattened in the mountains. The variability of acorn production is not covered by any line of Spanish agricultural insurance. However, the production of arable pasture is covered by insurance line number 133, which compensates for loss of pasture. This scenario is contemplated only for breeding cows, fighting bulls, sheep, goats and horses; pigs are not included. This insurance is based on monitoring ten-day composites of the Normalized Difference Vegetation Index (NDVI) measured over treeless pastures by the MODIS sensor on the TERRA satellite. The aim of this work is to check whether a satellite vegetation index can be used to estimate acorn production.
Abstract:
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the “with measures” scenario for Spain, specifically the 12 highest-emitting sectors in terms of greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing a very good performance. Implications: A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is an interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
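The basic idea of deriving nonstatistical uncertainty bands from sensitivity analysis can be sketched with a toy projection in which a single driving factor is shifted to its low and high plausible values. All growth rates and emission figures below are invented:

```python
def project_emissions(base_emission, activity_growth, years):
    """Toy projection: emissions scale with one activity driving factor."""
    return [base_emission * (1 + activity_growth) ** y for y in range(years + 1)]

def uncertainty_band(base_emission, central_growth, spread, years):
    """Nonstatistical band: rerun the projection with the driving factor
    shifted to its low and high plausible values (sensitivity analysis)."""
    low = project_emissions(base_emission, central_growth - spread, years)
    high = project_emissions(base_emission, central_growth + spread, years)
    return low, high

# Hypothetical sector: 100 kt/yr today, central growth 2%/yr, +/- 1 point
low, high = uncertainty_band(100.0, 0.02, 0.01, years=10)
```

The envelope between the two reruns plays the role of an uncertainty band without requiring a full Monte Carlo treatment, which is the simplification the paper argues for.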
Abstract:
We present a set of new volume scaling relationships specific to Svalbard glaciers, derived from a sample of 60 volume–area pairs. Glacier volumes are computed from ground-penetrating radar (GPR)-retrieved ice thickness measurements, which have been compiled from different sources for this study. The most precise scaling models, in terms of lowest cross-validation errors, are obtained using a multivariate approach where, in addition to glacier area, glacier length and elevation range are also used as predictors. Using this multivariate scaling approach, together with the Randolph Glacier Inventory V3.2 for Svalbard and Jan Mayen, we obtain a regional volume estimate of 6700 ± 835 km3, or 17 ± 2 mm of sea-level equivalent (SLE). This result lies in the mid- to low range of recently published estimates, which show values as varied as 13 and 24 mm SLE. We assess the sensitivity of the scaling exponents to glacier characteristics such as size, aspect ratio and average slope, and find that the volume of steep-slope and cirque-type glaciers is not very sensitive to changes in glacier area.
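A multivariate scaling relationship of this kind, V = c · A^γ1 · L^γ2 · ΔZ^γ3, can be fitted as a linear regression in log space. The sketch below recovers the exponents of a synthetic power law; all glacier values are invented and are not the Svalbard data:

```python
import numpy as np

# Hypothetical sample: area (km2), length (km), elevation range (km)
A = np.array([1.2, 5.0, 12.0, 30.0, 80.0])
L = np.array([1.5, 3.2, 6.0, 10.0, 20.0])
dZ = np.array([0.3, 0.5, 0.8, 1.0, 1.4])
V = 0.03 * A ** 1.2 * L ** 0.3 * dZ ** 0.2   # synthetic "true" relation

# Multivariate power-law scaling fitted as a linear model in log space
X = np.column_stack([np.ones_like(A), np.log(A), np.log(L), np.log(dZ)])
coef, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
c, gamma_A, gamma_L, gamma_dZ = coef
```

On noiseless synthetic data the exponents are recovered exactly; with real GPR-derived volumes, cross-validation of the fitted model, as done in the paper, is what guards against overfitting the extra predictors.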
Abstract:
In this paper we propose a method to estimate by maximum likelihood the divergence time between two populations, specifically designed for the analysis of nonrecurrent rare mutations. Given the rapidly growing amount of data, rare disease mutations affecting humans seem the most suitable candidates for this method. The estimator RD, and its conditional version RDc, were derived, assuming that the population dynamics of rare alleles can be described by using a birth–death process approximation and that each mutation arose before the split of a common ancestral population into the two diverging populations. The RD estimator seems more suitable for large sample sizes and few alleles, whose age can be approximated, whereas the RDc estimator appears preferable when this is not the case. When applied to three cystic fibrosis mutations, the estimator RD could not exclude a very recent time of divergence among three Mediterranean populations. On the other hand, the divergence time between these populations and the Danish population was estimated to be, on average, 4,500 or 15,000 years, depending on whether or not a selective advantage for cystic fibrosis carriers is assumed. Confidence intervals are large, however, and can probably be reduced only by analyzing more alleles or loci.
Abstract:
Approximately 250,000 measurements made for the pCO2 difference between surface water and the marine atmosphere, ΔpCO2, have been assembled for the global oceans. Observations made in the equatorial Pacific during El Niño events have been excluded from the data set. These observations are mapped on the global 4° × 5° grid for a single virtual calendar year (chosen arbitrarily to be 1990) representing a non-El Niño year. Monthly global distributions of ΔpCO2 have been constructed using an interpolation method based on a lateral advection–diffusion transport equation. The net flux of CO2 across the sea surface has been computed using the ΔpCO2 distributions and CO2 gas transfer coefficients across the sea surface. The annual net uptake flux of CO2 by the global oceans thus estimated ranges from 0.60 to 1.34 Gt-C⋅yr−1, depending on the formulation used for the wind-speed dependence of the gas transfer coefficient. These estimates are subject to an error of up to 75% resulting from the numerical interpolation method used to estimate the distribution of ΔpCO2 over the global oceans. Temperate and polar oceans of both hemispheres are the major sinks for atmospheric CO2, whereas the equatorial oceans are the major sources of CO2. The Atlantic Ocean is the most important CO2 sink, providing about 60% of the global ocean uptake, while the Pacific Ocean is neutral because its equatorial source flux is balanced by the sink flux of its temperate oceans. The Indian and Southern Oceans take up about 20% each.
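The flux computation described has the form F = k · s · ΔpCO2, where k is a wind-speed-dependent gas transfer velocity and s the CO2 solubility. A minimal sketch using the quadratic wind-speed parameterization of Wanninkhof (1992) follows; the solubility is an assumed round number and the units are illustrative only:

```python
def gas_transfer_velocity(u10_ms, schmidt=660.0):
    """CO2 transfer velocity (cm/h) from 10-m wind speed, after the
    quadratic Wanninkhof (1992) form: k = 0.31 * u^2 * (Sc/660)^-0.5."""
    return 0.31 * u10_ms ** 2 * (schmidt / 660.0) ** -0.5

def co2_flux(u10_ms, delta_pco2_uatm, solubility=0.03):
    """Air-sea CO2 flux proportional to k * s * dpCO2; a negative
    dpCO2 (ocean below atmosphere) gives ocean uptake (negative flux)."""
    k = gas_transfer_velocity(u10_ms)
    return k * solubility * delta_pco2_uatm
```

The sensitivity of the global estimate to the transfer-coefficient formulation, the 0.60 to 1.34 Gt-C per year range quoted above, comes precisely from swapping out the function playing the role of `gas_transfer_velocity`.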
Abstract:
Hepatitis C virus (HCV) is a major cause of chronic hepatitis. The virus does not replicate efficiently in cell cultures, and it is therefore difficult to assess infection-neutralizing antibodies and to evaluate protective immunity in vitro. To study the binding of the HCV envelope to cell-surface receptors, we developed an assay to assess specific binding of recombinant envelope proteins to human cells and neutralization thereof. HCV recombinant envelope proteins expressed in various systems were incubated with human cells, and binding was assessed by flow cytometry using anti-envelope antibodies. Envelope glycoprotein 2 (E2) expressed in mammalian cells, but not in yeast or insect cells, binds human cells with high affinity (Kd ≈ 10⁻⁸ M). We then assessed antibodies able to neutralize E2 binding in the sera of both vaccinated and carrier chimpanzees, as well as in the sera of humans infected with various HCV genotypes. Vaccination with recombinant envelope proteins expressed in mammalian cells elicited high titers of neutralizing antibodies that correlated with protection from HCV challenge. HCV infection does not elicit neutralizing antibodies in most chimpanzees and humans, although low titers of neutralizing antibodies were detectable in a minority of infections. The ability to neutralize binding of E2 derived from the HCV-1 genotype was equally distributed among sera from patients infected with HCV genotypes 1, 2, and 3, demonstrating that binding of E2 is partly independent of E2 hypervariable regions. However, a mouse monoclonal antibody raised against the E2 hypervariable region 1 can partially neutralize binding of E2, indicating that at least two neutralizing epitopes, one of which is hypervariable, should exist on the E2 protein. The neutralization-of-binding assay described will be useful to study protective immunity to HCV infection and for vaccine development.
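An affinity around 10⁻⁸ M implies, under a standard 1:1 equilibrium binding model, the receptor-occupancy curve sketched below. The model is textbook biochemistry; its use here is purely illustrative of what the quoted Kd means:

```python
def fraction_bound(ligand_conc_M, kd_M=1e-8):
    """Equilibrium fraction of receptors occupied in a simple 1:1
    binding model: f = [L] / (Kd + [L])."""
    return ligand_conc_M / (kd_M + ligand_conc_M)

# At a ligand concentration equal to Kd, half the receptors are occupied;
# at 10x Kd, occupancy exceeds 90%.
half = fraction_bound(1e-8)
high = fraction_bound(1e-7)
```

In a neutralization-of-binding assay, an antibody that blocks E2 effectively lowers the active ligand concentration, shifting the measured occupancy down this curve.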
Abstract:
We prove global existence of nonnegative solutions to one-dimensional degenerate parabolic problems containing a singular term. We also show the global quenching phenomenon for L¹ initial data. Moreover, the free boundary problem is considered in this paper.
Abstract:
Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms, it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class, in order to determine the matrix parameters. This approach is known as a “demographic inverse problem”; it is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data of a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies to lower the population to its values before 2008, when there was no significant interaction with bathers. These strategies were direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected the prey of all medusa sizes. Our model fits well with the field data and may serve to design an efficient management strategy or build hypothetical scenarios such as removal of individuals or reduction of prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
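The projection-matrix machinery referred to can be sketched with a hypothetical three-stage model. The matrix entries below are invented for illustration and are not the C. marsupialis parameters estimated by the inverse method:

```python
import numpy as np

# Hypothetical 3-stage (juvenile / subadult / adult) projection matrix:
# each column gives the per-capita contribution of that stage to every
# stage at the next time step (survival, growth, and adult fecundity).
A = np.array([
    [0.2, 0.0, 5.0],   # stage-1 retention + fecundity of adults
    [0.3, 0.4, 0.0],   # growth into stage 2, retention
    [0.0, 0.3, 0.5],   # growth into stage 3, adult survival
])

# Dominant eigenvalue: the asymptotic population growth rate
lam = max(abs(np.linalg.eigvals(A)))

# Removing a fraction r of every size class each step rescales the
# matrix to (1 - r) * A, so the growth rate becomes (1 - r) * lam;
# the smallest removal rate that halts growth is r = 1 - 1/lam.
r_needed = 1 - 1 / lam if lam > 1 else 0.0
```

The comparison made in the paper, removing all size classes versus only juveniles or adults, corresponds to rescaling the whole matrix versus only some of its columns and recomputing the dominant eigenvalue.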
Abstract:
Background and objective: In this paper, we have tested the suitability of using different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of these surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms to achieve our objective: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Results: Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to congenital heart disease surgery.
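The three-class risk-classification task can be sketched with a toy learner. The snippet below trains a softmax (multinomial logistic) classifier on synthetic, well-separated three-class data standing in for the surgical-risk features; the paper's actual MLP, SOM, RBF and decision-tree models would replace this stand-in, and every feature value is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-feature data for three risk classes (low/medium/high),
# 60 cases per class, centred at three separated cluster means
n = 60
X = np.vstack([rng.normal(mu, 0.5, size=(n, 2))
               for mu in ([0, 0], [2, 2], [4, 0])])
y = np.repeat([0, 1, 2], n)

# Softmax classifier trained by batch gradient descent on cross-entropy
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
W = np.zeros((3, 3))                       # one weight row per class
for _ in range(500):
    logits = Xb @ W.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(3)[y]
    W -= 0.1 * (p - onehot).T @ Xb / len(X)

acc = (np.argmax(Xb @ W.T, axis=1) == y).mean()  # training accuracy
```

On real clinical features the classes are far less separable, which is why the paper compares several nonlinear models and reports accuracies between 80% and 99% rather than near-perfect scores.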