842 results for data movement problem
Abstract:
Adolescent pregnancy is a current problem which raises concern due to its individual, family and collective consequences. Fifteen million adolescents give birth each year worldwide, and abortion is the option most often used in unwanted pregnancies. Adolescent pregnancy is frequent in Nocaima, Cundinamarca, and is a community concern in this small town, which is beginning its process of becoming a healthy municipality. The community has therefore singled out this problem for study and intervention, to promote a free and responsible sexuality and decrease unwanted adolescent pregnancies. Objective: To gather data on contraception, pregnancy and related factors among the selected adolescents, thereby improving the currently incomplete information. Methods: Descriptive observational study based on a survey of 226 female students aged 14 to 19 years, in grades 8 to 11, from three high schools in Nocaima. Results: 88.9% of the participants were between 14 and 17 years of age. 66.8% of the adolescents claimed to use contraceptive methods correctly, and 28.8% had had sexual intercourse, with an average age of initiation of 15 years. 11.1% had been pregnant once in their lives; of these, 57.1% ended in induced abortion and 66.8% were school dropouts. Conclusions: After implementing an educational campaign on healthy sexual and reproductive behaviors, we view adolescent pregnancy as a preventable public health problem related to deficits in social and family support as well as to weaknesses in individual decision making.
Abstract:
Measuring inequality of opportunity with the PISA databases involves several limitations: (i) the sample represents only a limited fraction of the cohorts of 15-year-olds in developing countries, and (ii) these fractions are uniform neither across countries nor across periods. This raises doubts about the reliability of these measurements when used for international comparisons: greater equity may be the result of a more restricted and more homogeneous sample. Unlike previous approaches based on reconstructing the samples, the approach of this paper is to provide a two-dimensional index that includes achievement and access as its dimensions. Several aggregation methods are used, and considerable changes are observed in the rankings of (in)equality of opportunity when only achievement is considered versus when both dimensions are considered in the PISA 2006/2009 tests. Finally, a generalization of the approach is proposed, allowing additional dimensions and other weights in the aggregation.
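A toy sketch of the aggregation step (function names, weights and the specific aggregators are illustrative; the paper compares several such methods):

```python
def opportunity_index(achievement, access, w_achievement=0.5, method="arithmetic"):
    """Combine achievement and access (both scaled to [0, 1]) into one index."""
    w_access = 1.0 - w_achievement
    if method == "arithmetic":
        return w_achievement * achievement + w_access * access
    if method == "geometric":                      # penalizes imbalance between dimensions
        return achievement ** w_achievement * access ** w_access
    raise ValueError(f"unknown method: {method}")

# a country with high achievement among those tested but low access
print(opportunity_index(0.8, 0.4, method="arithmetic"))  # 0.6
print(opportunity_index(0.8, 0.4, method="geometric"))   # ~0.566
```

Rankings based on achievement alone would place this country high; including access lowers it, which is the kind of re-ranking the paper reports.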
Abstract:
The formulation of four-dimensional variational data assimilation allows constraints that need only be weakly satisfied to be incorporated into the cost function. In this paper we investigate the value of imposing conservation properties as weak constraints. Using the example of the two-body problem of celestial mechanics, we compare weak constraints based on conservation laws with a constraint on the background state. We show how the imposition of conservation-based weak constraints changes the nature of the gradient equation. Assimilation experiments demonstrate how this can add extra information to the assimilation process, even when the underlying numerical model is conserving.
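As a sketch of the idea (the notation here is generic and assumed, not taken from the paper), a conservation-based weak constraint adds a penalty term to the standard 4D-Var cost function:

```latex
J(x_0) = (x_0 - x_b)^\top B^{-1}(x_0 - x_b)
       + \sum_{i=0}^{N} \bigl(H_i(x_i) - y_i\bigr)^\top R_i^{-1} \bigl(H_i(x_i) - y_i\bigr)
       + \lambda \sum_{i=0}^{N} \bigl(C(x_i) - C(x_0)\bigr)^2
```

where x_b is the background state, H_i the observation operators, C a conserved quantity of the model (for the two-body problem, e.g. energy or angular momentum), and λ the constraint weight; it is the gradient of this last term that changes the nature of the gradient equation.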
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of the solution, using standard Phragmén–Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003), pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: Boundary integral equation method, Water waves, Laplace's equation.
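For orientation, the boundary value problem has the following generic form (a sketch in standard notation, with D the perturbed half-plane and g the given bounded continuous boundary data):

```latex
\Delta u = 0 \ \text{in } D, \qquad u = g \ \text{on } \partial D, \qquad u \ \text{bounded and continuous on } \overline{D}
```

The boundary integral equation method then seeks u as a layer potential over ∂D, reducing the problem to an integral equation for an unknown density on the boundary.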
Abstract:
The τ–ω model of microwave emission from soil and vegetation layers is widely used to estimate soil moisture content from passive microwave observations. Its application to prospective satellite-based observations aggregating several thousand square kilometres requires an understanding of the effects of scene heterogeneity. The effects of heterogeneity in soil surface roughness, soil moisture, water area and vegetation density on the retrieval of soil moisture from simulated single- and multi-angle observing systems were tested. Uncertainty in water area proved the most serious problem for both systems, causing errors of a few percent in soil moisture retrieval. Single-angle retrieval was largely unaffected by the other factors studied here. Multiple-angle retrieval errors of around one percent arose from heterogeneity in either soil roughness or soil moisture. Errors of a few percent were caused by vegetation heterogeneity. A simple extension of the model's vegetation representation was shown to reduce this error substantially for scenes containing a range of vegetation types.
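For reference, a minimal sketch of the zeroth-order τ–ω forward model is given below; this is the standard textbook form with assumed parameter names, not necessarily the exact variant or the extended vegetation representation used in the paper:

```python
import numpy as np

def tau_omega_tb(soil_temp, veg_temp, soil_reflectivity, tau, omega, theta_deg):
    """Brightness temperature (K) from the zeroth-order tau-omega model."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))       # canopy transmissivity
    tb_soil = (1.0 - soil_reflectivity) * soil_temp * gamma    # soil emission through canopy
    tb_canopy = (1.0 - omega) * (1.0 - gamma) * veg_temp       # direct canopy emission
    tb_reflected = tb_canopy * soil_reflectivity * gamma       # canopy emission reflected off soil
    return tb_soil + tb_canopy + tb_reflected

# e.g. moist soil under light vegetation at a 40 degree incidence angle
print(tau_omega_tb(290.0, 292.0, 0.25, 0.12, 0.05, 40.0))
```

In a retrieval, this forward model is inverted (e.g. by least squares over one or several incidence angles) for the soil parameters, which is where heterogeneity within the scene biases the result.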
Abstract:
This study examines the efficacy of published δ18O data from the calcite of Late Miocene surface-dwelling planktonic foraminifer shells for sea surface temperature (SST) estimates for the pre-Quaternary. The data are from 33 Late Miocene (Messinian) marine sites spanning a modern latitudinal gradient of 64°N to 48°S. They give estimates of SSTs in the tropics/subtropics (to 30°N and S) that are mostly cooler than present. Possible causes of this temperature discrepancy are ecological factors (e.g. calcification of shells at levels below the ocean mixed layer), taphonomic effects (e.g. diagenesis or dissolution), inaccurate estimation of Late Miocene seawater oxygen isotope composition, or a genuinely cool Late Miocene climate. The scale of apparent cooling in the tropics suggests that the SST signal of the foraminifer calcite has been reset, at least in part, by early diagenetic calcite with higher δ18O, formed in the foraminifer shells in cool sea-bottom pore waters, probably coupled with the effects of calcite formed below the mixed layer during the life of the foraminifera. This hypothesis is supported by the markedly cooler SST estimates from low latitudes (in some cases more than 9 °C cooler than present), where the gradients of temperature and of the δ18O composition of seawater between sea surface and sea bottom are most marked, and where ocean surface stratification is high. At higher latitudes, particularly N and S of 30°, the temperature signal is still cooler, though maximum temperature estimates overlap with modern SSTs N and S of 40°. Comparison of Late Miocene SST estimates from alkenone unsaturation analysis in the eastern tropical Atlantic at Ocean Drilling Program (ODP) Site 958, which suggest a sea surface warmer by 2–4 °C, with oxygen isotope estimates at Deep Sea Drilling Project (DSDP) Site 366 and ODP Site 959, which indicate cooler than present SSTs, also suggests a significant impact on the δ18O signal. Nevertheless, much of the original SST variation is clearly preserved in the primary calcite formed in the mixed layer, and records secular and temporal oceanographic changes at the sea surface, such as movement of the Antarctic Polar Front in the Southern Ocean. Cooler SSTs in the tropics and subtropics are also consistent with the Late Miocene reduction in the latitudinal extent of the coral reef belt and with interrupted reef growth on the Queensland Plateau of eastern Australia, though it is not possible to quantify absolute SSTs with the existing oxygen isotope data. Reconstruction of an accurate global SST dataset for Neogene time-slices from the existing published DSDP/ODP isotope data, for use in general circulation models, may require a detailed re-assessment of taphonomy at many sites.
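For orientation, such estimates rest on a calcite paleotemperature equation; one commonly used form (Shackleton, 1974) is shown here, though it is not necessarily the calibration used by the studies compiled:

```latex
T = 16.9 - 4.38\,(\delta_c - \delta_w) + 0.10\,(\delta_c - \delta_w)^2
```

where T is temperature in °C, δc the δ18O of the foraminiferal calcite and δw that of ambient seawater. This is why an inaccurate estimate of Late Miocene seawater δ18O, or diagenetic calcite with higher δ18O, biases the inferred SST low.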
Abstract:
Data such as digitized aerial photographs, electrical conductivity and yield are intensive and relatively inexpensive to obtain compared with collecting soil data by sampling. If such ancillary data are co-regionalized with the soil data, they should be suitable for co-kriging. The latter requires that information for both variables is co-located at several locations; this is rarely so for soil and ancillary data. To solve this problem, we derived values for the ancillary variable at the soil sampling locations by averaging the values within a radius of 15 m, taking the nearest-neighbour value, kriging over 5 m blocks, and punctual kriging. The cross-variograms of these data with clay content, and also the pseudo cross-variogram, were used to co-krige to validation points, and the root mean squared errors (RMSEs) were calculated. In general, the data averaged within 15 m and the punctually kriged values resulted in more accurate predictions.
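A minimal sketch of the first of these co-location methods, averaging ancillary values within 15 m of each soil sampling site (array and function names are illustrative, not from the paper):

```python
import numpy as np

def ancillary_at_soil_sites(soil_xy, anc_xy, anc_values, radius=15.0):
    """Mean ancillary value within `radius` metres of each soil sampling location."""
    soil_xy = np.asarray(soil_xy, dtype=float)
    anc_xy = np.asarray(anc_xy, dtype=float)
    anc_values = np.asarray(anc_values, dtype=float)
    out = np.full(len(soil_xy), np.nan)        # NaN where no ancillary point falls inside
    for i, site in enumerate(soil_xy):
        d = np.hypot(anc_xy[:, 0] - site[0], anc_xy[:, 1] - site[1])
        inside = d <= radius
        if inside.any():
            out[i] = anc_values[inside].mean()
    return out
```

The averaged (or kriged) values can then stand in as co-located data when modelling the cross-variogram with the soil property.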
Abstract:
Maps of kriged soil properties for precision agriculture are often based on a variogram estimated from too few data because the costs of sampling and analysis are often prohibitive. If the variogram has been computed by the usual method of moments, it is likely to be unstable when there are fewer than 100 data. The scale of variation in soil properties should be investigated prior to sampling by computing a variogram from ancillary data, such as an aerial photograph of the bare soil. If the sampling interval suggested by this is large in relation to the size of the field there will be too few data to estimate a reliable variogram for kriging. Standardized variograms from aerial photographs can be used with standardized soil data that are sparse, provided the data are spatially structured and the nugget:sill ratio is similar to that of a reliable variogram of the property. The problem remains of how to set this ratio in the absence of an accurate variogram. Several methods of estimating the nugget:sill ratio for selected soil properties are proposed and evaluated. Standardized variograms with nugget:sill ratios set by these methods are more similar to those computed from intensive soil data than are variograms computed from sparse soil data. The results of cross-validation and mapping show that the standardized variograms provide more accurate estimates, and preserve the main patterns of variation better than those computed from sparse data.
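As an illustration, a standardized variogram with unit sill is fully specified by its nugget:sill ratio, range and model form; a sketch for the exponential model (parameter names assumed here):

```python
import numpy as np

def standardized_exp_variogram(h, nugget_sill_ratio, effective_range):
    """Exponential variogram standardized to a total sill of 1."""
    c0 = nugget_sill_ratio            # nugget, as a fraction of the sill
    c1 = 1.0 - c0                     # partial sill
    a = effective_range / 3.0         # distance parameter (effective range ~ 3a)
    h = np.asarray(h, dtype=float)
    return c0 + c1 * (1.0 - np.exp(-h / a))

# e.g. lags 0-120 m with the nugget:sill ratio set by one of the proposed methods
print(standardized_exp_variogram([0, 20, 40, 80, 120], 0.3, 90.0))
```

Since kriging weights are unchanged when the variogram is multiplied by a constant, only the shape matters, which is why setting the nugget:sill ratio well matters more than the absolute sill.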
Abstract:
Indicators are commonly recommended as tools for assessing the attainment of development, and the current vogue is for aggregating a number of indicators together into a single index. It is claimed that such indices of development achieve maximum impact in policy terms by appealing to those who may not necessarily have technical expertise in data collection, analysis and interpretation. To help counter criticisms of over-simplification, those advocating such indices also suggest that the raw data be provided, so as to allow disaggregation into component parts and hence a more subtle interpretation if a reader so desires. This paper examines the problems involved in interpreting indices of development by focusing on the United Nations Development Programme's (UNDP) Human Development Index (HDI), published each year in the Human Development Reports (HDRs). The HDI was intended to provide an alternative to the more economically based indices, such as GDP, commonly used within neo-liberal development agendas. The paper explores the use of the HDI as a gauge of human development by making comparisons between two major political and economic communities in Africa (ECOWAS and SADC). While the HDI did help highlight important changes in human development over the 10 years examined, it is concluded that the HDI and its components are difficult to interpret, as methodologies have changed significantly and the 'averaging' nature of the HDI can hide information unless care is taken. The paper discusses the applicability of alternatives to the HDI, such as the more neo-populist-centred methods commonly advocated for indicators of sustainable development.
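For context, the HDI of the period discussed was an arithmetic average of three normalized dimension indices, which is the 'averaging' the paper warns can mask offsetting movements in components; a sketch (goalpost values and inputs are illustrative):

```python
def dimension_index(value, goalpost_min, goalpost_max):
    """Scale a raw indicator onto [0, 1] between fixed goalposts."""
    return (value - goalpost_min) / (goalpost_max - goalpost_min)

def hdi_pre_2010(life_index, education_index, income_index):
    """Pre-2010 HDI: the simple mean of the three dimension indices."""
    return (life_index + education_index + income_index) / 3.0

# two hypothetical countries with identical HDI but very different profiles
print(hdi_pre_2010(0.9, 0.5, 0.4), hdi_pre_2010(0.6, 0.6, 0.6))
```

Both calls return 0.6, illustrating how the mean can hide large differences between components.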
Abstract:
Pressing global environmental problems highlight the need to develop tools to measure progress towards "sustainability." However, some argue that any such attempt inevitably reflects the views of those creating such tools and only produces highly contested notions of "reality." To explore this tension, we critically assess the Environmental Sustainability Index (ESI), a well-publicized product of the World Economic Forum that is designed to measure 'sustainability' by ranking nations in league tables based on extensive databases of environmental indicators. By recreating this index, and then using statistical tools (principal components analysis) to test relations between various components of the index, we challenge the ways in which countries are ranked in the ESI. Based on this analysis, we suggest (1) that the approach taken to aggregate, interpret and present the ESI creates a misleading impression that Western countries are more sustainable than the developing world; (2) that unaccounted methodological biases allowed the authors of the ESI to over-generalize the relative 'sustainability' of different countries; and (3) that this has resulted in simplistic conclusions on the relation between economic growth and environmental sustainability. This criticism should not be interpreted as a call for the abandonment of efforts to create standardized comparable data. Instead, this paper proposes that indicator selection and data collection should draw on a range of voices, including local stakeholders as well as international experts. We also propose that aggregating data into final league ranking tables is too prone to error and creates the illusion of absolute and categorical interpretations.
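A minimal sketch of the kind of principal components check described here, applied to a countries-by-indicators matrix (variable names are illustrative):

```python
import numpy as np

def pca_components(indicator_matrix, n_components=2):
    """Standardize indicators, then return component scores and variance shares."""
    X = np.asarray(indicator_matrix, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # put indicators on a common scale
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via singular value decomposition
    variance_share = s**2 / np.sum(s**2)              # proportion of variance per component
    scores = X @ Vt[:n_components].T                  # country scores on leading components
    return scores, variance_share[:n_components]
```

If the leading components separate countries along lines unrelated to the published ranking, or if much of the variance lies outside the dimensions the index aggregates, the league-table ordering is hard to defend.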
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint), and the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the operator ∇ · γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
This note presents a robust method for estimating response surfaces that consist of linear response regimes and a linear plateau. The linear response-and-plateau model has fascinated production scientists since von Liebig (1855) and, as Upton and Dalton indicated some years ago in this Journal, the response-and-plateau model seems to fit the data in many empirical studies. The estimation algorithm evolves from a Bayesian implementation of a switching-regression (finite mixtures) model and demonstrates routine application of Gibbs sampling and data augmentation, techniques that are now in widespread application in other disciplines.
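For concreteness, the mean function being estimated is linear up to an unknown knot and flat beyond it. A minimal sketch with a simple least-squares fit follows (the paper's Bayesian Gibbs-sampling treatment of the same model is more involved; data and names here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, intercept, slope, knot):
    """Linear response below the knot, constant plateau above it."""
    return np.where(x < knot, intercept + slope * x, intercept + slope * knot)

# illustrative yield response to input rate (synthetic data)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 200.0, 60)
y = linear_plateau(x, 2.0, 0.04, 120.0) + rng.normal(0.0, 0.3, x.size)

params, _ = curve_fit(linear_plateau, x, y, p0=[1.0, 0.05, 100.0])
print(params)   # recovered intercept, slope and plateau knot
```

In the Bayesian switching-regression version, the regime membership of each observation is treated as a latent variable and sampled (data augmentation) alongside the regression parameters within the Gibbs sampler.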
Abstract:
The contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic, such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history, in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Cases which have never been listed on any occasion are left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models like the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
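A minimal sketch of the simplest case, a zero-truncated Poisson model fitted by EM, where the expected number of never-identified cases is imputed at each step (the paper's mixture models generalize this; names and the example counts are illustrative):

```python
import numpy as np

def missing_cases_ztp(counts, tol=1e-10, max_iter=10000):
    """EM estimate of total population size under a zero-truncated Poisson model.

    `counts` lists how often each *observed* case was identified (all >= 1).
    Returns (estimated total population size, estimated Poisson rate)."""
    counts = np.asarray(counts, dtype=float)
    n, total = counts.size, counts.sum()
    lam = counts.mean()                                   # initial rate from observed cases
    for _ in range(max_iter):
        f0 = n * np.exp(-lam) / (1.0 - np.exp(-lam))      # E-step: expected unseen cases
        lam_new = total / (n + f0)                        # M-step: rate over full population
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return n / (1.0 - np.exp(-lam)), lam

# e.g. 500 cases seen once, 150 twice, 40 three times
counts = np.repeat([1, 2, 3], [500, 150, 40])
print(missing_cases_ztp(counts))
```

Replacing the single Poisson with a finite mixture of Poissons makes the E-step also allocate cases to mixture components, which is the flexible variant the abstract describes.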