23 results for Independence of Venezuela

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Abstract:

In a recent investigation, Landsat TM and ETM+ data were used to simulate different resolutions of remotely-sensed images (from 30 to 1100 m) and to analyze the effect of resolution on a range of landscape metrics associated with spatial patterns of forest fragmentation in Chapare, Bolivia since the mid-1980s. Whereas most metrics were found to be highly dependent on pixel size, several fractal metrics (DLFD, MPFD, and AWMPFD) were apparently independent of image resolution, in contradiction with a sizeable body of literature indicating that fractal dimensions of natural objects depend strongly on image characteristics. The present re-analysis of the Chapare images, using two alternative algorithms routinely used for the evaluation of fractal dimensions, shows that the values of the box-counting and information fractal dimensions are systematically larger, sometimes by as much as 85%, than the "fractal" indices DLFD, MPFD, and AWMPFD for the same images. In addition, the geometrical fractal features of the forest and non-forest patches in the Chapare region strongly depend on the resolution of images used in the analysis. The largest dependency on resolution occurs for the box-counting fractal dimension in the case of the non-forest patches in 1993, where the difference between the 30 and 1100 m resolution images corresponds to 24% of the full theoretical range (1.0 to 2.0) of the mass fractal dimension. The observation that the indices DLFD, MPFD, and AWMPFD, unlike the classical fractal dimensions, appear relatively unaffected by resolution in the case of the Chapare images seems due essentially to the fact that these indices are based on a heuristic, "non-geometric" approach to fractals. Because of their lack of a foundation in fractal geometry, nothing guarantees that these indices will be resolution-independent in general. (C) 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
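
The box-counting dimension referred to above can be estimated directly from a classified binary raster. Below is a minimal sketch in Python, assuming a NumPy boolean array stands in for a forest/non-forest mask; the array, the box sizes and the function name are illustrative, not the Chapare data or the algorithm used in the study.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes):
    """Estimate the box-counting fractal dimension of a binary mask.

    The mask is tiled with square boxes of several sizes; the dimension is the
    slope of log(number of occupied boxes) versus log(1 / box size).
    """
    counts = []
    for s in box_sizes:
        # Trim the mask so it divides evenly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Reduce each s x s block to a single "occupied" flag and count them.
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Illustrative input: a dense random binary "patch" image.
rng = np.random.default_rng(0)
patch_mask = rng.random((512, 512)) > 0.6
print(box_counting_dimension(patch_mask, box_sizes=[2, 4, 8, 16, 32]))
# A dense random mask fills the plane at coarse scales, so the estimate is close to 2.
```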

Relevance: 90.00%

Abstract:

The article considers screening human populations with two screening tests. If either of the two tests is positive, then full evaluation of the disease status is undertaken; however, if both diagnostic tests are negative, then disease status remains unknown. This procedure leads to a data constellation in which, for each disease status, the 2 × 2 table associated with the two diagnostic tests used in screening has exactly one empty, unknown cell. To estimate the unobserved cell counts, previous approaches assume independence of the two diagnostic tests and use specific models, including the special mixture model of Walter or unconstrained capture–recapture estimates. Often, as is also demonstrated in this article by means of a simple test, the independence of the two screening tests is not supported by the data. Two new estimators are suggested that allow associations between the screening tests, although the form of association must be assumed to be homogeneous over disease status. These estimators are modifications of the simple capture–recapture estimator and easy to construct. The estimators are investigated for several screening studies with fully evaluated disease status in which the superior behavior of the new estimators compared to the previous conventional ones can be shown. Finally, the performance of the new estimators is compared with maximum likelihood estimators, which are more difficult to obtain in these models. The results indicate that the loss of efficiency is minor.
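
For context, the conventional estimator that the new estimators modify fills the unobserved both-negative cell by assuming independence of the two tests, so the odds ratio of the 2 × 2 table is taken to be 1. A minimal sketch with hypothetical counts follows; the article's association-allowing estimators are not reproduced here.

```python
def missing_cell_independence(n_pp, n_pn, n_np):
    """Estimate the unobserved both-negative count n_nn of a screening 2 x 2
    table, assuming the two tests operate independently.

    Under independence the table's odds ratio is 1, giving the simple
    capture-recapture estimate n_nn = n_pn * n_np / n_pp.
    """
    if n_pp == 0:
        raise ValueError("the both-positive cell must be non-zero")
    return n_pn * n_np / n_pp

# Hypothetical counts for one disease status: 40 positive on both tests,
# 25 positive only on test 1, 15 positive only on test 2.
n_nn_hat = missing_cell_independence(n_pp=40, n_pn=25, n_np=15)
print(n_nn_hat, 40 + 25 + 15 + n_nn_hat)   # estimated missing cell and total
```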

Relevance: 90.00%

Abstract:

The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
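
Under the Poisson assumption that the abstract recommends for practical use, an aliquot receiving a fraction p of a sample containing m infectious units is non-infectious with probability exp(−m·p), which yields a simple likelihood to maximise. The sketch below uses a hypothetical dilution design and scipy's bounded scalar minimiser; it is not the paper's exact likelihood based on a discrete number of units.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(m, fractions, n_tested, n_infectious):
    """Negative log-likelihood for m infectious units in the original sample.

    fractions[i]    : fraction of the original sample in one aliquot at level i
    n_tested[i]     : number of aliquots tested at level i
    n_infectious[i] : number of those aliquots found infectious
    """
    fractions = np.asarray(fractions, dtype=float)
    n_pos = np.asarray(n_infectious, dtype=float)
    n_neg = np.asarray(n_tested, dtype=float) - n_pos
    p_neg = np.exp(-m * fractions)        # aliquot receives no infectious unit
    return -np.sum(n_pos * np.log(1.0 - p_neg) + n_neg * np.log(p_neg))

# Hypothetical assay: 8 aliquots at each of three tenfold dilution levels.
fractions = [1e-2, 1e-3, 1e-4]
n_tested = [8, 8, 8]
n_infectious = [8, 5, 1]

res = minimize_scalar(neg_log_lik, bounds=(1.0, 1e4), method="bounded",
                      args=(fractions, n_tested, n_infectious))
print(f"MLE of the number of infectious units: {res.x:.0f}")
```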

Relevance: 90.00%

Abstract:

The capacity of the surface glycoproteins of enveloped viruses to mediate virus/cell binding and membrane fusion requires a proper thiol/disulfide balance. Chemical manipulation of their redox state using reducing agents or free sulfhydryl reagents affects virus/cell interaction. Conversely, natural thiol/disulfide rearrangements often occur during the cell interaction to trigger fusogenicity, hence the virus entry. We examined the relationship between the redox state of the 20 cysteine residues of the SARS-CoV (severe acute respiratory syndrome coronavirus) Spike glycoprotein S1 subdomain and its functional properties. Mature S1 exhibited ~4 unpaired cysteines, and chemically reduced S1 displaying up to ~6 additional unpaired cysteines still bound ACE2 and enabled fusion. In addition, virus/cell membrane fusion occurred in the presence of sulfhydryl-blocking reagents and oxidoreductase inhibitors. Thus, in contrast to various viruses including HIV (human immunodeficiency virus) examined in parallel, the functions of the SARS-CoV Spike glycoprotein exhibit a significant and surprising independence of redox state, which may contribute to the wide host range of the virus. These data suggest clues for molecularly engineering vaccine immunogens.

Relevance: 90.00%

Abstract:

There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
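
The estimation strategy described, minimising a misfit cost function with a genetic algorithm, can be illustrated generically. The sketch below evolves a small population against a toy two-parameter cost whose parameters are strongly correlated, mimicking the ill-conditioning mentioned above; it is not the authors' GWD scheme or their cost function.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(params):
    """Toy stand-in for the GWD misfit: a quadratic valley along a*b = 2,
    so the two parameters are nearly interchangeable (ill-conditioned)."""
    a, b = params
    return (a * b - 2.0) ** 2 + 0.01 * (a - 1.0) ** 2

def genetic_minimize(cost_fn, bounds, pop_size=60, n_gen=200, mut_scale=0.1):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover and Gaussian mutation within box bounds."""
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        fitness = np.array([cost_fn(p) for p in pop])
        # Tournament selection: keep the better of two random individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover between consecutive parents.
        w = rng.random((pop_size, 1))
        children = w * parents + (1.0 - w) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back into the bounds.
        children += rng.normal(0.0, mut_scale * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
    fitness = np.array([cost_fn(p) for p in pop])
    return pop[np.argmin(fitness)]

best = genetic_minimize(cost, bounds=[(0.1, 10.0), (0.1, 10.0)])
print("estimated parameters:", best, "cost:", cost(best))
```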

Relevance: 90.00%

Abstract:

A new record of sea surface temperature (SST) for climate applications is described. This record provides independent corroboration of global variations estimated from SST measurements made in situ. Infrared imagery from Along-Track Scanning Radiometers (ATSRs) is used to create a 20 year time series of SST at 0.1° latitude-longitude resolution, in the ATSR Reprocessing for Climate (ARC) project. A very high degree of independence of in situ measurements is achieved via physics-based techniques. Skin SST and SST estimated for 20 cm depth are provided, with grid cell uncertainty estimates. Comparison with in situ data sets establishes that ARC SSTs generally have bias of order 0.1 K or smaller. The precision of the ARC SSTs is 0.14 K during 2003 to 2009, from three-way error analysis. Over the period 1994 to 2010, ARC SSTs are stable, with better than 95% confidence, to within 0.005 K yr⁻¹ (demonstrated for tropical regions). The data set appears useful for cleanly quantifying interannual variability in SST and major SST anomalies. The ARC SST global anomaly time series is compared to the in situ-based Hadley Centre SST data set version 3 (HadSST3). Within known uncertainties in bias adjustments applied to in situ measurements, the independent ARC record and HadSST3 present the same variations in global marine temperature since 1996. Since the in situ observing system evolved significantly in its mix of measurement platforms and techniques over this period, ARC SSTs provide an important corroboration that HadSST3 accurately represents recent variability and change in this essential climate variable.
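
The "three-way error analysis" behind the quoted 0.14 K precision is commonly implemented as triple collocation of three independent measurement systems. A minimal sketch on synthetic collocations follows; the noise levels and variable names are illustrative, not the ARC matchup statistics.

```python
import numpy as np

def three_way_error(x1, x2, x3):
    """Triple-collocation estimate of the error standard deviation of each of
    three collocated measurements of the same quantity with independent errors:
    var(e_i) = <(x_i - x_j) * (x_i - x_k)> for the three cyclic permutations."""
    v1 = np.mean((x1 - x2) * (x1 - x3))
    v2 = np.mean((x2 - x1) * (x2 - x3))
    v3 = np.mean((x3 - x1) * (x3 - x2))
    return np.sqrt([max(v, 0.0) for v in (v1, v2, v3)])

# Synthetic collocations: a common "true" SST plus independent noise per system.
rng = np.random.default_rng(2)
truth = 15.0 + rng.normal(0.0, 1.0, 100_000)
sat_a = truth + rng.normal(0.0, 0.14, truth.size)   # e.g. a satellite retrieval
buoy  = truth + rng.normal(0.0, 0.20, truth.size)   # e.g. drifting buoys
sat_b = truth + rng.normal(0.0, 0.30, truth.size)   # e.g. a second satellite
print(three_way_error(sat_a, buoy, sat_b))          # recovers roughly 0.14, 0.20, 0.30
```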

Relevance: 90.00%

Abstract:

We present a new coefficient-based retrieval scheme for estimation of sea surface temperature (SST) from the Along Track Scanning Radiometer (ATSR) instruments. The new coefficients are banded by total column water vapour (TCWV), obtained from numerical weather prediction analyses. TCWV banding reduces simulated regional retrieval biases to < 0.1 K compared to biases ~ 0.2 K for global coefficients. Further, detailed treatment of the instrumental viewing geometry reduces simulated view-angle related biases from ~ 0.1 K down to < 0.005 K for dual-view retrievals using channels at 11 and 12 μm. A novel analysis of trade-offs related to the assumed noise level when defining coefficients is undertaken, and we conclude that adding a small nominal level of noise (0.01 K) is optimal for our purposes. When applied to ATSR observations, some inter-algorithm biases appear as TCWV-related differences in SSTs estimated from different channel combinations. The final step in coefficient determination is to adjust the offset coefficient in each TCWV band to match results from a reference algorithm. This reference uses the dual-view observations at 3.7 and 11 μm. The adjustment is independent of in situ measurements, preserving independence of the retrievals. The choice of reference is partly motivated by uncertainty in the calibration of the 12 μm channel of Advanced ATSR. Lastly, we model the sensitivities of the new retrievals to changes in TCWV and changes in true SST, confirming that dual-view SSTs are most appropriate for climatological applications.
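
A coefficient-based retrieval of this kind reduces to a linear combination of brightness temperatures with coefficients looked up by TCWV band. The sketch below shows that structure only: the band edges and coefficients are placeholders rather than the ATSR coefficients, the view-angle dependence is omitted, and the function and variable names are illustrative.

```python
import numpy as np

# Illustrative coefficient table: one row of (offset, a_11um, a_12um) per TCWV band.
tcwv_edges = np.array([0.0, 15.0, 30.0, 45.0, 75.0])   # band edges in kg m-2
coeffs = np.array([
    [1.2, 3.10, -2.15],
    [1.5, 3.30, -2.35],
    [1.9, 3.55, -2.60],
    [2.4, 3.85, -2.90],
])

def retrieve_sst(bt11, bt12, tcwv):
    """Split-window style SST retrieval: a linear combination of the 11 and
    12 um brightness temperatures with coefficients chosen by TCWV band."""
    band = int(np.clip(np.digitize(tcwv, tcwv_edges) - 1, 0, len(coeffs) - 1))
    a0, a11, a12 = coeffs[band]
    return a0 + a11 * bt11 + a12 * bt12

# One hypothetical pixel: brightness temperatures in kelvin, TCWV from NWP fields.
print(retrieve_sst(bt11=290.1, bt12=288.7, tcwv=22.0))
```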

Relevance: 90.00%

Abstract:

In this paper, the monetary policy independence of European nations in the years before European Economic and Monetary Union (EMU) is investigated using cointegration techniques. Daily data are used to assess pairwise relationships between individual EMU nations and ‘lead’ nation Germany, to test the hypothesis that Germany was the dominant European nation prior to EMU. By and large our econometric investigations support this hypothesis, and lead us to conclude that the only European nation to lose monetary policy independence in the light of monetary union was Germany. Our results have important policy implications. Given that the loss of monetary policy independence is generally viewed as the main cost of monetary unification, our findings suggest a reconsideration of the costs and benefits of monetary integration. A country can only lose what it has, and in Europe the countries that joined EMU — save Germany — apparently did not have much to lose, at least not in terms of monetary independence. Instead, they actually gained monetary policy influence by getting a seat in the ECB's governing council, which is responsible for setting interest-rate policy in the euro area.
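
The pairwise relationships described can be examined with an Engle-Granger cointegration test, one standard implementation of such cointegration techniques. The sketch below runs it on simulated daily interest-rate series; the data and variable names (a German anchor rate and a second country's rate) are hypothetical, not the study's dataset or exact specification.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(3)

# Simulated daily short-term rates: a German "anchor" random walk and a second
# country's rate that tracks it with only stationary deviations.
n_days = 2000
de_rate = 5.0 + np.cumsum(rng.normal(0.0, 0.02, n_days))
other_rate = 0.3 + de_rate + rng.normal(0.0, 0.05, n_days)

t_stat, p_value, _ = coint(other_rate, de_rate)
print(f"Engle-Granger t-statistic: {t_stat:.2f}, p-value: {p_value:.4f}")
# A small p-value indicates the two series share a common stochastic trend,
# consistent with the second country importing German monetary policy.
```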

Relevance: 90.00%

Abstract:

Svalgaard (2014) has recently pointed out that the calibration of the Helsinki magnetic observatory’s H component variometer was probably in error in published data for the years 1866–1874.5 and that this makes the interdiurnal variation index based on daily means, IDV(1d) (Lockwood et al., 2013a), and the interplanetary magnetic field strength derived from it (Lockwood et al., 2013b), too low around the peak of solar cycle 11. We use data from the modern Nurmijärvi station, relatively close to the site of the original Helsinki Observatory, to confirm a 30% underestimation in this interval and hence our results are fully consistent with the correction derived by Svalgaard. We show that the best method for recalibration uses the Helsinki Ak(H) and aa indices and is accurate to ±10%. This makes it preferable to recalibration using either the sunspot number or the diurnal range of geomagnetic activity, which we find to be accurate to ±20%. In the case of Helsinki data during cycle 11, the two recalibration methods produce very similar corrections which are here confirmed using newly digitised data from the nearby St Petersburg observatory and also using declination data from Helsinki. However, we show that the IDV index is, compared to later years, too similar to sunspot number before 1872, revealing that the independence of the two data series has been lost; either because the geomagnetic data used to compile IDV have been corrected using sunspot numbers, or vice versa, or both. We present corrected data sequences for both the IDV(1d) index and the reconstructed IMF (interplanetary magnetic field). We also analyse the relationship between the derived near-Earth IMF and the sunspot number and point out the relevance of the prior history of solar activity, in addition to the contemporaneous value, to estimating any “floor” value of the near-Earth interplanetary field.
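
A recalibration of this general kind, scaling a suspect index so that it matches a trusted reference over an interval where both are considered reliable, reduces to estimating a single multiplicative factor. The sketch below is only a generic illustration with synthetic series; it does not reproduce the Ak(H)/aa-based procedure or its uncertainty analysis.

```python
import numpy as np

def calibration_factor(suspect, reference):
    """Least-squares scale factor mapping a suspect index onto a reference
    index (zero-intercept fit over an interval where both are trusted)."""
    suspect, reference = np.asarray(suspect), np.asarray(reference)
    return np.sum(suspect * reference) / np.sum(suspect ** 2)

# Synthetic annual index values: the "suspect" series is about 30% too low.
rng = np.random.default_rng(4)
reference = rng.uniform(5.0, 25.0, 40)                    # e.g. an aa-based index
suspect = reference / 1.3 + rng.normal(0.0, 0.5, 40)      # miscalibrated series
k = calibration_factor(suspect, reference)
print(f"estimated scale factor: {k:.2f}")                 # close to 1.3
recalibrated = k * suspect
```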

Relevance: 90.00%

Abstract:

Based on a combined internet and mail survey in Germany, the independence of indicators of trust in public authorities from indicators of attitudes toward genetically modified (GM) food is tested. Despite evidence of a link between trust indicators on the one hand and the evaluation of benefits and perceived likelihoods of risks on the other, correlation with other factors is found to be moderate on average. But the trust indicators exhibit only a moderate relation with the respondents’ preference for either sole public control or cooperation of public and private bodies in the monitoring of GM food distribution. Instead, age and location in either the New or the Old Länder are found to be significantly related to such preferences.
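
Testing whether two ordinal survey indicators are (in)dependent is typically done with a rank correlation of this sort. The sketch below uses simulated Likert-scale responses; the variable names and data are hypothetical and do not correspond to the survey items analysed in the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

# Hypothetical Likert-scale responses (1-5) for 400 respondents.
trust_authorities = rng.integers(1, 6, 400)
perceived_benefit = np.clip(trust_authorities + rng.integers(-2, 3, 400), 1, 5)

rho, p_value = spearmanr(trust_authorities, perceived_benefit)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# rho measures the monotone association between the two indicators; values
# near zero would support treating them as (approximately) independent.
```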

Relevance: 90.00%

Abstract:

Contemporary research in generative second language (L2) acquisition has attempted to address observable target-deviant aspects of L2 grammars within a UG-continuity framework (e.g. Lardiere 2000; Schwartz 2003; Sprouse 2004; Prévost & White 1999, 2000). With the aforementioned in mind, the independence of pragmatic and syntactic development, independently observed elsewhere (e.g. Grodzinsky & Reinhart 1993; Lust et al. 1986; Pacheco & Flynn 2005; Serratrice, Sorace & Paoli 2004), becomes particularly interesting. In what follows, I examine the resetting of the Null-Subject Parameter (NSP) for English learners of L2 Spanish. I argue that insensitivity to associated discourse-pragmatic constraints on the discursive distribution of overt/null subjects accounts for what appear to be particular errors resulting from syntactic deficits. It is demonstrated that despite target-deviant performance, the majority must have native-like syntactic competence given their knowledge of the Overt Pronoun Constraint (Montalbetti 1984), a principle associated with the Spanish-type setting of the NSP.

Relevance: 90.00%

Abstract:

This special issue is focused on the assessment of algorithms for the observation of Earth’s climate from environmental satellites. Climate data records derived by remote sensing are increasingly a key source of insight into the workings of and changes in Earth’s climate system. Producers of data sets must devote considerable effort and expertise to maximise the true climate signals in their products and minimise effects of data processing choices and changing sensors. A key choice is the selection of algorithm(s) for classification and/or retrieval of the climate variable. Within the European Space Agency Climate Change Initiative, science teams undertook systematic assessment of algorithms for a range of essential climate variables. The papers in the special issue report some of these exercises (for ocean colour, aerosol, ozone, greenhouse gases, clouds, soil moisture, sea surface temperature and glaciers). The contributions show that assessment exercises must be designed with care, considering issues such as the relative importance of different aspects of data quality (accuracy, precision, stability, sensitivity, coverage, etc.), the availability and degree of independence of validation data and the limitations of validation in characterising some important aspects of data (such as long-term stability or spatial coherence). As well as requiring a significant investment of expertise and effort, systematic comparisons are found to be highly valuable. They reveal the relative strengths and weaknesses of different algorithmic approaches under different observational contexts, and help ensure that scientific conclusions drawn from climate data records are not influenced by observational artifacts, but are robust.

Relevance: 80.00%

Abstract:

Assigning probabilities to alleged relationships, given DNA profiles, requires, among other things, calculation of a likelihood ratio (LR). Such calculations usually assume independence of genes: this assumption is not appropriate when the tested individuals share recent ancestry due to population substructure. Adjusted LR formulae, incorporating the coancestry coefficient FST, are presented here for various two-person relationships, and the issue of mutations in parentage testing is also addressed.
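
FST-adjusted LR formulae of this kind are commonly built from the Balding-Nicholls conditional allele probability, which raises the chance of re-sampling an allele already observed in the same subpopulation. A minimal sketch follows; the allele frequency and theta value are illustrative, and the article's full two-person relationship formulae are not reproduced.

```python
def cond_allele_prob(p, theta, m, n):
    """Balding-Nicholls probability that the next sampled allele is of type A,
    given m copies of A among n alleles already sampled from the same
    subpopulation (theta is the coancestry coefficient FST)."""
    return (m * theta + (1.0 - theta) * p) / (1.0 + (n - 1.0) * theta)

p_A, theta = 0.05, 0.02   # population frequency and a typical coancestry value

# Probability of drawing A again after two copies have already been observed,
# with and without the FST adjustment.
print(cond_allele_prob(p_A, theta, m=2, n=2))   # noticeably above p_A
print(cond_allele_prob(p_A, 0.0,   m=2, n=2))   # theta = 0 recovers p_A
```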

Relevance: 80.00%

Abstract:

Although the independence and causality of the association have not been fully established, non-fasting (postprandial) triglyceride (TG) concentrations have emerged as a clinically significant cardiovascular disease (CVD) risk factor. In the current review, findings from three insightful prospective studies in the area, namely the Women's Health Study, the Copenhagen City Heart Study and the Norwegian Counties Study, are discussed. An overview is provided as to the likely etiological basis for the association between postprandial TG and CVD, with a focus on both lipid and non-lipid (inflammation, hemostasis and vascular function) risk factors. The impact of various lifestyle and physiological determinants is considered, in particular genetic variation and meal fat composition. Furthermore, although data are limited, some information is provided as to the relative and interactive impact of a number of modulators of lipemia. It is evident that, relative to age, gender and body mass index (known modulators of postprandial lipemia), the contribution of identified gene variants to the heterogeneity observed in the postprandial response is likely to be relatively small. Finally, we highlight the need for the development of a standardised ‘fat tolerance test’ for use in clinical trials, to allow the integration and comparison of data from individual studies.