92 results for rotated to zero
Abstract:
Using a flexible chemical box model with full heterogeneous chemistry, intercepts of chemically modified Langley plots have been computed for the 5 years of zenith-sky NO2 data from Faraday in Antarctica (65°S). By using these intercepts as the effective amount in the reference spectrum, drifts in zero of total vertical NO2 were much reduced. The error in zero of total NO2 is ±0.03×10^15 molec cm^-2 from one year to another. This error is small enough to determine trends in midsummer and any variability in denoxification between midwinters. The technique also suggests a more sensitive method for determining N2O5 from zenith-sky NO2 data.
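The Langley-plot intercept idea can be sketched numerically (a minimal illustration with hypothetical values, not the authors' chemical box model): the measured differential slant column grows linearly with the air-mass factor, and the fitted intercept recovers the effective amount in the reference spectrum.

```python
import numpy as np

# Sketch of a Langley-plot intercept fit (hypothetical values): the measured
# differential slant column DSCD = AMF * V - SCD_ref, so a straight-line fit
# of DSCD against the air-mass factor AMF recovers the reference amount from
# the intercept.
rng = np.random.default_rng(0)
amf = np.linspace(2.0, 17.0, 40)          # twilight air-mass factors
V_true = 3.0e15                           # vertical column, molec cm^-2 (illustrative)
scd_ref_true = 1.2e16                     # amount in reference spectrum (illustrative)
dscd = amf * V_true - scd_ref_true + rng.normal(0.0, 2.0e14, amf.size)

slope, intercept = np.polyfit(amf, dscd, 1)
scd_ref = -intercept                      # effective amount in the reference spectrum
```

With the intercept used as the effective reference amount, vertical columns derived from later spectra share a common zero, which is the drift reduction described above.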
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, … contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
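A minimal sketch of the Zelterman estimator discussed above, assuming its standard form based on singleton and doubleton frequencies (illustrative data, not the Bangkok study):

```python
import math
from collections import Counter

def zelterman_estimate(counts):
    """Zelterman population-size estimate from zero-truncated count data.

    counts: observed contact counts (all >= 1).  Only the singleton frequency
    f1 and doubleton frequency f2 are used: lambda_hat = 2*f2/f1, and the
    total size is n / (1 - exp(-lambda_hat)), n being the number observed.
    """
    freq = Counter(counts)
    f1, f2 = freq[1], freq[2]
    n = len(counts)
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

# Illustrative list: 100 people contacted once, 40 twice, 10 three times.
counts = [1] * 100 + [2] * 40 + [3] * 10
n_hat = zelterman_estimate(counts)   # estimate of observed + unobserved total
```

Because only f1 and f2 enter, the estimator ignores the right tail of the count distribution, which is the source of its robustness to unobserved heterogeneity.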
Abstract:
Fixed transaction costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data and the construction of robust methods for mitigating bias have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero and, when this is not the case, imparts a bias to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating this bias using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression and we investigate their consequences using data on milk-market participation in the Ethiopian highlands. (C) 2004 Elsevier B.V. All rights reserved.
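The censoring bias at stake can be illustrated with a small simulation (a sketch under assumed parameters, not the paper's Bayesian procedure): when the latent variable is censored at a positive threshold, a naive regression on the censored sample understates the true slope.

```python
import numpy as np

# Illustrative censored-regression setup: latent supply y* = a + b*x + e is
# only observed when it clears a positive threshold c; observations below c
# are recorded at c.  Assuming the censoring point is zero mis-states this.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 5000)
y_star = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, 5000)   # true slope b = 2
c = 1.5                                   # true (non-zero) censoring point
y = np.where(y_star > c, y_star, c)       # censored sample

# Naive OLS on the censored sample is biased toward zero.
b_ols = np.polyfit(x, y, 1)[0]
```

A correctly specified censored-regression likelihood must use the true threshold c; fixing it at zero, as in the standard Tobit, is the mis-specification whose consequences the paper quantifies.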
Abstract:
Housing in the UK accounts for 30.5% of all energy consumed and is responsible for 25% of all carbon emissions. The UK Government's Code for Sustainable Homes requires all new homes to be zero carbon by 2016. The development and widespread diffusion of low and zero carbon (LZC) technologies is recognised as a key means for housing developers to deliver against this zero-carbon agenda. The innovation challenge of designing and incorporating these technologies into housing developers' standard design and production templates will usher in significant technical and commercial risks. In this paper we report early results from an ongoing Engineering and Physical Sciences Research Council project looking at the innovation logic and trajectory of LZC technologies in new housing. The principal theoretical lens for the research is the socio-technical network approach, which considers actors' interests and interpretative flexibilities of technologies, and how they negotiate and reproduce 'acting spaces' to shape, in this case, the selection and adoption of LZC technologies. The initial findings reveal that the technology networks forming around new housing developments are very complex, involving a range of actors and viewpoints that vary for each development.
Abstract:
This article examines utopian gestures and inaugural desires in two films which became symbolic of the Brazilian Film Revival in the late 1990s: Central Station (1998) and Midnight (1999). Both revolve around the idea of an overcrowded or empty centre in a country trapped between past and future, in which the motif of the zero stands for both the announcement and the negation of utopia. The analysis draws parallels between them and new wave films which also elaborate on the idea of the zero, with examples picked from Italian neo-realism, the Brazilian Cinema Novo and the New German Cinema. In Central Station, the 'point zero', or the core of the homeland, is retrieved in the archaic backlands, where political issues are resolved in the private sphere and the social drama turns into family melodrama. Midnight, in its turn, recycles Glauber Rocha's utopian prophecies in the new millennium's hour zero, when the earthly paradise represented by the sea is re-encountered by the middle-class character, but not by the poor migrant. In both cases, public injustice is compensated by the heroes' personal achievements, but these do not refer to the real nation, its history or society. Their utopian breadth, based on nostalgia, citation and genre techniques, is of a virtual kind, attuned to cinema only.
Abstract:
The response of a uniform horizontal temperature gradient to prescribed fixed heating is calculated in the context of an extended version of surface quasigeostrophic dynamics. It is found that for zero mean surface flow and weak cross-gradient structure the prescribed heating induces a mean temperature anomaly proportional to the spatial Hilbert transform of the heating. The interior potential vorticity generated by the heating enhances this surface response. The time-varying part is independent of the heating and satisfies the usual linearized surface quasigeostrophic dynamics. It is shown that the surface temperature tendency is a spatial Hilbert transform of the temperature anomaly itself. It then follows that the temperature anomaly is periodically modulated with a frequency proportional to the vertical wind shear. A strong local bound on wave energy is also found. Reanalysis diagnostics are presented that indicate consistency with key findings from this theory.
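The phase-rotation property behind the periodic modulation can be checked numerically (an illustrative sketch, not the paper's reanalysis diagnostics): the Hilbert transform shifts a single Fourier mode by a quarter wavelength, and applying it twice negates the mode, so a tendency proportional to the Hilbert transform rotates the mode's phase at a fixed frequency.

```python
import numpy as np
from scipy.signal import hilbert

# One spatial mode on a periodic domain; the spatial Hilbert transform is the
# imaginary part of the analytic signal computed by scipy.signal.hilbert.
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
theta = np.cos(3 * x)                      # temperature anomaly, wavenumber 3
H_theta = np.imag(hilbert(theta))          # Hilbert transform -> sin(3x)

# Applying the transform twice gives -theta, so an evolution of the form
# d(theta)/dt = c * H[theta] is a phase rotation: theta -> cos(3x - c*t),
# i.e. a periodic modulation at frequency c (here, proportional to the shear).
H2_theta = np.imag(hilbert(H_theta))
```

This is the discrete analogue of the statement that the surface temperature tendency being the Hilbert transform of the anomaly implies periodic modulation at a frequency set by the vertical wind shear.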
Abstract:
Under global warming, the predicted intensification of the global freshwater cycle will modify the net freshwater flux at the ocean surface. Since the freshwater flux maintains ocean salinity structures, changes to the density-driven ocean circulation are likely. A modified ocean circulation could further alter the climate, potentially allowing rapid changes, as seen in the past. The relevant feedback mechanisms and timescales are poorly understood in detail, however, especially at low latitudes where the effects of salinity are relatively subtle. In an attempt to resolve some of these outstanding issues, we present an investigation of the climate response of the low-latitude Pacific region to changes in freshwater forcing. Initiated from the present-day thermohaline structure, a control run of a coupled ocean-atmosphere general circulation model is compared with a perturbation run in which the net freshwater flux is prescribed to be zero over the ocean. Such an extreme experiment helps to elucidate the general adjustment mechanisms and their timescales. The atmospheric greenhouse gas concentrations are held constant, and we restrict our attention to the adjustment of the upper 1,000 m of the Pacific Ocean between 40°N and 40°S, over 100 years. In the perturbation run, changes to the surface buoyancy, near-surface vertical mixing and mixed-layer depth are established within 1 year. Subsequently, relative to the control run, the surface of the low-latitude Pacific Ocean in the perturbation run warms by an average of 0.6°C, and the interior cools by up to 1.1°C, after a few decades. This vertical re-arrangement of the ocean heat content is shown to be achieved by a gradual shutdown of the heat flux due to isopycnal (i.e. along surfaces of constant density) mixing, the vertical component of which is downwards at low latitudes. This heat transfer depends crucially upon the existence of density-compensating temperature and salinity gradients on isopycnal surfaces. 
The timescale of the thermal changes in the perturbation run is therefore set by the timescale for the decay of isopycnal salinity gradients in response to the eliminated freshwater forcing, which we demonstrate to be around 10-20 years. Such isopycnal heat flux changes may play a role in the response of the low-latitude climate to a future accelerated freshwater cycle. Specifically, the mechanism appears to represent a weak negative sea surface temperature feedback, which we speculate might partially shield from view the anthropogenically-forced global warming signal at low latitudes. Furthermore, since the surface freshwater flux is shown to play a role in determining the ocean's thermal structure, it follows that evaporation and/or precipitation biases in general circulation models are likely to cause sea surface temperature biases.
Abstract:
This contribution describes the optimization of chlorine extraction from silicate samples by pyrohydrolysis prior to the precise determination of Cl stable-isotope compositions (δ37Cl) by gas-source, dual-inlet Isotope Ratio Mass Spectrometry (IRMS) on CH3Cl gas. The complete method was checked on three international reference materials for Cl content and two laboratory glass standards. Whole-procedure blanks are lower than 0.5 μmol, corresponding to less than 10 wt.% of most of the sample chloride analysed. In the absence of an international chlorine-isotope rock standard, we report here extracted Cl compared to accepted Cl contents, and reproducibilities on Cl and δ37Cl measurements, for the standard rocks. After extraction, the Cl contents of the three international references compared within error with the accepted values (mean yield = 94 ± 10%), with reproducibilities better than 12% (1σ). The laboratory glass standards - andesite SO100DS92 and phonolite S9(2) - were used specifically to test the effect of chloride amount on the measurements. They gave Cl extraction yields of 100 ± 6% (1σ; n = 15) and 105 ± 8% (1σ; n = 7), respectively, with δ37Cl values of -0.51 ± 0.14‰ and -0.39 ± 0.17‰ (1σ). In summary, for silicate samples with Cl contents between 39 and 9042 ppm, the pyrohydrolysis/HPLC method leads to overall Cl extraction yields of 100 ± 8%, reproducibilities on Cl contents of 7%, and reproducibilities on δ37Cl measurements of 0.12‰ (all 1σ). The method was further applied to ten silicate rocks of various mineralogy and chemistry (meteorite, fresh MORB glasses, altered basalts and serpentinized peridotites) chosen for their large range of Cl contents (70-2156 ppm) and their geological significance. δ37Cl values range between -2.33 and -0.50‰. These strictly negative values contrast with the large range of mainly positive values previously reported for comparable silicate samples, which are shown here to be affected by analytical problems.
Thus we propose a preliminary, revised terrestrial Cl cycle, mainly dominated by negative and zero δ37Cl values. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Testing of the Integrated Nitrogen model for Catchments (INCA) in a wide range of ecosystem types across Europe has shown that the model underestimates N transformation processes to a large extent in northern catchments of Finland and Norway in winter and spring. It is found, and generally assumed, that microbial activity in soils proceeds at low rates at northern latitudes during winter, even at sub-zero temperatures. The INCA model was modified to improve the simulation of N transformation rates in northern catchments, characterised by cold climates and extensive snow accumulation and insulation in winter, by introducing an empirical function to simulate soil temperatures below the seasonal snow pack, and a degree-day model to calculate the depth of the snow pack. The proposed snow-correction factor improved the simulation of soil temperatures at Finnish and Norwegian field sites in winter, although soil temperature was still underestimated during periods with a thin snow cover. Finally, a comparison between the modified INCA version (v. 1.7) and the former version (v. 1.6) was made at the Simojoki river basin in northern Finland and at Dalelva Brook in northern Norway. The new modules did not imply any significant changes in simulated NO3- concentration levels in the streams but improved the timing of simulated higher concentrations. The inclusion of a modified temperature response function and an empirical snow-correction factor improved the flexibility and applicability of the model for climate effect studies.
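The degree-day idea used for the snow-pack module can be sketched as follows (a minimal illustration with hypothetical parameter values, not INCA's calibrated module): snow water equivalent accumulates when air temperature is at or below zero and melts in proportion to positive degree-days.

```python
# Minimal degree-day snowpack sketch (hypothetical parameter values): snow
# water equivalent (SWE) accumulates when air temperature is <= 0 deg C and
# melts in proportion to positive degree-days.
def snow_water_equivalent(air_temp, precip, ddf=3.0):
    """Daily SWE series (mm) from air temperature (deg C) and precipitation (mm).

    ddf is the degree-day factor in mm per deg C per day; rain on melt days
    is assumed to run off (a simplification in this sketch).
    """
    swe, series = 0.0, []
    for t, p in zip(air_temp, precip):
        if t <= 0.0:
            swe += p                       # precipitation falls as snow
        else:
            swe = max(0.0, swe - ddf * t)  # degree-day melt
        series.append(swe)
    return series

temps = [-5, -3, -1, 2, 4, 6, 8]
precs = [4, 6, 2, 0, 0, 0, 0]
depths = snow_water_equivalent(temps, precs)   # [4.0, 10.0, 12.0, 6.0, 0.0, 0.0, 0.0]
```

The simulated pack depth can then drive an empirical insulation function for sub-snow soil temperature, which is the role the snow-correction factor plays in the modified model.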
Abstract:
We examine the stability of lamellar stacks in the presence of an electric field, E-0, applied normal to the lamellae. Calculations are performed with self-consistent field theory (SCFT) supplemented by an exact treatment of the electrostatic energy for linear dielectric materials. The calculations identify a critical electric field, E-0*, beyond which the lamellar stack becomes unstable with respect to undulations. This E-0* rapidly decreases towards zero as the number of lamellae in the stack diverges. Our quantitative predictions for E-0* are consistent with previous experimental measurements by Xu and co-workers.
Abstract:
Empirical orthogonal functions (EOFs) are widely used in climate research to identify dominant patterns of variability and to reduce the dimensionality of climate data. EOFs, however, can be difficult to interpret. Rotated empirical orthogonal functions (REOFs) have been proposed as more physical entities with simpler patterns than EOFs. This study presents a new approach for finding climate patterns with simple structures that overcomes the problems encountered with rotation. The method achieves simplicity of the patterns by using the main properties of EOFs and REOFs simultaneously. Orthogonal patterns that maximise variance subject to a constraint that induces a form of simplicity are found. The simplified empirical orthogonal function (SEOF) patterns, being more 'local', are constrained to have zero loadings outside the main centre of action. The method is applied to winter Northern Hemisphere (NH) monthly mean sea level pressure (SLP) reanalyses over the period 1948-2000. The 'simplified' leading patterns of variability are identified and compared to the leading patterns obtained from EOFs and REOFs. Copyright (C) 2005 Royal Meteorological Society.
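The EOF decomposition that SEOFs build on can be sketched with a singular value decomposition (an illustrative computation on random data, not the SLP reanalyses): the right singular vectors of the centred data matrix are the EOF spatial patterns, and the squared singular values give the variance each explains.

```python
import numpy as np

# EOF sketch via the thin SVD of a centred (time x space) data matrix.
rng = np.random.default_rng(2)
data = rng.normal(size=(120, 50))          # e.g. 120 months x 50 grid points
anom = data - data.mean(axis=0)            # remove the time mean at each point

U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                  # rows are orthonormal spatial patterns
pcs = U * s                                # principal-component time series
var_frac = s**2 / np.sum(s**2)             # fraction of variance per EOF
```

Rotation (REOF) and the simplicity constraint of SEOFs both operate on these patterns; the SEOF approach additionally forces loadings to exactly zero outside each pattern's centre of action while retaining orthogonality.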
Abstract:
Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus - 'distances to market' - are computed from a model in which production and sales are correlated; sales are left-censored at some unobserved thresholds; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time, and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
A cross-sectional study was conducted in Tanga and Iringa regions of Tanzania, and a longitudinal study in Tanga, to investigate tick-control methods and other factors influencing tick attachment to the cattle of smallholder dairy farms. Most farmers reported applying acaricides at intervals of 1-2 weeks, most used acaricides that require on-farm dilution and most farmers incorrectly diluted the acaricides. Rhipicephalus appendiculatus and Boophilus spp. ticks were those most-frequently encountered on the cattle, but few cattle carried ticks of any species (only 13 and 4.6% of tick counts of the cattle yielded adult R. appendiculatus and Boophilus spp., respectively). Animals were more likely to carry one or more adult Boophilus spp. ticks if they also carried one or more R. appendiculatus adults (OR = 14.4, CI = 9.2, 22.5). The use of pour-on acaricides was associated with lower odds that animals carried a R. appendiculatus tick (OR = 0.29, CI = 0.18, 0.49) but higher odds that they carried a Boophilus spp. tick (OR = 2.48, CI = 1.55, 3.97). Animals > 4 months old and those with a recent history of grazing had higher odds of carrying either a R. appendiculatus (ORs = 3.41 and 2.58, CIs = 2.34, 4.98 and 1.80, 3.71), or a Boophilus spp. tick (ORs = 5.70 and 2.18, CIs = 2.34, 4.98 and 1.49, 3.25), but zero-grazing management did not prevent ticks attaching to cattle even when combined with high-frequency acaricide treatments. The odds that animals carried ticks varied amongst the agro-ecological zones (AEZs) and administrative districts where the farms were situated, but there was still considerable residual variation in tick infestation at the farm level. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
Seeds of 15 species of Brassicaceae were stored hermetically in a genebank (at -5 degrees C to -10 degrees C with c. 3% moisture content) for 40 years. Samples were withdrawn at intervals for germination tests. Many accessions showed an increase in ability to germinate over this period, due to loss of dormancy. Nevertheless, some dormancy remained after 40 years' storage and was broken by pre-applied gibberellic acid. The poorest seed survival occurred in Hormatophylla spinosa. Even in this accession the ability to germinate declined by only 7% between 1966 and 2006. Comparison of seeds from 1966 stored for 40 years with those collected anew in 2006 from the original sampling sites, where possible, showed few differences, other than a tendency (7 of 9 accessions) for the latter to show greater dormancy. These results for hermetic storage at sub-zero temperatures and low moisture contents confirm that long-term seed storage can provide a successful technology for ex situ plant biodiversity conservation.
Abstract:
Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations) and are often described incompletely or not at all. Six unified single-equation approaches to predict the metabolizable energy (ME) value of feeds determined in sheep fed at maintenance ME intake were evaluated utilizing 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed. National Academy Press, Washington, DC, USA, 2001], University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches that utilized in vitro digestibility of organic matter and either measured gross energy (GE), or a prediction of GE from component assays, and one UC Davis approach, based upon in vitro gas production and some component assays, differed from both unity and zero, respectively, while this was not the case for the two NRC and one UC Davis approach. However, within these latter three approaches, the goodness of fit (r²) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference between the precision of the NRC procedures was the failure of assayed lignin values to accurately predict 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays, and their costs, as well as the fact that the NRC approach is actually three related equations requiring categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary depending upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs determined in vivo, suggesting that an accurate and precise ME predictive approach, among and within feeds, may remain to be identified. (C) 2004 Elsevier B.V. All rights reserved.