107 results for Hysteretic Down-Sampling
Abstract:
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
Abstract:
The beds of active ice streams in Greenland and Antarctica are largely inaccessible, hindering a full understanding of the processes that initiate, sustain and inhibit fast ice flow in ice sheets. Detailed mapping of the glacial geomorphology of palaeo-ice stream tracks is, therefore, a valuable tool for exploring the basal processes that control their behaviour. In this paper we present a map that shows detailed glacial geomorphology from a part of the Dubawnt Lake Palaeo-Ice Stream bed on the north-western Canadian Shield (Northwest Territories), which operated at the end of the last glacial cycle. The map (centred on 63°55′42″N, 102°29′11″W, approximate scale 1:90,000) was compiled from digital Landsat Enhanced Thematic Mapper Plus satellite imagery and digital and hard-copy stereo-aerial photographs. The ice stream bed is dominated by parallel mega-scale glacial lineations (MSGL), whose lengths exceed several kilometres, but the map also reveals that they have, in places, been superimposed with transverse ridges known as ribbed moraines. The ribbed moraines lie on top of the MSGL and appear to have segmented the individual lineaments. This indicates that formation of the ribbed moraines post-dates the formation of the MSGL. The presence of ribbed moraine in the onset zone of another palaeo-ice stream has been linked to oscillations between cold- and warm-based ice and/or a patchwork of cold-based areas, which led to acceleration and deceleration of ice velocity. Our hypothesis is that the ribbed moraines on the Dubawnt Lake Ice Stream bed are a manifestation of the process that led to ice stream shut-down and may be associated with the process of basal freeze-on. The precise formation of ribbed moraines, however, remains open to debate, and field observation of their structure will provide valuable data for formal testing of models of their formation.
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling design with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and the analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
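The accumulation step in this abstract (building a rough variogram by summing variance components upwards from the shortest lag) can be sketched in a few lines of Python. The paper itself supplies Fortran and REML code; the stage lags and component values below are hypothetical illustrations, not figures from the three surveys.

```python
import numpy as np

# Hypothetical variance components from a 4-stage balanced nested
# design (coarsest stage first), with the separating distance (lag)
# associated with each stage in geometric progression.
lags = np.array([600.0, 200.0, 60.0, 20.0])   # metres, coarse -> fine
components = np.array([0.8, 0.5, 0.3, 0.2])   # estimated variance components

# Accumulate components starting from the shortest lag: the rough
# semivariance at a given lag is the sum of the components of all
# stages with separating distances up to and including that lag.
order = np.argsort(lags)                      # fine -> coarse
rough_variogram = np.cumsum(components[order])
for lag, gamma in zip(lags[order], rough_variogram):
    print(f"lag {lag:6.0f} m  semivariance {gamma:.2f}")
```

The resulting points rise towards the total (sill) variance as the lag grows, which is the "rough variogram for modest effort" the abstract describes.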
Abstract:
Although depressed mood is a normal occurrence in response to adversity in all individuals, what distinguishes those who are vulnerable to major depressive disorder (MDD) is their inability to effectively regulate negative mood when it arises. Investigating the neural underpinnings of adaptive emotion regulation and the extent to which such processes are compromised in MDD may be helpful in understanding the pathophysiology of depression. We report results from a functional magnetic resonance imaging study demonstrating left-lateralized activation in the prefrontal cortex (PFC) when downregulating negative affect in nondepressed individuals, whereas depressed individuals showed bilateral PFC activation. Furthermore, during an effortful affective reappraisal task, nondepressed individuals showed an inverse relationship between activation in left ventrolateral PFC and the amygdala that is mediated by the ventromedial PFC (VMPFC). No such relationship was found for depressed individuals, who instead show a positive association between VMPFC and amygdala. Pupil dilation data suggest that those depressed patients who expend more effort to reappraise negative stimuli are characterized by accentuated activation in the amygdala, insula, and thalamus, whereas nondepressed individuals exhibit the opposite pattern. These findings indicate that a key feature underlying the pathophysiology of major depression is the counterproductive engagement of right prefrontal cortex and the lack of engagement of left lateral-ventromedial prefrontal circuitry important for the downregulation of amygdala responses to negative stimuli.
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations of the spaceborne lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. Therefore a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
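The qualitative conclusions above (uncertainty shrinks with more transects and with shorter cloud/clear segment lengths) can be reproduced with a simple Monte Carlo sketch rather than the paper's analytical Bayesian estimator. The sketch below simulates a 1-D cloud mask as a two-state Markov chain whose transition probabilities set the mean cloud and clear segment lengths; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_transect(n_pixels, mean_cloud_len, mean_clear_len):
    """Binary cloud mask along one transect (two-state Markov chain)."""
    p_leave_cloud = 1.0 / mean_cloud_len       # per-pixel exit probabilities
    p_leave_clear = 1.0 / mean_clear_len
    # Start in the stationary distribution of the chain.
    state = rng.random() < mean_cloud_len / (mean_cloud_len + mean_clear_len)
    mask = np.empty(n_pixels, dtype=bool)
    for i in range(n_pixels):
        mask[i] = state
        if rng.random() < (p_leave_cloud if state else p_leave_clear):
            state = not state
    return mask

def cf_estimate(n_transects, n_pixels=500, mean_cloud_len=20, mean_clear_len=20):
    """Cloud fraction estimated as the fraction of cloudy pixels observed."""
    masks = [simulate_transect(n_pixels, mean_cloud_len, mean_clear_len)
             for _ in range(n_transects)]
    return np.mean(masks)

# Spread of the estimate over many synthetic fields, for 1 vs 8 transects:
spread = {k: np.std([cf_estimate(k) for _ in range(200)]) for k in (1, 8)}
print(spread)   # the spread falls as the number of transects grows
```

Repeating the experiment with shorter `mean_cloud_len`/`mean_clear_len` likewise tightens the spread, since each transect then contains more effectively independent cloud observations.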
Abstract:
The goal of the review is to provide a state-of-the-art survey on sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We will give an introduction to the basic idea behind each method using a simple model problem and then provide some general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch) and the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey about applications to different situations.
Abstract:
Observations show the oceans have warmed over the past 40 yr, with appreciable regional variation and more warming at the surface than at depth. Comparing the observations with results from two coupled ocean-atmosphere climate models [the Parallel Climate Model version 1 (PCM) and the Hadley Centre Coupled Climate Model version 3 (HadCM3)] that include anthropogenic forcing shows remarkable agreement between the observed and model-estimated warming. In this comparison the models were sampled at the same locations as gridded yearly observed data. In the top 100 m of the water column the warming is well separated from natural variability, including both variability arising from internal instabilities of the coupled ocean-atmosphere climate system and that arising from volcanism and solar fluctuations. Between 125 and 200 m the agreement is not significant, but it increases again below this level and remains significant down to 600 m. Analysis of PCM's heat budget indicates that the warming is driven by an increase in net surface heat flux that reaches 0.7 W m⁻² by the 1990s; the downward longwave flux increases by 3.7 W m⁻², which is not fully compensated by an increase in the upward longwave flux of 2.2 W m⁻². Latent and net solar heat fluxes each decrease by about 0.6 W m⁻². The changes in the individual longwave components are distinguishable from the preindustrial mean by the 1920s, but due to cancellation of components, changes in the net surface heat flux do not become well separated from zero until the 1960s. Changes in advection can also play an important role in local ocean warming due to anthropogenic forcing, depending on the location. The observed sampling of ocean temperature is highly variable in space and time, but sufficient to detect the anthropogenic warming signal in all basins, at least in the surface layers, by the 1980s.
Abstract:
During the descent into the recent ‘exceptionally’ low solar minimum, observations have revealed a larger change in solar UV emissions than seen at the same phase of previous solar cycles. This is particularly true at wavelengths responsible for stratospheric ozone production and heating. This implies that ‘top-down’ solar modulation could be a larger factor in long-term tropospheric change than previously believed; many climate models allow only for the ‘bottom-up’ effect of the less-variable visible and infrared solar emissions. We present evidence for long-term drift in solar UV irradiance, which is not found in its commonly used proxies. In addition, we find that both stratospheric and tropospheric winds and temperatures show stronger regional variations with those solar indices that do show long-term trends. A top-down climate effect that shows long-term drift (and may also be out of phase with the bottom-up solar forcing) would change the spatial response patterns and would mean that climate-chemistry models that have sufficient resolution in the stratosphere would become very important for making accurate regional/seasonal climate predictions. Our results also provide a potential explanation of persistent palaeoclimate results showing solar influence on regional or local climate indicators.
Down-regulation of the CSLF6 gene results in decreased (1,3;1,4)-beta-D-glucan in endosperm of wheat
Abstract:
(1,3;1,4)-beta-D-glucan (beta-glucan) accounts for 20% of the total cell walls in the starchy endosperm of wheat (Triticum aestivum) and is an important source of dietary fiber for human nutrition with potential health benefits. Bioinformatic and array analyses of gene expression profiles in developing caryopses identified the CELLULOSE SYNTHASE-LIKE F6 (CSLF6) gene as encoding a putative beta-glucan synthase. RNA interference constructs were therefore designed to down-regulate CSLF6 gene expression and expressed in transgenic wheat under the control of a starchy endosperm-specific HMW subunit gene promoter. Analysis of wholemeal flours using an enzyme-based kit and by high-performance anion-exchange chromatography after digestion with lichenase showed decreases in total beta-glucan of between 30% and 52% and between 36% and 53%, respectively, in five transgenic lines compared to three control lines. The content of water-extractable beta-glucan was also reduced by about 50% in the transgenic lines, and the Mr distribution of the fraction decreased from 79-85 x 10⁴ g/mol in the controls to 36-57 x 10⁴ g/mol in the transgenics. Immunolocalization of beta-glucan in semithin sections of mature and developing grains confirmed that the impact of the transgene was confined to the starchy endosperm with little or no effect on the aleurone or outer layers of the grain. The results confirm that the CSLF6 gene of wheat encodes a beta-glucan synthase and indicate that transgenic manipulation can be used to enhance the health benefits of wheat products.
Abstract:
Pollination of Cyclamen persicum (Primulaceae) was studied in two wild populations in Israel. Buzz-pollination proved to be extremely rare, and was performed only by a large Anthophora bee. The most frequent pollinators were various unspecialized species of thrips (Thysanoptera) and hoverflies (Syrphidae). In the winter-flowering populations the commonest visitor was a small primitive moth, Micropteris elegans (Micropterigidae, Lepidoptera). These moths feed on pollen, copulate and oviposit within the flowers. From the rarity of buzz-pollination it is concluded that the genus Cyclamen co-evolved with large bees capable of buzz-pollination, but lost its original pollinators for unknown historical reasons. The vacant niche was then open to various unspecialized pollen consumers such as thrips, hoverflies and small solitary bees. While these insects are not specific to C. persicum and seem to play only a minor role, the moth strictly relies upon Cyclamen and seems to be the most efficient pollinator.
Abstract:
The soil fauna is often a neglected group in many large-scale studies of farmland biodiversity due to difficulties in extracting organisms efficiently from the soil. This study assesses the relative efficiency of the simple and cheap sampling method of handsorting against Berlese-Tullgren funnel and Winkler apparatus extraction. Soil cores were taken from grassy arable field margins and wheat fields in Cambridgeshire, UK, and the efficiencies of the three methods in assessing the abundances and species densities of soil macroinvertebrates were compared. Handsorting in most cases was as efficient at extracting the majority of the soil macrofauna as the Berlese-Tullgren funnel and Winkler bag methods, although it underestimated the species densities of the woodlice and adult beetles. There were no obvious biases among the three methods for the particular vegetation types sampled and no significant differences in the size distributions of the earthworms and beetles. Proportionally fewer damaged earthworms were recorded in larger (25 x 25 cm) soil cores when compared with smaller ones (15 x 15 cm). Handsorting has many benefits, including targeted extraction, minimum disturbance to the habitat and shorter sampling periods, and may be the most appropriate method for studies of farmland biodiversity when a high number of soil cores need to be sampled. (C) 2008 Elsevier Masson SAS. All rights reserved.