82 results for contour tracing


Relevance:

10.00%

Publisher:

Abstract:

Sediment and P inputs to freshwaters from agriculture are a major problem in the United Kingdom (UK). This study investigated mitigation options for diffuse pollution losses from arable land. Field trials were undertaken at the hillslope scale over three winters at three UK sites with silt (Oxyaquic Hapludalf), sand (Udic Haplustept), and clay (Typic Haplaquept) soils. None of the mitigation treatments was effective in every year trialled, but each showed overall average reductions in losses. Over five site years, breaking up the compaction in tramlines (tractor wheel tracks) using a tine reduced losses of sediment and P to levels similar to those observed from areas without tramlines, with an average reduction in P loss of 1.06 kg TP ha⁻¹. Compared to traditional plowing, TP losses under minimum tillage were reduced by 0.30 kg TP ha⁻¹ over five site years, TP losses under contour cultivation were reduced by 0.30 kg TP ha⁻¹ over two site years, and TP losses using in-field barriers were reduced by 0.24 kg TP ha⁻¹ over two site years. In one site year, reductions in losses due to crop residue incorporation were not significant. Each of the mitigation options trialled is associated with either a small cost at the farm scale of up to £5 ha⁻¹ or with cost savings. The results indicate that each of the treatments has the potential to be a cost-effective mitigation option, but that tramline management is the most promising treatment, because tramlines dominate sediment and P transfer in surface runoff from arable hillslopes.
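As a rough illustration of the cost-effectiveness arithmetic behind this conclusion, the sketch below divides the quoted upper-bound cost of £5 ha⁻¹ by each treatment's average TP reduction to get a cost per kilogram of TP avoided. Assigning the same £5 ha⁻¹ to every option is an assumption (the abstract says costs range up to £5 ha⁻¹ or are actually savings), so the figures are indicative only.

```python
# Indicative cost per kg of total phosphorus (TP) avoided, using the average
# reductions quoted above and the abstract's upper-bound cost of £5/ha.
reductions_kg_tp_per_ha = {
    "tramline tine":       1.06,
    "minimum tillage":     0.30,
    "contour cultivation": 0.30,
    "in-field barriers":   0.24,
}
cost_gbp_per_ha = 5.0  # assumed worst-case cost; some options actually save money

for option, reduction in reductions_kg_tp_per_ha.items():
    print(f"{option:20s} at most £{cost_gbp_per_ha / reduction:5.2f} per kg TP avoided")
```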

Relevance:

10.00%

Publisher:

Abstract:

The concept of an organism's niche is central to ecological theory, but an operational definition is needed that allows both its experimental delineation and interpretation of field distributions of the species. Here we use population growth rate (hereafter, pgr) to define the niche as the set of points in niche space where pgr > 0. If there are just two axes to the niche space, their relationship to pgr can be pictured as a contour map in which pgr varies along the axes in the same way that the height of land above sea level varies with latitude and longitude. In laboratory experiments we measured the pgr of Daphnia magna over a grid of values of pH and Ca²⁺, and so defined its "laboratory niche" in pH-Ca²⁺ space. The position of the laboratory niche boundary suggests that population persistence is only possible above 0.5 mg Ca²⁺/L and between pH 5.75 and pH 9, though more Ca²⁺ is needed at lower pH values. To see how well the measured niche predicts the field distribution of D. magna, we examined relevant field data from 422 sites in England and Wales. Of the 58 colonized water bodies, 56 lay within the laboratory niche. Very few of the sites near the niche boundary were colonized, probably because pgr there is so low that populations are vulnerable to extinction by other factors. Our study shows how the niche can be quantified and used to predict field distributions successfully.
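A minimal sketch of this mapping idea, assuming a hypothetical pgr response surface: evaluate pgr over a pH × Ca²⁺ grid and draw the pgr = 0 contour as the niche boundary. The surface below is invented to echo the limits quoted above (persistence above roughly 0.5 mg Ca²⁺/L and between pH 5.75 and 9, with more Ca²⁺ needed at low pH); it is not the fitted Daphnia model.

```python
# Sketch: pgr surface over a pH x Ca2+ grid; the pgr = 0 contour is the niche
# boundary. The functional form is a placeholder, not the authors' model.
import numpy as np
import matplotlib.pyplot as plt

pH = np.linspace(4.5, 10.0, 200)
ca = np.linspace(0.01, 100.0, 200)               # mg Ca2+/L
PH, CA = np.meshgrid(pH, ca)

pgr = (1.0
       - np.exp(-(PH - 5.75))                    # low-pH limit
       - np.exp(PH - 9.0)                        # high-pH limit
       - (0.5 / CA) * (1.0 + np.exp(6.5 - PH)))  # Ca2+ limit, stricter at low pH

plt.contourf(PH, CA, pgr, levels=20)
plt.contour(PH, CA, pgr, levels=[0.0], colors="k", linewidths=2)  # niche boundary
plt.yscale("log")
plt.xlabel("pH")
plt.ylabel("Ca2+ (mg/L)")
plt.title("pgr contour map; bold line marks pgr = 0 (niche boundary)")
plt.show()
```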

Relevance:

10.00%

Publisher:

Abstract:

Intact, enveloped coronavirus particles vary widely in size and contour, and are thus refractory to study by traditional structural means such as X-ray crystallography. Electron microscopy (EM) overcomes some problems associated with particle variability and has been an important tool for investigating coronavirus ultrastructure. However, EM sample preparation requires that the specimen be dried onto a carbon support film before imaging, collapsing internal particle structure in the case of coronaviruses. Moreover, conventional EM achieves image contrast by immersing the specimen briefly in heavy-metal-containing stain, which reveals some features while obscuring others. Electron cryomicroscopy (cryo-EM) instead employs a porous support film, to which the specimen is adsorbed and flash-frozen. Specimens preserved in vitreous ice over holes in the support film can then be imaged without additional staining. Cryo-EM, coupled with single-particle image analysis techniques, makes it possible to examine the size, structure and arrangement of coronavirus structural components in fully hydrated, native virions. Two virus purification procedures are described.

Relevance:

10.00%

Publisher:

Abstract:

Forecasting the effects of stressors on the dynamics of natural populations requires assessment of the joint effects of a stressor and population density on the population response. The effects can be depicted as a contour map in which the population response, here assessed by population growth rate, varies with stress and density in the same way that the height of land above sea level varies with latitude and longitude. We present the first complete map of this type using as our model Folsomia candida exposed to five different concentrations of the widespread anthelmintic veterinary medicine ivermectin in replicated microcosm experiments lasting 49 days. The concentrations of ivermectin in yeast were 0.0, 6.8, 28.83, 66.4 and 210.0 mg/L wet weight. Increasing density and chemical concentration both significantly reduced the population growth rate of Folsomia candida, in part through effects on food consumption and fecundity. The interaction between density and ivermectin concentration was "less-than-additive," implying that at high density populations were able to compensate for the effects of the chemical. This result demonstrates that regulatory protocols carried out at low density (as in most past experiments) may seriously overestimate effects in the field, where densities are locally high and populations are resource limited (e.g., in feces of livestock treated with ivermectin).
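The "less-than-additive" conclusion can be read off a regression with an interaction term. The sketch below, on synthetic data only, fits pgr against density, concentration and their product: with both main effects negative, a positive interaction coefficient is the signature of high-density populations partially compensating for the chemical. It illustrates the analysis idea, not the authors' statistical model.

```python
# Fit pgr ~ density + concentration + density*concentration on synthetic data;
# a positive interaction coefficient indicates a less-than-additive joint effect.
import numpy as np

rng = np.random.default_rng(0)
density = rng.uniform(5.0, 200.0, 300)
conc = rng.choice([0.0, 6.8, 28.83, 66.4, 210.0], 300)   # mg/L, as in the abstract
pgr = (0.25 - 0.0010 * density - 0.0009 * conc
       + 3e-6 * density * conc                            # built-in compensation
       + rng.normal(0.0, 0.02, 300))

X = np.column_stack([np.ones_like(density), density, conc, density * conc])
beta, *_ = np.linalg.lstsq(X, pgr, rcond=None)
print("intercept, density, conc, density*conc:", np.round(beta, 6))
# The recovered density*conc term (~3e-6 > 0) is the less-than-additive signature.
```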

Relevance:

10.00%

Publisher:

Abstract:

1. To understand population dynamics in stressed environments it is necessary to join together two classical lines of research. Population responses to environmental stress have been studied at low density in life table response experiments. These show how the population's growth rate (pgr) at low density varies in relation to levels of stress. Population responses to density, on the other hand, are based on examination of the relationship between pgr and population density.
2. The joint effects of stress and density on pgr can be pictured as a contour map in which pgr varies with stress and density in the same way that the height of land above sea level varies with latitude and longitude. Here a microcosm experiment is reported that compared the joint effects of zinc and population density on the pgr of the springtail Folsomia candida (Collembola).
3. Our experiments allowed the plotting of a complete map of the effects of density and a stressor on pgr. Particularly important was the position of the pgr = 0 contour, which suggested that carrying capacity varied little with zinc concentration until toxic levels were reached.
4. This prediction accords well with observations of population abundance in the field. The method also allowed us to demonstrate, simultaneously, hormesis, toxicity, an Allee effect and density dependence.
5. The mechanisms responsible for these phenomena are discussed. As zinc is an essential trace element, the initial increase in pgr is probably a consequence of dietary zinc deficiency. The Allee effect may be attributed to productivity of the environment increasing with density at low density. Density dependence is a result of food limitation.
6. Synthesis and applications. We illustrate a novel solution based on mapping a population's growth rate in relation to stress and population density. Our method allows us to demonstrate, simultaneously, hormesis, toxicity, an Allee effect and density dependence in an important ecological indicator species. We hope that the approach followed here will prove to have general applicability, enabling predictions of field abundance to be made from estimates of the joint effects of stressors and density on population growth rate.
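One practical use of the pgr = 0 contour is reading carrying capacity off the map as a function of stressor level. The sketch below bisects a hypothetical pgr(density, zinc) surface, shaped only to mimic the qualitative features above (a small hormetic boost at trace zinc, toxicity at high zinc, linear density dependence), for the density at which pgr crosses zero; none of the numbers are the measured Folsomia values.

```python
# Read carrying capacity K (density where pgr = 0) from a toy pgr surface.
def pgr(density, zinc):
    hormesis = 0.05 * zinc / (1.0 + zinc)        # small boost: zinc is essential
    toxicity = 0.004 * max(zinc - 50.0, 0.0)     # toxic above ~50 (arbitrary units)
    return 0.3 + hormesis - toxicity - 0.002 * density  # linear density dependence

def carrying_capacity(zinc, lo=0.0, hi=1000.0, tol=1e-6):
    """Bisect for the density where pgr crosses zero (the pgr = 0 contour)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pgr(mid, zinc) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for zn in [0, 10, 50, 100, 150]:
    print(f"zinc {zn:3d}: K ≈ {carrying_capacity(zn):6.1f}")
# K barely moves until zinc becomes toxic, echoing the field observation above.
```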

Relevance:

10.00%

Publisher:

Abstract:

The self-assembly into wormlike micelles of a poly(ethylene oxide)-b-poly(propylene oxide)-b-poly(ethylene oxide) triblock copolymer Pluronic P84 in aqueous salt solution (2 M NaCl) has been studied by rheology, small-angle X-ray and neutron scattering (SAXS/SANS), and light scattering. Measurements of the flow curves by controlled stress rheometry indicated phase separation under flow. SAXS on solutions subjected to capillary flow showed alignment of micelles at intermediate shear rates, although loss of alignment was observed for high shear rates. For dilute solutions, SAXS and static light scattering data on unaligned samples could be superposed over three decades in scattering vector, providing unique information on the wormlike micelle structure over several length scales. SANS data provided information on even shorter length scales, in particular, concerning "blob" scattering from the micelle corona. The data could be modeled based on a system of semiflexible self-avoiding cylinders with a circular cross-section, as described by the wormlike chain model with excluded volume interactions. The micelle structure was compared at two temperatures close to the cloud point (47 °C). The micellar radius was found not to vary with temperature in this region, although the contour length increased with increasing temperature, whereas the Kuhn length decreased. These variations result in an increase of the low-concentration radius of gyration with increasing temperature. This was consistent with dynamic light scattering results and, applying theoretical results from the literature, with an increase in end-cap energy due to changes in hydration of the poly(ethylene oxide) blocks as the temperature is increased.
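The reported trade-off (longer contour length, shorter Kuhn length, larger radius of gyration overall) can be sanity-checked against the ideal wormlike chain. The sketch below evaluates the Benoit-Doty expression for Rg, which neglects the excluded-volume interactions included in the full scattering analysis; the two parameter sets are placeholders, not the fitted P84 values.

```python
# Benoit-Doty radius of gyration of an ideal wormlike chain with contour
# length L and Kuhn length b (= 2 * persistence length). No excluded volume.
import numpy as np

def rg_wormlike(L, b):
    lp = b / 2.0  # persistence length
    rg2 = (lp * L / 3.0 - lp**2 + 2.0 * lp**3 / L
           - 2.0 * (lp**4 / L**2) * (1.0 - np.exp(-L / lp)))
    return np.sqrt(rg2)

# Hypothetical cooler vs warmer micelles: contour length up, Kuhn length down.
print("T_low :  Rg =", round(float(rg_wormlike(L=300.0, b=30.0)), 1), "nm")
print("T_high:  Rg =", round(float(rg_wormlike(L=600.0, b=24.0)), 1), "nm")
# Rg still increases, matching the trend inferred from SAXS/SANS and DLS.
```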

Relevance:

10.00%

Publisher:

Abstract:

Rolling Contact Fatigue (RCF) is one of the main issues affecting, at least initially, the head of the rail; progressively, RCF defects can become very serious as they propagate into the material, with the risk of damaging the rail. In this work, two different non-destructive techniques, infrared thermography (IRT) and fibre optics microscopy (FOM), were used in the inspection of railways for the tracing of defects and deterioration signs. In the first instance, two different approaches (dynamic and pulsed thermography) were used, whilst in the case of FOM, microscopic characterisation of the railway heads and classification of the deterioration/damage on the railways according to the UIC (International Union of Railways) code took place. Results from both techniques are presented and discussed.

Relevance:

10.00%

Publisher:

Abstract:

Ulcerative colitis (UC) is characterised by impairment of the epithelial barrier and tight junction alterations resulting in increased intestinal permeability. UC is less common in smokers, with smoking reported to decrease paracellular permeability. The aim of this study was thus to determine the effect of nicotine, the major constituent in cigarettes, and its metabolites on the integrity of tight junctions in Caco-2 cell monolayers. The integrity of Caco-2 tight junctions was analysed by measuring the transepithelial electrical resistance (TER) and by tracing the flux of the fluorescent marker fluorescein, after treatment with various concentrations of nicotine or nicotine metabolites over 48 h. TER was significantly higher compared to the control for all concentrations of nicotine (0.01-10 µM) at 48 h (p < 0.001), and for 0.01 µM (p < 0.001) and for 0.1 µM and 10 µM nicotine (p < 0.01) at 12 and 24 h. The fluorescein flux results supported those of the TER assay. TER readings for all nicotine metabolites tested were also higher, at 24 and 48 h only (p ≤ 0.01). Western blot analysis demonstrated that nicotine up-regulated the expression of the tight junction proteins occludin and claudin-1 (p < 0.01). Overall, it appears that nicotine and its metabolites, at concentrations corresponding to those reported in the blood of smokers, can significantly improve tight junction integrity and thus decrease epithelial gut permeability. We have shown that in vitro, nicotine appears more potent than its metabolites in decreasing epithelial gut permeability. We speculate that this enhanced gut barrier may be the result of increased expression of claudin-1 and occludin proteins, which are associated with the formation of tight junctions. These findings may help explain the mechanism of action of nicotine treatment, and indeed smoking, in reducing epithelial gut permeability.

Relevance:

10.00%

Publisher:

Abstract:

The acute hippocampal brain slice preparation is an important in vitro screening tool for potential anticonvulsants. Application of 4-aminopyridine (4-AP) or removal of external Mg²⁺ ions induces epileptiform bursting in slices which is analogous to electrical brain activity seen in status epilepticus states. We have developed these epileptiform models for use with multi-electrode arrays (MEAs), allowing recording across the hippocampal slice surface from 59 points. We present validation of this novel approach and analyses using two anticonvulsants, felbamate and phenobarbital, the effects of which have already been assessed in these models using conventional extracellular recordings. In addition to assessing drug effects on commonly described parameters (duration, amplitude and frequency), we describe novel methods using the MEA to assess burst propagation speeds and the underlying frequencies that contribute to the epileptiform activity seen. Contour plots are also used as a method of illustrating burst activity. Finally, we describe hitherto unreported properties of epileptiform bursting induced by 100 µM 4-AP or removal of external Mg²⁺ ions. Specifically, we observed decreases over time in burst amplitude and increases over time in burst frequency in the absence of additional pharmacological interventions. These MEA methods enhance the depth, quality and range of data that can be derived from the hippocampal slice preparation compared to conventional extracellular recordings, and may also uncover additional modes of action that contribute to anti-epileptiform drug effects.
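As a sketch of one of the MEA-derived measures mentioned above, burst propagation speed can be estimated by regressing the burst-onset time at each electrode on electrode position; the magnitude of the fitted time gradient is the inverse speed. The grid below is an idealized 8 × 8 array with simulated onset times (the MEA described above has 59 usable channels), and a real analysis would first detect onsets on each channel.

```python
# Estimate burst propagation speed from per-electrode burst-onset times by
# fitting onset ~ a*x + b*y + c; (a, b) is the slowness vector, speed = 1/|(a, b)|.
import numpy as np

pitch_um = 200.0                              # assumed inter-electrode spacing
positions = np.array([[r, c] for r in range(8) for c in range(8)], float) * pitch_um

true_speed_um_per_s = 50_000.0                # 50 mm/s, hypothetical
direction = np.array([1.0, 0.5]) / np.hypot(1.0, 0.5)
onsets = (positions @ direction / true_speed_um_per_s
          + np.random.normal(0.0, 1e-4, len(positions)))  # jittered onset times (s)

A = np.column_stack([positions, np.ones(len(positions))])
(a, b, c), *_ = np.linalg.lstsq(A, onsets, rcond=None)
print(f"estimated propagation speed ≈ {1.0 / np.hypot(a, b) / 1000.0:.1f} mm/s")
```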

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented by a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on a highly curved part of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented with more points where their curvature is high. The more complex the contour's shape, the greater the number of points required, and correspondingly the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both of them are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on a highly curved part of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented with more points where their curvature is high. The more complex the contour's shape, the greater the number of points required, and correspondingly the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects.
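The clustering property is easy to visualize in two dimensions, where the silhouette points of a curve are the analogue of contour generators. The toy sketch below, which illustrates the geometry rather than the paper's pairing of apparent contours across calibrated images, computes silhouette points of an ellipse for uniformly spaced viewing directions and counts how many land on the highly curved ends.

```python
# 2D toy: silhouette points of an ellipse for uniformly distributed viewing
# directions cluster at the highly curved ends and thin out on the flat sides.
import numpy as np

a, b = 4.0, 1.0                                     # curvature a/b^2 = 4 at the ends
phis = np.linspace(0.0, np.pi, 90, endpoint=False)  # uniform viewing directions

# The silhouette point for view (cos phi, sin phi) is where the ellipse tangent
# (-a sin t, b cos t) is parallel to the viewing direction.
t = np.arctan2(-b * np.cos(phis), a * np.sin(phis))
t = np.concatenate([t, t + np.pi])                  # two silhouette points per view
pts = np.column_stack([a * np.cos(t), b * np.sin(t)])

ends = np.abs(pts[:, 0]) > 0.9 * a                  # outer 10% of x-extent per end
print(f"{ends.sum()} of {len(pts)} points fall on the highly curved ends")
```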

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a method for reconstructing 3D frontier points, contour generators and surfaces of anatomical objects or smooth surfaces from a small number, e.g. 10, of conventional 2D X-ray images. The X-ray images are taken at different viewing directions with full prior knowledge of the X-ray source and sensor configurations. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on a highly curved part of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented with more points where their curvature is high. The more complex the contour's shape, the greater the number of points required, and correspondingly the greater the number of points automatically generated by the proposed method. Given that the number of viewing directions is fixed and the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object. The technique may be used not only in medicine but also in industrial applications.

Relevance:

10.00%

Publisher:

Abstract:

The River Lugg has particular problems with high sediment loads that have resulted in detrimental impacts on ecology and fisheries. A new dynamic, process-based model of hydrology and sediments (INCA-SED) has been developed and applied to the River Lugg system using an extensive data set from 1995-2008. The model simulates sediment sources and sinks throughout the catchment and gives a good representation of the sediment response at 22 reaches along the River Lugg. A key question considered in using the model is the management of sediment sources so that concentrations and bed loads can be reduced in the river system. Altogether, five sediment management scenarios were selected for testing on the River Lugg, including land use change, contour tillage, hedging and buffer strips. Running the model with parameters altered to simulate these five scenarios produced some interesting results. All scenarios achieved some reduction in sediment levels, with the 40% land use change achieving the best result with a 19% reduction. The other scenarios also achieved significant reductions, of between 7% and 9%; buffer strips produced the best of these at close to 9%. The results suggest that if hedge introduction, contour tillage and buffer strips were all applied, sediment reductions would total 24%, considerably improving the current sediment situation. We present a novel cost-effectiveness analysis of our results in which we use the percentage of land removed from production as our cost function. Given the minimal loss of land associated with contour tillage, hedges and buffer strips, we suggest that these management practices are the most cost-effective combination for reducing sediment loads.
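A back-of-envelope version of that cost-effectiveness comparison, using sediment reduction per percentage point of land removed from production, is sketched below. The reductions for the land-use-change and buffer-strip scenarios are quoted above; the remaining reductions are taken from the quoted 7-9% range, and all of the land-take percentages are illustrative assumptions, since the abstract does not state them.

```python
# Sediment reduction per percent of land removed from production (the paper's
# cost function). Land-take values are assumptions for illustration only.
scenarios = {
    # name: (sediment reduction %, assumed land removed from production %)
    "40% land use change": (19.0, 40.0),
    "contour tillage":     (8.0,  0.5),
    "hedges":              (7.0,  1.0),
    "buffer strips":       (9.0,  2.0),
}

for name, (reduction, land_pct) in scenarios.items():
    ratio = reduction / land_pct
    print(f"{name:20s} {ratio:6.1f} % sediment reduction per % land lost")
```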

Relevance:

10.00%

Publisher:

Abstract:

The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and time dependence that results in a cascade in tracer scales. Here, a quantitative study on the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of airmass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3 while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity in the TSF for the two flows is striking and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spinup time, after which wind field truncation errors take over from initialization errors (2–3 days). It is also shown that false detail from the proliferation of finescale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds. The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated and the implications for the modeling of atmospheric chemistry are briefly discussed.
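To make the tracer scale factor concrete: dividing the smallest resolved scale of the advecting winds by the TSF gives the smallest trustworthy scale in the tracer field, so contour advection legitimately generates tracer detail finer than the winds themselves. The wind-field scale and thinning rate below are assumed illustrations, not values from the study.

```python
# Worked example of the TSF and the useful-lifetime bound from the abstract.
tsf = 5.5                   # TSF, wintertime lower stratosphere (from the abstract)
wind_scale_km = 100.0       # assumed smallest resolved scale in the wind field
sigma_per_day = 0.5         # assumed filament thinning rate (day^-1)

tracer_scale_km = wind_scale_km / tsf   # TSF = wind scale / trustworthy tracer scale
lifetime_days = 3.0 / sigma_per_day     # simulations trusted for ~3/sigma days
print(f"trustworthy tracer scale ≈ {tracer_scale_km:.0f} km")
print(f"useful simulation lifetime ≈ {lifetime_days:.0f} days")
```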

Relevance:

10.00%

Publisher:

Abstract:

It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
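A minimal sketch of the algorithmic level of description follows: grow a virtual dendritic tree by sampling segment lengths, diameter tapering and bifurcation decisions from a handful of statistical distributions. The distributions and thresholds here are invented placeholders, not the measured parameter sets used by L-NEURON or ARBORVITAE.

```python
# Toy stochastic dendrite generator: a few sampled parameters yield a complete
# virtual tree, illustrating the data compression/amplification idea above.
import random

random.seed(1)

def grow(diameter, depth=0, max_depth=8):
    """Recursively emit (branch order, length, diameter) segments."""
    if diameter < 0.2 or depth > max_depth:      # stop at thin or deep branches
        return []
    length = max(random.gauss(40.0, 10.0), 1.0)  # segment length (um), clipped
    segments = [(depth, length, diameter)]
    if random.random() < 0.6:                    # bifurcation probability
        ratio = random.uniform(0.6, 0.9)         # daughter/parent diameter ratio
        segments += grow(diameter * ratio, depth + 1, max_depth)
        segments += grow(diameter * ratio, depth + 1, max_depth)
    return segments

tree = grow(diameter=2.0)
print(f"{len(tree)} segments, max branch order {max(d for d, _, _ in tree)}")
```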