977 results for contour tracing


Relevance:

10.00%

Abstract:

The self-assembly into wormlike micelles of a poly(ethylene oxide)-b-poly(propylene oxide)-b-poly(ethylene oxide) triblock copolymer Pluronic P84 in aqueous salt solution (2 M NaCl) has been studied by rheology, small-angle X-ray and neutron scattering (SAXS/SANS), and light scattering. Measurements of the flow curves by controlled stress rheometry indicated phase separation under flow. SAXS on solutions subjected to capillary flow showed alignment of micelles at intermediate shear rates, although loss of alignment was observed at high shear rates. For dilute solutions, SAXS and static light scattering data on unaligned samples could be superposed over three decades in scattering vector, providing unique information on the wormlike micelle structure over several length scales. SANS data provided information on even shorter length scales, in particular concerning "blob" scattering from the micelle corona. The data could be modeled based on a system of semiflexible self-avoiding cylinders with a circular cross-section, as described by the wormlike chain model with excluded volume interactions. The micelle structure was compared at two temperatures close to the cloud point (47 °C). The micellar radius was found not to vary with temperature in this region, whereas the contour length increased and the Kuhn length decreased with increasing temperature. These variations result in an increase of the low-concentration radius of gyration with increasing temperature. This was consistent with dynamic light scattering results and, applying theoretical results from the literature, agrees with an increase in endcap energy due to changes in hydration of the poly(ethylene oxide) blocks as the temperature is increased.
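
The reported trend (longer contour length but shorter Kuhn length on heating, with a net increase in the radius of gyration) can be checked against the standard Benoit-Doty expression for the radius of gyration of an ideal wormlike chain. A minimal Python sketch, with hypothetical lengths chosen only to illustrate the direction of the effect; the paper's model also includes excluded-volume interactions, which are ignored here:

```python
import numpy as np

def rg_wormlike(L, b):
    """Radius of gyration of an ideal wormlike chain (Benoit-Doty).

    L: contour length; b: Kuhn length (persistence length lp = b/2).
    The excluded-volume swelling included in the paper's model is ignored.
    """
    lp = b / 2.0
    return np.sqrt(L * lp / 3.0 - lp**2
                   + 2.0 * lp**3 / L
                   - 2.0 * lp**4 / L**2 * (1.0 - np.exp(-L / lp)))

# Hypothetical lengths (nm) mimicking the reported trend: on heating, the
# contour length grows and the Kuhn length shrinks, yet Rg still increases.
for label, L, b in [("lower T ", 150.0, 30.0), ("higher T", 400.0, 20.0)]:
    print(f"{label}: L = {L:.0f} nm, b = {b:.0f} nm -> Rg = {rg_wormlike(L, b):.1f} nm")
```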

Relevance:

10.00%

Abstract:

Rolling Contact Fatigue (RCF) is one of the main issues affecting, at least initially, the head of the rail; progressively, RCF defects can become very serious, as they can propagate into the material with the risk of damaging the rail. In this work, two different non-destructive techniques, infrared thermography (IRT) and fibre optics microscopy (FOM), were used in the inspection of railways for the tracing of defects and deterioration signs. In the first instance, two different approaches (dynamic and pulsed thermography) were used, whilst in the case of FOM, microscopic characterisation of the railheads and classification of the deterioration and damage on the rails according to the UIC (International Union of Railways) code took place. Results from both techniques are presented and discussed.
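
As context for the pulsed-thermography approach, defects are typically revealed by their thermal contrast against sound material as the surface cools after the heat pulse. A minimal Python sketch on synthetic frames; the reference region, threshold, and all values are hypothetical, and this is not the paper's exact processing chain:

```python
import numpy as np

# Synthetic stack of IR frames (time, rows, cols) recorded after the heat
# pulse; a small region cools more slowly, mimicking a subsurface defect.
rng = np.random.default_rng(0)
frames = 20.0 + rng.normal(0.0, 0.05, size=(100, 64, 64))
frames[:, 30:34, 30:34] += np.linspace(0.0, 1.5, 100)[:, None, None]

sound = frames[:, :10, :10].mean(axis=(1, 2))   # defect-free reference curve
contrast = frames - sound[:, None, None]        # absolute thermal contrast
peak_contrast = contrast.max(axis=0)            # per-pixel peak over time
defect_mask = peak_contrast > 0.5               # hypothetical threshold (K)
print(f"{defect_mask.sum()} pixels flagged as possible defects")
```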

Relevance:

10.00%

Abstract:

Ulcerative colitis (UC) is characterised by impairment of the epithelial barrier and tight junction alterations resulting in increased intestinal permeability. UC is less common in smokers, and smoking has been reported to decrease paracellular permeability. The aim of this study was thus to determine the effect of nicotine, the major constituent in cigarettes, and its metabolites on the integrity of tight junctions in Caco-2 cell monolayers. The integrity of Caco-2 tight junctions was analysed by measuring the transepithelial electrical resistance (TER) and by tracing the flux of the fluorescent marker fluorescein, after treatment with various concentrations of nicotine or nicotine metabolites over 48 h. TER was significantly higher compared to the control for all concentrations of nicotine (0.01–10 μM) at 48 h (p < 0.001), and for 0.01 μM (p < 0.001) and 0.1 μM and 10 μM nicotine (p < 0.01) at 12 and 24 h. The fluorescein flux results supported those of the TER assay. TER readings for all nicotine metabolites tested were also higher, at 24 and 48 h only (p ≤ 0.01). Western blot analysis demonstrated that nicotine up-regulated the expression of the tight junction proteins occludin and claudin-1 (p < 0.01). Overall, it appears that nicotine and its metabolites, at concentrations corresponding to those reported in the blood of smokers, can significantly improve tight junction integrity and thus decrease epithelial gut permeability. We have shown that in vitro, nicotine appears more potent than its metabolites in decreasing epithelial gut permeability. We speculate that this enhanced gut barrier may be the result of increased expression of claudin-1 and occludin proteins, which are associated with the formation of tight junctions. These findings may help explain the mechanism of action of nicotine treatment and indeed smoking in reducing epithelial gut permeability. (c) 2007 Elsevier Ltd. All rights reserved.
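
For readers unfamiliar with fluorescein flux assays, the measured flux is commonly converted to an apparent permeability coefficient, Papp = (dQ/dt)/(A·C0), so that tighter junctions show up as lower Papp. A minimal sketch with hypothetical numbers; the study reports TER and flux, not necessarily this exact derived quantity:

```python
# Apparent permeability from a flux measurement: Papp = (dQ/dt) / (A * C0).
# All numbers below are hypothetical, chosen only to show the calculation.
dQ_dt = 2.0e-9   # fluorescein appearing in the basolateral chamber (umol/s)
area = 1.12      # monolayer area (cm^2), a common Transwell insert size
c0 = 0.01        # initial apical fluorescein concentration (umol/cm^3)

papp = dQ_dt / (area * c0)
print(f"Papp = {papp:.2e} cm/s")  # tighter junctions -> lower Papp
```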

Relevance:

10.00%

Abstract:

The acute hippocampal brain slice preparation is an important in vitro screening tool for potential anticonvulsants. Application of 4-aminopyridine (4-AP) or removal of external Mg2+ ions induces epileptiform bursting in slices, which is analogous to electrical brain activity seen in status epilepticus states. We have developed these epileptiform models for use with multi-electrode arrays (MEAs), allowing recording across the hippocampal slice surface from 59 points. We present validation of this novel approach and analyses using two anticonvulsants, felbamate and phenobarbital, the effects of which have already been assessed in these models using conventional extracellular recordings. In addition to assessing drug effects on commonly described parameters (duration, amplitude and frequency), we describe novel methods using the MEA to assess burst propagation speeds and the underlying frequencies that contribute to the epileptiform activity seen. Contour plots are also used as a method of illustrating burst activity. Finally, we describe hitherto unreported properties of epileptiform bursting induced by 100 μM 4-AP or removal of external Mg2+ ions. Specifically, we observed decreases over time in burst amplitude and increases over time in burst frequency in the absence of additional pharmacological interventions. These MEA methods enhance the depth, quality and range of data that can be derived from the hippocampal slice preparation compared to conventional extracellular recordings. They may also uncover additional modes of action that contribute to anti-epileptiform drug effects. (C) 2009 Elsevier B.V. All rights reserved.
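
The abstract does not spell out how propagation speeds were computed; one common approach with a planar array is to fit a plane to the burst-onset times across electrodes and invert the gradient (the slowness). A minimal sketch on synthetic onset times, purely illustrative:

```python
import numpy as np

# Synthetic burst-onset times on a 59-electrode array: onset time is modelled
# as a plane t = a*x + b*y + c, and the fitted slowness (a, b) gives the speed.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1.4, size=(59, 2))          # electrode positions (mm)
slowness_true = np.array([8.0, 3.0])              # ms of delay per mm
t_onset = xy @ slowness_true + 5.0 + rng.normal(0.0, 0.2, 59)

A = np.column_stack([xy, np.ones(len(xy))])       # design matrix for the plane
(a, b, c), *_ = np.linalg.lstsq(A, t_onset, rcond=None)
speed_mm_per_ms = 1.0 / np.hypot(a, b)            # 1 mm/ms equals 1 m/s
print(f"estimated propagation speed: {speed_mm_per_ms * 1000.0:.0f} mm/s")
```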

Relevance:

10.00%

Abstract:

This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented as a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater is the number of points required, but this greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object.
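
The method assumes viewing directions uniformly distributed over the viewing sphere. One common way to construct such directions (not prescribed by the paper) is the golden-angle, or Fibonacci, spiral; a minimal sketch:

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform unit vectors on the sphere (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle azimuth steps
    z = 1.0 - 2.0 * (i + 0.5) / n               # evenly spaced heights in [-1, 1]
    r = np.sqrt(1.0 - z**2)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

views = fibonacci_sphere(10)   # e.g. 10 viewing directions around the object
print(np.round(views, 3))
```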

Relevance:

10.00%

Abstract:

This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both of them are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater is the number of points required, but this greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects. (C) 2007 Elsevier B.V. All rights reserved.
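
The curvature-dependent sampling property is easy to reproduce in two dimensions: placing points along a contour with density proportional to curvature concentrates them on sharp parts. A minimal illustration on an ellipse; this is a toy demonstration of the property, not the paper's reconstruction algorithm:

```python
import numpy as np

# Toy 2D demonstration: sample 200 points on an ellipse with arc-length
# density proportional to curvature; they pile up at the sharp ends (+-a, 0).
a, b, n = 3.0, 1.0, 2000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
kappa = a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5  # curvature
speed = np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2)          # |r'(t)|

rng = np.random.default_rng(2)
w = kappa * speed                       # curvature-weighted arc-length density
picks = rng.choice(n, size=200, p=w / w.sum())
x = a * np.cos(t[picks])
print(f"fraction of samples near the sharp ends (|x| > 2.5): {np.mean(np.abs(x) > 2.5):.2f}")
```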

Relevance:

10.00%

Abstract:

This paper describes a method for reconstructing 3D frontier points, contour generators and surfaces of anatomical objects or smooth surfaces from a small number, e.g. 10, of conventional 2D X-ray images. The X-ray images are taken at different viewing directions with full prior knowledge of the X-ray source and sensor configurations. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater is the number of points required, but this greater number of points is automatically generated by the proposed method. Given that the number of viewing directions is fixed and the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface regardless of the size of the surface or the size of the object. The technique may be used not only in medicine but also in industrial applications.
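
The "full prior knowledge of the X-ray source and sensor configurations" amounts to knowing the cone-beam projection geometry: each object point maps to the detector along a straight ray from the point source. A minimal sketch with hypothetical positions and a detector in the plane z = 0:

```python
import numpy as np

# Cone-beam projection of one object point: the detector occupies the plane
# z = 0 and the X-ray point source sits on the +z axis. Positions are
# hypothetical; a real system would also model detector orientation and pitch.
S = np.array([0.0, 0.0, 1000.0])    # source position (mm)
P = np.array([40.0, -25.0, 300.0])  # object point (mm)

ray = P - S
scale = -S[2] / ray[2]              # extend the ray from S until it reaches z = 0
hit = S + scale * ray               # intersection with the detector plane
print(f"detector coordinates: u = {hit[0]:.1f} mm, v = {hit[1]:.1f} mm")
```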

Relevance:

10.00%

Abstract:

The River Lugg has particular problems with high sediment loads that have resulted in detrimental impacts on ecology and fisheries. A new dynamic, process-based model of hydrology and sediments (INCA-SED) has been developed and applied to the River Lugg system using an extensive data set from 1995 to 2008. The model simulates sediment sources and sinks throughout the catchment and gives a good representation of the sediment response at 22 reaches along the River Lugg. A key question considered in using the model is the management of sediment sources so that concentrations and bed loads can be reduced in the river system. Altogether, five sediment management scenarios were selected for testing on the River Lugg, including land use change, contour tillage, hedging and buffer strips. Running the model with parameters altered to simulate these five scenarios produced some interesting results. All scenarios achieved some reduction in sediment levels, with the 40% land use change achieving the best result with a 19% reduction. The other scenarios also achieved significant reductions of between 7% and 9%, with buffer strips producing the best result at close to 9%. The results suggest that if hedge introduction, contour tillage and buffer strips were all applied, sediment reductions would total 24%, considerably improving the current sediment situation. We present a novel cost-effectiveness analysis of our results in which we use the percentage of land removed from production as our cost function. Given the minimal loss of land associated with contour tillage, hedges and buffer strips, we suggest that these management practices are the most cost-effective combination for reducing sediment loads.
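
The cost-effectiveness comparison reduces to simple arithmetic: reduction achieved divided by the percentage of land removed from production. A back-of-envelope sketch in which the per-measure reductions follow the ranges quoted above but the land-take figures are hypothetical placeholders:

```python
# Per-measure sediment reductions follow the ranges quoted above (7-9%, with
# buffer strips close to 9%); the land-take percentages are hypothetical
# placeholders, since the paper uses land removed from production as the cost.
measures = {
    "hedge introduction": (7.0, 0.5),
    "contour tillage": (8.0, 0.0),
    "buffer strips": (9.0, 1.0),
    "40% land use change": (19.0, 40.0),
}

for name, (reduction, land) in measures.items():
    ratio = reduction / land if land else float("inf")
    print(f"{name}: {reduction:.0f}% reduction for {land:.1f}% land -> {ratio:.1f}")

# The abstract combines the three low-cost measures additively: 7 + 8 + 9 = 24%.
combined = sum(r for r, _ in list(measures.values())[:3])
print(f"combined low-cost measures: ~{combined:.0f}% reduction")
```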

Relevance:

10.00%

Abstract:

The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and time dependence that results in a cascade in tracer scales. Here, a quantitative study on the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of airmass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3 while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity in the TSF for the two flows is striking and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spinup time, after which wind field truncation errors take over from initialization errors (2–3 days). It is also shown that false detail from the proliferation of finescale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds. The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated and the implications for the modeling of atmospheric chemistry are briefly discussed.
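
As background to the contour advection technique, a material contour is advected through the wind field while nodes are inserted wherever stretching degrades resolution; contour surgery additionally removes filaments below a cutoff scale. A minimal two-dimensional sketch with an analytic cellular flow standing in for analysed winds (surgery omitted):

```python
import numpy as np

def velocity(p):
    """Steady incompressible cellular flow standing in for analysed winds."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([-np.sin(np.pi * x) * np.cos(np.pi * y),
                            np.cos(np.pi * x) * np.sin(np.pi * y)])

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
pts = np.column_stack([0.5 + 0.2 * np.cos(theta), 0.5 + 0.2 * np.sin(theta)])

dt, max_gap = 0.01, 0.05
for _ in range(300):
    pts = pts + dt * velocity(pts + 0.5 * dt * velocity(pts))  # midpoint step
    gaps = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    wide = np.nonzero(gaps > max_gap)[0]
    if wide.size:   # node insertion keeps stretching filaments resolved
        mids = 0.5 * (pts[wide] + np.roll(pts, -1, axis=0)[wide])
        pts = np.insert(pts, wide + 1, mids, axis=0)

print(f"contour grew from 64 to {len(pts)} nodes under the strain field")
```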

Relevance:

10.00%

Abstract:

It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
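
To make the 'algorithmic description' concrete, a generator of this kind samples a handful of measured parameters (segment length, taper, branching probability) and grows virtual trees recursively. The sketch below is an illustration of the idea with hypothetical distributions, not the actual growth rules of L-NEURON or ARBORVITAE:

```python
import random

# Grow a virtual dendrite by sampling hypothetical 'fundamental' parameter
# distributions. Each row is (id, parent_id, segment_length_um, diameter_um),
# a simplified stand-in for the cylinder records of tracing files.
random.seed(3)
rows = [(1, -1, 0.0, 1.0)]   # the soma-attached root compartment

def grow(parent_id, depth, diameter):
    if depth == 0 or diameter < 0.2:
        return                                        # terminate thin/deep tips
    n_children = 2 if random.random() < 0.6 else 1    # bifurcate or elongate
    for _ in range(n_children):
        length = random.gauss(20.0, 5.0)              # segment length (um)
        d = diameter * random.uniform(0.7, 0.95)      # taper toward the tips
        rows.append((len(rows) + 1, parent_id, length, d))
        grow(len(rows), depth - 1, d)

grow(1, depth=6, diameter=1.0)
print(f"generated a virtual dendrite with {len(rows)} compartments")
```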

Relevance:

10.00%

Abstract:

A visual telepresence system has been developed at the University of Reading which utilizes eye tracing to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of the object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced cue conditions where the orientation of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
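
The link between convergence state and object distance is plain triangulation: with interocular separation I and convergence angle theta between the two lines of sight, the fixation point lies at d = (I/2)/tan(theta/2). A minimal sketch with illustrative values:

```python
import math

# With interocular separation I and symmetric convergence angle theta between
# the two lines of sight, the fixation distance is d = (I / 2) / tan(theta / 2).
# The values below are illustrative.
I = 0.065                            # interocular separation (m)
for theta_deg in (1.0, 2.0, 4.0):    # convergence angle (degrees)
    d = (I / 2.0) / math.tan(math.radians(theta_deg) / 2.0)
    print(f"convergence {theta_deg:.0f} deg -> fixation distance {d:.2f} m")
```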

Relevance:

10.00%

Abstract:

The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed by using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
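
The subsampling procedure can be sketched directly: draw random subsets of the shoreline elevation points, screen each with Moran's I, and keep only subsets without significant spatial dependency for calibration. A minimal sketch on synthetic residuals, using inverse-distance weights and a crude acceptance threshold in place of a formal significance test:

```python
import numpy as np

def morans_i(xy, z):
    """Moran's I with inverse-distance weights (one common choice)."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    w = np.where(d > 0.0, 1.0 / np.maximum(d, 1e-9), 0.0)
    zc = z - z.mean()
    return (len(z) / w.sum()) * (zc @ w @ zc) / (zc @ zc)

rng = np.random.default_rng(4)
xy = rng.uniform(0.0, 5000.0, size=(200, 2))   # flood-margin point locations (m)
z = rng.normal(0.0, 0.3, 200)                  # water-level residuals (m), synthetic

kept = []
for _ in range(50):
    idx = rng.choice(200, size=30, replace=False)
    if abs(morans_i(xy[idx], z[idx])) < 0.1:   # crude stand-in for a significance test
        kept.append(idx)                       # calibrate the model on this subset
print(f"{len(kept)} of 50 subsamples passed the spatial-independence screen")
```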

Relevance:

10.00%

Abstract:

Experiences from the Mitigation Options for Phosphorus and Sediment (MOPS) projects, which aim to determine the effectiveness of measures to reduce pollutant loading from agricultural land to surface waters, have been used to contribute to the findings of a recent paper (Kay et al., 2009, Agricultural Systems, 99, 67–75), which reviewed the efficacy of contemporary agricultural stewardship measures for ameliorating the water pollution problems of key concern to the UK water industry. MOPS1 is a recently completed 3-year research project on three different soil types in the UK, which focused on mitigation options for winter cereals. MOPS1 demonstrated that tramlines can be the major pathway for sediment and nutrient transfer from arable hillslopes, and that although minimum tillage, crop residue incorporation, contour cultivation, and beetle banks also have potential to be cost-effective mitigation options, tramline management is one of the most promising treatments for mitigating diffuse pollution losses, as it was able to reduce sediment and nutrient losses by 72–99% in four out of five site-years trialled. Using information from the MOPS projects, this paper builds on the findings of Kay et al. to provide an updated picture of the evidence available and the immediate needs for research in this area.

Relevance:

10.00%

Abstract:

This essay explores the ways in which the performance of Jewish identity (in the sense both of representing Jewish characters and of writing about those characters’ conscious and unconscious renditions of their Jewishness) is a particular concern (in both senses of the word) for Lorrie Moore. Tracing Moore's representations of Jewishness over the course of her career, from the early story “The Jewish Hunter” through to her most recent novel, A Gate at the Stairs, I argue that it is characterized by (borrowing a phrase from Moore herself) “performance anxiety,” an anxiety that manifests itself in awkward comedy and that can be read both in biographical terms and as an oblique commentary on, or reworking of, the passing narrative, which I call “anti-passing.” Just as passing narratives complicate conventional ethno-racial definitions so Moore's anti-passing narratives, by representing Jews who represent themselves as other to themselves, as well as to WASP America, destabilize the category of Jewishness and, by implication, deconstruct the very notion of ethnic categorization.

Relevance:

10.00%

Abstract:

User-generated content (UGC) is attracting a great deal of interest - some of it effective, some misguided. This article reviews the marketing-related factors that gave rise to UGC, tracing the relevant development of market orientation, social interaction, word of mouth, brand relationships, consumer creativity, co-creation, and customization, largely through the pages of the Journal of Advertising Research over the last 40 (or so) of its 50 years. The authors then discuss the characteristic features of UGC and how they differ from (and are similar to) these concepts. The insights thus gained will help practitioners and researchers understand what UGC is (and is not) and how it should (and should not) be used.