999 results for SURFACE-EMITTING-LASERS


Relevance: 20.00%

Abstract:

In May 1999, the European Space Agency (ESA) selected the Earth Explorer Opportunity Soil Moisture and Ocean Salinity (SMOS) mission to obtain global and frequent soil moisture and ocean salinity maps. SMOS' single payload is the Microwave Imaging Radiometer by Aperture Synthesis (MIRAS), an L-band two-dimensional aperture synthesis radiometer with multiangular observation capabilities. At L-band, the brightness temperature sensitivity to the sea surface salinity (SSS) is low: approximately 0.5 K/psu at 20 °C, decreasing to 0.25 K/psu at 0 °C, which is comparable to the sensitivity to wind speed, ~0.2 K/(m/s) at nadir. However, at a given time the sea state depends not only on the local wind, but also on the local wind history and on waves traveling in from far away. The Wind and Salinity Experiment (WISE) 2000 and 2001 campaigns were sponsored by ESA to determine the impact of oceanographic and atmospheric variables on the L-band brightness temperature at vertical and horizontal polarizations. This paper presents the results of the analysis of three nonstationary sea state conditions: a growing sea, a decreasing sea, and the presence of swell. Measured sea surface spectra are compared with theoretical ones computed using the instantaneous wind speed. The differences can be minimized using an "effective wind speed" that makes the theoretical spectrum best match the measured one. The impact on the predicted brightness temperatures is then assessed using the small slope approximation/small perturbation method (SSA/SPM).
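
To make the "effective wind speed" idea concrete, the minimal sketch below fits a single wind-speed parameter so that a model spectrum best matches a measured one. The `model_spectrum` function is a toy placeholder, not the sea-surface spectrum model used in the WISE analysis, and the least-squares-in-log-space misfit is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_spectrum(k, wind_speed):
    """Toy stand-in for a theoretical sea-surface spectrum S(k; U).
    The real analysis uses a physical wind-wave spectrum model; this
    placeholder only mimics the growth of spectral energy with wind."""
    return wind_speed**2 * k**-3 * np.exp(-1.0 / (k * wind_speed**2))

def effective_wind_speed(k, measured, u_bounds=(0.5, 25.0)):
    """Wind speed whose model spectrum best matches the measured one
    (least squares in log space, since spectra span many decades)."""
    def misfit(u):
        return np.sum((np.log(model_spectrum(k, u)) - np.log(measured))**2)
    return minimize_scalar(misfit, bounds=u_bounds, method="bounded").x

# Synthetic check: a "measured" spectrum generated at 8 m/s plus noise
# should be recovered as an effective wind speed near 8 m/s.
k = np.linspace(0.05, 5.0, 200)
measured = model_spectrum(k, 8.0) * np.random.lognormal(0.0, 0.1, k.size)
print(f"effective wind speed ~ {effective_wind_speed(k, measured):.1f} m/s")
```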

Relevance: 20.00%

Abstract:

This paper presents a model of the Stokes emission vector from the ocean surface. The ocean surface is described as an ensemble of facets with Cox and Munk's (1954) Gram-Charlier slope distribution. The study discusses the impact of different up-wind and cross-wind rms slopes, skewness, peakedness, foam cover models and atmospheric effects on the azimuthal variation of the Stokes vector, as well as the limitations of the model. Simulation results compare favorably, both in mean value and azimuthal dependence, with SSM/I data at 53° incidence angle and with JPL's WINDRAD measurements at incidence angles from 30° to 65°, and at wind speeds from 2.5 to 11 m/s.
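
For reference, the sketch below evaluates a Cox-Munk-style Gram-Charlier slope distribution of the kind the facet model builds on. The regression coefficients are the values commonly quoted from Cox and Munk (1954) for a clean surface; sign and axis conventions vary between papers, so this is an illustrative sketch rather than the exact distribution used in the paper.

```python
import numpy as np

def cox_munk_gram_charlier(zx, zy, u):
    """Sea-surface slope PDF in the style of Cox & Munk (1954), clean surface.

    zx, zy : crosswind and upwind slope components
    u      : wind speed [m/s] (nominally at 12.5 m height)
    Coefficients are the commonly quoted Cox-Munk regressions; treat the
    axis/sign conventions as an assumption of this sketch.
    """
    sc2 = 0.003 + 1.92e-3 * u          # crosswind mean-square slope
    su2 = 3.16e-3 * u                  # upwind mean-square slope
    xi, eta = zx / np.sqrt(sc2), zy / np.sqrt(su2)  # normalized slopes
    c21 = 0.01 - 0.0086 * u            # skewness coefficients
    c03 = 0.04 - 0.033 * u
    c40, c22, c04 = 0.40, 0.12, 0.23   # peakedness coefficients
    gauss = np.exp(-0.5 * (xi**2 + eta**2)) / (2 * np.pi * np.sqrt(sc2 * su2))
    corr = (1.0
            - 0.5 * c21 * (xi**2 - 1) * eta
            - c03 / 6.0 * (eta**3 - 3 * eta)
            + c40 / 24.0 * (xi**4 - 6 * xi**2 + 3)
            + c22 / 4.0 * (xi**2 - 1) * (eta**2 - 1)
            + c04 / 24.0 * (eta**4 - 6 * eta**2 + 3))
    return gauss * corr

# PDF value for a level facet at 10 m/s wind speed:
print(cox_munk_gram_charlier(0.0, 0.0, 10.0))
```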

Relevance: 20.00%

Abstract:

A recently developed technique, polarimetric radar interferometry, is applied to the detection of objects buried under surface clutter. An experiment with a fully polarimetric radar was carried out in an anechoic chamber using different frequency bands and baselines. The processed results show the ability of the technique to detect buried plastic mines and to measure their depth, even when their backscatter response is much lower than that of the surface clutter.

Relevance: 20.00%

Abstract:

Thin disk and fiber lasers are new solid-state laser technologies that offer a combination of high beam quality and a wavelength that is easily absorbed by metal surfaces, and they are expected to challenge CO2 and Nd:YAG lasers in the cutting of thick metal sections (thickness greater than 2 mm). This thesis studied the potential of the disk and fiber lasers for cutting applications and the benefits of their better beam quality. The literature review covered the principles of the disk laser, the high-power fiber laser, the CO2 laser and the Nd:YAG laser, as well as the principle of laser cutting. The cutting experiments were made with the disk, fiber and CO2 lasers using nitrogen as an assist gas. The test material was austenitic stainless steel, of sheet thickness 1.3 mm, 2.3 mm, 4.3 mm and 6.2 mm for the disk and fiber laser experiments and 1.3 mm, 1.85 mm, 4.4 mm and 6.4 mm for the CO2 laser experiments. The experiments focused on the maximum cutting speed that still gave an appropriate cut quality; kerf width, cut edge perpendicularity and surface roughness were the characteristics used to analyze the cut quality. Attempts were made to draw conclusions on the influence of high beam quality on the cutting speed and cut quality. For the 1.3 mm and 2.3 mm sheets, the disk and fiber lasers reached very high cutting speeds with good cut quality. Their cutting speeds were lower at 4.3 mm and 6.2 mm sheet thickness, but still considerably higher than the CO2 laser cutting speeds at similar thicknesses. The cut quality at 6.2 mm was, however, not very good for the disk and fiber lasers, although it could probably be improved by proper selection of the cutting parameters.

Relevance: 20.00%

Abstract:

Surface roughness is one of the quality criteria of paper. It is measured with devices that physically probe the paper surface and with optical instruments. These measurements require laboratory conditions, but the paper industry needs faster, directly on-line measurements. Paper surface roughness can be expressed as a single roughness value for a sample; in this work, the sample is instead divided into significant regions and a separate roughness value is computed for each region. Several methods have been used to measure roughness; here, a generally accepted statistical method is used in addition to the distance transform. In paper roughness measurement there has been a need to divide the analyzed sample into regions according to roughness, since such a division makes it possible to delimit the clearly rougher areas of the sample. The distance transform produces regions that are then analyzed. These regions are merged into connected areas with different segmentation methods: algorithms based on the Pairwise Nearest Neighbor (PNN) method and on merging neighboring regions are used, and a split-and-merge approach is also examined. Segmented images are usually validated by human inspection; the approach of this work is to compare the generally accepted statistical method with the segmentation results, a high correlation between the two indicating successful segmentation. The results of the different experiments are compared with hypothesis testing. Two sample sets, measured with OptiTopo and with a profilometer, are analyzed. The starting parameters of the distance transform, varied during the experiments, were the number and location of the seed points; the same parameter changes were applied to all region-merging algorithms. After the distance transform, the correlation was stronger for the profilometer samples than for the OptiTopo samples. With segmentation, the correlation improved more for the OptiTopo samples than for the profilometer samples, and the PNN method gave the best correlation.
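
The pipeline can be sketched on synthetic data: a height map is split into regions grown from seed points (standing in for the distance-transform regions of the thesis), a per-region rms roughness is computed, and the values are validated by correlating them against a reference method, as in the thesis. Seed handling and the PNN merging step are simplified away, and every name and number below is illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import pearsonr

def regionwise_roughness(height_map, n_seeds=50, rng=None):
    """Split a surface height map into nearest-seed regions and return
    the rms roughness of each region (a stand-in for the thesis'
    distance-transform-based region division)."""
    rng = np.random.default_rng(rng)
    seeds = np.zeros(height_map.shape, dtype=int)
    flat = rng.choice(height_map.size, n_seeds, replace=False)
    seeds.ravel()[flat] = np.arange(1, n_seeds + 1)
    # Feature transform: every pixel gets the index of its nearest seed.
    idx = ndimage.distance_transform_edt(seeds == 0,
                                         return_distances=False,
                                         return_indices=True)
    labels = seeds[tuple(idx)]
    return ndimage.labeled_comprehension(
        height_map, labels, np.arange(1, n_seeds + 1),
        lambda h: np.sqrt(np.mean((h - h.mean())**2)), float, np.nan)

# Validate against a reference method by correlating per-region values.
surface = np.random.default_rng(0).normal(size=(256, 256))
rq = regionwise_roughness(surface, rng=1)
reference = rq + np.random.default_rng(2).normal(0, 0.01, rq.size)  # placeholder
r, p = pearsonr(rq, reference)
print(f"correlation r={r:.2f}, p={p:.3g}")
```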

Relevance: 20.00%

Abstract:

Fiber lasers for materials processing have undergone rapid development in the past several years. Because the fiber laser provides a combination of high beam quality and a wavelength that is easily absorbed by metal surfaces, it is expected to challenge the CO2 and Nd:YAG lasers in metal cutting. This thesis studied the performance of fiber laser cutting of mild steel. The literature review introduces the principle of laser cutting and the principle of the fiber laser, including the newest developments in fiber laser cutting technology. Because fiber laser cutting of mild steel is a very young technology, a preliminary test was made to investigate the effect of the cutting parameters on cut quality. The main experiment was then carried out on 3 mm thick S355 steel with oxygen as the assist gas, focusing on the cut quality obtained at the maximum cutting speed and the minimum oxygen pressure. Cut quality was assessed mainly through the kerf width, perpendicularity tolerance, surface roughness and striation patterns. Analysis of the cutting results led to several conclusions. Although the best result obtained in the experiments was not as good as predicted, the overall outcome of the test is acceptable: compared with a CO2 laser, a higher cutting speed was achieved by the fiber laser with very low oxygen pressure. A further improvement of the cut quality should be possible by proper selection of the process parameters, and a follow-up study on mild steel of different thicknesses and on different cut shapes is recommended to characterize the cutting performance more fully.

Relevance: 20.00%

Abstract:

Voltage-gated sodium channels (Navs) are glycoproteins composed of a pore-forming α-subunit and associated β-subunits that regulate the α-subunit's plasma membrane density and biophysical properties. Glycosylation of the Nav α-subunit also directly affects Nav gating; β-subunits and glycosylation thus comodulate Nav α-subunit gating. We hypothesized that β-subunits could directly influence α-subunit glycosylation. Whole-cell patch clamp of HEK293 cells revealed that coexpression of either β1- or β3-subunits shifted the V½ of steady-state activation and inactivation and increased Nav1.7-mediated INa density. Biotinylation of cell surface proteins, combined with the use of deglycosidases, confirmed that Nav1.7 α-subunits exist in multiple glycosylated states. The intracellular α-subunit fraction was found in a core-glycosylated state, migrating at ~250 kDa. At the plasma membrane, in addition to the core-glycosylated form, a fully glycosylated form of Nav1.7 (~280 kDa) was observed. This higher band shifted to an intermediate band (~260 kDa) when β1-subunits were coexpressed, suggesting that the β1-subunit promotes an alternative glycosylated form of Nav1.7. Furthermore, the β1-subunit increased the expression of this alternative glycosylated form, whereas the β3-subunit increased the expression of the core-glycosylated form of Nav1.7. This study describes a novel role for β1- and β3-subunits in the modulation of Nav1.7 α-subunit glycosylation and cell surface expression.

Relevance: 20.00%

Abstract:

The fast development of new technologies such as digital medical imaging has brought about an expansion of brain functional studies. A key methodological issue in these studies is the comparison of neuronal activation across individuals, and in this context the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparison by normalizing subjects' brains to a standard brain; the most widely used standards are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). These registration methods, however, are not precise enough to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-rigid, model-based registration. Contrary to intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks we use six control points, situated bilaterally on Heschl's gyrus, on the motor hand area, and on the sylvian fissure. The evaluation of this model-based approach is performed on the MRI and fMRI images of nine of the eighteen subjects who participated in an earlier study by Maeder et al. Results on the anatomical (MRI) images show that the control points of the deforming brain move to the locations of the reference brain control points, and that the distance between the deforming and the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show a significant variation: six landmarks are evidently not sufficient to produce significant modifications of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration whose main direction will be to improve the registration algorithm by using not a single point as a landmark but many points representing a particular sulcus.
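
A common way to turn a small set of corresponding landmarks into a dense non-rigid deformation, in the spirit of the registration evaluated here, is thin-plate-spline interpolation. The sketch below does this in 2-D with six made-up landmark pairs; the thesis works in 3-D with landmarks on Heschl's gyrus, the motor hand area and the sylvian fissure, and its exact deformation model is not reproduced here, so treat this as an illustration of the landmark-to-deformation step only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks: six control points in the moving (subject)
# image and their target positions in the reference image. These
# coordinates are invented for illustration.
moving = np.array([[30., 40.], [70., 42.], [50., 20.],
                   [28., 80.], [72., 78.], [50., 95.]])
target = np.array([[32., 38.], [68., 44.], [52., 22.],
                   [30., 78.], [70., 80.], [48., 93.]])

# Thin-plate spline: the smoothest interpolant mapping each moving
# landmark exactly onto its target; evaluated over the whole image
# grid it yields a dense, non-rigid deformation field.
warp = RBFInterpolator(moving, target, kernel="thin_plate_spline")

grid = np.stack(np.meshgrid(np.arange(100), np.arange(100),
                            indexing="ij"), axis=-1).reshape(-1, 2)
deformed = warp(grid)            # where each pixel of the moving image lands
displacement = deformed - grid   # dense displacement field
print(displacement.reshape(100, 100, 2).mean(axis=(0, 1)))
```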

Relevance: 20.00%

Abstract:

Conventionally, the calculation of an axial flux permanent magnet machine is done with 3D FEM methods, so that the radius-dependent, and thus non-uniform, structure of the teeth and of the other electrical and magnetic parts of the machine can be taken into consideration. This procedure, however, requires a lot of time and computing resources. This study shows that analytical methods can also perform the calculation successfully. The analytical calculation proceeds as follows: the magnet is first divided into slices, the calculation is carried out for each section individually, and the partial results are then combined into the final result. This method can obviously save a lot of design and computation time. The calculation program models the magnetic and electric circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that possible magnetic saturation of the iron parts is taken into account. The result of the calculation is the torque of the motor, including the torque ripple. The motor geometry, the materials, and either the torque or the pole angle are given, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the machine size, the number of pole pairs, or many other factors. The calculation step and the number of magnet sections are selectable, but the computation time depends strongly on them. The results are compared with measurements on real prototypes. The permanent magnet creates part of the flux in the magnetic circuit; the shape and amplitude of the air-gap flux density depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the magnet is rectangular and has no skew with respect to the stator slots; with a more complicated magnet shape, the calculation has to be done in several sections, and the result becomes more accurate as the number of sections increases. In a radial flux motor, all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet and slot, is modelled with a reluctance network that accounts for the saturation of the iron; several iterations, in which the permeability is updated, therefore have to be performed to obtain the final result. The motor torque is calculated from the instantaneous flux linkage and the stator currents, the flux linkage being the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Because of the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, although semimagnetic slot wedges are used in some cases. At the slot opening faces, the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor; this phenomenon is called cogging.

The flux in the slot opening area differs between the two sides of an opening and between different slot openings, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of an opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that if all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, a considerable cogging torque results. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and the high derivative component is thus spread evenly over the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be realized in many ways: all the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. Some factors limit the deflection. First, the magnets cannot overlap; the magnet shape and its width relative to the pole define the allowable deflection in this case. Second, shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the next increases, reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications to limit the model size and the calculation time. The reluctance network is built for a symmetric distribution; if the magnets are distributed asymmetrically, the flux in the different pole pairs is not exactly the same, and the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model, from one edge to the other) is not strictly correct. Taking this into account in multi-pole-pair machines would require modelling all the poles, in other words the whole machine, in the reluctance network. The error resulting from this simplification is, nevertheless, negligible.
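
The radius dependence described above is easy to state in code: every radial slice of an axial flux machine acts at its own lever arm, so the total torque is the sum of the slice torques. The sketch below assumes a prescribed tangential stress distribution; in the thesis this quantity would come from the reluctance-network solution and the winding currents rather than being given.

```python
import numpy as np

def axial_flux_torque(r_inner, r_outer, n_slices, tangential_stress):
    """Total torque of an axial flux machine as a sum of slice torques.

    tangential_stress(r) : tangential force per unit rotor surface area
                           [N/m^2] at radius r (here a given function).
    Unlike a radial flux machine, every radial slice acts at a different
    lever arm, so T = sum_i sigma(r_i) * A_i * r_i.
    """
    edges = np.linspace(r_inner, r_outer, n_slices + 1)
    r_mid = 0.5 * (edges[:-1] + edges[1:])         # slice mid radii
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)  # annulus areas
    return np.sum(tangential_stress(r_mid) * area * r_mid)

# Example: uniform 20 kN/m^2 tangential stress between 60 mm and 100 mm
# (illustrative values); the analytic result is ~32.8 N*m.
print(axial_flux_torque(0.060, 0.100, 50, lambda r: 2.0e4 * np.ones_like(r)))
```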

Relevance: 20.00%

Abstract:

Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined in this thesis as irregularities. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly: the Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods are shown to differ in generality and computational complexity. The need to detect fine and sparse details is also common in quality inspection, e.g. locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by specifying sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection, and based on it, the use of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup: missing dots are detected in the repeating raster pattern of Heliotest strips, and small surface defects in IGT picking papers.
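
A minimal sketch of the Fourier-domain idea: a repeating pattern concentrates its energy in a few strong coefficients, so notching those out and transforming back leaves mainly the irregular part. The `keep_fraction` knob and the synthetic raster are illustrative choices, not values from the thesis, which develops three methods of differing generality around this principle.

```python
import numpy as np

def irregularity_map(image, keep_fraction=0.002):
    """Separate a regular pattern from irregularities with the FFT.

    A repeating pattern concentrates into a few strong Fourier
    coefficients; zeroing those and transforming back leaves mostly
    the irregular part. keep_fraction is the share of coefficients
    treated as 'regular' (a tuning knob of this sketch).
    """
    spectrum = np.fft.fft2(image)
    magnitude = np.abs(spectrum)
    threshold = np.quantile(magnitude, 1.0 - keep_fraction)
    spectrum[magnitude >= threshold] = 0.0   # notch out the regular peaks
    residual = np.real(np.fft.ifft2(spectrum))
    return np.abs(residual)                  # large where the pattern breaks

# Synthetic raster with one missing dot:
x, y = np.meshgrid(np.arange(256), np.arange(256))
raster = (np.sin(2 * np.pi * x / 8) * np.sin(2 * np.pi * y / 8) > 0.5).astype(float)
raster[120:128, 120:128] = 0.0               # the "missing dot"
resid = irregularity_map(raster)
# The strongest residual should fall inside the 120-128 defect block.
print(np.unravel_index(np.argmax(resid), resid.shape))
```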

Relevance: 20.00%

Abstract:

The objective of this work was to introduce the emerging non-contact spray coating process and compare it to existing coating techniques. Particular emphasis was given to the details of the spraying process of paper coating colour and to the base paper requirements set by the new coating method. Spraying technology itself is nothing new, but the atomisation of paper coating colour is quite unknown to the paper industry, and the differences between the rheology of paints and coating colours make it very difficult to apply the existing spray painting research directly. Based on the trials, some basic conclusions can be drawn. The results of this study suggest that the Brookfield viscosity of a spray coating colour should be as low as possible; at present, a level of 50 mPa·s is regarded as the optimum. For paper quality and coater runnability, the solids content should be as high as possible, but the graininess of the coated paper surface and nozzle wear currently limit the maximum solids content to 60%. Most likely because of the low solids and low viscosity of the coating colour, the low-shear Brookfield viscosity correlates very well with the paper and spray fan qualities; high-shear viscosity is also important, but less significant than the low-shear viscosity. The droplet size should be minimized, which can be helped, besides keeping the Brookfield viscosity low, by using a surfactant or a dispersing agent in the coating colour formula; increasing the spraying pressure at the nozzle also reduces the droplet size. A small droplet size also improves the coating coverage, since hardly any levelling takes place after the impact with the base paper. Because of the lack of shear forces after application, the pigment particles do not orient along the paper surface; the study therefore indicates that, based on present know-how, no quality improvements can be obtained by using platy pigments, whose other disadvantage is a rapid reduction of the nozzle lifetime. Further research into coating colour rheology and nozzle design may change this in the future, but so far only round pigments, such as typical calcium carbonates, can be used in spray coating. The low water retention of spray coating, accentuated by the low solids and low viscosity, challenges the absorption properties of the base paper. The filler level has to be low so as not to increase the number of small pores, which have a great influence on the absorption properties of the base paper; hydrophobic sizing reduces this absorption and prevents binder migration efficiently. High surface roughness and especially poor formation of the base paper deteriorate the spray coated paper properties, whereas pre-calendering of the base paper contributes nothing to the finished paper quality, at least at coating colour solids below 60%. When targeting a standard offset LWC grade, spray coating produces similar quality to film coating, with blade coating remaining on a slightly better level. However, because of the savings in both investment and production costs, spray coating may have an excellent future ahead. The porous nature of the spray coated surface offers an optimum substrate for the coldset printing industry to utilise the potential of high quality papers in their business.

Relevance: 20.00%

Abstract:

This thesis considers nondestructive optical methods for metal surface and subsurface inspection. The main purpose was to study several optical methods in order to find out their applicability to industrial measurements. In laboratory testing, the simplest light scattering approach, measurement of specular reflectance, was used for surface roughness evaluation: the surface roughness, curvature and finishing process of metal sheets were determined from specular reflectance measurements. Using a fixed angle of incidence, the specular reflectance method could be automated for industrial inspection. For defect detection, holographic interferometry and thermography were compared; with either method, relatively small defects in metal plates could be revealed. Holographic techniques have some limitations for industrial measurements, whereas thermography has excellent prospects for on-line inspection, especially with scanning techniques.
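
For the specular reflectance part, a classical smooth-surface (total integrated scatter) relation links the specular loss directly to the rms roughness, which is what makes a fixed-angle measurement attractive for automation. The relation and its inversion below are standard optics rather than taken from the thesis, and the numbers are illustrative.

```python
import numpy as np

def rms_roughness_from_specular(r_spec, r_smooth, wavelength, incidence_deg):
    """Invert the classical smooth-surface scattering relation
        R_spec / R_0 = exp(-(4*pi*sigma*cos(theta)/lambda)^2)
    to estimate the rms roughness sigma. Valid only for sigma << lambda
    (total integrated scatter regime)."""
    theta = np.radians(incidence_deg)
    ratio = r_spec / r_smooth
    return wavelength * np.sqrt(np.log(1.0 / ratio)) / (4 * np.pi * np.cos(theta))

# Example: HeNe laser (633 nm), 20 deg incidence, 15 % specular loss.
sigma = rms_roughness_from_specular(0.85, 1.00, 633e-9, 20.0)
print(f"rms roughness ~ {sigma * 1e9:.0f} nm")  # about 22 nm
```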

Relevance: 20.00%

Abstract:

The optimization of most pesticide and fertilizer applications is based on overall grove conditions. Recently, Wei [9, 10] used a terrestrial LIDAR to measure tree height, width and volume, developing a set of experiments to evaluate the repeatability and accuracy of the measurements and obtaining a coefficient of variation of 5.4% and a relative error of 4.4% in the estimation of the volume, but without real-time capabilities. In this work we propose a measurement system based on a ground laser scanner to estimate the volume of the trees and then extrapolate their foliage surface in real time. Tests with pear trees demonstrated that the relation between the volume and the foliage can be interpreted as linear, with a coefficient of correlation (R) of 0.81, and that the foliar surface can be estimated with an average error of less than 5%.
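
The reported volume-to-foliage relation is a straight-line fit; the sketch below shows the procedure on synthetic numbers (only the fitting method reflects the abstract, the data are made up).

```python
import numpy as np

# Reported relation: foliage area grows roughly linearly with the crown
# volume estimated from the ground laser scanner (R ~ 0.81). The numbers
# below are synthetic; only the fitting procedure is illustrated.
rng = np.random.default_rng(0)
volume = rng.uniform(2.0, 12.0, 40)                     # scanner volume [m^3]
foliage = 1.8 * volume + 3.0 + rng.normal(0, 2.0, 40)   # "measured" area [m^2]

slope, intercept = np.polyfit(volume, foliage, 1)       # least-squares line
r = np.corrcoef(volume, foliage)[0, 1]                  # correlation coefficient
print(f"foliage ~ {slope:.2f} * volume + {intercept:.2f}, R = {r:.2f}")
```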

Relevance: 20.00%

Abstract:

Different anchoring groups have been studied with the aim of covalently binding organic linkers to the surface of alumina ceramic foams. XPS analysis of the ceramic surface suggested that a higher degree of functionalization was achieved with a pyrogallol derivative than with its catechol analogue. The conjugation of organic ligands to the surface of these alumina materials was corroborated by DNP-MAS NMR measurements.

Relevance: 20.00%

Abstract:

This study uses digital elevation models and ground-penetrating radar to quantify the relation between the surface morphodynamics and subsurface sedimentology in the sandy braided South Saskatchewan River, Canada. A unique aspect of the methodology is that both digital elevation model and ground-penetrating radar data were collected from the same locations in 2004, 2005, 2006 and 2007, thus enabling the surface morphodynamics to be tied explicitly to the associated evolving depositional product. The occurrence of a large flood in 2005 also allowed the influence of discharge to be assessed with respect to the process-product relationship. The data demonstrate that the morphology of the study reach evolved even during modest discharges, but more extensive erosion was caused by the large flood. In addition, the study reach was dominated by compound bars before the flood, but switched to being dominated by unit bars during and after the flood. The extent to which the subsurface deposits (the 'product') were modified by the surface morphodynamics (the 'process') was quantified using the changes in radar facies recorded in sequential ground-penetrating radar surveys. These surveys reveal that during the large flood there was an increase in the proportion of facies associated with bar margin accretion and larger dunes. In subsequent years, these facies became truncated and replaced with facies associated with smaller dune sets. This analysis shows that unit bars generally become truncated more laterally than vertically and, thus, they lose the high-angle bar margin deposits and smaller scale bar-top deposits. In general, the only fragments that remain of the unit bars are dune sets, thus making identification of the original unit barform problematic. This novel data set has implications for what may ultimately become preserved in the rock record.