965 results for Surface conditioning methods


Relevance: 30.00%

Abstract:

This project utilized information from ground penetrating radar (GPR) and visual inspection via the pavement profile scanner (PPS) in proof-of-concept trials. GPR tests were carried out on a variety of portland cement concrete pavements and laboratory concrete specimens. Results indicated that the higher frequency GPR antennas were capable of detecting subsurface distress in two of the three pavement sites investigated. However, the GPR systems failed to detect distress in one pavement site that exhibited extensive cracking. Laboratory experiments indicated that moisture conditions in the cracked pavement probably explain the failure. Accurate surveys need to account for moisture in the pavement slab. Importantly, however, once the pavement site exhibits severe surface cracking, there is little need for GPR, which is primarily used to detect distress that is not observed visually. Two visual inspections were also conducted for this study by personnel from Mandli Communications, Inc., and the Iowa Department of Transportation (DOT). The surveys were conducted using an Iowa DOT video log van that Mandli had fitted with additional equipment. The first survey was an extended demonstration of the PPS system. The second survey utilized the PPS with a downward imaging system that provided high-resolution pavement images. Experimental difficulties occurred during both studies; however, enough information was extracted to consider both surveys successful in identifying pavement surface distress. The results obtained from both GPR testing and visual inspections were helpful in identifying sites that exhibited materials-related distress, and both were considered to have passed the proof-of-concept trials. However, neither method can currently diagnose materials-related distress. Both techniques only detected the symptoms of materials-related distress; the actual diagnosis still relied on coring and subsequent petrographic examination. 
Both technologies are currently in rapid development, and the limitations may be overcome as the technologies advance and mature.

Relevance: 30.00%

Abstract:

Road dust is caused by wind entraining fine material from the roadway surface, and the main source of Iowa road dust is attrition of the carbonate rock used as aggregate. The mechanisms of dust suppression can be considered as two processes: increasing the particle size of the surface fines by agglomeration and inhibiting degradation of the coarse material. Agglomeration may occur by capillary tension in the pore water, by surfactants that increase bonding between clay particles, and by cements that bind the mineral matter together. Hygroscopic dust suppressants such as calcium chloride have short durations of effectiveness because capillary tension is their primary agglomeration mechanism. Somewhat more permanent agglomeration results from chemicals that cement smaller particles into a mat or into larger particles. The cements include lignosulfonates, resins, and asphalt products. The duration of the cements depends on their solubility and the climate. The only dust palliative that decreases aggregate degradation is shredded shingles, which act as cushions between aggregate particles. It is likely that synthetic polymers also provide some protection against coarse aggregate attrition. Calcium chloride and lignosulfonates are widely used in Iowa. Both palliatives have a useful duration of about 6 months. Calcium chloride is effective with surface soils of moderate fine content and plasticity, whereas lignin works best with materials that have high fine content and high plasticity indices. Bentonite appears to be effective for up to two years, works well with surface materials having low fines and plasticity, and works well with limestone aggregate. Selection of appropriate dust suppressants should be based on characterization of the road surface material. Estimation of dosage rates for potential palliatives can be based on data from this report, technical reports, information from reliable vendors, or laboratory screening tests.
The selection should include an economic analysis of construction and maintenance costs. The effectiveness of the treatment should be evaluated by any of the field performance measuring techniques discussed in this report. Novel dust control agents that need research for potential application in Iowa include acidulated soybean oil (soapstock), soybean oil, ground-up asphalt shingles, and foamed asphalt. New laboratory evaluation protocols are needed to screen additives for potential effectiveness and to determine dosage. A modification of ASTM D 560 to estimate the freeze-thaw and wet-dry durability of Portland cement stabilized soils would be a starting point for improved laboratory testing of dust palliatives.

Relevance: 30.00%

Abstract:

The goals of this project were to implement several stabilization methods for preventing or mitigating freeze-thaw damage to granular surfaced roads and identify the most effective and economical methods for the soil and climate conditions of Iowa. Several methods and technologies identified as potentially suitable for Iowa were selected from an extensive analysis of existing literature provided with Iowa Highway Research Board (IHRB) Project TR-632. Using the selected methods, demonstration sections were constructed in Hamilton County on a heavily traveled two-mile section of granular surfaced road that required frequent maintenance during previous thawing periods. Construction procedures and costs of the demonstration sections were documented, and subsequent maintenance requirements were tabulated through two seasonal freeze-thaw periods. Extensive laboratory and field tests were performed prior to construction, as well as before and after the two seasonal freeze-thaw periods, to monitor the performance of the demonstration sections. A weather station was installed at the project site and temperature sensors were embedded in the subgrade to monitor ground temperatures up to a depth of 5 ft and determine the duration and depths of ground freezing and thawing. An economic analysis was performed using the documented construction and maintenance costs, and the estimated cumulative costs per square yard were projected over a 20-year timeframe to determine break-even periods relative to the cost of continuing current maintenance practices. Overall, the sections with biaxial geogrid or macadam base courses had the best observed freeze-thaw performance in this study. These two stabilization methods have larger initial costs and longer break-even periods than aggregate columns, but counties should also weigh the benefits of improved ride quality and savings that these solutions can provide as excellent foundations for future paving or surface upgrades.
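The 20-year break-even projection described above can be sketched as a simple cumulative-cost comparison. The sketch below is illustrative only; the cost figures are hypothetical placeholders, not values from the study.

```python
# Hypothetical cumulative-cost projection to find the break-even year of a
# stabilization treatment versus continuing current maintenance practice.
def break_even_year(initial_cost, annual_maintenance,
                    baseline_annual_maintenance, horizon=20):
    """Return the first year the treatment's cumulative cost per square
    yard drops below the do-nothing baseline, or None within the horizon."""
    for year in range(1, horizon + 1):
        treated = initial_cost + annual_maintenance * year
        baseline = baseline_annual_maintenance * year
        if treated <= baseline:
            return year
    return None

# Illustrative numbers only ($/yd^2): a geogrid-type section vs. routine
# maintenance of the existing granular surface.
print(break_even_year(initial_cost=12.0, annual_maintenance=0.5,
                      baseline_annual_maintenance=2.0))  # -> 8
```

A section with larger initial cost but lower yearly maintenance breaks even once the cumulative curves cross, which is exactly the comparison tabulated in the study.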

Relevance: 30.00%

Abstract:

Many transportation agencies maintain grade as an attribute in roadway inventory databases; however, the information is often in an aggregated format. Cross slope is rarely included in large roadway inventories. Accurate methods available to collect grade and cross slope include global positioning systems, traditional surveying, and mobile mapping systems. However, most agencies do not have the resources to utilize these methods to collect grade and cross slope on a large scale. This report discusses the use of LIDAR to extract roadway grade and cross slope for large-scale inventories. Current data collection methods and their advantages and disadvantages are discussed. A pilot study to extract grade and cross slope from a LIDAR data set, including methodology, results, and conclusions, is presented. This report describes the regression methodology used to extract grade and cross slope from three-dimensional surfaces created from LIDAR data and to evaluate their accuracy. The use of LIDAR data to extract grade and cross slope on tangent highway segments was evaluated and compared against grade and cross slope collected using an automatic level for 10 test segments along Iowa Highway 1. Grade and cross slope were measured from a surface model created from LIDAR data points collected for the study area. While grade could be estimated to within 1%, study results indicate that cross slope cannot practically be estimated using a LIDAR-derived surface model.
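The regression step, fitting a plane to LIDAR points so that the two slope coefficients give grade and cross slope, might be sketched as follows. The point cloud here is synthetic and the function name is our own, not the report's code.

```python
import numpy as np

def grade_and_cross_slope(x, y, z):
    """Fit the plane z = a*x + b*y + c to LIDAR points by least squares.
    With x along the travel direction and y across the lane, 'a' is the
    grade and 'b' the cross slope (both as rise/run fractions)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b

# Synthetic tangent segment: 2% grade, -2% cross slope, plus sensor noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 500)        # distance along centreline (m)
y = rng.uniform(-3.6, 3.6, 500)     # offset across the lane (m)
z = 0.02 * x - 0.02 * y + rng.normal(0, 0.01, 500)
a, b = grade_and_cross_slope(x, y, z)
print(round(a * 100, 1), round(b * 100, 1))   # recovers ~2.0 and ~-2.0 (%)
```

With dense, accurate points the fit is benign; the report's finding is that real LIDAR noise over the narrow lane width defeats the cross-slope estimate even when the grade survives.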

Relevance: 30.00%

Abstract:

We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poroelasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
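The Fourier part of the pseudospectral approach, differentiating along the periodic horizontal direction by multiplying by ik in wavenumber space, can be illustrated with a minimal sketch (our own code, not the authors'):

```python
import numpy as np

def fourier_derivative(f, L):
    """Spectral derivative of a periodic sample f on a uniform grid of
    length L: transform, multiply by i*k, and transform back, as used
    for the horizontal direction in pseudospectral wave solvers."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Check against an analytic derivative: d/dx sin(x) = cos(x) on [0, 2*pi).
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(fourier_derivative(np.sin(x), 2.0 * np.pi) - np.cos(x)))
print(err < 1e-10)   # True: spectral accuracy, error near machine precision
```

The vertical Chebyshev operator follows the same transform-differentiate-inverse pattern but on the non-uniform Chebyshev grid, which is what clusters points near interfaces.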

Relevance: 30.00%

Abstract:

The water content dynamics in the upper soil surface during evaporation are a key element in land-atmosphere exchanges. Previous experimental studies have suggested that the soil water content increases at a depth of 5 to 15 cm below the soil surface during evaporation, while the layer in the immediate vicinity of the surface is drying. In this study, the dynamics of water content profiles exposed to solar radiative forcing were monitored at high temporal resolution using dielectric methods, both in the presence and absence of evaporation. A 4-d comparison was made of the moisture content reported in coarse sand in covered and uncovered buckets by a commercial dielectric-based probe (70 MHz ECH2O-5TE, Decagon Devices, Pullman, WA) and by the standard 1-GHz time domain reflectometry method. Both sensors reported a positive correlation between temperature and water content at the 5- to 10-cm depth, most pronounced in the morning during heating and in the afternoon during cooling. Such a positive correlation might have a physical origin induced by evaporation at the surface and redistribution due to liquid water fluxes resulting from the temperature-gradient dynamics within the sand profile at those depths. Our experimental data suggest that the combined effect of surface evaporation and temperature-gradient dynamics should be considered when analyzing experimental soil water profiles. Additional effects related to the frequency of operation and to protocols for temperature compensation of the dielectric sensors may also affect the probes' response during large temperature changes.

Relevance: 30.00%

Abstract:

In anticipation of regulation imposing a numeric turbidity limit at highway construction sites, research was conducted into the most appropriate, affordable methods for surface water monitoring. Sediment concentration in streams may be measured in a number of ways. As part of a project funded by the Iowa Department of Transportation, several testing methods were explored to determine the most affordable, appropriate methods for data collection both in the field and in the lab. The primary purpose of the research was to determine the exchangeability of the acrylic transparency tube for water clarity analysis as compared to the turbidimeter.
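The exchangeability question, whether transparency-tube readings can stand in for turbidimeter measurements, is typically assessed by regressing one against the other. The sketch below uses entirely hypothetical paired readings and a log-log fit as an illustration, not data from the study.

```python
import numpy as np

# Hypothetical paired readings: transparency-tube sight depth (cm) and
# turbidimeter turbidity (NTU). Shallower sight depth = murkier water.
tube_cm = np.array([60.0, 45.0, 30.0, 20.0, 12.0, 8.0])
ntu = np.array([15.0, 25.0, 50.0, 95.0, 210.0, 400.0])

# Fit log(NTU) = a + b*log(depth) and report the slope and R^2.
x, y = np.log(tube_cm), np.log(ntu)
b, a = np.polyfit(x, y, 1)
pred = a + b * x
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(b < 0, r2 > 0.9)   # turbidity rises as transparency depth falls
```

A strong, monotone relationship of this kind is what would justify using the cheaper tube in the field and reserving the turbidimeter for the lab.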

Relevance: 30.00%

Abstract:

This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. ARMA and the DFT are each used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the root mean squared error coefficient, gives a tool to separate significant signals from noise.
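The Ljung-Box Q-statistic mentioned above can be computed directly from the residual autocorrelations; a minimal sketch (our own implementation, not the thesis code):

```python
import numpy as np

def ljung_box_q(residuals, lags):
    """Ljung-Box Q statistic: Q = n(n+2) * sum_k acf(k)^2 / (n - k).
    Large Q indicates autocorrelation left unexplained by the model."""
    x = np.asarray(residuals, dtype=float)
    x = x - x.mean()
    n = x.size
    denom = np.dot(x, x)
    q = 0.0
    for k in range(1, lags + 1):
        rho = np.dot(x[:-k], x[k:]) / denom    # lag-k autocorrelation
        q += rho * rho / (n - k)
    return n * (n + 2.0) * q

rng = np.random.default_rng(1)
white = rng.normal(size=500)                   # a well-fit model's residuals
ar = np.empty(500)
ar[0] = white[0]
for t in range(1, 500):                        # strongly autocorrelated AR(1)
    ar[t] = 0.8 * ar[t - 1] + white[t]
print(ljung_box_q(white, 10) < ljung_box_q(ar, 10))   # True
```

Under the null of white residuals, Q follows a chi-squared distribution in the number of lags, which is what turns the statistic into a lack-of-fit test.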

Relevance: 30.00%

Abstract:

The aim of this work was to test a method for measuring the water vapour tightness of a package that was already in use at the research centre, together with a method developed for the centre in this work. The results obtained were compared with each other and with values measured from the material itself. Food packages were also studied using humidity sensors, a shelf-life test, and transport simulation, and optimisation was used to study the effect of package shape on water vapour tightness. The method developed for measuring the water vapour transmission of a package worked well and its repeatability was good. Compared with the existing method, the new method was faster and required less working time, but both methods gave good values for parallel samples. The humidity sensors made it possible to follow changes in the humidity inside an empty package during storage. The shelf-life test was performed with breakfast cereals, and the best water vapour barrier was provided by packages with an aluminium laminate or metallized OPP layer. In the first transport test the packages were filled with cereals and in the second with noodles; the transport simulation had no effect on the integrity of the inner surfaces of the packages, and thus none on their water vapour tightness. The optimisation compared the volume-to-surface-area ratio of packages of different shapes and the dependence of water vapour tightness on surface area. The optimal package proved to be a sphere, which had the smallest surface area, permitted the highest water vapour transmission per unit of material, and required the least water vapour barrier material.
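The optimisation result, that the sphere minimises surface area for a given volume and hence the barrier material needed, can be checked in a few lines (the 1-litre volume is a hypothetical example):

```python
import math

def sphere_area(volume):
    """Surface area of a sphere holding the given volume."""
    r = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 4.0 * math.pi * r * r

def cube_area(volume):
    """Surface area of a cube holding the given volume."""
    return 6.0 * volume ** (2.0 / 3.0)

V = 1000.0  # package volume in cm^3 (hypothetical 1-litre package)
print(sphere_area(V) < cube_area(V))   # True: the sphere minimises area
```

For equal volume the sphere's area is about 17% below the cube's, so for a fixed total water vapour intake the sphere tolerates the highest transmission rate per unit area of material.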

Relevance: 30.00%

Abstract:

The fast development of new technologies such as digital medical imaging has brought about an expansion of brain functional studies. One of the key methodological issues in such studies is the comparison of neuronal activation between individuals, and in this context the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains to a standard brain. The most widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, registration methods based on these standard brains are not precise enough to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-rigid, model-based registration. In contrast to intensity-based approaches, model-based registration uses spatial rather than intensity information to fit one image to another. Identifiable anatomical features (point landmarks) are extracted in both the deforming and the target images, and their correspondence determines the appropriate deformation in 3D. As landmarks, six control points are used, placed bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure. The evaluation of this model-based approach was performed on the MRI and fMRI images of nine of the eighteen subjects who participated in a previous study by Maeder et al. Results on the anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points; the distance between the deforming brain and the reference brain decreases after registration. Registration of the functional (fMRI) images does not show a significant variation: the small number of landmarks, six control points, is not sufficient to produce modifications of the statistical maps. This thesis opens the way to a new technique for cortical registration in which the main direction will be the registration of many points representing a particular sulcus rather than single landmarks.
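The landmark-driven deformation can be illustrated with a simplified linear stand-in: estimating an affine transform from six control-point correspondences by least squares. The thesis itself uses a non-rigid deformation, and the coordinates below are invented.

```python
import numpy as np

def affine_from_landmarks(src, dst):
    """Least-squares 3D affine transform mapping source landmarks onto
    target landmarks (a linear stand-in for a non-rigid deformation)."""
    src = np.asarray(src, float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # homogeneous coords
    T, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return T                                           # shape (4, 3)

def apply_affine(T, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ T

# Six hypothetical control points (standing in for Heschl's gyrus, the motor
# hand area, and the sylvian fissure, bilaterally) and their targets.
src = np.array([[40, 10, 20], [-40, 10, 20], [30, -5, 55],
                [-30, -5, 55], [50, 0, 25], [-50, 0, 25]], float)
dst = src * 1.1 + np.array([2.0, -1.0, 0.5])           # scaled + shifted
T = affine_from_landmarks(src, dst)
print(np.allclose(apply_affine(T, src), dst))          # True: exact fit
```

An affine map moves the landmarks exactly onto their targets here; the thesis's point is that only a richer, sulcus-dense landmark set lets a non-rigid warp change the fMRI statistical maps.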

Relevance: 30.00%

Abstract:

So that the radius, and thus the non-uniform structure of the teeth and the other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a great deal of time and computer resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, the calculation is performed for each section individually, and the partial results are then combined into the final results. It is obvious that this method can save a great deal of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its ripple (vibrations). The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of magnet sections are selectable, but the calculation time depends strongly on them. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area.

The calculation is simple and fast if the shape of the magnet is square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; as the number of sections increases, the result becomes more accurate. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet, and slot, is modelled with a reluctance net that considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed in order to obtain the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux created by the permanent magnets and the stator currents that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, although in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging. The flux in the slot opening area on the two sides of an opening, and in different slot openings, is not equal, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot.

This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component is also spread evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways: all the magnets may have a different deflection of their symmetrical centre point, or they can, for example, be shifted in pairs. Some factors limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its width relative to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same; therefore, the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model, from one edge to the other) is not correct. If this fact were to be considered in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this simplifying assumption is, nevertheless, irrelevant.
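The saturation iteration on the reluctance net, updating the iron permeability until the flux converges, can be sketched for a single one-loop slice. The geometry, MMF, and the crude saturation law below are invented placeholders, not the program's actual model.

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space (H/m)

def solve_circuit(mmf, gap_len, iron_len, area, b_sat=1.8, mu_r0=4000.0,
                  iters=50):
    """Fixed-point iteration on a one-loop reluctance net (air gap + iron
    path in series): recompute the iron reluctance from a simple saturating
    permeability model until the flux settles, as done per magnet slice."""
    mu_r = mu_r0
    flux = 0.0
    for _ in range(iters):
        r_gap = gap_len / (MU0 * area)
        r_iron = iron_len / (MU0 * mu_r * area)
        flux = mmf / (r_gap + r_iron)
        b = flux / area
        # crude saturation law: permeability falls as B approaches b_sat
        mu_r = mu_r0 / (1.0 + (b / b_sat) ** 6)
    return flux / area     # converged air-gap flux density (T)

# Hypothetical magnet MMF and geometry for one pole slice.
print(round(solve_circuit(mmf=1200.0, gap_len=1e-3, iron_len=0.2,
                          area=4e-4), 3))
```

In the real program one such solve is repeated for every radial slice of the magnet, and the per-slice forces at their different radii are summed into the axial-flux torque.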

Relevance: 30.00%

Abstract:

Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, the interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined as irregularities in this thesis. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly. The Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods presented are shown to differ in generality and computational complexity. The need to detect fine and sparse details is common in quality inspection tasks, e.g., locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by defining sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection. Based on the definition, the use of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup: missing dots are detected from the repeating raster pattern of Heliotest strips, and small surface defects from IGT picking papers.
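Minimum error thresholding (the Kittler-Illingworth criterion that the comparison favours) can be sketched as follows; this is a generic implementation on synthetic data, not the thesis code:

```python
import numpy as np

def minimum_error_threshold(image, bins=256):
    """Kittler-Illingworth minimum error thresholding: model the histogram
    as two Gaussians and pick the threshold minimising the classification
    error criterion J(t)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = None, np.inf
    for t in range(1, bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-9 or p2 < 1e-9:
            continue
        m1 = (p[:t] * centers[:t]).sum() / p1
        m2 = (p[t:] * centers[t:]).sum() / p2
        v1 = (p[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (p[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 < 1e-9 or v2 < 1e-9:
            continue
        # p*log(v) = 2*p*log(sigma): the usual form of the criterion
        j = 1.0 + p1 * np.log(v1) + p2 * np.log(v2) \
            - 2.0 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, t
    return centers[best_t]

# Bimodal test data: a dark background plus a small bright-detail population.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 9000), rng.normal(180, 10, 1000)])
t = minimum_error_threshold(img)
print(60 < t < 180)   # True: the threshold falls between the two modes
```

Because it fits the class priors as well as the class variances, the criterion copes with the very unbalanced histograms that fine, sparse details produce.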

Relevance: 30.00%

Abstract:

Aim of study: To identify species of wood samples based on common names and anatomical analyses of their transversal surfaces (without microscopic preparations). Area of study: Spain and South America. Material and methods: The test was carried out on a batch of 15 lumber samples deposited in the Royal Botanical Garden in Madrid, from the expedition by Ruiz and Pavon (1777-1811). The first stage of the methodology is to search and critically analyse the databases that list common nomenclature alongside scientific nomenclature. A geographic filter was then applied to the resulting information for the samples with a more restricted distribution. Finally, an anatomical verification was carried out with a pocket microscope with a magnification of x40, equipped with a 50-micrometer resolution scale. Main results: Identification of the wood based exclusively on the common name is not useful due to the high number of alternative possibilities (14 for “naranjo”, 10 for “ébano”, etc.). The common name of one of the samples (“huachapelí mulato”) enabled the geographic origin of the samples to be accurately located to the shipyard area in Guayaquil (Ecuador). Given that Ruiz and Pavon did not travel to Ecuador, the specimens must have been obtained by Tafalla. It was possible to determine 67% of the lumber samples from the batch correctly; in 17% of the cases the methodology did not provide a reliable identification. Research highlights: It was possible to determine correctly 67% of the lumber samples from the batch and their geographic provenance. Identification of the wood based exclusively on the common name is not useful.

Relevance: 30.00%

Abstract:

This thesis deals with the measurement of paper surface roughness, one of the central problems in the study of paper materials. The measurement methods used in the paper industry have several drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, as well as demanding laboratory conditions and slow operation. In this work, methods based on optical scattering were studied for determining surface roughness. Machine vision and image processing techniques were investigated on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results obtained demonstrate the possibility of measuring surface roughness by imaging. The best agreement between the traditional method and the imaging method was given by a method based on fractal dimension.
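The fractal-dimension approach that performed best can be illustrated with a basic box-counting estimate on a binary image (our own sketch in Python; the thesis algorithms were written in Matlab®):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied boxes N(s) at several box sizes s and fit the slope of
    log N(s) against log(1/s)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # tile the image into s-by-s boxes and count those containing
        # at least one foreground pixel
        blocks = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a completely filled region should come out close to 2.
square = np.ones((256, 256), dtype=bool)
d = box_counting_dimension(square)
print(round(d, 2))   # 2.0 for a filled plane region
```

Applied to a thresholded image of a paper surface, a rougher surface yields a more space-filling boundary and hence a higher estimated dimension, which is the quantity correlated against the traditional roughness measurement.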

Relevance: 30.00%

Abstract:

This thesis presents experimental studies of rare earth (RE) metal induced structures on Si(100) surfaces. Two divalent RE metal adsorbates, Eu and Yb, are investigated on nominally flat Si(100) and on vicinal, stepped Si(100) substrates. Several experimental methods have been applied, including scanning tunneling microscopy/spectroscopy (STM/STS), low energy electron diffraction (LEED), synchrotron radiation photoelectron spectroscopy (SR-PES), Auger electron spectroscopy (AES), thermal desorption spectroscopy (TDS), and work function change measurements (Δφ). Two stages can be distinguished in the initial growth of the RE/Si interface: the formation of a two-dimensional (2D) adsorbed layer at submonolayer coverage and the growth of a three-dimensional (3D) silicide phase at higher coverage. The 2D phase is studied for both adsorbates in order to discover whether they produce common reconstructions or reconstructions common to the other RE metals. For studies of the 3D phase Yb is chosen due to its ability to crystallize in a hexagonal AlB2 type lattice, which is the structure of RE silicide nanowires, therefore allowing for the possibility of the growth of one-dimensional (1D) wires. It is found that despite their similar electronic configuration, Eu and Yb do not form similar 2D reconstructions on Si(100). Instead, a wealth of 2D structures is observed and atomic models are proposed for the 2×3-type reconstructions. In addition, adsorbate induced modifications on surface morphology and orientational symmetry are observed. The formation of the Yb silicide phase follows the Stranski-Krastanov growth mode. Nanowires with the hexagonal lattice are observed on the flat Si(100) substrate, and moreover, an unexpectedly large variety of growth directions are revealed. On the vicinal substrate the growth of the silicide phase as 3D islands and wires depends drastically on the growth conditions. 
The conditions under which wires with high aspect ratio and single orientation parallel to the step edges can be formed are demonstrated.