536 results for Sliding
Abstract:
The present work analyzed the tribological behavior of coatings/surface modifications traditionally used in cold rolling mill rolls and of new coatings/surface modifications with potential to replace carcinogenic hard chrome. The study started with the identification of the wear mechanisms occurring in real cold rolling mill rolls. Due to the high cost and dimensions of the rolls, the replication technique was used. Replicas were obtained from 4 different Brazilian rolling mill companies before and after a normal rolling campaign. Initial sliding tests were conducted using spherical and cylindrical counter bodies in order to verify which tribological conditions reproduced the wear mechanisms found in the replicas. These tests indicated the use of reciprocating sliding tests with cylindrical counter bodies (line contact), a normal load of 100 N, and test times of 1 h and 5 h. Different surface modifications were carried out on samples produced from a fragment of a rolling mill roll. The specimens were heat treated and ground on both sides. Afterwards, some specimens were surface textured by electrical discharge texturing (EDT). For both groups (ground and EDT), subsequent treatments of chromium plating, electroless NiP coating and plasma nitriding were carried out. The results of the reciprocating tests showed that specimens with electroless NiP coating presented the lowest friction coefficients, while plasma nitrided specimens showed the highest. In general, surface texturing prior to the coating/surface modification increased the wear of the counter bodies. One exception was EDT followed by electroless NiP coating, which presented the lowest counter body wear rate. The samples with electroless NiP coating promoted a tribolayer consisting of nickel, phosphorus and oxygen on both the specimens and the counter bodies, which was apparently responsible for the reduction of friction coefficient and wear rate. Increasing the test time reduced the wear rate of the samples, apparently due to the stability of the tribolayers formed, except for the nitrided samples. For the textured specimens, the NiP coating showed the best performance in maintaining the surface topography of the specimens after the sliding tests.
Abstract:
The general aim of this study was to evaluate the conical abutment/implant interface. The specific aims were to evaluate the influence of an internal hexagonal index on the microleakage and mechanical strength of Morse taper implants; the effect of axial loading on the deformation in the cervical region of Morse taper implants of different diameters, measured by strain gauges; the effect of axial loading on cervical deformation and on the sliding of the abutment into the implant, assessed by three-dimensional measurements; the integrity of the conical interface before and after dynamic loading, assessed by microscopy and microleakage; and the stress distribution in three-dimensional finite element models of Morse taper implants assembled with two-piece abutments. According to the results obtained, it could be concluded that the diameter influenced the cervical deformation of Morse taper implants; that the presence of an internal hexagonal index at the end of the internal cone of the implant neither influenced bacterial microleakage under static loading nor reduced the mechanical strength of the implants; and that one million cycles of vertical and off-center loading had no negative influence on Morse taper implant integrity.
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to investigate the feasibility of extracting functional connectivity networks using different methods as well as the dynamic variability within some of the methods. Furthermore, this work looks into producing valid networks using a sparsely-sampled sub-set of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations in exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
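To make the seed-based approach concrete, the sketch below computes a voxel-wise correlation map against a seed time series. It is a minimal illustration using synthetic data and hypothetical array names, not the preprocessing pipeline or data layout used in the thesis.

```python
# Minimal sketch of seed-based correlation mapping.
# Assumes preprocessed BOLD data of shape (n_voxels, n_timepoints); all names are illustrative.
import numpy as np

def seed_correlation_map(bold, seed_ts):
    """Pearson correlation of every voxel time series with the seed time series."""
    bold = bold - bold.mean(axis=1, keepdims=True)
    seed = seed_ts - seed_ts.mean()
    num = bold @ seed
    den = np.linalg.norm(bold, axis=1) * np.linalg.norm(seed)
    return num / np.where(den == 0, np.inf, den)   # one r value per voxel

# Example with synthetic data: 5000 voxels, 240 time points
rng = np.random.default_rng(0)
bold = rng.standard_normal((5000, 240))
seed_ts = bold[:50].mean(axis=0)          # mean time series of a hypothetical 50-voxel seed
r_map = seed_correlation_map(bold, seed_ts)
```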
Independent component analysis, the second technique, has established software programs available to implement it. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, making the assumption that the spatial pattern of functional connectivity is the same across all the time points. ICA is successfully able to reproduce resting-state connectivity patterns for both one subject and a 16-subject concatenated data set.
Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is most significant. This method utilizes the same basic matrix math as ICA with a few important differences that will be outlined later in this text. Using this method, sometimes different functional connectivity patterns are identifiable but with a large amount of noise and variability.
To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients evolve over time for different window sizes. From this technique it is apparent that the correlation level with the seed region is not static throughout the scan.
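A minimal sketch of such a sliding-window correlation is given below; the window length, step size, and signal names are illustrative assumptions rather than values from the thesis.

```python
# Sliding-window correlation between a seed time series and a target time series.
import numpy as np

def sliding_window_corr(x, y, win=40, step=5):
    """Pearson correlation of x and y inside each overlapping window of length `win`."""
    starts = range(0, len(x) - win + 1, step)
    return np.array([np.corrcoef(x[s:s + win], y[s:s + win])[0, 1] for s in starts])

rng = np.random.default_rng(1)
t = np.arange(240)
seed = np.sin(2 * np.pi * t / 60) + 0.5 * rng.standard_normal(240)
target = np.sin(2 * np.pi * t / 60 + 0.3) + 0.5 * rng.standard_normal(240)
r_t = sliding_window_corr(seed, target, win=40, step=5)   # correlation as a function of time
```

Plotting `r_t` for several window sizes reproduces the kind of time-varying correlation described above: the coefficient fluctuates rather than staying at a single static value.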
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing utilizes fewer time points from the data, the statistical power of the results is lower, and there are larger variations in DMN patterns between subjects. In addition to improved computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
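The sketch below illustrates the point-process idea in its simplest form: keep only the time points at which the seed signal exceeds a threshold and average the BOLD pattern at those instants. The threshold and array names are assumptions for illustration, not the thesis's exact procedure.

```python
# Point-process sketch: extract a network pattern from high-amplitude seed events only.
import numpy as np

def point_process_map(bold, seed_ts, z_thresh=1.0):
    """Average the voxel pattern over time points where the seed exceeds z_thresh (in z-units)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = np.flatnonzero(z > z_thresh)          # indices of high-amplitude seed events
    return bold[:, events].mean(axis=1), events

rng = np.random.default_rng(4)
bold = rng.standard_normal((5000, 240))
seed_ts = bold[:50].mean(axis=0)
pattern, events = point_process_map(bold, seed_ts)  # far fewer time points than the full scan
```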
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique that is currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
Highlights of Data Expedition:
• Students explored daily observations of local climate data spanning the past 35 years.
• Topological Data Analysis (TDA) provides cutting-edge tools for studying the geometry of data in arbitrarily high dimensions.
• Using TDA tools, students discovered intrinsic dynamical features of the data and learned how to quantify periodic phenomena in a time series.
• Since nature invariably produces noisy data that rarely has exact periodicity, students also considered the theoretical basis of almost-periodicity and even invented and tested new mathematical definitions of almost-periodic functions.
Summary: The dataset we used for this data expedition comes from the Global Historical Climatology Network. "GHCN (Global Historical Climatology Network)-Daily is an integrated database of daily climate summaries from land surface stations across the globe." Source: https://www.ncdc.noaa.gov/oa/climate/ghcn-daily/ We focused on the daily maximum and minimum temperatures from January 1, 1980 to April 1, 2015 collected at RDU International Airport. Through a guided series of exercises designed to be performed in Matlab, students explore these time series, initially by direct visualization and basic statistical techniques. Then students are guided through a special sliding-window construction which transforms a time series into a high-dimensional geometric curve. These high-dimensional curves can be visualized by projecting down to lower dimensions (as in Figure 1); however, our focus here was to use persistent homology to study the high-dimensional embedding directly. The shape of these curves carries meaningful information, but how one describes the "shape" of data depends on the scale at which the data is considered, and choosing the appropriate scale is rarely obvious. Persistent homology overcomes this obstacle by allowing us to quantitatively study geometric features of the data across multiple scales. Through this data expedition, students are introduced to numerically computing persistent homology using a Rips complex construction and interpreting the results. In the specific context of sliding-window constructions, 1-dimensional persistent homology can reveal the nature of periodic structure in the original data. I created a special technique to study how these high-dimensional sliding-window curves form loops in order to quantify the periodicity. Students are guided through this construction and learn how to visualize and interpret this information. Climate data is extremely complex (as anyone who has suffered from a bad weather prediction can attest), and numerous variables play a role in determining our daily weather and temperatures. This complexity, coupled with imperfections of measuring devices, results in very noisy data, which causes the annual seasonal periodicity to be far from exact. To this end, I have students explore existing theoretical notions of almost-periodicity and test them on the data. They find that some existing definitions are inadequate in this context. Hence I challenged them to invent new mathematics by proposing and testing their own definitions. These students rose to the challenge and suggested a number of creative definitions. While autocorrelation and spectral methods based on Fourier analysis are often used to explore periodicity, the construction here provides an alternative paradigm for quantifying periodic structure in almost-periodic signals using tools from topological data analysis.
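The sliding-window construction described above can be sketched in a few lines. The expedition's exercises were in Matlab; the Python sketch below only illustrates the embedding step, with an illustrative dimension, delay, and synthetic "temperature" series, and it does not perform the persistent homology computation itself (which would require a TDA library such as ripser).

```python
# Sliding-window (delay) embedding: a 1-D series x becomes a curve of points
# (x(t), x(t+tau), ..., x(t+(d-1)*tau)) in R^d, whose loops reflect (almost-)periodicity.
# d, tau, and the synthetic series are illustrative assumptions.
import numpy as np

def sliding_window_embedding(x, d=20, tau=7):
    n = len(x) - (d - 1) * tau                       # number of embedded points
    return np.stack([x[i:i + (d - 1) * tau + 1:tau] for i in range(n)])

days = np.arange(3 * 365)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365) \
        + np.random.default_rng(2).normal(0, 2, days.size)
cloud = sliding_window_embedding(temps, d=20, tau=7)  # point cloud in R^20
# 1-dimensional persistent homology of `cloud` (e.g. via a Rips filtration)
# would then quantify how strongly these points trace out a loop.
```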
Abstract:
Intriguing lattice dynamics has been predicted for aperiodic crystals that contain incommensurate substructures. Here we report inelastic neutron scattering measurements of phonon and magnon dispersions in Sr14Cu24O41, which contains incommensurate one-dimensional (1D) chain and two-dimensional (2D) ladder substructures. Two distinct acoustic phonon-like modes, corresponding to the sliding motion of one sublattice against the other, are observed for atomic motions polarized along the incommensurate axis. In the long wavelength limit, it is found that the sliding mode shows a remarkably small energy gap of 1.7-1.9 meV, indicating very weak interactions between the two incommensurate sublattices. The measurements also reveal a gapped and steep linear magnon dispersion of the ladder sublattice. The high group velocity of this magnon branch and weak coupling with acoustic phonons can explain the large magnon thermal conductivity in Sr14Cu24O41 crystals. In addition, the magnon specific heat is determined from the measured total specific heat and phonon density of states, and exhibits a Schottky anomaly due to gapped magnon modes of the spin chains. These findings offer new insights into the phonon and magnon dynamics and thermal transport properties of incommensurate magnetic crystals that contain low-dimensional substructures.
Abstract:
The basement membrane (BM) is a highly conserved form of extracellular matrix that underlies or surrounds and supports most animal tissues. BMs are crossed by cells during various remodeling events in development, immune surveillance, and cancer metastasis. Because BMs are dense and not easily penetrable, most of these cells must open a gap in order to facilitate their migration. The mechanisms by which cells execute these changes are poorly understood. A developmental event that requires the opening of a BM gap is the C. elegans uterine-vulval connection. The anchor cell (AC), a specialized uterine cell, creates a de novo BM gap. Subsequent widening of the BM gap involves the underlying vulval precursor cells (VPCs) and the π cells, uterine neighbors of the AC, through non-proteolytic BM sliding. Using forward and reverse genetic screening, transcriptome profiling, and live-cell imaging, I investigated how the cells in these tissues accomplish BM gap formation. In Chapter 2, I identify two potentially novel regulators of BM breaching, isolated through a large-scale forward genetic screen, and characterize the invasion defect in these mutants. In Chapter 3, I describe single-cell transcriptome sequencing of the invasive AC. In Chapter 4, I describe the role of the π cells in opening the nascent BM gap. A complete developmental pathway for this process has been elucidated: the AC induces the π fate through Notch signaling, after which the π cells upregulate the Sec14 family protein CTG-1, which in turn restricts the trafficking of DGN-1 (dystroglycan), a laminin receptor, allowing the BM to slide. Chapter 5 outlines the implications of these discoveries.
Abstract:
The second messenger c-di-GMP is implicated in the regulation of various aspects of the lifestyles and virulence of Gram-negative bacteria. Cyclic di-GMP is formed by diguanylate cyclases with a GGDEF domain and degraded by phosphodiesterases with either an EAL or HD-GYP domain. Proteins with tandem GGDEF-EAL domains occur in many bacteria, where they may be involved in c-di-GMP turnover or act as enzymatically inactive c-di-GMP effectors. Here, we report a systematic study of the regulatory action of the eleven GGDEF-EAL proteins in Xanthomonas oryzae pv. oryzicola, an important rice pathogen causing bacterial leaf streak. Mutational analysis revealed that XOC_2335 and XOC_2393 positively regulate bacterial swimming motility, while XOC_2102, XOC_2393 and XOC_4190 negatively control sliding motility. The ΔXOC_2335/XOC_2393 mutant, which had a higher intracellular c-di-GMP level than the wild type, and the ΔXOC_4190 mutant exhibited reduced virulence to rice after pressure inoculation. In vitro, purified XOC_4190 and XOC_2102 have little or no diguanylate cyclase or phosphodiesterase activity, which is consistent with the unaltered c-di-GMP concentration in ΔXOC_4190. Nevertheless, both proteins can bind c-di-GMP with high affinity, indicating a potential role as c-di-GMP effectors. Overall, our findings advance understanding of c-di-GMP signaling and its links to virulence in an important rice pathogen.
Abstract:
The combination of permafrost history and dynamics, lake level changes and the tectonic framework is considered to play a crucial role in sediment delivery to El'gygytgyn Crater Lake, NE Russian Arctic. The purpose of this study is to propose a depositional framework based on analyses of the core strata from the lake margin and historical reconstructions from various studies at the site. A sedimentological program has been conducted using frozen core samples from the 141.5 m long El'gygytgyn 5011-3 permafrost well. The drill site is located in sedimentary permafrost west of the lake that partly fills the El'gygytgyn Crater. The total core sequence is interpreted as strata building up a progradational alluvial fan delta. Four macroscopically distinct sedimentary units are identified. Unit 1 (141.5-117.0 m) is comprised of ice-cemented, matrix-supported sandy gravel and intercalated sandy layers. The sandy layers represent sediments which rained out as particles in the deeper part of the water column under highly energetic conditions. Unit 2 (117.0-24.25 m) is dominated by ice-cemented, matrix-supported sandy gravel with individual gravel layers. Most of the Unit 2 diamicton is understood to result from alluvial wash and subsequent gravitational sliding of coarse-grained (sandy gravel) material on the basin slope. Unit 3 (24.25-8.5 m) has ice-cemented, matrix-supported sandy gravel that is interrupted by sand beds. These sandy beds are associated with flooding events and represent near-shore sandy shoals. Unit 4 (8.5-0.0 m) is ice-cemented, matrix-supported sandy gravel with varying ice content, mostly higher than below. It consists of slope material and creek fill deposits. The uppermost metre is the active layer (i.e. the top layer of soil with seasonal freeze and thaw) into which modern soil organic matter has been incorporated. The progradational sediment transport taking place from the western and northern crater margins may be related to the complementary occurrence of frequent turbiditic layers in the central lake basin, as is known from the lake sediment record. Slope processes such as gravitational sliding and sheet flooding occur especially during spring melt and promote mass wasting into the basin. Tectonics are inferred to have initiated the fan accumulation in the first place and possibly the off-centre displacement of the crater lake.
Abstract:
Previous studies of the strength of the lithosphere in central Iberia fail to resolve the depth of earthquakes because of rheological uncertainties. Therefore, new contributions are considered (the crustal structure from a density model) and several parameters (tectonic regime, mantle rheology, strain rate) are checked in this paper to properly examine the role of lithospheric strength in the intraplate seismicity and the Cenozoic evolution. The strength distribution with depth, the integrated strength, the effective elastic thickness and the seismogenic thickness have been calculated by finite element modelling of the lithosphere across the Central System mountain range and the bordering Duero and Madrid sedimentary basins. Only a dry mantle under strike-slip/extension and a strain rate of 10⁻¹⁵ s⁻¹, or under extension and 10⁻¹⁶ s⁻¹, produces a strong lithosphere. The integrated strength and the elastic thickness are lower in the mountain chain than in the basins. These anisotropies have been maintained since the Cenozoic and determine the mountain uplift and the biharmonic folding of the Iberian lithosphere during the Alpine deformations. The seismogenic thickness bounds the seismic activity in the upper–middle crust, and the decreasing crustal strength from the Duero Basin towards the Madrid Basin is related to a parallel increase in Plio–Quaternary deformations and seismicity. However, elasto–plastic modelling shows that the current African–Eurasian convergence is accommodated elastically or ductilely, which accounts for the low seismicity recorded in this region.
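For orientation, lithospheric strength envelopes of this kind are usually assembled from standard brittle (frictional) and ductile (power-law creep) laws. The textbook forms below are shown only as an illustration of the relations involved, with β the tectonic-regime factor, λ the pore-fluid pressure ratio, A, n, Q the creep parameters and T(z) the geotherm; they are not necessarily the exact formulation adopted in this study.

```latex
% Frictional (brittle) strength, Byerlee-type:
\Delta\sigma_{\mathrm{brittle}}(z) = \beta\,\rho g z\,(1-\lambda)
% Ductile strength from power-law creep at strain rate \dot{\varepsilon}:
\Delta\sigma_{\mathrm{ductile}}(z) = \left(\frac{\dot{\varepsilon}}{A}\right)^{1/n}\exp\!\left(\frac{Q}{n R T(z)}\right)
% Yield strength envelope and integrated strength down to the lithospheric thickness h_L:
\Delta\sigma(z) = \min\bigl\{\Delta\sigma_{\mathrm{brittle}}(z),\ \Delta\sigma_{\mathrm{ductile}}(z)\bigr\},
\qquad \sigma_L = \int_0^{h_L}\Delta\sigma(z)\,\mathrm{d}z
```

The strain-rate sensitivity tested in the paper (10⁻¹⁵ versus 10⁻¹⁶ s⁻¹) enters through the creep term's dependence on the strain rate.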
Abstract:
Several landforms found in the fold-and-thrust belt of the Central Precordillera, Pre-Andes of Argentina, which were often attributed to tectonic stresses, are in fact related to non-tectonic processes, i.e. gravitational superficial structures. These second-order structures, interpreted as gravitational collapse structures, have developed on the western flank of the sierras de La Dehesa and Talacasto. They include rock slides, rock falls, wrinkle folds, slip sheets and flaps, among others, which together constitute a monoclinal fold dipping between 30° and 60° to the west. The gravity collapse structures are parallel to the regional strike of the Sierra de la Dehesa and are emplaced in Ordovician limestones and dolomites. Their westward dip, the presence of bedding planes, fractures and joints, and the lithology (limestone interbedded with incompetent argillaceous banks) would have favored their occurrence. Movement of the detached structures has been controlled by lithological characteristics as well as by bedding and joints. Detachment and initial transport of the gravity collapse structures and rockslides on the western flank of the Sierra de la Dehesa were tightly controlled by three structural elements: 1) sliding surfaces developed on parallel bedded strata dipping more than 30° in the slope direction; 2) joint sets forming lateral and transverse traction cracks which release extensional stresses; and 3) discontinuities fragmenting the sliding surfaces. Other factors, both local (lithology, structure and topography) and regional (high seismic activity and possibly wetter conditions during the postglacial period), were decisive in favoring the steady loss of the western mountain side in the easternmost foothills of the Central Precordillera.
Abstract:
Networked learning happens naturally within the social systems of which we are all part. However, in certain circumstances individuals may want to take the initiative to start interaction with others they are not yet regularly in exchange with. This may be the case when external influences and societal changes require innovation of existing practices. This paper proposes a framework with relevant dimensions providing insight into precipitated characteristics of designed as well as ‘fostered or grown’ networked learning initiatives. Networked learning initiatives are characterized as “goal-directed, interest-, or needs-based activities of a group of (at least three) individuals that initiate interaction across the boundaries of their regular social systems”. The proposed framework is based on two existing research traditions, namely 'networked learning' and 'learning networks', comparing, integrating and building upon knowledge from both perspectives. We uncover some interesting differences between the definitions, but also similarities in the way they describe what ‘networked’ means and how learning is conceptualized. We think it is productive to combine both research perspectives, since they both study the process of learning in networks extensively, albeit from different points of view, and their combination can provide valuable insights into networked learning initiatives. We uncover important features of networked learning initiatives, characterize the actors and connections of which they are comprised, and identify the conditions which facilitate and support them. The resulting framework can be used both for analytic purposes and (partly) as a design framework. The framework acknowledges that not all successful networks have the same characteristics: there is no standard ‘constellation’ of people, roles, rules, tools and artefacts, although there are indications that some network structures work better than others. Interactions of individuals can only be designed and fostered to a certain degree: the type of network and its ‘growth’ (e.g. in terms of the number of people involved, or the quality and relevance of co-created concepts, ideas, artefacts and solutions to its ‘inhabitants’) is in the hands of the people involved. Therefore, the framework consists of dimensions on a sliding scale. It introduces a structured and analytic way to look at the precipitation of networked learning initiatives: learning networks. Further research on the application of this framework, together with feedback from the networked learning community, is needed to validate its usability and value to both research and practice.
Abstract:
A theory was developed to allow the separate determination of the effects of interparticle friction and interlocking of particles on the shearing resistance and deformational behavior of granular materials. The derived parameter, the angle of solid friction, is independent of the type of shear test, stress history, porosity and the level of confining pressure, and depends solely upon the nature of the particle surface. The theory was tested against published data on plane strain, triaxial compression and extension tests on cohesionless soils. The theory was also applied to isotropically consolidated undrained triaxial tests on three crushed limestones prepared by the authors using vibratory compaction. The authors concluded that (1) the theory allowed the determination of the solid friction between particles, which was found to depend solely on the nature of the particle surface; (2) the separation of the frictional and volume-change components of the shear strength of granular materials qualitatively corroborated the postulated mechanism of deformation (sliding and rolling of groups of particles over other similar groups, with resulting dilatancy of the specimen); (3) the influence of void ratio, gradation, confining pressure, stress history and type of shear test on shear strength is reflected in the values of the omega parameter; and (4) calculation of the coefficient of solid friction allows the establishment of the lower limit of the shear strength of a granular material.
Abstract:
In the past, many papers have shown that the coating of cutting tools often yields decreased wear rates and reduced coefficients of friction. Although different theories have been proposed, covering areas such as hardness theory, diffusion barrier theory, thermal barrier theory, and reduced friction theory, most have not dealt with the question of how and why coating tool substrates with hard materials such as titanium nitride (TiN), titanium carbide (TiC) and aluminium oxide (Al2O3) transforms the performance and life of cutting tools. This project discusses the complex interrelationship between the thermal barrier function and the relatively low sliding friction coefficient of TiN on an undulating tool surface, and presents the results of an investigation into the cutting characteristics and performance of EDMed surface-modified carbide cutting tool inserts. The tool inserts were coated with TiN by the physical vapour deposition (PVD) method. PVD coating is also known as ion plating, a general term for coating methods in which the film is created by attracting ionized metal vapour (here, titanium) and ionized gas onto a negatively biased substrate surface. PVD coating was chosen because it is carried out at temperatures of no more than about 500 °C, whereas the chemical vapour deposition (CVD) process is carried out at a much higher temperature of about 850 °C and in two stages of heating of the substrates; the high temperatures involved in CVD affect the strength of the (tool) substrates. In this study, comparative cutting tests using TiN-coated control specimens with no EDM surface structures and TiN-coated EDMed tools with a crater-like surface topography were carried out on mild steel grade EN-3. Various cutting speeds were investigated, up to 40% above the tool manufacturer's recommended speed. Fifteen minutes of cutting were carried out for each insert at each speed investigated; conventional tool inserts normally have a tool life of approximately 15 minutes of cutting. After every five cuts (passes), microscopic pictures of the tool wear profiles were taken in order to monitor the progressive wear on the rake face and on the flank of the insert. The power load was monitored for each cut using an on-board meter on the CNC machine to establish the amount of power needed for each stage of operation; the spindle drive of the machine is an 11 kW motor. The results obtained confirmed the advantages of cutting at all speeds investigated using the EDMed coated inserts, in terms of reduced tool wear and lower power loads. Moreover, the surface finish on the workpiece was consistently better for the EDMed inserts. The thesis also discusses the relevance of the finite element method in the analysis of metal cutting processes, so that metal machinists can design, manufacture and deliver tools to the market quickly and on time without going through a trial-and-error approach for new products. Improvements in manufacturing technologies require better knowledge of modelling metal cutting processes. The use of computational models has great value in reducing or even eliminating the number of experiments traditionally used for tool design, process selection, machinability evaluation, and chip breakage investigations. In this work, special attention was given to theoretical and experimental investigations of metal machining.
Finite element analysis (FEA) was given priority in this study to predict tool wear and coating deformation during machining. Particular attention was devoted to the complicated mechanisms usually associated with metal cutting, such as interfacial friction, heat generated by friction, severe strain in the cutting region, and high strain rates. It is therefore concluded that a roughened contact surface comprising peaks and valleys coated with a hard material (TiN) provides wear-resisting properties, as the coating becomes entrapped in the valleys and helps reduce friction at the chip-tool interface. The contributions to knowledge are:
a. A wear-resisting surface structure for application in contact surfaces and structures in metal cutting and forming tools, with the ability to give a wear-resisting surface profile.
b. A technique for designing tools with a roughened surface comprising peaks and valleys covered in a conformal coating of a material such as TiN or TiC; this wear-resisting structure has a surface roughness profile composed of valleys which entrap residual coating material during wear, enabling the entrapped coating material to give improved wear resistance.
c. Knowledge for increased tool life through wear resistance, hardness and chemical stability at high temperatures, owing to reduced friction at the tool-chip and work-tool interfaces due to the tool coating, which leads to reduced heat generation at the cutting zones.
d. The finding that undulating surface topographies on cutting tips tend to hold coating materials longer in the valleys, thus giving enhanced protection to the tool; such tools can cut 40% faster and last 60% longer than conventional tools on the market today.
Abstract:
The quantification of the solid material transported (sediment transport) along a watercourse is extremely important in a wide range of areas of river engineering. Sediment transport in mountain rivers occurs mainly as bedload, through sliding, rolling and saltation of the sediments. Over time, several formulas have been developed to estimate bedload transport; however, due to the complexity of sediment transport processes, as well as their spatial and temporal variability, the prediction of transport rates has not been achieved through theoretical investigation alone. To gain a better understanding of bedload transport processes in mountain rivers, it is necessary to monitor them as precisely as possible. With advances in electronics, new technological methods have been developed to address the problem of quantifying sediment transport, replacing the current traditional methods, which are based on collecting field samples for later correlation. The main objective of this dissertation was the development of a device capable of continuously estimating/monitoring bedload transport in mountain rivers using low-cost technology. This device has a piezoelectric sensor that measures the vibration caused by the impact of sediments on a metal plate; the energy of the signal resulting from the impacts is converted into weight. The measurements were obtained through laboratory tests, with particular emphasis on the influence of flow rate variation, as well as sediment shape, on the intensity of the acquired signal.
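As an illustration of the energy-to-weight conversion described above, the sketch below integrates the squared sensor signal over an acquisition window and scales it by a calibration constant. The sampling rate, calibration factor, and synthetic impact signal are hypothetical placeholders, not the dissertation's actual values.

```python
# Hedged sketch: convert piezoelectric impact-signal energy to transported mass.
# fs (sampling rate), k_cal (calibration from flume tests) and the synthetic signal are assumptions.
import numpy as np

def signal_energy(signal, fs):
    """Energy of the sensor signal over the acquisition window."""
    return np.sum(np.square(signal)) / fs

def estimate_mass(signal, fs, k_cal):
    """Map signal energy to sediment mass via a laboratory calibration factor."""
    return k_cal * signal_energy(signal, fs)

fs = 10_000                                    # Hz, assumed sampling rate
rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.01, fs)              # 1 s of background vibration
impacts = (rng.random(fs) < 0.001) * rng.normal(0.0, 1.0, fs)  # sparse grain impacts on the plate
sig = noise + impacts
mass_g = estimate_mass(sig, fs, k_cal=250.0)   # grams per unit energy (hypothetical calibration)
```

In practice the calibration constant would be fitted from flume runs with known sediment feed, and separate calibrations could capture the influence of discharge and grain shape noted above.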