978 results for Gaussian beams
Abstract:
Recent data compiled by the National Bridge Inventory revealed that 29% of Iowa's approximately 24,600 bridges were either structurally deficient or functionally obsolete. This large number of deficient bridges and the high cost of needed repairs create unique problems for Iowa and many other states. The research objective of this project was to determine the load capacity of a particular type of deteriorating bridge, the precast concrete deck bridge, which is commonly found on Iowa's secondary roads. The number of these precast concrete structures requiring load postings and/or replacement can be significantly reduced if the deteriorated structures are found to have adequate load capacity or can be reliably evaluated. Approximately 600 precast concrete deck bridges (PCDBs) exist in Iowa. A typical PCDB span is 19 to 36 ft long and consists of eight to ten simply supported precast panels. Bolts and either a pipe shear key or a grouted shear key are used to join adjacent panels. The panels resemble a steel channel in cross section: the web is oriented horizontally and forms the roadway deck, and the legs act as shallow beams. The primary longitudinal reinforcing steel bundled in each of the legs frequently corrodes, causing longitudinal cracks in the concrete and spalling. The research team performed service load tests on four deteriorated PCDBs, two with shear keys in place and two without. Conventional strain gages were used to measure strains in both the steel and concrete, and transducers were used to measure vertical deflections. Based on the field results, it was determined that these bridges have sufficient lateral load distribution and adequate strength when shear keys are properly installed between adjacent panels. The measured lateral load distribution factors were larger than AASHTO values when shear keys were not installed. Because some of the reinforcement had hooks, deterioration of the reinforcement has a minimal effect on the service-level performance of the bridges when there is minimal loss of cross-sectional area. Laboratory tests were performed on PCDB panels obtained from three bridge replacement projects. Twelve deteriorated panels were loaded to failure in a four-point bending arrangement. Although the panels had significant deflections prior to failure, the experimental capacity of eleven panels exceeded the theoretical capacity. The experimental capacity of the twelfth panel, an extremely distressed one, was only slightly below the theoretical capacity. Service tests and an ultimate strength test were performed on a laboratory bridge model consisting of four joined panels to determine the effect of various shear connection configurations. These data were used to validate a PCDB finite element model that can provide more accurate live load distribution factors for use in rating calculations. Finally, a strengthening system was developed and tested for use in situations where one or more panels of an existing PCDB need strengthening.
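The abstract does not spell out how the lateral load distribution factors were computed, but in field tests of this kind they are commonly taken as each panel's share of the total measured bottom-fiber strain, scaled by the number of wheel lines on the deck. A minimal Python sketch with hypothetical strain readings, not the project's data:

```python
# Sketch: lateral load distribution factors from field-test strain data.
# Each panel's fraction of the total measured strain, scaled by the number
# of wheel lines, is its distribution factor. Readings are hypothetical.

def distribution_factors(strains, wheel_lines=2):
    """Per-panel load fraction, scaled by the number of wheel lines."""
    total = sum(strains)
    return [wheel_lines * s / total for s in strains]

# Hypothetical peak microstrain readings for eight panels under a test truck
panel_strains = [12.0, 31.0, 55.0, 74.0, 70.0, 52.0, 28.0, 10.0]

for i, df in enumerate(distribution_factors(panel_strains), start=1):
    print(f"panel {i}: DF = {df:.3f}")
```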
Abstract:
The main objective of this study is to determine the effectiveness of the Electrochemical Chloride Extraction (ECE) technique on a bridge deck with very high concentrations of chloride. The ECE technique was used during the summer of 2003 to reverse the effects of corrosion that had occurred in the reinforcing steel embedded in the pedestrian bridge deck over Highway 6, along Iowa Avenue, in Iowa City, Iowa, USA. First, the half-cell potential was measured to determine the existing corrosion level in the field. The half-cell potential values were in the uncertain range for corrosion (between -200 mV and -350 mV). The ECE technique was then applied to remove the chloride from the bridge deck. The chloride content in the deck was significantly reduced, from 25 lb/cy to 4.96 lb/cy in 8 weeks. Concrete cores obtained from the deck were tested for compressive strength, and there was no reduction in strength due to the ECE technique. Laboratory tests were also performed to demonstrate the effectiveness of the ECE process. To simulate the corrosion in the bridge deck, two reinforced slabs and 12 reinforced beams were prepared. First, the half-cell potentials measured from the test specimens were all less negative than -200 mV. Upon introduction of a 3% salt solution, the potential reached -500 mV. This potential was maintained, with salt solution added continually, for six months. The ECE technique was then applied to the test specimens to remove the chloride from them. Half-cell potential was measured to determine whether the ECE technique can effectively reduce the level of corrosion.
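For reference, the potential bands cited above follow the usual ASTM C876 interpretation. A small sketch, assuming a copper-copper sulfate reference electrode, that maps a half-cell reading to a qualitative corrosion-risk category:

```python
# Sketch: classify a half-cell potential (mV vs. CSE) per the common
# ASTM C876 interpretation bands referenced in the abstract.

def corrosion_risk(potential_mv: float) -> str:
    """Qualitative corrosion probability from a half-cell potential."""
    if potential_mv > -200:
        return "low probability of corrosion"
    elif potential_mv >= -350:
        return "uncertain (the range reported for the bridge deck)"
    else:
        return "high probability of corrosion"

# Example readings spanning the ranges mentioned in the abstract
for reading in (-150, -275, -500):
    print(f"{reading} mV: {corrosion_risk(reading)}")
```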
Abstract:
The ends of prestressed concrete beams under expansion joints are often exposed to moisture and chlorides. If left unprotected, the moisture and chlorides come into contact with the ends of the prestressing strands and/or the mild reinforcing steel, resulting in corrosion. Once deterioration begins, it progresses unless some process is employed to address it. Deterioration can lead to loss of bearing area and therefore a reduction in bridge capacity. Previous research has examined the use of concrete coatings (silanes, epoxies, fiber-reinforced polymers, etc.) for protecting prestressed concrete beam ends but found that little to no laboratory research had been done on the performance of these coatings in this specific application. The Iowa Department of Transportation (DOT) currently specifies coating the ends of exposed prestressed concrete beams with Sikagard 62 (a high-build, protective, solvent-free epoxy coating) at the precast plant prior to installation on the bridge. However, no physical testing of Sikagard 62 in this application has been completed. In addition, the Iowa DOT continues to see deterioration in prestressed concrete beam ends, even those treated with Sikagard 62. The goals of this project were to evaluate the performance of the Iowa DOT-specified beam-end coating as well as other concrete coating alternatives based on the American Association of State Highway and Transportation Officials (AASHTO) T259-80 chloride ion penetration test, and to test their performance on in-service bridges throughout the duration of the project. In addition, alternative beam-end forming details were developed and evaluated for their potential to mitigate and/or eliminate the deterioration caused by corrosion of the prestressing strands on prestressed concrete beam ends used in bridges with expansion joints. The alternative beam-end details consisted of individual strand blockouts, an individual blockout for a cluster of strands, dual blockouts for two clusters of strands, and drilling out the strands after they are flush cut. The goal of all of the forming alternatives was to offset the ends of the prestressing strands from the end face of the beam and then cover them with a grout/concrete layer, thereby limiting or eliminating their exposure to moisture and chlorides.
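The AASHTO T259 ponding test yields chloride contents at several depths. Although the abstract does not describe the analysis, such profiles are often summarized with Fick's second law, C(x,t) = Cs·erfc(x/(2√(Dt))); an illustrative sketch with an assumed surface concentration and apparent diffusion coefficient:

```python
# Sketch: chloride penetration profile from Fick's second law.
# Cs (surface concentration) and D (apparent diffusion coefficient)
# below are assumed, illustrative values, not project data.

import math

def chloride_profile(x_in, cs_lb_cy, d_in2_per_yr, t_yr):
    """Chloride content at depth x (inches) after t years of exposure."""
    return cs_lb_cy * math.erfc(x_in / (2.0 * math.sqrt(d_in2_per_yr * t_yr)))

Cs, D, t = 20.0, 0.05, 1.0   # lb/cy, in^2/yr, years (assumed)
for depth in (0.5, 1.0, 1.5, 2.0):
    print(f"{depth:.1f} in: {chloride_profile(depth, Cs, D, t):.2f} lb/cy")
```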
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical coordinates and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction, and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. General regression neural networks are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydrogeological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to make the interfaces user-friendly and easy to use.
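A GRNN is, in essence, Nadaraya-Watson kernel regression: the prediction at a query point is a Gaussian-distance-weighted average of the training targets. A minimal NumPy sketch for 2-D spatial interpolation (toy data, not the SIC 2004 set; the kernel width sigma would normally be tuned by cross-validation):

```python
# Sketch: GRNN-style spatial interpolation as Gaussian kernel regression.

import numpy as np

def grnn_predict(xy_train, z_train, xy_query, sigma=1.0):
    """Predict at query points as Gaussian-weighted means of training values."""
    d2 = ((xy_query[:, None, :] - xy_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))          # Gaussian kernel weights
    return (w @ z_train) / w.sum(axis=1)        # weighted average per query

# Toy example: noisy samples of a smooth spatial field
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1]) + rng.normal(0, 0.1, 200)
query = np.array([[5.0, 5.0], [1.0, 9.0]])
print(grnn_predict(pts, vals, query, sigma=0.8))
```

The single smoothing parameter sigma is the main reason GRNN lends itself to automatic mapping: tuning it by cross-validation requires no user interaction.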
Abstract:
We analyse the variations produced in tsunami propagation and impact along a straight coastline by the presence of a submarine canyon incised in the continental margin. For ease of calculation we assume that the shoreline and the shelf edge are parallel and that the incident wave approaches them normally. A total of 512 synthetic scenarios have been computed by combining the bathymetry of a continental margin incised by a parameterised single canyon with the incident tsunami waves. The margin bathymetry, the canyon and the tsunami waves have been generated using mathematical functions (e.g. Gaussian). The canyon parameters analysed are: (i) incision length into the continental shelf, which for a constant shelf width relates directly to the distance from the canyon head to the coast, (ii) canyon width, and (iii) canyon orientation with respect to the shoreline. The tsunami wave parameters considered are period and sign. The COMCOT tsunami model from Cornell University was applied to propagate the waves across the synthetic bathymetric surfaces. Five simulations of tsunami propagation over a non-canyoned margin were also performed for reference. The analysis of the results reveals a strong variation in tsunami arrival times and in the amplitudes reaching the coastline when a tsunami wave travels over a submarine canyon, with changes in the location of the maximum height and its alongshore extension. In general, the presence of a submarine canyon shortens the arrival time at the shoreline but prevents wave build-up directly over the canyon axis. This leads to a decrease in tsunami amplitude at the coastal stretch located just shoreward of the canyon head, which results in a lower run-up in comparison with a non-canyoned margin. Conversely, an increased wave build-up occurs on both sides of the canyon head, generating two coastal stretches with enhanced run-up. These aggravated or reduced tsunami effects are modified by (i) the proximity of the canyon tip to the coast, which amplifies the wave height, (ii) the canyon width, which enlarges the coastal stretches with lower and higher maximum wave height, and (iii) the canyon obliquity with respect to the shoreline and shelf edge, which increases wave height shoreward of the leeward flank of the canyon. Moreover, the presence of a submarine canyon near the coast produces a variation of wave energy along the shore, eventually resulting in edge waves shoreward of the canyon head. Edge waves subsequently spread alongshore, reaching significant amplitudes especially when they couple with secondary tsunami waves. Model results have been ground-truthed using the actual bathymetry of the Blanes Canyon area in the North Catalan margin. This paper underlines the presence, morphology and orientation of submarine canyons as determining factors in tsunami propagation and impact, which can prevail over other effects deriving from coastal configuration.
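As an illustration of the synthetic-bathymetry approach, not the paper's actual generator, a linearly sloping shelf can be incised with a Gaussian-shaped canyon on a regular grid; all dimensions below are hypothetical:

```python
# Sketch: synthetic canyoned-margin bathymetry built from simple functions,
# in the spirit of the parameterised scenarios described above.

import numpy as np

nx, ny = 200, 100
x = np.linspace(0, 50_000, nx)             # cross-shore distance (m)
y = np.linspace(0, 25_000, ny)             # alongshore distance (m)
X, Y = np.meshgrid(x, y)

shelf = -2.0 - X * (200.0 / 50_000)        # planar shelf deepening to -202 m

# Hypothetical canyon: 3 km wide, centered alongshore, deepening seaward
canyon_axis = 12_500.0                     # alongshore position of axis (m)
width = 3_000.0
incision = 150.0 * (X / 50_000) * np.exp(-((Y - canyon_axis) ** 2)
                                         / (2.0 * width**2))

depth = shelf - incision                   # canyoned-margin bathymetry (m)
print(depth.min(), depth.max())
```

Canyon incision length, width and obliquity can then be swept as parameters to build a scenario set like the 512 combinations described above.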
Abstract:
Back-focal-plane interferometry is used to measure displacements of optically trapped samples with very high spatial and temporal resolution. However, the technique is closely related to a method that measures the rate of change of light momentum. It has long been known that displacements of the interference pattern at the back focal plane may be used to track the optical force directly, provided that a considerable fraction of the light is effectively monitored. Nonetheless, the practical application of this idea has been limited to counter-propagating, low-aperture beams, where accurate momentum measurements are possible. Here, we experimentally show that the connection can be extended to single-beam optical traps. In particular, we show that, in a gradient trap, the calibration product κ·β (where κ is the trap stiffness and 1/β is the position sensitivity) corresponds to the factor that converts detector signals into momentum changes; this factor is uniquely determined by three construction features of the detection instrument and therefore does not depend on the specific conditions of the experiment. Then, we find that force measurements obtained from back-focal-plane displacements are in practice not restricted to a linear relationship with position, and hence can be extended outside that regime. Finally, and more importantly, we show that these properties remain recognizable even when the system is not fully optimized for light collection. These results should enable a more general use of back-focal-plane interferometry whenever the ultimate goal is the measurement of the forces exerted by an optical trap.
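A numerical sketch of the calibration identity described above, with illustrative values: since the inferred position is x = β·S for a detector signal S and the trap force is F = κ·x, the product κ·β converts the raw signal directly into force:

```python
# Sketch: detector signal to force via the calibration product kappa*beta.
# All numbers are illustrative placeholders, not measured calibrations.

kappa = 5.0e-5         # trap stiffness, N/m (i.e., 50 pN/um)
beta = 2.0e-6          # position per detector volt, m/V
signal_volts = 0.15    # hypothetical detector reading

position = beta * signal_volts                      # displacement (m)
force_via_position = kappa * position               # Hooke's-law force (N)
force_via_product = (kappa * beta) * signal_volts   # direct conversion (N)

print(force_via_position, force_via_product)        # identical by construction
```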
Abstract:
We compute the exact vacuum expectation value of 1/2 BPS circular Wilson loops of 𝒩 = 4 U(N) super Yang-Mills in arbitrary irreducible representations. By localization arguments, the computation reduces to evaluating certain integrals in a Gaussian matrix model, which we do using the method of orthogonal polynomials. Our results are particularly simple for Wilson loops in antisymmetric representations; in this case, we observe that the final answers admit an expansion in which the coefficients are positive integers and can be written in terms of sums over skew Young diagrams. As an application of our results, we use them to discuss the exact Bremsstrahlung functions associated with the corresponding heavy probes.
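The exact results rely on orthogonal polynomials, but the fundamental-representation expectation value can also be checked numerically; the following is an illustration, not the paper's method, sampling the Gaussian matrix model by Monte Carlo and comparing with the known large-N answer ⟨W⟩ = (2/√λ)·I₁(√λ):

```python
# Sketch: Monte Carlo estimate of <(1/N) Tr e^M> in the Gaussian matrix
# model with weight exp(-(2N/lam) Tr M^2), the matrix-model representation
# of the fundamental circular Wilson loop.

import numpy as np
from scipy.linalg import expm
from scipy.special import iv

def wilson_loop_mc(N=20, lam=4.0, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    c = np.sqrt(lam / (8.0 * N))       # sets the propagator <|M_ij|^2> = lam/(4N)
    acc = 0.0
    for _ in range(samples):
        a = (rng.standard_normal((N, N))
             + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
        m = c * (a + a.conj().T)       # GUE draw with the required variance
        acc += np.trace(expm(m)).real / N
    return acc / samples

lam = 4.0
print(wilson_loop_mc(lam=lam))                    # Monte Carlo estimate
print(2.0 / np.sqrt(lam) * iv(1, np.sqrt(lam)))   # large-N exact value
```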
Abstract:
This paper presents a validation study of statistical unsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise levels and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of image quality is performed either before or during the classification process, so that both the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data, where a quantitative validation compares the methods' results with a ground truth estimated from manual segmentations by experts. The validity of the various classification methods, both in the labeling of the image and in the estimated tissue volumes, is assessed with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that consider only pure Gaussian classes. Finally, we show that results on simulated data can be extended to real data.
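As a toy version of one model family assessed above, intensity-only classification with a Gaussian mixture can be sketched in a few lines; it deliberately ignores spatial context, which is precisely the limitation the study documents. The intensities are synthetic, not MR data:

```python
# Sketch: intensity-only tissue classification with a Gaussian mixture model.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 1-D intensities for three "tissues" (e.g., CSF, gray, white)
intensities = np.concatenate([
    rng.normal(50, 8, 500),
    rng.normal(110, 10, 800),
    rng.normal(160, 9, 700),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)       # voxel-wise class assignments
print(np.round(gmm.means_.ravel()))     # recovered class means
```

Mixture (partial volume) classes and Markov-random-field spatial priors extend this pure-Gaussian baseline in the directions the paper evaluates.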
A performance lower bound for quadratic timing recovery accounting for the symbol transition density
Abstract:
The symbol transition density in a digitally modulated signal affects the performance of practical synchronization schemes designed for timing recovery. This paper focuses on the derivation of simple performance limits for the estimation of the time delay of a noisy linearly modulated signal in the presence of various degrees of symbol correlation produced by the various transition densities in the symbol streams. The paper develops high- and low-signal-to-noise ratio (SNR) approximations of the so-called (Gaussian) unconditional Cramér–Rao bound (UCRB), as well as general expressions that are applicable in all ranges of SNR. The derived bounds are valid only for the class of quadratic, non-data-aided (NDA) timing recovery schemes. To illustrate the validity of the derived bounds, they are compared with the actual performance achieved by some well-known quadratic NDA timing recovery schemes. The impact of the symbol transition density on the classical threshold effect present in NDA timing recovery schemes is also analyzed. Previous work on performance bounds for timing recovery from various authors is generalized and unified in this contribution.
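A representative member of the quadratic NDA class covered by these bounds is the Oerder-Meyr square-law estimator, which reads the symbol timing off the phase of the symbol-rate spectral line of |x|². A self-contained sketch with an illustrative raised-cosine pulse train:

```python
# Sketch: Oerder-Meyr square-law (quadratic, non-data-aided) timing recovery.
# Signal setup (BPSK, raised-cosine pulses, toy parameters) is illustrative.

import numpy as np

def rc_pulse(t, beta):
    """Raised-cosine pulse; handles the t = +-1/(2*beta) singularity."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    near = np.abs(denom) < 1e-10
    p = np.sinc(t) * np.cos(np.pi * beta * t) / np.where(near, 1.0, denom)
    p[near] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return p

def oerder_meyr_delay(x, sps):
    """Timing estimate (fraction of a symbol) from the |x|^2 spectral line."""
    n = np.arange(len(x))
    tone = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * n / sps))
    return -np.angle(tone) / (2.0 * np.pi)

rng = np.random.default_rng(0)
sps, delay, n_sym = 8, 0.25, 400
symbols = rng.choice([-1.0, 1.0], size=n_sym)
t = np.arange(n_sym * sps) / sps - delay
x = sum(s * rc_pulse(t - k, beta=0.5) for k, s in enumerate(symbols))
print(oerder_meyr_delay(x, sps))    # should be close to 0.25
```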
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) where the transmitted signal contains a code division multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML) and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences, under the approximation of high spreading factors and a high number of codes. The results provide a useful tool to describe the performance of the channel estimators as a function of basic system parameters such as the number of codes, spreading factors, or traffic-to-training power ratio.
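Under Gaussian noise, the training-only estimator reduces to least squares on the known pilot chips. A bare-bones sketch with an arbitrary toy channel, pilot length and noise level, not the paper's WCDMA signal model:

```python
# Sketch: training-only least-squares estimation of an FIR channel from a
# known pilot chip sequence. All parameters are illustrative placeholders.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
L = 4                                                 # channel taps
pilot = rng.choice([-1.0, 1.0], size=256)             # training chips
h_true = np.array([1.0, 0.5, -0.3, 0.1])              # unknown channel

# Convolution matrix P such that received = P @ h + noise
P = toeplitz(pilot, np.r_[pilot[0], np.zeros(L - 1)])
received = P @ h_true + 0.05 * rng.standard_normal(len(pilot))

h_hat, *_ = np.linalg.lstsq(P, received, rcond=None)  # training-only LS/ML
print(np.round(h_hat, 3))                             # close to h_true
```

The semi-blind CML and GML estimators analyzed in the paper additionally exploit the structure of the unknown traffic codes, which is what the derived asymptotic covariances quantify.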
Abstract:
The objective of this paper is to introduce a fourth-order cost function of the displaced frame difference (DFD) capable of estimating motion even for small regions or blocks. Using higher than second-order statistics is appropriate when the image sequence is severely corrupted by additive Gaussian noise. Some results are presented and compared with those obtained from the mean kurtosis and the mean square error of the DFD.
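A sketch of the idea: for each candidate displacement, form the DFD and score it with a fourth-order statistic, which vanishes for Gaussian data and is therefore insensitive to additive Gaussian noise. This is an illustrative implementation on synthetic frames, not the paper's exact cost function:

```python
# Sketch: block matching with a fourth-order (cumulant-based) DFD cost.
# At the true displacement the DFD is pure Gaussian noise, so its
# fourth-order cumulant is near zero; elsewhere image structure remains.

import numpy as np

def best_displacement(block, ref, top, left, search=4):
    """(dy, dx) minimizing |fourth-order cumulant| of the DFD."""
    h, w = block.shape
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[top + dy: top + dy + h, left + dx: left + dx + w]
            d = block - cand
            cost = np.mean(d ** 4) - 3.0 * np.mean(d ** 2) ** 2
            if best is None or abs(cost) < abs(best[0]):
                best = (cost, dy, dx)
    return best[1:]

rng = np.random.default_rng(0)
ref = rng.laplace(size=(64, 64))                 # heavy-tailed synthetic frame
cur = np.roll(ref, (2, -1), axis=(0, 1)) + rng.normal(0, 0.3, (64, 64))
print(best_displacement(cur[20:36, 20:36], ref, 20, 20))  # should be (-2, 1)
```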
Abstract:
In this paper we develop a new linear approach to identify the parameters of a moving average (MA) model from the statistics of the output. First, we show that, under some constraints, the impulse response of the system can be expressed as a linear combination of cumulant slices. Then, this result is used to obtain a new well-conditioned linear method to estimate the MA parameters of a non-Gaussian process. The proposed method presents several important differences with existing linear approaches. The linear combination of slices used to compute the MA parameters can be constructed from different sets of cumulants of different orders, providing a general framework where all the statistics can be combined. Furthermore, it is not necessary to use second-order statistics (the autocorrelation slice), and therefore the proposed algorithm still provides consistent estimates in the presence of colored Gaussian noise. Another advantage of the method is that, while most linear methods developed so far give totally erroneous estimates if the order is overestimated, the proposed approach does not require a previous estimation of the filter order. The simulation results confirm the good numerical conditioning of the algorithm and the improvement in performance with respect to existing methods.
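The proposed algorithm combines several cumulant slices; as a simpler single-slice point of reference, not the method of the paper, the classical Giannakis formula recovers MA(q) coefficients as b(k) = c₃(q,k)/c₃(q,0) when the driving noise is non-Gaussian:

```python
# Sketch: MA parameter recovery from a single third-order cumulant slice
# (Giannakis formula), assuming b(0) = 1 and skewed i.i.d. driving noise.

import numpy as np

def c3(y, i, j):
    """Sample third-order cumulant E[y(n) y(n+i) y(n+j)], zero-mean series."""
    n = len(y) - max(i, j, 0)
    return np.mean(y[:n] * y[i:i + n] * y[j:j + n])

rng = np.random.default_rng(0)
e = rng.exponential(1.0, 200_000) - 1.0      # skewed, zero-mean input
b = np.array([1.0, -0.9, 0.4])               # MA(2) system, b(0) = 1
y = np.convolve(e, b)[: len(e)]
y -= y.mean()

q = 2
print([c3(y, q, k) / c3(y, q, 0) for k in range(q + 1)])  # ~ [1, -0.9, 0.4]
```

The single-slice formula illustrates why combining slices of several orders, as the paper proposes, improves conditioning: each estimate above rests on one cumulant value that may be small or noisy.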
Abstract:
This paper addresses the estimation of the code phase (pseudorange) and the carrier phase of the direct signal received from a direct-sequence spread-spectrum satellite transmitter. The signal is received by an antenna array in a scenario with interference and multipath propagation. These two effects are generally the limiting error sources in most high-precision positioning applications. A new estimator of the code and carrier phases is derived by using a simplified signal model and the maximum likelihood (ML) principle. The simplified model consists essentially of gathering all signals, except for the direct one, in a component with unknown spatial correlation. The estimator exploits knowledge of the direction of arrival of the direct signal and is much simpler than other estimators derived under more detailed signal models. Moreover, we present an iterative algorithm that is adequate for a practical implementation and explores an interesting link between the ML estimator and a hybrid beamformer. The mean squared error and bias of the new estimator are computed for a number of scenarios and compared with those of other methods. The presented estimator and the hybrid beamformer outperform existing techniques of comparable complexity and attain, in many situations, the Cramér–Rao lower bound of the problem at hand.
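Underlying the pseudorange measurement is the correlation of the received samples with a local replica of the spreading code; the lag of the correlation peak is the code-phase estimate. A toy single-antenna sketch, not the array-based ML estimator of the paper:

```python
# Sketch: code-phase estimation by circular cross-correlation with a local
# replica of the spreading code. All signal parameters are toy values.

import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)          # PRN-like chip sequence
true_delay = 137                                   # chips

rx = np.roll(code, true_delay) + rng.normal(0, 1.0, code.size)  # noisy receive

# Circular cross-correlation via FFT; the peak lag is the code phase
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
print(int(np.argmax(corr)))                        # should print 137
```

The array processing in the paper essentially sharpens this correlation by spatially suppressing multipath and interference before (or jointly with) the delay estimation.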
Abstract:
The objective of this work was to select semivariogram models to estimate the population density of the fig fly (Zaprionus indianus; Diptera: Drosophilidae) throughout the year, using ordinary kriging. Nineteen monitoring sites were demarcated in an area of 8,200 m² cropped with six fruit tree species: persimmon, citrus, fig, guava, apple, and peach. Over a 24-month period, 106 weekly evaluations were made at these sites. The average number of adult fig flies captured weekly per trap during each month was fitted with the circular, spherical, pentaspherical, exponential, Gaussian, rational quadratic, hole effect, K-Bessel, J-Bessel, and stable semivariogram models, using ordinary kriging interpolation. The models with the best fit were selected by cross-validation. Each data set (month) has a particular spatial dependence structure, which makes it necessary to define specific semivariogram models to improve the fit to the experimental semivariogram. Therefore, it was not possible to determine a standard semivariogram model; instead, six theoretical models were selected: circular, Gaussian, hole effect, K-Bessel, J-Bessel, and stable.
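The model-fitting step described above can be sketched as: compute an empirical semivariogram from the point data, then fit a candidate model (here the Gaussian model, one of the six selected) by least squares, with cross-validation driving the final choice. Data below are synthetic placeholders, not the fig-fly counts:

```python
# Sketch: empirical semivariogram plus least-squares fit of a Gaussian
# semivariogram model (nugget + sill + range parameters).

import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def empirical_semivariogram(xy, z, bins):
    d = pdist(xy)                                        # pairwise distances
    g = 0.5 * pdist(z.reshape(-1, 1), metric="sqeuclidean")  # semivariances
    idx = np.digitize(d, bins)
    centers = np.array([(bins[i - 1] + bins[i]) / 2 for i in range(1, len(bins))])
    gamma = np.array([g[idx == i].mean() for i in range(1, len(bins))])
    return centers, gamma

def gaussian_model(h, nugget, sill, a):
    return nugget + sill * (1.0 - np.exp(-(h / a) ** 2))

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (150, 2))                   # hypothetical trap sites
z = np.sin(xy[:, 0] / 15) + rng.normal(0, 0.2, 150)  # spatially structured data

h, gamma = empirical_semivariogram(xy, z, bins=np.linspace(0, 60, 13))
params, _ = curve_fit(gaussian_model, h, gamma, p0=[0.1, 1.0, 20.0])
print(dict(zip(["nugget", "sill", "range"], np.round(params, 3))))
```

In the study, each month's data set would get this treatment for all ten candidate models, with cross-validation statistics deciding which model is retained.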