918 results for Self Organising Systems


Relevance: 90.00%

Abstract:

Objectives: This study evaluated the immediate and 6-month resin-dentin microtensile bond strength (μTBS) of one-step self-etch systems (Adper Prompt L-Pop [AD], 3M ESPE; Xeno III [XE], Dentsply De Trey; iBond [iB], Heraeus Kulzer) under different application modes. Materials and methods: Occlusal dentin surfaces were exposed by grinding with 600-grit SiC paper. The adhesives were applied according to the manufacturer's directions [MD], with double application of the adhesive layer [DA], or following the manufacturer's directions plus a hydrophobic resin layer coating [HL]. After applying the adhesive resins, composite crowns were built up incrementally. After 24-h water storage, the specimens were serially sectioned in the "x" and "y" directions to obtain bonded sticks of about 0.8 mm², to be tested immediately [IM] or after 6 months of water storage [6M] at a crosshead speed of 0.5 mm/min. The data from each adhesive were analyzed by a two-way repeated-measures ANOVA (mode of application vs. storage time) and Tukey's test (alpha = 0.05). Results: The adhesives performed differently according to the application mode. The DA and HL modes either improved the immediate performance of the adhesive or did not differ from MD. The resin-dentin bond strength values observed after 6 months were higher when a hydrophobic resin coat was used than when only the manufacturer's directions were followed. Conclusions: Double application of a one-step self-etch system can be safely performed; moreover, the application of an additional hydrophobic resin layer can improve the immediate resin-dentin bonds and reduce the degradation of resin bonds over time. (c) 2008 Elsevier Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

Power law (PL) distributions have been widely reported in the modeling of distinct real phenomena and have been associated with fractal structures and self-similar systems. In this paper, we analyze real data that follow a PL or a double-PL behavior and verify the relation between the PL coefficient and the capacity dimension of known fractals. A method is proposed that translates PL coefficients into the capacity dimension of fractals for any real data.
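The first step of such an analysis, estimating the PL coefficient from data, can be illustrated with a short sketch. This is not the authors' proposed method; the synthetic data, the exponent value, and the fitting choice (least squares on the log-log complementary CDF) are all assumptions made for illustration:

```python
import numpy as np

# Generate synthetic data with density P(x) ~ x^(-alpha), x >= 1,
# via inverse-transform sampling, then recover alpha from the slope
# of the empirical complementary CDF (CCDF) in log-log scale.
rng = np.random.default_rng(0)
alpha = 2.5
u = rng.uniform(size=10_000)
x = (1.0 - u) ** (-1.0 / (alpha - 1.0))   # Pareto-type samples

x_sorted = np.sort(x)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size
mask = ccdf > 0                            # drop the zero at the last point
# For a PL, log CCDF vs. log x is linear with slope -(alpha - 1).
slope, _ = np.polyfit(np.log(x_sorted[mask]), np.log(ccdf[mask]), 1)
alpha_hat = 1.0 - slope
print(round(alpha_hat, 2))
```

A double-PL behavior would show up here as two distinct linear regimes in the log-log CCDF, each fitted separately.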

Relevance: 90.00%

Abstract:

The synthesis of magnetic nanoparticles with monodisperse size distributions, their self-assembly into ordered arrays, and their magnetic behavior as a function of structural order (ferrofluids and 2D assemblies) are presented. Magnetic colloids of monodisperse, passivated cobalt nanocrystals were produced by the rapid pyrolysis of cobalt carbonyl in solution. The size, size distribution (std. dev. < 5%), and shape of the nanocrystals were controlled by varying the surfactant, its concentration, the reaction rate, and the reaction temperature. The Co particles are defect-free single crystals with a complex cubic structure related to the beta phase of manganese (epsilon-Co). In the 2D assembly, collective behavior was observed in the low-field susceptibility measurements, where the magnetization of the zero-field-cooled process increases steadily while the magnetization of the field-cooled process is independent of temperature. This differs from the behavior observed in a sample composed of disordered interacting particles. A strong paramagnetic contribution appears at very low temperatures, where the magnetization increases drastically after field cooling the sample. This has been attributed to the Co surfactant-particle interface, since no magnetic atomic impurities are present in these samples.

Relevance: 90.00%

Abstract:

The main function of a roadway culvert is to effectively convey drainage flow during normal and extreme hydrologic conditions. This function is often impaired by sedimentation blockage of the culvert. This research sought to understand the mechanics of the sedimentation process at multi-box culverts and to develop self-cleaning systems that flush out sediment deposits using the power of drainage flows. The research entailed field observations, laboratory experiments, and numerical simulations. The specific role of each of these investigative tools is summarized below: a) The field observations were aimed at understanding typical sedimentation patterns and their dependence on culvert geometry and hydrodynamic conditions during normal and extreme hydrologic events. b) The laboratory experiments were used for modeling the sedimentation process observed in situ and for testing alternative self-cleaning concepts applied to culverts. The major tasks of the initial laboratory model study were to accurately replicate the culvert performance curves and the dynamics of the sedimentation process, and to provide benchmark data for numerical simulation validation. c) The numerical simulations enhanced the understanding of the sedimentation processes and aided in testing flow cases complementary to those conducted in the model, reducing the number of (more expensive) tests to be conducted in the laboratory. Using the findings acquired from the laboratory and simulation work, self-cleaning culvert concepts were developed and tested for a range of flow conditions. The screening of the alternative concepts was made through experimental studies in a 1:20 scale model guided by numerical simulations. Performance studies were finally conducted in a 1:20 hydraulic model using the most promising design alternatives, to ensure that the proposed systems operate satisfactorily under conditions closer to natural scale.

Relevance: 90.00%

Abstract:

Abstract (translated from French): This thesis is devoted to the analysis, modeling, and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, in a broad sense, a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient for modeling. They can solve classification, regression, and probability density modeling problems in high-dimensional spaces composed of spatialized informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited for implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences.
The main algorithms described are the multilayer perceptron (MultiLayer Perceptron, MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organized maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression, and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations that makes it possible to detect the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization qualities. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the 2004 Spatial Interpolation Comparison (SIC) data, on which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools, and guided examples. An important part of the work consists of a collection of software tools, Machine Learning Office. This software collection was developed over the last 15 years and has been used in teaching numerous courses, including international workshops in China, France, Italy, Ireland, and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real geo-environmental problems of low and high dimensionality, such as air, soil, and water pollution by radioactive products and heavy metals; the classification of soil types and hydrogeological units; uncertainty mapping for decision support; and the estimation of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling, and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited for implementation as predictive engines in decision-support systems, for purposes of environmental data mining including pattern recognition, modeling, and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least as described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed during the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; soil-type and hydrogeological-unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
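The GRNN highlighted above reduces, in its simplest form, to Nadaraya-Watson kernel regression: the prediction at a query point is a kernel-weighted average of the training targets. The following minimal sketch (synthetic 1-D data; the bandwidth value and data are illustrative assumptions, not the thesis software) shows the idea:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction (Nadaraya-Watson form): each query output is a
    Gaussian-kernel-weighted average of the training targets."""
    # Squared Euclidean distances between every query and training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel weights
    return (w @ y_train) / w.sum(axis=1)        # normalized weighted average

# Illustrative example (not thesis data): recover a noisy sine curve.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Xq = np.array([[np.pi / 2]])
print(grnn_predict(X, y, Xq, sigma=0.3))  # close to sin(pi/2) = 1
```

The single smoothing parameter `sigma` is what makes the GRNN attractive for automatic mapping: it can be tuned by cross-validation with no iterative training.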

Relevance: 90.00%

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm-robotic approaches, leading to swarm-array computing, a novel technique to achieve self-managing distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.

Relevance: 90.00%

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm-robotic approaches, leading to swarm-array computing, a novel technique to achieve autonomy for distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.

Relevance: 90.00%

Abstract:

The work reported in this paper proposes swarm-array computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing. The approach aims to apply autonomic computing constructs to parallel computing systems and, in effect, achieve the self-ware objectives that describe self-managing systems. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm, and the landscape, is considered. Approaches that bind these constituents together are proposed. Space applications employing FPGAs are identified as a potential area for applying swarm-array computing to build reliable systems. The feasibility of a proposed approach is validated on the SeSAm multi-agent simulator, with landscapes generated using the MATLAB toolkit.
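One way to picture the "swarm of tasks" constituent is with a toy agent-based sketch. Everything here is a hypothetical illustration, not the SeSAm model: the grid size, the random failure model, and the migration rule are all invented. Tasks behave as mobile agents on an array of FPGA-like cores and migrate off failed cores to healthy neighbours:

```python
import random

random.seed(42)
GRID = 4  # 4x4 array of cores (illustrative size)
healthy = {(r, c): True for r in range(GRID) for c in range(GRID)}
tasks = {0: (0, 0), 1: (2, 3)}   # task id -> core coordinate

def neighbours(r, c):
    """4-connected neighbouring cores inside the grid."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < GRID and 0 <= c + dc < GRID]

def step():
    """Inject one random core failure, then let stranded tasks migrate."""
    failed = random.choice(list(healthy))
    healthy[failed] = False
    for tid, core in tasks.items():
        if not healthy[core]:
            options = [n for n in neighbours(*core) if healthy[n]]
            if options:
                tasks[tid] = random.choice(options)  # self-healing move

for _ in range(5):
    step()
print(tasks)
```

The self-ware property shows up as an emergent effect of the local migration rule: no central controller reassigns the tasks.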

Relevance: 90.00%

Abstract:

Statement of the problem: The performance of self-etch systems on enamel is controversial and seems to depend on the application technique and the enamel preparation. Purpose of the study: To examine the effects of conditioning time and enamel surface preparation on the bond strength and etching pattern of adhesive systems to enamel. Materials and methods: Ninety-six teeth were divided into 16 conditions (N = 6) as a function of enamel preparation and conditioning time for the bond strength test. The adhesive systems OptiBond FL (Kerr, Orange, CA, USA), OptiBond SOLO Plus (Kerr), Clearfil SE Bond (Kuraray, Osaka, Japan), and Adper Prompt L-Pop (3M ESPE, St. Paul, MN, USA) were applied on unground or ground enamel following the manufacturers' directions or doubling the conditioning time. Cylinders of Filtek Flow (0.5-mm height) were applied to each bonded enamel surface using a Tygon tube (0.7 mm in diameter; Saint-Gobain Corp., Aurora, OH, USA). After storage (24 h/37 degrees C), the specimens were subjected to shear force (0.5 mm/min). The data were treated by a three-way analysis of variance and Tukey's test (alpha = 0.05). The failure modes of the debonded interfaces and the etching patterns of the adhesives were observed using scanning electron microscopy. Results: Only the main factor "adhesive" was statistically significant (p < 0.001). The lowest bond strength value was observed for OptiBond FL. The most defined etching pattern was observed for 35% phosphoric acid and for Adper Prompt L-Pop. Mixed failures were observed for all adhesives, but OptiBond FL showed predominantly cohesive failures in resin. Conclusions: Increasing the conditioning time and pretreating the enamel did not increase the resin-enamel bond strength values for the studied adhesives. Clinical significance: The surface enamel preparation and the conditioning time do not affect the performance of self-etch systems on enamel. (J Esthet Restor Dent 20:322-336, 2008)

Relevance: 90.00%

Abstract:

The motivation for this thesis work is the need to improve the reliability of equipment and the quality of service to railway passengers, together with the requirement for cost-effective and efficient condition-maintenance management in rail transportation. This thesis develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is done manually by a human operator, where people rely on inference systems and assumptions to develop conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before the failure occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation, and traffic safety, where safety is of prime importance, needs non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator who moves along the rail sleeper and gathers information by visual and sound analysis to examine the presence of cracks. Human inspectors working on lines visually inspect wooden rails to judge the quality of the rail sleepers. In this project, a machine vision system is developed based on the manual visual analysis procedure, using digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, can be error-prone, and is sometimes difficult even for a human operator because of frequent changes in the inspected material. The machine vision system developed classifies the condition of the material by examining individual pixels of images, processing them, and attempting to develop conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge from the manual procedure.
The pattern recognition approach for this thesis work was developed using a non-destructive testing method to identify the flaws found in manually performed condition monitoring of sleepers. In this method, a test vehicle is designed to capture sleeper images similar to visual inspection by a human operator, and the raw data for the pattern recognition approach are provided by the captured images of the wooden sleepers. The data from the NDT method were further processed and appropriate features were extracted. The aim of data collection by the NDT method is to achieve reliable classification results with high accuracy. A key idea is to use an unsupervised classifier, based on the extracted features, to discriminate the condition of wooden sleepers into either good or bad. A self-organising map is used as the classifier for wooden sleeper classification. In order to achieve greater integration, the data collected by the machine vision system were made to interface with one another by a strategy called fusion. Data fusion was considered at two different levels, namely sensor-level fusion and feature-level fusion. As the goal was to reduce human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared to the actual classification, were satisfactory.
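The self-organising map used as the unsupervised classifier can be sketched in a few lines. This is a generic minimal 1-D SOM on synthetic two-dimensional "sleeper feature" vectors; the cluster centres, map size, and learning schedule are invented for illustration and are not the thesis implementation (whose features come from the NDT imagery):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature vectors: two well-separated clusters standing in
# for "good" and "bad" sleeper conditions.
good = rng.normal([0.2, 0.2], 0.05, size=(100, 2))
bad = rng.normal([0.8, 0.8], 0.05, size=(100, 2))
data = np.vstack([good, bad])
rng.shuffle(data)

n_units = 4
weights = rng.uniform(0, 1, size=(n_units, 2))   # 1-D map of 4 units
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                  # decaying learning rate
    radius = max(1.0 * (1 - epoch / 20), 0.01)   # decaying neighbourhood
    for x in data:
        # Best-matching unit (BMU): unit closest to the sample
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        dist = np.abs(np.arange(n_units) - bmu)  # distance along the map
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# After training, different units respond to the two clusters, so the
# BMU index can serve as an unsupervised good/bad label.
print(np.round(weights, 2))
```

Classifying a new sleeper then amounts to finding its BMU and reading off the cluster that unit specialised on.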

Relevance: 90.00%

Abstract:

Bolted joints are a form of mechanical coupling widely used in machinery due to their reliability and low cost. Failure of bolted joints can lead to catastrophic events, such as leaking, train derailments, and aircraft crashes. Most of these failures occur due to a reduction of the pre-load, induced by mechanical vibration or by human error in the assembly or maintenance process. This article investigates the application of shape memory alloy (SMA) washers as actuators to increase the pre-load on loosened bolted joints. The application of the SMA washer follows a structural health monitoring procedure to identify the occurrence of damage (a reduction in pre-load). In this article, a thermo-mechanical model is presented to predict the final pre-load achieved using this kind of actuator, based on the heat input and the SMA washer dimensions. This model extends and improves on the previous model of Ghorashi and Inman [2004, "Shape Memory Alloy in Tension and Compression and its Application as Clamping Force Actuator in a Bolted Joint: Part 2 - Modeling," J. Intell. Mater. Syst. Struct., 15:589-600] by eliminating the pre-load term related to nut turning, making the system more practical. The complete model is a powerful but complex tool for designers. A novel modeling approach for self-healing bolted joints, based on curve fitting of experimental data, is presented. The article concludes with an experimental application that leads to a change in joint assembly to increase system reliability by removing the ceramic washer component. Further research topics are also suggested.
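The curve-fitting idea can be illustrated with a short sketch. The numbers below are entirely hypothetical, not the article's measurements: the point is only that a low-order polynomial fitted to (heat input, recovered pre-load) pairs gives a simple design tool in place of the full thermo-mechanical model:

```python
import numpy as np

# Hypothetical experimental data: heat supplied to the SMA washer [J]
# versus the pre-load it recovers in the joint [kN].
heat_J = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
preload_kN = np.array([1.1, 2.8, 4.9, 6.2, 6.8])

coeffs = np.polyfit(heat_J, preload_kN, deg=2)   # quadratic least-squares fit
model = np.poly1d(coeffs)
# Interpolate the fitted curve to predict the pre-load for a new heat input.
print(round(float(model(35.0)), 2))
```

In practice the fit would be inverted: given a target pre-load identified by the health-monitoring step, solve the curve for the required heat input.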

Relevance: 90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%

Abstract:

In practical engineering systems, the excitation source generally depends on the dynamic structure of the system. In this paper we analyze a self-excited oscillating system, due to dry friction, which interacts with an energy source of limited power supply (a non-ideal problem). The mechanical system consists of an oscillator sliding on a moving belt driven by a limited power supply. In the oscillating system considered here, dry friction acts as the excitation mechanism for stick-slip oscillations. The stick-slip chaotic oscillations are investigated because knowledge of their dynamic characteristics is an important step in system design and control. Many engineering systems, such as machine tools, oil well drillstrings, and car brakes, exhibit stick-slip chaotic oscillations.
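The basic stick-slip mechanism can be reproduced with a minimal simulation. This sketch uses illustrative parameters and an ideal belt drive (constant belt speed), so it does not capture the paper's non-ideal coupling to a limited power supply; it only shows how static/kinetic dry friction produces alternating stick and slip phases in a mass-spring oscillator on a moving belt:

```python
import math

m, k = 1.0, 1.0                 # mass [kg], spring stiffness [N/m]
mu_s, mu_k, g = 0.5, 0.3, 9.81  # static/kinetic friction coefficients, gravity
v_belt = 0.1                    # belt speed [m/s] (ideal source assumed)
dt, steps = 1e-3, 60_000

x, v = 0.0, v_belt              # start stuck to the belt
sticking = True
xs = []
for _ in range(steps):
    spring = -k * x
    if sticking and abs(spring) <= mu_s * m * g:
        # Stick phase: the mass rides the belt until the spring force
        # exceeds the maximum static friction force.
        v = v_belt
        x += v * dt
        xs.append(x)
        continue
    sticking = False
    # Slip phase: kinetic friction opposes the relative velocity.
    rel = v - v_belt
    friction = -mu_k * m * g * math.copysign(1.0, rel) if abs(rel) > 1e-9 else 0.0
    v += (spring + friction) / m * dt
    x += v * dt
    # Re-stick when the relative velocity vanishes and static friction
    # can again hold the mass against the spring.
    if abs(v - v_belt) < 1e-3 and abs(spring) <= mu_s * m * g:
        sticking = True
        v = v_belt
    xs.append(x)

print(round(max(xs), 3))  # displacement at which the first slip begins
```

In the non-ideal case studied in the paper, `v_belt` would itself be a state variable coupled to the motor dynamics, which is what opens the route to the chaotic stick-slip regimes.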