934 results for interlocked architectures


Relevance:

10.00%

Publisher:

Abstract:

The production of waste by urban and industrial activities is one of the factors of environmental contamination and has drawn the attention of the scientific community toward its reuse. The city of Salvador, Bahia, with approximately 262 channels responsible for storm-water runoff, produces every year, through channel cleaning and dredging interventions, a significant volume of sediment (dredged mud) that demands an appropriate methodology for its final disposal. This study assesses the influence of incorporating these tailings into clay matrices for the production of interlocked ceramic blocks, also known as ceramic pavers. All raw materials, sourced from the metropolitan region of Salvador (RMS), were characterized by X-ray fluorescence, X-ray diffraction, thermal analysis (TG and DTA), particle-size analysis and dilatometry. Using a statistical design-of-experiments technique, the study region of the ternary diagram and the formulations to be analyzed were defined. Specimens measuring 60 mm × 20 mm × 5 mm were prepared by uniaxial pressing at 30 MPa and, after sintering at 900 °C, 1000 °C and 1100 °C, the technological properties were evaluated: linear shrinkage, water absorption, apparent porosity, apparent specific mass and flexural rupture modulus. For the uniaxial compressive strength, cylindrical test bodies of Ø 50 mm were used. The standard mass (MP) was prepared with 90 wt% clay and 10 wt% channel sediment (SCP), and no significant variations in the properties of the final product were observed. Incorporating 10 wt% manganese residue (PFM) and 10 wt% ceramic waste (RCB) into the standard mass, besides adjusting the plasticity owing to the lower clay content of the waste, increased the linear firing shrinkage: the significant concentration of K2O forms a liquid phase at low temperature, which lowered porosity and raised mechanical strength, a maximum compressive strength of 92.5 MPa being verified. After leachate and solubilized-extract tests, the piece containing 10% PFM was classified as non-hazardous, inert material according to ABNT NBR 10004/04. The results showed the feasibility of using the SCP, RCB and PFM wastes in clay masses, at temperatures above 900 °C, for ceramic paver production in accordance with the specifications of the technical standards; to exceed 10% PFM, however, it becomes imperative to conduct environmental impact studies.
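
Two of the evaluated technological properties, linear firing shrinkage and water absorption, follow standard ceramic-technology definitions; the short sketch below illustrates them with invented specimen values (the formulas are standard, not taken from the dissertation).

```python
# Hypothetical helper illustrating the standard ceramic-technology formulas
# behind two of the evaluated properties. Specimen values are made up.

def linear_shrinkage(length_dry_mm: float, length_fired_mm: float) -> float:
    """Linear firing shrinkage (%) from dry and fired specimen lengths."""
    return 100.0 * (length_dry_mm - length_fired_mm) / length_dry_mm

def water_absorption(mass_dry_g: float, mass_saturated_g: float) -> float:
    """Water absorption (%) from dry and water-saturated specimen masses."""
    return 100.0 * (mass_saturated_g - mass_dry_g) / mass_dry_g

if __name__ == "__main__":
    # Illustrative numbers only: a 60 mm specimen that shrank to 57.6 mm.
    print(f"LS = {linear_shrinkage(60.0, 57.6):.1f}%")   # LS = 4.0%
    print(f"WA = {water_absorption(10.0, 11.2):.1f}%")   # WA = 12.0%
```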

Relevance:

10.00%

Publisher:

Abstract:

This master's dissertation presents the study and implementation of intelligent algorithms to monitor the measurements of sensors involved in natural gas custody transfer processes. To create these algorithms, Artificial Neural Networks are investigated because of particular properties such as learning, adaptation and prediction. A neural predictor is developed to reproduce the dynamic behavior of the sensor output, so that its output can be compared to the real sensor output. A recurrent neural network is used for this purpose because of its ability to deal with dynamic information. The real sensor output and the estimated predictor output form the basis for possible sensor fault detection and diagnosis strategies. Two competitive neural network architectures are investigated and their capabilities are used to classify different kinds of faults. The prediction algorithm and the fault detection and classification strategies, as well as the results obtained, are presented.
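
The fault-detection strategy the abstract describes compares the real sensor output against the predictor's estimate. A minimal sketch of this residual test follows, with a toy signal and an assumed threshold standing in for the recurrent predictor and real sensor data.

```python
# A minimal sketch of residual-based sensor fault detection, assuming a
# one-step-ahead predictor stands in for the dissertation's recurrent network.
import numpy as np

def detect_faults(sensor: np.ndarray, predicted: np.ndarray,
                  threshold: float) -> np.ndarray:
    """Flag samples whose prediction residual exceeds the threshold."""
    residual = np.abs(sensor - predicted)
    return residual > threshold

# Toy example: a slowly varying signal with an abrupt fault injected at t=80.
t = np.arange(100)
sensor = np.sin(0.1 * t)
sensor[80:] += 1.5                     # simulated stuck-high fault
predicted = np.sin(0.1 * t)            # ideal predictor output
print(np.where(detect_faults(sensor, predicted, 0.5))[0][:5])  # [80 81 82 83 84]
```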

Relevance:

10.00%

Publisher:

Abstract:

With the worldwide growth of energy consumption, conventional reservoirs, the so-called "easy exploration and production" reservoirs, no longer meet the global energy demand. This has led many researchers to develop projects addressing these needs, and companies in the oil sector have invested in techniques that help locate and drill wells. One of the techniques employed in the oil exploration process is Reverse Time Migration (RTM), a seismic imaging method that produces excellent images of the subsurface. The algorithm is based on the numerical solution of the wave equation, and RTM is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves whose location requires RTM is very high, which makes the development of these algorithms a competitive differentiator for seismic processing companies. The method, however, demands great computational power, which still hampers its practical adoption. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically GPUs using CUDA, analyzing the difficulties of its development as well as the performance of the sequential and parallel versions of the algorithm.
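
At the core of RTM is the repeated numerical solution of the wave equation on a grid, the part that maps naturally onto GPU threads. Below is a minimal sketch of one such finite-difference time step for the 2-D acoustic case; grid size, velocity and step values are illustrative, not taken from the thesis.

```python
# A minimal sketch of the finite-difference kernel at the heart of RTM:
# one time step of the 2-D acoustic wave equation on a regular grid.
import numpy as np

def wave_step(p_prev, p_curr, vel, dt, dx):
    """Advance the pressure field: p_next = 2p - p_prev + (v*dt/dx)^2 * lap(p)."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    return 2.0 * p_curr - p_prev + (vel * dt / dx) ** 2 * lap

nx = nz = 200
p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
p_curr[nz // 2, nx // 2] = 1.0     # point source at the grid center
for _ in range(100):               # forward propagation loop
    p_prev, p_curr = p_curr, wave_step(p_prev, p_curr,
                                       vel=1500.0, dt=1e-3, dx=5.0)
```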

Relevance:

10.00%

Publisher:

Abstract:

In this paper, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for the study of rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture comprised eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance for the angle of repose was poor, whereas the Carr index and Hausner ratio (CI and HR, respectively) showed very good fitting capacity and learning; HR and CI were therefore considered suitable descriptors for the next stage of development of supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive strategies, classic Winner-Take-All, Frequency-Sensitive Competitive Learning and Rival-Penalized Competitive Learning (WTA, FSCL and RPCL, respectively), were able to cluster the database, but the classification was very poor, with severe errors such as grouping data with conflicting properties into the same cluster or even the same neuron; moreover, the criteria these networks adopted for clustering could not be established. Self-Organizing Maps (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major data groupings, corresponding to lactose (LAC) and cellulose (CEL). SOM, however, made some errors in classifying data from the minority excipients magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification and resolved the misclassifications of the SOM, being the most appropriate network for classifying the data of this study. The use of the NG network in pharmaceutical technology was hitherto unpublished; NG therefore has great potential for use in software for automated classification of pharmaceutical powders and as a new tool for mining and clustering data in drug development.
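
The two descriptors the supervised networks learned best, the Carr index and Hausner ratio, are simple standard functions of bulk and tapped density. A sketch with invented density values:

```python
# The two flowability descriptors the supervised networks learned best are
# simple functions of bulk and tapped density; a sketch with made-up values.

def carr_index(bulk_density: float, tapped_density: float) -> float:
    """Carr (compressibility) index, %: 100 * (tapped - bulk) / tapped."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density: float, tapped_density: float) -> float:
    """Hausner ratio: tapped / bulk."""
    return tapped_density / bulk_density

# Illustrative densities (g/mL) for a reasonably free-flowing powder.
print(f"CI = {carr_index(0.48, 0.56):.1f}%")     # CI = 14.3%
print(f"HR = {hausner_ratio(0.48, 0.56):.2f}")   # HR = 1.17
```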

Relevance:

10.00%

Publisher:

Abstract:

The number of applications based on embedded systems grows significantly every year, and even though embedded systems have restrictions and simple processing units, their performance improves constantly. However, the complexity of applications also increases, so better performance will always be necessary. Despite such advances, there are cases in which an embedded system with a single processing unit is not sufficient to process the information in real time. To improve the performance of these systems, an implementation with parallel processing can be used in more complex applications that require high performance. The idea is to move beyond applications that already use embedded systems, exploring the use of a set of processing units working together to implement an intelligent algorithm. There is a wide range of existing work in the areas of parallel processing, intelligent systems and embedded systems; however, works that link these three areas to solve a problem are few. In this context, this work aims to use tools available for FPGA architectures to develop a multiprocessor platform for pattern classification with artificial neural networks.
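
As a rough illustration of the neuron-level parallelism such a multiprocessor platform exploits, the sketch below splits one layer's neurons across worker threads; on the FPGA these slices would map to hardware processing units, and all sizes are invented.

```python
# A minimal sketch of neuron-level parallelism: each processing unit
# evaluates a slice of a layer's neurons. Sizes are illustrative.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))    # one layer: 64 neurons, 8 inputs
x = rng.standard_normal(8)

def eval_slice(bounds):
    lo, hi = bounds
    return np.tanh(W[lo:hi] @ x)    # this unit computes only its neurons

slices = [(i, i + 16) for i in range(0, 64, 16)]   # 4 processing units
with ThreadPoolExecutor(max_workers=4) as pool:
    y = np.concatenate(list(pool.map(eval_slice, slices)))
print(y.shape)   # (64,)
```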

Relevance:

10.00%

Publisher:

Abstract:

Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to the need for multicast communication, but group communication support based on IP multicast has unfortunately not been widely adopted, due to a combination of technical and non-technical problems. A number of different application-layer multicast schemes have therefore been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Server-centric architectures for membership management thus have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and was implemented using a distributed, shared, bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways: first, with a purpose-built simulator developed in this work, in which the quality of the distribution tree was evaluated by metrics such as out-degree and path length; second, with real-life scenarios built in the ns-3 network simulator, in which the protocol's network performance was evaluated by metrics such as stress, stretch, time to first packet and group reconfiguration time.
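
Among the evaluation metrics, stretch measures how much longer a member's path through the overlay tree is than its direct unicast path. A minimal sketch with an invented three-node topology:

```python
# A small sketch of one evaluated metric: stretch, the ratio of the path
# length a member experiences through the overlay tree to its direct
# unicast path length. The topology and distances below are invented.

unicast = {("S", "A"): 2, ("S", "B"): 2, ("A", "B"): 1}   # hop counts

def dist(u, v):
    return unicast.get((u, v)) or unicast.get((v, u))

parent = {"A": "S", "B": "A"}   # overlay tree rooted at the source S

def overlay_path(node, root="S"):
    """Sum of unicast distances along the overlay-tree path from the root."""
    length = 0
    while node != root:
        length += dist(parent[node], node)
        node = parent[node]
    return length

for member in ("A", "B"):
    print(member, overlay_path(member) / dist("S", member))
# A: 2/2 = 1.0 (direct child), B: (2+1)/2 = 1.5 (relayed through A)
```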

Relevance:

10.00%

Publisher:

Abstract:

Some approaches take advantage of unused computational resources in Internet nodes, i.e., users' machines. In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present problems such as node overhead due to message routing, a great number of node reconfigurations when the network topology changes, traffic routed inside a specific network even when it is not directed to a machine of that network, and the lack of a relationship between the proximity of nodes in the P2P overlay and their proximity in the IP network. Although some architectures use information about node distances in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture that fixes the aforementioned problems. It is composed of three parts. The first is a basic P2P architecture, called SGrid, which maintains a relationship between a node's position in the P2P network and its position in the IP network by assigning adjacent key regions to nodes of the same organization. The second is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture to relieve nodes of the responsibility of routing messages. The third is a special kind of node, called LSP (Lightweight Super-Peer), which is responsible for maintaining the P2P routing table. This work also presents a simulator that validates the architecture, as well as a module of the NATal protocol for use in Linux routers.
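
A hedged sketch of SGrid's stated idea of assigning adjacent key regions to nodes of the same organization; the key-space size and assignment policy below are invented for illustration.

```python
# A hedged sketch of the key idea described in the abstract: give nodes of
# the same organization adjacent regions of the key space, so that overlay
# neighbors tend to be IP-network neighbors. Sizes are illustrative.

KEY_SPACE = 2 ** 16   # toy key space

def assign_regions(orgs: dict[str, list[str]]) -> dict[str, range]:
    """Carve the key space into contiguous per-node regions, organization
    by organization, so one organization's nodes hold adjacent regions."""
    total = sum(len(nodes) for nodes in orgs.values())
    size = KEY_SPACE // total
    regions, start = {}, 0
    for org in sorted(orgs):               # each org gets a contiguous run
        for node in orgs[org]:
            regions[node] = range(start, start + size)
            start += size
    return regions

regions = assign_regions({"orgA": ["a1", "a2"], "orgB": ["b1"]})
print(regions["a1"], regions["a2"])   # adjacent ranges for orgA's nodes
```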

Relevance:

10.00%

Publisher:

Abstract:

Robots are increasingly present in several areas of our society, yet they are still considered expensive equipment restricted to few people. This work consists of the development of control techniques and architectures that make it possible to build and program low-cost robots with low programming and building complexity. One key aspect of the proposed architecture is the use of audio interfaces to control actuators and read sensors, thus allowing any device that can produce sound to be used as the control unit of a robot. The work also includes the development of web-based programming environments that allow computers or mobile phones to be used as control units for the robot, which can be remotely programmed and controlled. Possible applications of such a low-cost robotic platform are also discussed, chiefly its educational use, which was experimentally validated by teachers and students of several undergraduate courses. We also present an analysis of data obtained from interviews conducted with the students before and after the use of our platform, which confirms its acceptance as a teaching support tool.
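
To make the audio-interface idea concrete, the sketch below encodes a motor command as a tone written to a WAV file; the frequency-per-command mapping is an assumption for illustration, as the abstract does not specify the encoding used by the thesis.

```python
# A hedged sketch of the audio-as-actuator idea: encode a motor command as
# a tone on the audio output. The frequency-per-command mapping is assumed.
import math, struct, wave

COMMANDS = {"forward": 1000.0, "left": 1500.0, "right": 2000.0}  # Hz, assumed

def command_tone(command: str, seconds: float = 0.2, rate: int = 44100) -> bytes:
    """16-bit PCM samples of a sine tone at the command's frequency."""
    freq = COMMANDS[command]
    samples = (int(32767 * math.sin(2 * math.pi * freq * n / rate))
               for n in range(int(seconds * rate)))
    return b"".join(struct.pack("<h", s) for s in samples)

with wave.open("forward.wav", "wb") as wav:   # play this to drive the robot
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(44100)
    wav.writeframes(command_tone("forward"))
```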

Relevance:

10.00%

Publisher:

Abstract:

The exponential growth in radio-frequency (RF) applications is accompanied by great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers, or software-defined radio (SDR). The key challenge in designing an SDR architecture is the implementation of a wide-band, reconfigurable receiver with low cost, low power consumption, a high level of integration, and flexibility. As a new SDR design solution, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, using the five-port as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we evaluate the performance of a blind calibration technique, based on independent component analysis, for the I/Q regeneration of the five-port downconverter; it requires no training or pilot sequences of the transmitted signal, exploiting instead the statistical properties of the three output signals.
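
The blind regeneration can be illustrated by treating the five-port's three baseband outputs as linear mixtures of the statistically independent I and Q components and separating them with ICA; the mixing matrix and signals below are invented, and scikit-learn's FastICA stands in for whatever ICA variant the work uses.

```python
# A minimal sketch of blind I/Q regeneration: separate the five-port's
# three outputs into the independent I and Q components with ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
i_true = np.sign(rng.standard_normal(n))       # toy BPSK-like I stream
q_true = np.sign(rng.standard_normal(n))       # toy Q stream
A = np.array([[1.0, 0.3],                      # assumed 3x2 mixing matrix
              [0.5, 0.9],
              [0.8, 0.6]])
outputs = np.column_stack([i_true, q_true]) @ A.T  # three observed signals

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(outputs)         # I/Q estimates, up to scale/order
print(recovered.shape)                         # (5000, 2)
```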

Relevance:

10.00%

Publisher:

Abstract:

In this work, we propose a new approach to Interactive Digital Television (IDTV) aimed at exploring the concept of immersiveness. Several architectures have been proposed for IDTV, but they have not coherently explored questions related to immersion. The goal of this thesis is to define formally what immersion and interactivity mean for digital TV and how they may be used to improve the user experience in this new television model. The approach raises questions such as the appropriate choice of equipment to support the sense of immersion; which forms of interaction between users can be exploited in the interaction-immersion context; whether the environment where an immersive, interactive application is used can influence the user experience; and which new forms of interactivity between users, and between users and interactive applications, can be explored through immersion. As one of the goals of this proposal, we point out new solutions to these issues that require further study. We intend to formalize the concepts that comprise interactivity in the Brazilian digital TV system. In an initial study, this definition is organized into categories, or levels, of interactivity; from this point, analyses and specifications are made to achieve immersion using DTV. We intend to carry out case studies of immersive interactive applications for digital television in order to validate the proposed architecture. We also approach the use of remote devices and propose a middleware architecture that allows their use in conjunction with immersive interactive applications.

Relevance:

10.00%

Publisher:

Abstract:

In this work, a parallel cooperative genetic algorithm with different evolution behaviors was developed to train multilayer perceptron neural networks and to define their architectures. Multilayer perceptrons are very powerful tools whose use has been vastly extended due to their ability to provide great results for a broad range of applications. The combination of genetic algorithms and parallel processing can be very powerful when applied to the learning process of the neural network, as well as to the definition of its architecture, since this procedure can be very slow, usually requiring a lot of computational time. Research combining and applying evolutionary computation to the design of neural networks is also very useful, since most learning algorithms developed to train neural networks only adjust the synaptic weights, without considering the design of the network architecture. Furthermore, the use of cooperation in the genetic algorithm allows different populations to interact, avoiding local minima, helping the search for a promising solution and accelerating the evolutionary process. Finally, the individuals and the evolution behavior can be exclusive to each copy of the genetic algorithm running in each task, enhancing the diversity of the populations.
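
A compact sketch of the neuroevolution idea follows: a genetic algorithm whose individuals are flattened MLP weight vectors, evaluated on a toy task. The population size, mutation scheme and XOR task are illustrative choices, and the cooperative, parallel aspects of the actual work are omitted.

```python
# Genetic algorithm evolving the weights of a tiny 2-4-1 MLP on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)                      # XOR targets
N_H = 4
N_W = 2 * N_H + N_H + N_H + 1                          # weights + biases = 17

def forward(w, x):
    W1 = w[:2 * N_H].reshape(2, N_H); b1 = w[2 * N_H:3 * N_H]
    W2 = w[3 * N_H:3 * N_H + N_H];    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)          # negative MSE

pop = rng.standard_normal((50, N_W))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]              # keep the 10 best
    children = (elite[rng.integers(0, 10, 40)]
                + 0.3 * rng.standard_normal((40, N_W)))  # mutated offspring
    pop = np.vstack([elite, children])                 # elitism + mutation
best = pop[np.argmax([fitness(w) for w in pop])]
print(forward(best, X).round(2))                       # should approach [0 1 1 0]
```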

Relevance:

10.00%

Publisher:

Abstract:

This study shows the implementation and embedding of an Artificial Neural Network (ANN) in hardware, in a programmable device such as a field-programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent to ANNs, software implementations suffer from the sequential nature of Von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. Currently, the use of FPGAs as a platform to implement neural networks in hardware is increasing, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, to modify the network topology, in order to enable a modular fixed-point-arithmetic network on an FPGA. Five syntheses of VHDL descriptions were produced: two for the neuron, with one or two inputs, and three for different ANN architectures. The descriptions of the architectures used are very modular, easily allowing the number of neurons to be increased or decreased. As a result, some complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
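
The fixed-point arithmetic such a design relies on can be sketched in software for clarity; the Q8.8 format (8 integer bits, 8 fractional bits) and hard-limit activation below are assumed choices, and the thesis's actual word widths may differ.

```python
# A hedged sketch of fixed-point neuron arithmetic in Q8.8 (assumed format).

FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def fixed_neuron(inputs: list[int], weights: list[int], bias: int) -> int:
    """Multiply-accumulate in integers; rescale products back to Q8.8."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += (x * w) >> FRAC_BITS     # Q8.8 * Q8.8 -> Q16.16 -> Q8.8
    # Hard-limit activation: cheap in hardware (a single comparator).
    return acc if acc > 0 else 0

x = [to_fixed(0.5), to_fixed(-0.25)]
w = [to_fixed(1.5), to_fixed(2.0)]
y = fixed_neuron(x, w, to_fixed(0.1))
print(y / SCALE)   # ~0.35 = 0.5*1.5 - 0.25*2.0 + 0.1
```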

Relevance:

10.00%

Publisher:

Abstract:

This work presents a theoretical analysis, together with numerical and experimental results, of the transmission characteristics of microstrip bandpass filters with different geometries, built on isotropic dielectric substrates. The numerical analysis was carried out with specialized commercial software, namely Ansoft Designer and Agilent Advanced Design System (ADS). In addition to these tools, a Matlab script was written to analyze the filters through the Finite-Difference Time-Domain (FDTD) method. The filter designs focused on the development of the first filtering stage of the ITASAT transponder receiver and on its integration with the other systems. Several microstrip filter architectures were studied for feasibility of implementation and suitability to the purposes of the ITASAT project, owing to their small footprint at the lower UHF frequencies. The ITASAT project is a university experimental project that will build a satellite to join the Brazilian Data Collection System's satellite constellation, with efforts from many Brazilian institutions, such as AEB (Brazilian Space Agency), ITA (Technological Institute of Aeronautics), INPE/CRN (National Institute for Space Research/Northeastern Regional Center) and UFRN (Federal University of Rio Grande do Norte). Comparisons between the numerical and experimental results of all filters showed good agreement, meeting most of the objectives, and post-work improvements were suggested.
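
The FDTD method the Matlab script implements advances interleaved electric and magnetic field updates in time; below is a minimal 1-D sketch of the scheme in normalized units (the filter analysis itself is 2-D/3-D, and all values here are illustrative).

```python
# A minimal 1-D FDTD update sketch (Yee scheme) of the kind the work's
# Matlab script generalizes for the microstrip filter analysis.
import numpy as np

nz, nt = 400, 600
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
S = 0.5                    # Courant number (stability needs S <= 1 in 1-D)

for t in range(nt):
    hy += S * (ez[1:] - ez[:-1])            # update H from the curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])      # update E from the curl of H
    ez[nz // 2] += np.exp(-((t - 40) / 12.0) ** 2)   # soft Gaussian source
print(float(np.max(np.abs(ez))))
```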

Relevance:

10.00%

Publisher:

Abstract:

Industrial automation networks are in focus and are gradually replacing the older system architectures used in the automation world. Among the existing automation networks, the most prominent standard is Foundation Fieldbus (FF). This standard was chosen for the development of this work thanks to its complete application-layer specification and its user interface, organized as function blocks, which allows interoperability among devices from different vendors. Nowadays, one of the most sought-after solutions in industrial automation is indirect measurement, which consists in inferring a value from the measurements of other sensors. This can be done through the implementation of so-called software sensors, and one of the most used tools in such projects, and in sensor implementation, is the artificial neural network. The absence of a standard way to implement neural networks in the FF environment makes it impossible to develop a field indirect-measurement project, or any other project involving neural networks, unless a closed proprietary solution is used, which does not guarantee interoperability among network devices, especially if they come from different vendors. In order to preserve interoperability, the goal of this work is to develop a solution that implements artificial neural networks in the Foundation Fieldbus industrial network environment based on standard function blocks. Some results of the solution's implementation are also presented along the work.
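
The central idea, building a neuron out of standard function blocks so that interoperability is preserved, can be sketched as a weighted-sum block feeding a piecewise-linear characterizer that approximates the activation function. The block classes below are illustrative stand-ins, not actual FF block definitions.

```python
# A hedged sketch: composing a neuron from generic function blocks so only
# standard blocks appear on the network. Block classes are stand-ins.
import math

class ArithmeticBlock:
    """Weighted sum of inputs plus bias, like a generic arithmetic block."""
    def __init__(self, gains, bias):
        self.gains, self.bias = gains, bias
    def execute(self, inputs):
        return sum(g * x for g, x in zip(self.gains, inputs)) + self.bias

class CharacterizerBlock:
    """Piecewise-linear lookup approximating the sigmoid activation."""
    def __init__(self, points):
        self.points = points  # list of (x, y) pairs, x ascending
    def execute(self, x):
        pts = self.points
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# One neuron = arithmetic block (weighted sum) -> characterizer (activation).
sigmoid_table = [(x, 1 / (1 + math.exp(-x))) for x in range(-6, 7)]
neuron_sum = ArithmeticBlock(gains=[0.8, -1.2], bias=0.5)
activation = CharacterizerBlock(sigmoid_table)
print(activation.execute(neuron_sum.execute([1.0, 0.3])))
```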

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that, acting as an extension of programming languages such as C, C++ and Fortran, allows parallel applications to be written. A fundamental aspect of developing parallel applications is performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be a rather complicated task, considering the parameters and degrees of freedom involved in the implementation of a parallel application. One common alternative is the use of performance-data collection and visualization tools, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data about the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used to collect performance data, and then a detailed analysis of the main available tools that can be used on Beowulf-style parallel cluster architectures running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
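
Instrumentation, the stage the study singles out, amounts to collecting data about where an application spends its time. Below is a minimal sketch of hand-instrumenting the compute and communication phases of an MPI program with mpi4py; the script name and workload size are invented.

```python
# A minimal instrumentation sketch: timing a computation phase and a
# communication phase with MPI wall-clock timers. Run with, e.g.,
# `mpiexec -n 4 python instrument.py` (script name is illustrative).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.random.rand(1_000_000)   # per-process workload (illustrative size)

t0 = MPI.Wtime()
local_sum = data.sum()                          # computation phase
t1 = MPI.Wtime()
total = comm.allreduce(local_sum, op=MPI.SUM)   # communication phase
t2 = MPI.Wtime()

# Each rank reports where its time went; a visualization tool would plot this.
print(f"rank {rank}: compute {t1 - t0:.4f}s, communicate {t2 - t1:.4f}s")
```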