Abstract:
This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard, called Group Sequential Communication (GSC). The GSC outperforms the HCCA mechanism when dealing with small data packets by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead of the Polling, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm, replacing it with a Virtual Token Passing procedure among the members of the real-time group, to whom high-priority, sequential access to the communication medium is granted. To improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that makes it possible to retransmit missing real-time messages. Thus, the GSC mechanism sustains real-time traffic across many IEEE 802.11/11e devices with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
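As a rough illustration of the scheduling idea, the sketch below simulates one Virtual Token Passing round in which group members transmit in a fixed sequence with no Polling/ACK/QoS Null exchange; station names and timing constants are hypothetical, not taken from the thesis.

```python
# Minimal sketch of a Virtual Token Passing round, assuming each group
# member knows the static transmission order (no Polling/ACK/QoS Null
# frames, unlike HCCA). Names and timing constants are illustrative.

GROUP = ["station_A", "station_B", "station_C"]  # real-time group members
SIFS_US = 16  # inter-frame spacing in microseconds (802.11-like value)

def gsc_round(payloads):
    """One high-priority round: each member transmits in sequence."""
    schedule = []
    t = 0
    for station in GROUP:
        frame = payloads.get(station)
        if frame is not None:
            schedule.append((t, station, frame))
            t += frame["airtime_us"]
        # the virtual token passes implicitly: the next member starts
        # after sensing the end of the predecessor's transmission
        t += SIFS_US
    return schedule

if __name__ == "__main__":
    data = {s: {"airtime_us": 120, "bytes": 64} for s in GROUP}
    for start, station, frame in gsc_round(data):
        print(f"t={start:5d} us  {station} sends {frame['bytes']} bytes")
```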
Abstract:
The goal of this work is to propose a SLAM (Simultaneous Localization and Mapping) solution based on the Extended Kalman Filter (EKF), enabling a robot to navigate the environment using information from odometry and pre-existing lines on the floor. Initially, a segmentation step classifies parts of the image as floor or non-floor. Image processing then identifies floor lines, and the parameters of these lines are mapped to world coordinates using a homography matrix. Finally, the identified lines are used as landmarks in SLAM to build a feature map. In parallel, using the corrected robot pose, the pose uncertainty and the non-floor part of the image, an occupancy grid map can be built, yielding a metric map with a description of the obstacles. Greater autonomy for the robot is attained by using the two types of map obtained (the metric map and the feature map), making it possible to run path planning tasks in parallel with localization and mapping. Practical results are presented to validate the proposal.
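To make the homography step concrete, here is a minimal sketch, assuming a placeholder calibration matrix H, of how pixel coordinates on a detected floor line could be mapped to ground-plane coordinates and converted to (rho, theta) landmark parameters.

```python
# A minimal sketch of the homography step described above: image points
# on detected floor lines are projected onto the world ground plane.
# The matrix H below is a placeholder; in practice it comes from an
# offline calibration between the camera and the floor plane.

import numpy as np

H = np.array([[0.01, 0.0,   -1.5],   # assumed calibration result
              [0.0,  0.012, -2.0],
              [0.0,  0.0005, 1.0]])

def image_to_world(pts_img):
    """Map Nx2 pixel coordinates to ground-plane coordinates."""
    pts = np.column_stack([pts_img, np.ones(len(pts_img))])  # homogeneous
    w = (H @ pts.T).T
    return w[:, :2] / w[:, 2:3]   # normalize by the third coordinate

# Two endpoints of a detected floor line, in pixels:
line_img = np.array([[320.0, 400.0], [360.0, 240.0]])
p0, p1 = image_to_world(line_img)
# (rho, theta) parameters of the world line, usable as an EKF landmark:
theta = np.arctan2(p1[1] - p0[1], p1[0] - p0[0]) + np.pi / 2
rho = p0 @ np.array([np.cos(theta), np.sin(theta)])
print(f"landmark: rho={rho:.2f} m, theta={np.degrees(theta):.1f} deg")
```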
Abstract:
In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here covers signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of interval arithmetics that handle complex numbers is provided, namely: complex (or rectangular) interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C), endowed with any correct arithmetic, are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of complex values in floating point systems. That is, if a number x ∈ R is not representable in a floating point system F, it is mapped to an interval [x̲; x̄], where x̲ is the largest number in F that is smaller than x and x̄ is the smallest number in F that is greater than x. This interval representation is the starting point for the definitions of interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The process of quantization is extended to its interval counterpart, and the interval versions of quantization levels, quantization error and encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
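The floating-point-to-interval mapping described above translates almost directly into code. The sketch below, assuming F is IEEE-754 double precision and using an exact rational as the stand-in for the "true" real value, encloses a non-representable number in the tightest pair of machine numbers around it.

```python
# A minimal sketch of the enclosing-interval representation described
# above: a real number x not representable in the floating point system
# F is mapped to [x_lo, x_hi], the tightest pair of F-numbers around x.

import math
from fractions import Fraction

def enclose(exact: Fraction) -> tuple[float, float]:
    """Tightest double-precision interval [x_lo, x_hi] containing `exact`."""
    x = float(exact)                    # rounds to nearest member of F
    if Fraction(x) == exact:            # representable: degenerate interval
        return (x, x)
    if Fraction(x) < exact:             # rounded down: x is the lower endpoint
        return (x, math.nextafter(x, math.inf))
    return (math.nextafter(x, -math.inf), x)

lo, hi = enclose(Fraction(1, 10))       # 0.1 is not representable in binary
print(f"[{lo!r}, {hi!r}]")              # endpoints straddle the true 1/10
assert Fraction(lo) < Fraction(1, 10) < Fraction(hi)
```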
Abstract:
The usual programs for load flow calculation were in general developed aiming at the simulation of electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used by these formulations were mostly based on the characteristics of transmission systems alone, which used to be the main concern of engineers and researchers. The physical characteristics of those systems, though, are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long. These aspects make the capacitive and inductive effects that appear in the system considerably influence the values of the quantities of interest, which is why they should be taken into consideration. Also in transmission systems, loads have a macro nature, such as cities, neighborhoods or big industries. These loads are generally close to balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison to the transmission ones, which almost nullifies the capacitive effects of the lines. The loads in this case are transformers whose secondaries feed small consumers, many of them single-phase, so the probability of finding an unbalanced circuit is high. Thus, the use of three-phase methodologies gains importance. Furthermore, equipment like voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work in order to simulate the steady-state behavior of distribution systems. To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm has already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, while the earth effect is taken into account through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing the simulation of various configurations according to their real operation. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived based on the described load flow, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers.
After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
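For a feel of the Power Summation Algorithm that the three-phase method builds on, here is a single-phase backward/forward sweep sketch on a toy radial feeder. The network data are illustrative, and the thesis formulation adds the three-phase coupling, the Carson reduction, regulators and switch matching on top of this skeleton.

```python
# Single-phase sketch of the backward/forward Power Summation sweep:
# the backward pass accumulates downstream load plus line losses at
# each node, and the forward pass updates voltages from the source out.

# radial feeder: node 0 is the source; parent[i] feeds node i
parent = {1: 0, 2: 1, 3: 1}
z_line = {1: 0.01 + 0.02j, 2: 0.015 + 0.03j, 3: 0.02 + 0.04j}  # pu
s_load = {1: 0.1 + 0.05j, 2: 0.2 + 0.1j, 3: 0.15 + 0.07j}      # pu
V = {n: 1.0 + 0j for n in [0, 1, 2, 3]}                        # flat start

for _ in range(20):                                  # fixed-point iterations
    # backward sweep: power flowing into each node's feeding branch
    s_branch = dict(s_load)
    for n in sorted(parent, reverse=True):           # leaves first
        i_mag2 = abs(s_branch[n] / V[n]) ** 2
        s_branch[n] += z_line[n] * i_mag2            # add series losses
        if parent[n] in s_branch:
            s_branch[parent[n]] += s_branch[n]       # pass power upstream
    # forward sweep: recompute voltages from the source outward
    for n in sorted(parent):
        I = (s_branch[n] / V[n]).conjugate()
        V[n] = V[parent[n]] - z_line[n] * I

for n, v in V.items():
    print(f"node {n}: |V| = {abs(v):.4f} pu")
```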
Abstract:
The predictive control technique has gained, in recent years, a growing number of adherents, owing to the ease of tuning its parameters, the extension of its concepts to multi-input/multi-output (MIMO) systems, the fact that nonlinear process models can be linearized around an operating point and used directly in the controller, and, mainly, because it is the only methodology that can take into account, during controller design, the limitations of the control signals and of the process output. The time-varying weighting generalized predictive control (TGPC), studied in this work, is one more alternative to the several existing predictive controllers. It is a modification of generalized predictive control (GPC) that uses a reference model, calculated according to design parameters previously established by the designer, and a new criterion function which, when minimized, yields the best controller parameters. Genetic algorithms are used to minimize the proposed criterion function, and the robustness of TGPC is demonstrated through the application of performance, stability and robustness criteria. To assess the results achieved by the TGPC controller, GPC and proportional-integral-derivative (PID) controllers are used for comparison, with all techniques applied to stable, unstable and non-minimum phase plants. The simulated examples are carried out in MATLAB. It is verified that the modifications implemented in TGPC demonstrate the efficiency of this algorithm.
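As a sketch of the genetic-algorithm minimization step, the code below searches two controller parameters against a stand-in quadratic cost; the actual TGPC criterion function (tracking error under time-varying weights) would replace `criterion`, which here is purely illustrative.

```python
# A minimal sketch of a genetic algorithm minimizing a controller
# criterion function. The cost is a placeholder quadratic in two
# parameters with an assumed optimum at (0.8, 0.3).

import random

def criterion(params):
    """Placeholder cost standing in for the TGPC criterion function."""
    k1, k2 = params
    return (k1 - 0.8) ** 2 + (k2 - 0.3) ** 2

def ga_minimize(cost, pop_size=30, generations=60, mut=0.1):
    pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]                         # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            child = [x + random.gauss(0, mut) for x in child]  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga_minimize(criterion)
print(f"best parameters: {best}, cost = {criterion(best):.2e}")
```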
Abstract:
This work introduces a new method for mapping environments with three-dimensional information obtained from visual data, aimed at accurate robot navigation. Many 3D mapping approaches based on occupancy grids require high computational effort both to build and to store the map. We introduce a 2.5-D occupancy-elevation grid map, a discrete mapping approach in which each cell stores the occupancy probability, the height of the terrain at that place in the environment, and the variance of this height. This 2.5-dimensional representation allows a mobile robot to know whether a place in the environment is occupied by an obstacle and the height of that obstacle, so it can decide whether the obstacle can be traversed. The sensory information necessary to construct the map is provided by a stereo vision system, modeled with a robust probabilistic approach that considers the noise present in stereo processing. The resulting maps favor tasks such as decision making in autonomous navigation, exploration, localization and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation.
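The occupancy-elevation cell described above can be sketched as follows; the log-odds occupancy update and the 1-D Kalman fusion of heights are standard choices assumed here, with illustrative sensor-model constants rather than the thesis' values.

```python
# Per-cell update for a 2.5-D occupancy-elevation grid: each cell keeps
# an occupancy probability plus the height mean and variance, fused
# from noisy stereo measurements.

import math
from dataclasses import dataclass

@dataclass
class Cell:
    log_odds: float = 0.0      # occupancy in log-odds form, 0 = unknown
    height: float = 0.0        # estimated terrain height (m)
    height_var: float = 1e6    # variance of the height estimate

    def update(self, hit: bool, z: float, z_var: float):
        # occupancy: add the sensor-model log-odds for a hit or a miss
        self.log_odds += math.log(0.7 / 0.3) if hit else math.log(0.4 / 0.6)
        # height: 1-D Kalman fusion of the new measurement z
        k = self.height_var / (self.height_var + z_var)
        self.height += k * (z - self.height)
        self.height_var *= (1.0 - k)

    @property
    def p_occupied(self):
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

cell = Cell()
for z in (0.31, 0.28, 0.33):               # stereo height readings (m)
    cell.update(hit=True, z=z, z_var=0.02)
print(f"P(occ) = {cell.p_occupied:.2f}, "
      f"height = {cell.height:.2f} +/- {cell.height_var ** 0.5:.2f} m")
```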
Abstract:
ART networks present some advantages: online learning, convergence in a few training epochs, incremental learning, etc. Even so, some problems remain, such as category proliferation, sensitivity to the presentation order of training patterns, and the choice of a good vigilance parameter. Among these, category proliferation is probably the most critical. This problem makes the network create too many categories, consuming resources to store an unnecessarily large number of categories and degrading, or even making unfeasible, the processing time, without contributing to the quality of the representation; i.e., in many cases the excessive number of categories generated by ART networks makes the generalization quality inferior to what it could otherwise reach. Another factor that leads to category proliferation in ART networks is the difficulty of approximating regions with non-rectangular geometry, resulting in generalization inferior to that obtained by other classification methods. From the observation of these problems, three methodologies were proposed: two of them use a more flexible geometry than that of traditional ART networks, minimizing the category proliferation problem, and the third minimizes the problem of the presentation order of training patterns. To validate these new approaches, many tests were performed, and the results demonstrate that the new methodologies can improve the generalization quality of ART networks.
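To make the proliferation mechanism concrete, here is a minimal Fuzzy ART sketch showing the vigilance test that decides between resonating with an existing category and creating a new one; parameter values and data are illustrative, not from the thesis.

```python
# Minimal Fuzzy ART sketch: each input either matches an existing
# category (resonance) or creates a new one, which is the mechanism
# behind category proliferation.

import numpy as np

rho, alpha, beta = 0.75, 0.001, 1.0   # vigilance, choice, learning rate

def train(patterns):
    categories = []                               # weight vectors w_j
    for p in patterns:
        i = np.concatenate([p, 1 - p])            # complement coding
        # rank categories by the choice function T_j
        order = sorted(range(len(categories)), key=lambda j: -(
            np.minimum(i, categories[j]).sum()
            / (alpha + categories[j].sum())))
        for j in order:
            match = np.minimum(i, categories[j]).sum() / i.sum()
            if match >= rho:                      # vigilance passed
                categories[j] = (beta * np.minimum(i, categories[j])
                                 + (1 - beta) * categories[j])
                break
        else:                                     # no resonance: new category
            categories.append(i.copy())
    return categories

data = np.random.default_rng(0).random((200, 2))
print(f"{len(train(data))} categories at rho={rho}")
```

Raising `rho` toward 1 makes the match test harder to pass, so the category count grows; this is the proliferation trade-off the proposed methodologies attack.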
Abstract:
In this thesis, the dynamic model of a multirotor unmanned aerial vehicle with vertical takeoff and landing characteristics is developed, considering input nonlinearities, together with a full-state robust backstepping controller. The dynamic model is expressed using the Newton-Euler laws, aiming at a better mathematical representation of the mechanical system for analysis and control design, not only while hovering, but also during takeoff, landing and task flight. The input nonlinearities are dead zone and saturation, through which the gravitational effect and the inherent physical constraints of the rotors are addressed. The experimental multirotor aerial vehicle is equipped with an inertial measurement unit and a sonar sensor, which provide measurements of attitude and altitude, respectively. A real-time attitude estimation scheme based on the extended Kalman filter using quaternions was developed. For the robustness analysis, the sensors were modeled as the ideal value plus an unknown bias and unknown white noise. The bounded robust attitude/altitude controllers were derived based on global uniform practical asymptotic stability for real systems, which remains globally uniformly asymptotically stable if and only if the solutions are globally uniformly bounded, dealing with convergence and stability into a ball of the state space with non-null radius, under some assumptions. Lyapunov analysis was used to prove the stability of the closed-loop system, compute bounds on the control gains and guarantee desired bounds on the attitude tracking errors in the presence of measurement disturbances. The control laws were tested in numerical simulations and on an experimental hexarotor developed at the UFRN Robotics Laboratory.
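A small piece of the estimation scheme can be sketched directly: the quaternion propagation step driven by gyro rates, which the (omitted) EKF update would correct using accelerometer measurements. Rates and durations below are illustrative.

```python
# Quaternion propagation inside an EKF attitude estimator: integrate
# dq/dt = 0.5 * Omega(omega) * q from the measured body rates.

import numpy as np

def quat_predict(q, omega, dt):
    """One Euler step of the quaternion kinematics, q = [w, x, y, z]."""
    wx, wy, wz = omega
    Omega = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)        # keep unit norm

q = np.array([1.0, 0.0, 0.0, 0.0])             # level attitude
gyro = np.array([0.0, 0.0, np.radians(30)])    # 30 deg/s yaw rate
for _ in range(100):                           # 1 s at 100 Hz
    q = quat_predict(q, gyro, dt=0.01)
yaw = np.degrees(np.arctan2(2 * (q[0] * q[3] + q[1] * q[2]),
                            1 - 2 * (q[2] ** 2 + q[3] ** 2)))
print(f"estimated yaw after 1 s: {yaw:.1f} deg")   # ~30 deg
```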
Abstract:
The present work proposes an algorithm for controlling and improving the idle time of oil production wells equipped with beam pumps. The algorithm was designed based on existing papers and on data acquired from two pilot wells in the Potiguar Basin. Petroleum engineering concepts such as submergence, pump-off, Basic Sediments and Water (BSW), Inflow Performance Relationship (IPR), reservoir pressure and inflow pressure, among others, were incorporated into the algorithm through a mathematical treatment developed for a typical well and then extended to the general case. The optimization maximizes utilization of the well's production potential with the smallest number of pumping unit cycles, directly reducing operational cost and electricity consumption.
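One of the named building blocks, Vogel's Inflow Performance Relationship, is easy to show in isolation. The pressures and open-flow potential below are assumed values, and the submergence/pump-off logic of the thesis is not reproduced here.

```python
# Vogel's IPR relates flowing bottomhole pressure to production rate:
# q/q_max = 1 - 0.2*(pwf/pr) - 0.8*(pwf/pr)^2

def vogel_rate(p_wf, p_res, q_max):
    """Production rate from Vogel's inflow performance relationship."""
    r = p_wf / p_res
    return q_max * (1 - 0.2 * r - 0.8 * r ** 2)

p_res = 2000.0       # reservoir pressure, psi (assumed)
q_max = 500.0        # absolute open flow potential, bbl/d (assumed)
for p_wf in (1500.0, 1000.0, 500.0):
    q = vogel_rate(p_wf, p_res, q_max)
    print(f"pwf = {p_wf:6.0f} psi -> q = {q:5.1f} bbl/d")
```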
Abstract:
The incorporation of industrial automation into the medical area requires mechanisms for the safe and efficient establishment of communication between biomedical devices. One solution to this problem is the MP-HA (Multicycles Protocol to Hospital Automation), which defines a network segmented by beds and coordinated by an element called the Service Provider. The goal of this work is to model this Service Provider and to analyze the performance of the activities it executes in the establishment and maintenance of hospital networks.
Abstract:
In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city management issues. It is based on the Scalable Vector Graphics (SVG) standard for Web graphics development. The project uses the concept of remote, real-time map creation through database access, with instructions executed by browsers on the Internet. As a way of proving the system's effectiveness, we present two case studies: the first in a region named Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, where we intended to replace MapServer with the system proposed here. We also show results that demonstrate the greater geographical data capability achieved by the use of standardized codes and open source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP) and PostgreSQL with its extension PostGIS.
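The core idea, serializing database geometry as SVG for the browser, might look like the sketch below; the coordinate rows stand in for the result of a hypothetical PostGIS query (e.g. selecting ST_X/ST_Y of an outline), and the styling is arbitrary.

```python
# Sketch: geographic features fetched from a spatial database are
# serialized as an SVG document for rendering in the browser.

def to_svg_polyline(coords, width=400, height=300):
    xs, ys = zip(*coords)
    sx = width / (max(xs) - min(xs))            # scale lon/lat to pixels
    sy = height / (max(ys) - min(ys))
    pts = " ".join(f"{(x - min(xs)) * sx:.1f},{height - (y - min(ys)) * sy:.1f}"
                   for x, y in coords)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n'
            f'  <polyline points="{pts}" fill="none" stroke="navy"/>\n'
            f'</svg>')

reef_outline = [(-35.26, -5.38), (-35.25, -5.36), (-35.23, -5.37),
                (-35.24, -5.40), (-35.26, -5.38)]   # illustrative lon/lat
print(to_svg_polyline(reef_outline))
```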
Abstract:
Several mobile robots show nonlinear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are efficient in some cases, but it is necessary to take the robot's nonlinearity into consideration when precise displacement and positioning are desired. In this work, a parametric model identification procedure for a differential-drive mobile robot that considers the dead zone in the robot actuators is proposed. The method consists of describing the system as Hammerstein systems and then using the key-term separation principle to obtain input-output relations that expose the parameters of both the linear and nonlinear blocks. The parameters are then estimated simultaneously through a recursive least squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
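A minimal version of the identification loop could look like this: recursive least squares over an over-parameterized regressor built with the key-term separation idea, recovering the dead-zone threshold from a parameter ratio. The simulated scalar system and noise levels are illustrative; the thesis treats the full differential-drive model.

```python
# Hammerstein identification sketch: a symmetric dead zone followed by
# a first-order linear block, y(k) = a*y(k-1) + b*DZ(u(k-1)).
# Writing DZ(u) = u*g - d*sign(u)*g with indicator g = [|u| > d] gives
# the regression theta = [a, b, b*d] used below.

import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, dz_true = 0.8, 0.5, 0.2   # linear params, dead-zone width

def dead_zone(u, d):
    return np.sign(u) * max(abs(u) - d, 0.0)

theta = np.zeros(3)                # estimates of [a, b, b*d]
P = np.eye(3) * 1000.0             # RLS covariance
y_prev, u_prev = 0.0, 0.0
for k in range(2000):
    u = rng.uniform(-1, 1)
    y = (a_true * y_prev + b_true * dead_zone(u_prev, dz_true)
         + rng.normal(0, 0.01))
    d_hat = theta[2] / theta[1] if abs(theta[1]) > 1e-6 else 0.0
    active = 1.0 if abs(u_prev) > d_hat else 0.0     # key-term indicator
    phi = np.array([y_prev, u_prev * active, -np.sign(u_prev) * active])
    k_gain = P @ phi / (1.0 + phi @ P @ phi)         # RLS update
    theta += k_gain * (y - phi @ theta)
    P -= np.outer(k_gain, phi @ P)
    y_prev, u_prev = y, u

print(f"a = {theta[0]:.3f} (true {a_true}), b = {theta[1]:.3f} "
      f"(true {b_true}), dead zone = {theta[2] / theta[1]:.3f} (true {dz_true})")
```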
Abstract:
The opening of the Brazilian electricity market and the competition between companies in the energy sector have intensified the utilities' search for useful information and tools to assist decision making. An important source of knowledge for these utilities is the time series of energy demand. The identification of behavior patterns and the description of events become important for planning, seeking improvements in service quality and financial benefits. This dissertation presents a methodology based on time series mining and representation tools, in order to extract knowledge relating series of electricity demand at several interconnected substations of an electric utility. The method exploits relations of duration, coincidence and partial order among events in multi-dimensional time series. The knowledge is represented using the language proposed by Mörchen (2005), called Time Series Knowledge Representation (TSKR). We conducted a case study using energy demand time series from 8 substations interconnected by a ring system that feeds the metropolitan area of Goiânia-GO, provided by CELG (Companhia Energética de Goiás), responsible for power distribution in the state of Goiás (Brazil). Using the proposed methodology, three levels of knowledge describing the behavior of the studied system were extracted, clearly representing the system dynamics and providing a tool to assist planning activities.
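One of the exploited relations, coincidence of labeled intervals (akin to TSKR's chords), can be sketched in a few lines; the interval data below are illustrative, not the CELG measurements.

```python
# Coincidence of labeled time intervals from two demand series: report
# every non-empty temporal overlap between an interval of series A and
# an interval of series B.

def coincidences(a, b):
    """Overlap of labeled intervals (start, end, label) from two series."""
    out = []
    for s1, e1, l1 in a:
        for s2, e2, l2 in b:
            lo, hi = max(s1, s2), min(e1, e2)
            if lo < hi:                       # non-empty temporal overlap
                out.append((lo, hi, f"{l1}+{l2}"))
    return out

# "high demand" intervals (hour of day) at two substations:
sub_1 = [(8, 12, "high@sub1"), (18, 22, "high@sub1")]
sub_2 = [(10, 14, "high@sub2"), (19, 21, "high@sub2")]
for lo, hi, label in coincidences(sub_1, sub_2):
    print(f"{lo:02d}h-{hi:02d}h  {label}")
```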
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages of a seismic study. Seismic processing, in particular, is focused on imaging the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that brought higher storage and digital processing capabilities, enabling more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time consuming, owing to the mathematical algorithms and the extensive amount of input and output data involved, which may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability with respect to the technological advances expected in future processors was identified.
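The kernel that RTM spends most of its time in is a finite-difference wave-equation time step. The sketch below shows a 2-D acoustic version; the thesis parallelizes the equivalent loop nest with OpenMP in C/C++, and numpy vectorization stands in for that here. Grid, velocity and source are illustrative.

```python
# 2-D acoustic finite-difference time stepping, the computational core
# of reverse time migration (forward/backward wavefield propagation).

import numpy as np

nx = nz = 200
dx, dt = 10.0, 0.001                      # m, s (CFL = c*dt/dx = 0.2)
c = np.full((nz, nx), 2000.0)             # constant velocity model (m/s)
p_old = np.zeros((nz, nx))
p = np.zeros((nz, nx))

for it in range(300):
    lap = np.zeros_like(p)                # 5-point Laplacian stencil
    lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                       + p[1:-1, :-2] - 4 * p[1:-1, 1:-1]) / dx**2
    p_new = 2 * p - p_old + (c * dt) ** 2 * lap   # 2nd-order time update
    p_new[nz // 2, nx // 2] += np.sin(2 * np.pi * 25 * it * dt)  # 25 Hz source
    p_old, p = p, p_new

print(f"wavefield energy after {it + 1} steps: {np.sum(p ** 2):.3e}")
```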
Abstract:
Hard metals are composites developed in 1923 by Karl Schröter, with wide application owing to their high hardness, wear resistance and toughness. They are composed of a brittle WC phase and a ductile Co phase. The mechanical properties of hardmetals depend strongly on the microstructure of the WC-Co, and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously with the refining of the WC. Therefore, the development of nanostructured WC-Co hardmetals has been extensively studied. There are many methods to manufacture WC-Co hard metals, including the spraying conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high energy ball milling. High energy ball milling is a simple and efficient way of manufacturing fine, nanostructured powder. In this process, continuous impacts on the powders promote pronounced changes: the brittle phase is refined down to nanometric scale and embedded into the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes of the WC-Co particulate composite, particularly the refinement of crystallite size and lattice strain. The starting powders were WC (average particle size D50 0.87 μm), supplied by Wolfram Bergbau-u. Hütten GMBH, and Co (average particle size D50 0.93 μm), supplied by H.C. Starck. Mixtures of 90% WC and 10% Co were processed in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio (BPR) of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) to identify phases and morphology. Crystallite size and lattice strain were measured by Rietveld's method, which allowed obtaining more precise information about the influence of each factor on the microstructure. The results show that high energy milling is an efficient manufacturing process for the WC-Co composite, and that milling time has great influence on the microstructure of the final particles, crushing and finely dispersing the nanometric WC into the Co particles.
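Crystallite size and lattice strain were obtained here by Rietveld refinement; as a simpler illustration of the same two quantities, the sketch below fits a Williamson-Hall line, β·cos(θ) = Kλ/D + 4ε·sin(θ), to assumed peak-broadening data (not measured values from this work).

```python
# Williamson-Hall estimation of crystallite size D and lattice strain
# from XRD peak broadening: a linear fit of beta*cos(theta) against
# 4*sin(theta), whose intercept gives K*lambda/D and slope gives strain.

import numpy as np

lam = 0.15406      # Cu K-alpha wavelength, nm
K = 0.9            # Scherrer shape factor

# (2-theta in degrees, FWHM in degrees) for a few assumed WC peaks
peaks = [(31.5, 0.42), (35.6, 0.46), (48.3, 0.55), (64.0, 0.68)]

theta = np.radians([p[0] / 2 for p in peaks])
beta = np.radians([p[1] for p in peaks])          # broadening in radians

x = 4 * np.sin(theta)
y = beta * np.cos(theta)
strain, intercept = np.polyfit(x, y, 1)           # slope = strain
size_nm = K * lam / intercept                     # intercept = K*lam/D

print(f"crystallite size ~ {size_nm:.1f} nm, lattice strain ~ {strain:.4f}")
```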