21 results for Processing Time
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
FERNANDES, Fabiano A. N. et al. Optimization of osmotic dehydration of papaya followed by air-drying. Food Research International, v. 39, p. 492-498, 2006.
Abstract:
Water injection is the most widely used method for supplementary recovery in many oil fields, for several reasons: water is an effective displacing agent for low-viscosity oils, water injection projects are relatively simple to set up, and water is available at relatively low cost. Designing a water injection project requires reservoir studies to define the various parameters needed to increase the effectiveness of the method. Several mathematical models can be used for this kind of study, falling into two general categories: analytical and numerical. The present work performs a comparative analysis between the results of a streamline simulator and a conventional finite-difference simulator; both are based on numerical methods and were used to model light-oil reservoirs subjected to water injection. Two reservoir models were defined: the first was a heterogeneous model whose petrophysical properties vary along the reservoir, while the second was created using average petrophysical properties obtained from the first. Comparisons were made with the two models always under the same operating conditions. Some rock and fluid parameters were then changed in both models and the results compared again. From the factorial design carried out for the sensitivity analysis of the reservoir parameters, a few cases were chosen to study the influence of the water injection rate and the vertical position of the well perforations on the production forecast. The results from the two simulators were quite similar in most cases; differences were found only in cases with an increased solution gas-oil ratio. It was therefore concluded that, in flow simulations of reservoirs analogous to those studied here, mainly when the solution gas-oil ratio is low, the conventional finite-difference simulator may be replaced by the streamline simulator: the production forecasts are compatible, and the computational processing time is lower.
Abstract:
ART networks have several advantages: online learning, convergence in a few training epochs, incremental learning, etc. Nevertheless, some problems remain, such as category proliferation, sensitivity to the presentation order of the training patterns, and the choice of a good vigilance parameter. Among these, category proliferation is probably the most critical. It makes the network create too many categories, consuming resources to store an unnecessarily large number of categories, degrading or even making the processing time unfeasible, without contributing to the quality of the representation; i.e., in many cases the excessive number of categories generated by ART networks makes the quality of generalization inferior to what could otherwise be reached. Another factor that leads to category proliferation in ART networks is the difficulty of approximating regions with non-rectangular geometry, resulting in generalization inferior to that obtained by other classification methods. From the observation of these problems, three methodologies are proposed: two of them use a more flexible geometry than that of traditional ART networks, minimizing the category proliferation problem, while the third minimizes the problem of the presentation order of the training patterns. To validate these new approaches, many tests were performed, and the results demonstrate that the new methodologies can improve the quality of generalization of ART networks.
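To make the proliferation mechanism concrete, the following is a minimal sketch of the category choice, vigilance test, and learning steps of a Fuzzy ART network; the NumPy formulation and parameter values are illustrative assumptions, not the implementation evaluated in the dissertation.

import numpy as np

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    # One presentation of a complement-coded input I in [0,1]^d:
    # choose a category, test vigilance, then learn or create a category.
    if not weights:                          # no categories yet
        weights.append(I.copy())
        return 0
    # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), with ^ = elementwise min
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(T)[::-1]:            # best candidate first
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:                     # vigilance test passed: learn
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return j
    weights.append(I.copy())                 # all candidates rejected: proliferate
    return len(weights) - 1

Every input that fails the vigilance test against all stored categories appends a new one; this fallback is exactly where proliferation occurs, and the flexible-geometry methodologies act on how well a single category can cover a region, reducing how often it fires.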
Abstract:
Visual attention is a very important task in autonomous robotics but, because of its complexity, requires significant processing time. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and reduces the processing time required to perform them. Our system can be applied to bottom-up or top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm, so the system is able to discard features that are not strictly necessary for the tasks, thus reducing processing time. If the fovea is correctly placed, it is possible to reduce the processing time without compromising the quality of the task outputs. The distance of the fovea from the object is also analyzed, and if the visual system loses tracking in top-down attention, basic fovea placement strategies can be applied. Experiments have shown that this approach can reduce processing time by up to 60%. To validate the method, we tested it with the feature extraction algorithm known as Speeded Up Robust Features (SURF), one of the most efficient approaches for feature extraction. With the proposed architecture, we can meet the real-time requirements of robotic vision, mainly for application in autonomous robotics.
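A toy sketch of the scale-selection idea, under our own simplifying assumption that foveation drops the finest scales as distance from the fovea grows; the names and thresholds below are illustrative.

import numpy as np

def scales_for_pixel(px, py, fovea, n_scales=4, ring_width=64.0):
    # Pixels near the fovea are processed at all scales (fine to coarse);
    # each concentric ring farther out drops the finest remaining scale,
    # which is what cuts the feature-extraction processing time.
    d = np.hypot(px - fovea[0], py - fovea[1])
    ring = min(int(d // ring_width), n_scales - 1)
    return list(range(ring, n_scales))       # e.g. ring 0 -> [0, 1, 2, 3]

In the actual system the foveated model drives which SURF scale-space levels are computed; this rule merely shows why the workload shrinks quickly once fine scales are skipped outside the fovea.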
Abstract:
We propose a multi-resolution, coarse-to-fine approach for stereo matching in which the first matching happens at a different depth for each pixel. The proposed technique has the potential to attenuate several problems faced by the constant-depth algorithm, making it possible to reduce the number of errors, or the number of comparisons needed, to obtain equivalent results. Several experiments were performed to demonstrate the method's efficiency, including a comparison with the traditional plain correlation technique, in which the multi-resolution matching with variable depth proposed here produced better results in a smaller processing time.
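A schematic sketch of the coarse-to-fine idea (our illustration, not the thesis's exact algorithm): disparity is estimated at the coarsest level and then only refined within a small window at each finer level, which is where the savings in comparisons come from.

import numpy as np

def coarse_to_fine_disparity(left, right, levels=3, radius=2):
    # Toy multi-resolution stereo with a 1-pixel absolute-difference cost.
    pyrL, pyrR = [left], [right]
    for _ in range(levels - 1):               # build image pyramids
        pyrL.append(pyrL[-1][::2, ::2])
        pyrR.append(pyrR[-1][::2, ::2])
    disp = np.zeros(pyrL[-1].shape, dtype=int)
    for lvl in range(levels - 1, -1, -1):
        L, R = pyrL[lvl], pyrR[lvl]
        if lvl < levels - 1:                  # upsample the previous estimate
            disp = np.repeat(np.repeat(disp * 2, 2, 0), 2, 1)[:L.shape[0], :L.shape[1]]
        new = np.zeros_like(disp)
        for y in range(L.shape[0]):
            for x in range(L.shape[1]):
                best, bd = np.inf, 0
                for d in range(disp[y, x] - radius, disp[y, x] + radius + 1):
                    if 0 <= x - d < L.shape[1]:
                        c = abs(int(L[y, x]) - int(R[y, x - d]))
                        if c < best:
                            best, bd = c, d
                new[y, x] = bd
        disp = new
    return disp

A real implementation would use window-based correlation costs; the point here is that each pixel searches only 2*radius+1 candidates per level instead of the full disparity range.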
Abstract:
RFID (Radio Frequency Identification) is a non-contact automatic identification technique that identifies objects using radio frequency. This technology has shown its practical value and potential in manufacturing, retail, logistics, and hospital automation. Unfortunately, the key problem that hinders the application of RFID systems is information security. Recently, researchers have demonstrated solutions to security threats in RFID technology, among them several key management protocols. This master's dissertation presents a performance evaluation of the Neural Cryptography and Diffie-Hellman protocols in RFID systems, measuring the processing time inherent in each protocol. The tests were carried out on an FPGA (Field-Programmable Gate Array) platform with a Nios II embedded processor. The research methodology is based on aggregating knowledge for the development of new RFID systems through a comparative analysis of these two protocols. The main contributions of this work are a performance evaluation of the two protocols (Diffie-Hellman and Neural Cryptography) on an embedded platform and a survey of RFID security threats. According to the results, the Diffie-Hellman key agreement protocol is more suitable for RFID systems.
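For reference, a toy sketch of the Diffie-Hellman key agreement being benchmarked; the modulus and base below are illustrative assumptions, far smaller than anything a real deployment would use.

import secrets

p = 2**64 - 59                       # a 64-bit prime; real systems use far larger moduli
g = 2                                # conventional base (not verified here as a generator)

a = secrets.randbelow(p - 2) + 1     # tag's private exponent
b = secrets.randbelow(p - 2) + 1     # reader's private exponent
A = pow(g, a, p)                     # tag sends A to the reader
B = pow(g, b, p)                     # reader sends B to the tag
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same shared secret

The modular exponentiations (the pow calls) are the expensive operations in such a protocol, which is why processing time on constrained hardware like a Nios II processor is the natural metric to compare.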
Abstract:
In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of this environment (a map) and localize itself within it at the same time, using only information captured by the robot's sensors and known control signals. Recently, driven by the advance of computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. There are several approaches to it, and the vast majority work essentially by extracting features from the environment, computing the necessary correspondences, and estimating the required parameters from them. This work presents a monocular Visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error and thus obtain the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are therefore not needed, enabling our system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid a problem very common in traditional approaches, known as error propagation. Given the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work show the success of the system in different environments.
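To make direct registration concrete, a minimal sketch with a one-dimensional, translation-only warp and brute-force search; the actual system optimizes full pose parameters with iterative methods, which this illustration does not attempt.

import numpy as np

def photometric_error(I_ref, I_cur, tx):
    # Sum of squared pixel differences after shifting I_cur by tx columns;
    # direct methods minimize this error instead of matching extracted features.
    shifted = np.roll(I_cur, int(tx), axis=1)
    r = (I_ref.astype(float) - shifted.astype(float)).ravel()
    return r @ r

def register_translation(I_ref, I_cur, search=range(-20, 21)):
    # Brute-force search stands in for the gradient-based optimizers whose
    # accuracy/processing-time balance the work investigates.
    return min(search, key=lambda tx: photometric_error(I_ref, I_cur, tx))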
Abstract:
Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. In the case of wheeled mobile robots, techniques based on odometry make it possible to determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks whose locations in the environment are known. This technique does not generate cumulative errors, but it may require more processing time than odometry-based methods. Many methods therefore use both techniques, in such a way that the odometry errors are periodically corrected using measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a two-dimensional grid. Landmark detection in digital images is performed with the Hough transform, and heuristics are associated with that transform to allow its application in real time. To reduce the landmark search time, we propose mapping the odometry errors onto an area of the captured image that has a high probability of containing the sought landmark.
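A compact sketch of the standard Hough transform for straight lines, on which the detection step is based; the dissertation's heuristics and the odometry-guided search window are not reproduced here.

import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    # Each edge pixel votes for every line rho = x*cos(theta) + y*sin(theta)
    # passing through it; peaks in the accumulator are the detected lines.
    h, w = edges.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = ((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1
    return acc, thetas

Restricting the voting to the image area predicted by the odometry error model, as proposed above, shrinks both the set of edge pixels and the accumulator region that must be examined, which is what allows real-time operation.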
Abstract:
Vegetable drying plays an important role in the field of food dehydration; it is a very old practice originating from sun-drying food items in order to preserve them for consumption during periods of scarcity. One of these vegetables is the tomato, originally grown in South America. Tomatoes are easily perishable after being picked, which makes tomato dehydration a challenge due to their high water content (95%). The present research work was mainly intended to develop alternative processes for tomato conservation by drying slices of skinned and unskinned tomatoes, either in natura or osmotically pre-dehydrated. Firstly, the best conditions of the osmotic pre-dehydration process were defined, including temperature, immersion time, and concentration of the osmotic solution, based on the water loss, solids gain, and weight reduction of the pre-dehydrated tomatoes under different processing conditions. The osmotic solution was made up of NaCl (5 and 10%) and sucrose (25 and 35%) in different combinations. For fixed osmotic pre-dehydration conditions, the drying tests of the pre-processed and in natura tomatoes were carried out in an air-circulation oven and in a convective tray dryer, at two temperature levels. A sensory analysis of the osmotically pre-treated, unskinned dehydrated tomatoes was carried out, as well as a study of their shelf life. The results showed that the drying of the tomatoes was governed by internal control of water transport and did not show a constant-rate period, while two distinct falling-rate periods were observed. The osmotic pre-treatment substantially reduced the initial moisture content of the tomatoes, thus reducing the time needed for the product to reach intermediate moisture levels. The impermeability of the tomato skin was identified, as well as the unfavorable influence of the pre-treatment on the unskinned tomatoes, whose solids gain brought about a decrease in water activity with a subsequent reduction of the drying rate. Despite the various simplifications adopted during this study, the proposed diffusive model fit the experimental data satisfactorily, making it possible to determine the effective diffusion coefficients, whose values were consistent and compatible with those found in the current literature. Regarding higher evaporation rates and the lowest processing time, the best results were obtained when drying the unskinned in natura tomatoes and the skinned pre-dehydrated tomatoes at 60 °C, both processed in the convective dryer. The sensory analysis of the unskinned pre-treated product did not prove satisfactory. As for shelf life, over a period of 45 days no physicochemical or microbiological alteration of the product was observed.
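For reference, a sketch of the diffusive model typically fitted in such studies; the slab geometry and negligible external resistance are assumptions here, not necessarily the exact conditions adopted in the dissertation. Fick's second law for one-dimensional transport,

\frac{\partial X}{\partial t} = D_{\mathrm{eff}}\,\frac{\partial^2 X}{\partial z^2},

has, for an infinite slab of half-thickness L with uniform initial moisture, the series solution for the moisture ratio

MR = \frac{X - X_e}{X_0 - X_e} = \frac{8}{\pi^2} \sum_{n=0}^{\infty} \frac{1}{(2n+1)^2} \exp\!\left[-\frac{(2n+1)^2 \pi^2 D_{\mathrm{eff}}\, t}{4L^2}\right],

from which the effective diffusion coefficient D_{\mathrm{eff}} is estimated by fitting the experimental drying curves.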
Abstract:
This work studies two methods for drying sunflower grains grown in the western region of Rio Grande do Norte, on the premises of the Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte - IFRN - Campus Apodi. The initiative was motivated by the fact that the harvested grains are stored in sheds without any control of temperature, humidity, etc. As a consequence, many physical, chemical, and physiological characteristics are compromised, and the grains lose much of their quality for oil production as well as their germination power. Taking into account that most of the stored grain is used for replanting, the methods studied comprise thin-layer drying in an air-circulation oven (fixed bed) and drying in a spouted bed. The drying of grains in natura, i.e., newly harvested, was studied. Fixed-bed drying was carried out at temperatures of 40, 50, 60, and 70 °C. The spouted-bed experiments followed a 2² + 3 experimental design, with three replications at the central point, where the independent variables were the grain load (1500, 2000, and 2500 g) and the inlet air temperature (70, 80, and 90 °C); the drying curves and the desorption equilibrium isotherms were obtained. The characteristic curves of the bed had been obtained beforehand. In both the fixed bed and the spouted bed, the drying and desorption curves were obtained by weighing the grains throughout the experiments and by measuring water activity, respectively. Grain drying in the spouted bed showed good results, with a significant reduction in processing time. The Fick and Page models were fitted to the experimental data and represent the drying of the grains well in both the fixed bed and the spouted bed. The desorption curves showed no influence of the processing temperature on the hygroscopic characteristics of the grains. The GAB, Oswin, and Luikov models represented the desorption isotherms well.
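For reference, the Page model mentioned above is the empirical expression

MR = \frac{X - X_e}{X_0 - X_e} = \exp\left(-k\, t^{\,n}\right),

with k and n fitted to each drying curve, and the GAB isotherm is commonly written as

X_{eq} = \frac{X_m\, C\, K\, a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)},

where X_m is the monolayer moisture content and a_w the water activity; the exact parameterizations used in the work (including the Oswin and Luikov forms) are not reproduced here.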
Abstract:
Clustering data is a very important task in data mining, image processing, and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers within the FCM procedure, called ckMeans, applicable also to some FCM variants, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, or even to improve the number of correct classifications in some cases. We also developed an algorithm based on ckMeans to manipulate interval data considering interval membership degrees. This algorithm allows representing the data without converting interval data into point data, as happens in other FCM extensions that deal with interval data. In order to validate the proposed methodologies, a comparison was made between the ckMeans, K-Means, and FCM clustering algorithms (since the center calculation proposed here is similar to that of K-Means), considering three different distances and several well-known databases. The results of Interval ckMeans were also compared with those of other clustering algorithms when applied to an interval database containing the minimum and maximum monthly temperatures for a given year in 37 cities distributed across the continents.
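For context, one standard FCM iteration looks as follows; the sketch shows the baseline center-update step that ckMeans replaces, not the thesis's own formula.

import numpy as np

def fcm_step(X, U, m=2.0):
    # X: (n, f) data matrix; U: (c, n) membership matrix; m: fuzzifier.
    W = U ** m                                      # fuzzified memberships
    centers = (W @ X) / W.sum(axis=1, keepdims=True)
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
    p = 2.0 / (m - 1.0)
    inv = d ** (-p)                                 # u_ik proportional to d_ik^(-p)
    return centers, inv / inv.sum(axis=0, keepdims=True)

Iterating this until the memberships stabilize is what consumes the processing time the thesis targets; changing how the centers are computed changes how many such iterations are needed.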
An experimental analysis of exact algorithms applied to the multiobjective spanning tree problem
Abstract:
The Multiobjective Spanning Tree Problem is NP-hard and models applications in several areas. This research presents an experimental analysis of different strategies used in the literature to develop exact algorithms for the problem. Initially, the algorithms are classified according to the approaches used to solve the problem; features of two or more approaches can be found in some of them. The approaches investigated here are the two-stage method, branch-and-bound, k-best, and the preference-based approach. The main contribution of this research lies in the fact that no work to date has reported a systematic experimental analysis of exact algorithms for the Multiobjective Spanning Tree Problem, so this study can serve as a basis for other research dealing with the same problem. The computational experiments compare the performance of the algorithms regarding processing time, efficiency as a function of the number of objectives, and the number of solutions found within a controlled time interval. The analysis was performed on known instances of the problem, as well as on instances obtained from a generator commonly used in the literature.
Abstract:
Hiker Dice is a game recently proposed in software designed by Mara Kuzmich and Leonardo Goldbarg. In the game, a die is responsible for building a trail on an n x m board: as the die rests on a cell of the board, it prints the side that touches the surface. The game gives rise to the Maximum Simple Hamiltonian Path Hiker Dice problem (Hidi-CHS) on compact boards, characterized by the search for a Hamiltonian path that maximizes the sum of the sides printed on the board. The present research models the problem with graphs and proposes two classes of solution algorithms. The first class, of exact algorithms, consists of a backtracking algorithm with pruning based on logical rules and bounding by the best solution found. The second class is composed of metaheuristics of the Evolutionary Computation, Randomized Local Search, and GRASP (Greedy Randomized Adaptive Search Procedure) types. Three specific operators were created for the algorithms: restructuring, recombination of two solutions, and randomized greedy construction. The exact algorithm was tested on boards from 4x4 to 8x8; larger cases could not be treated computationally due to the explosion in processing time. The heuristic algorithms were tested on boards from 5x5 to 14x14. According to the evaluation methodology applied, the results achieved by the heuristic algorithms suggest a better performance for the GRASP algorithm.
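As an illustration of the third metaheuristic, a generic GRASP skeleton, written for minimization (the Hidi-CHS objective would be maximized by negating the cost); the callables and parameters are placeholders, not the dissertation's operators.

import random

def grasp(candidates, incr_cost, total_cost, local_search, iters=100, alpha=0.3):
    # Each iteration: greedy randomized construction from a restricted
    # candidate list (RCL), then local search; keep the best solution found.
    best, best_cost = None, float("inf")
    for _ in range(iters):
        solution, pool = [], list(candidates)
        while pool:                                  # construction phase
            pool.sort(key=lambda c: incr_cost(solution, c))
            rcl = pool[: max(1, int(alpha * len(pool)))]
            pick = random.choice(rcl)                # greedy + randomized
            solution.append(pick)
            pool.remove(pick)
        solution = local_search(solution)            # improvement phase
        c = total_cost(solution)
        if c < best_cost:
            best, best_cost = solution, c
    return best, best_cost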
Abstract:
Data clustering is applied in various fields, such as data mining, image processing, and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means algorithm (FCM) is the fuzzy clustering algorithm most used and discussed in the literature. FCM performance is strongly affected by the selection of the initial cluster centers; therefore, choosing a good set of initial centers is very important for the algorithm. However, in FCM the initial centers are chosen randomly, making it difficult to find a good set. This work proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in FCM variants; here, these initialization methods were applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers close to the real cluster centers. With these new initialization approaches, the goal is to reduce the number of iterations and the processing time these algorithms need to converge, without affecting the quality of the partition, or even improving it in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets.
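The abstract does not name the three deterministic methods, so as a single illustration of the general idea, here is a well-known deterministic scheme (farthest-point, or maximin, seeding) of the kind that could initialize FCM-style algorithms; it is an assumption, not one of the proposed methods.

import numpy as np

def maximin_init(X, c):
    # Start from the point nearest the data mean, then repeatedly add the
    # point farthest from all centers chosen so far: deterministic and well spread.
    centers = [X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
    for _ in range(c - 1):
        d = np.min([np.linalg.norm(X - ctr, axis=1) for ctr in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)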
Abstract:
This work discusses the application of ensemble techniques to the development of multimodal recognition systems with revocable biometrics. Biometric systems are the future of identification and user access control, as evidenced by the steady growth of such systems in today's society. However, there is still much to improve, mainly with regard to the accuracy, security, and processing time of such systems. In the search for more efficient techniques, multimodal systems and revocable biometrics are promising and can address many of the problems involved in traditional biometric recognition. A multimodal system is characterized by combining different biometric security techniques, overcoming many limitations, such as failures in extracting or processing the data. Among the various ways to build a multimodal system, the use of ensembles is quite promising, motivated by the performance and flexibility they have demonstrated over the years in their many applications. With regard to security, one of the biggest problems is that biometric traits are permanently tied to the user and cannot be changed if compromised. This problem, however, has been addressed by techniques known as revocable biometrics, which consist of applying a transformation to the biometric data in order to protect the unique characteristics, making cancellation and replacement possible. In order to contribute to this important subject, this work compares the performance of individual classifiers, as well as ensembles of classifiers, on the original data and on biometric spaces transformed by different functions. Another highlighted factor is the use of Genetic Algorithms (GA) in different parts of the systems, seeking to further maximize their efficiency. One motivation of this development is to evaluate the gain that ensemble systems maximized by different GAs can bring to data in the transformed space. Another relevant point is to generate even more efficient revocable systems by combining two or more transformation functions, demonstrating that it is possible to extract information of a similar standard by applying different transformation functions. All this makes clear the importance of revocable biometrics, ensembles, and GAs in the development of more efficient biometric systems, something increasingly important in the present day.
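As one concrete example of such a transformation (our illustration; the specific functions compared in the work are not reproduced here), a BioHashing-style random projection, a well-known revocable scheme:

import numpy as np

def revocable_template(features, user_seed, out_dim=64):
    # Project the biometric feature vector with a seed-dependent random matrix
    # and binarize. Issuing a new seed revokes the old template without
    # changing the underlying biometric trait.
    rng = np.random.default_rng(user_seed)
    R = rng.standard_normal((out_dim, features.size))
    return (R @ features > 0).astype(np.uint8)

Combining two or more such functions, as proposed above, amounts to composing transformations before matching is performed in the transformed space.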