999 results for "Planejamento de experimentos" (design of experiments)


Relevância: 20.00%

Resumo:

The increasing competitiveness of the construction industry, set in an economic environment in which supply now exceeds demand, means that the prices of many products and services are strongly influenced by the production processes and by the final consumer. Thus, to become more competitive in the market, construction companies are seeking new alternatives to reduce and control costs and production processes, as well as tools that allow close monitoring of the construction schedule and, consequently, compliance with the deadlines agreed with the client. In this scenario, the creation of tools for control, service management, and work planning emerges as an investment opportunity and an area that can bring great benefits to construction companies. The goal of this work is to present a system for planning, service management, and cost control that, through worksheets, provides information on the production phase of the work, allowing the visualization of possible irregularities in the planning and costs of the enterprise, enabling the company to take steps to achieve its goals and to correct deviations when necessary. The developed system was applied to a real-estate project in Rio Grande do Norte, and the results showed that its use allowed the construction company to track its results and take corrective and preventive actions during the production process, efficiently and effectively.

Relevância: 20.00%

Resumo:

Telecommunications play a fundamental role in contemporary society, one of their main purposes being to give people the possibility to connect and integrate into the society in which they live and, thereby, to accelerate development through knowledge. As new technologies are introduced to the market, the demand for new products and services that depend on the offered infrastructure increases, making telecommunication network planning problems ever larger and more complex. Many of these problems, however, can be formulated as combinatorial optimization models, and heuristic algorithms can help solve them in the planning phase. This work proposes a Parallel Evolutionary Algorithm for the telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). This NP-hard problem arises during the physical planning of a telecommunication network and consists of determining the connections between locations (customers) that satisfy a series of constraints at the lowest possible cost. Experimental results illustrate the effectiveness of the Parallel Evolutionary Algorithm, compared with other methods, in obtaining solutions that are either optimal or very close to optimal.
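As a sketch of how an evolutionary algorithm can be applied to a SRAP-like assignment problem, the following minimal (serial) genetic algorithm assigns customer nodes to rings while penalizing rings whose internal traffic exceeds a capacity. The traffic matrix, penalty weight, and GA parameters are all hypothetical, and the thesis's parallel algorithm and exact SRAP cost model are not reproduced here.

```python
import random

def ring_loads(assign, traffic, k):
    """Total intra-ring traffic carried by each of the k rings."""
    loads = [0] * k
    n = len(assign)
    for i in range(n):
        for j in range(i + 1, n):
            if assign[i] == assign[j]:
                loads[assign[i]] += traffic[i][j]
    return loads

def fitness(assign, traffic, k, capacity):
    """Lower is better: number of used rings plus a heavy overload penalty."""
    loads = ring_loads(assign, traffic, k)
    overload = sum(max(0, load - capacity) for load in loads)
    return len(set(assign)) + 100 * overload

def evolve(traffic, k, capacity, pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(traffic)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, traffic, k, capacity))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.2:                # mutation: reassign one node
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: fitness(a, traffic, k, capacity))
```

For a toy 4-node instance with two heavy traffic pairs and ring capacity 10, the algorithm finds a two-ring assignment with no overloaded ring.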

Relevância: 20.00%

Resumo:

Traditional raft fishing is characterized by unpredictability, high risk, and inadequate working conditions. Long working hours, physical wear, inadequate nutrition, unsanitary conditions, and the lack of rescue equipment and suitable working instruments, combined with changes in the fishermen's nutritional status, contribute to a picture of insecurity on the high seas, injuries, and poor health. This study aimed to analyze the activity of the fishermen of Ponta Negra, Natal/RN, and to examine their food supply conditions and the implications for their health and for the performance of their work. To this end, a methodology based on ergonomic work analysis was used, employing techniques such as observation, conversational interaction about the activity, active listening, observation protocols, and photographic and video records. The script for the conversational dynamics about the activity was developed from literature searches on artisanal fishing and on the culture and food habits of this population, analyzing the overall focus situation and two reference situations. To collect data on the fishermen's usual diet, the 24-hour recall and the Food Frequency Questionnaire (FFQ) were used, with both quantitative and qualitative analysis of the data. The impact of this diet on the fishermen's health was evaluated through a nutritional assessment. The results revealed that the fishermen carry out their activity under poor conditions of work, health, and nutrition. Their feeding practices compromise the performance of the work, making it even more stressful, and also contribute to the emergence of chronic noncommunicable diseases. The management of the activity, as well as the current structure of the vessel, also contributes to the adoption of inappropriate feeding practices during fishing trips. These results indicate the need for adequate interventions to assist in the recovery and/or maintenance of the fishermen's health, minimizing the effects of nutritional disorders on the performance of the activity and improving the quality of life of this population.

Relevância: 20.00%

Resumo:

This paper presents an analysis of the current strategic plan of the Judiciary of Rio Grande do Norte, emphasizing the evaluation of strategic indicators and verifying the effectiveness of their implementation since the adoption of the Balanced Scorecard as a performance evaluation tool for strategic management. The research presents the strategy map and the strategic performance indices, reporting on their effectiveness. After a literature and documentary review, the indicators are measured and treated from an exploratory and descriptive standpoint within the strategic planning used by the Potiguar Judiciary. The problem was evaluated qualitatively and quantitatively, using statistical techniques for data analysis and comparing the results among the judiciaries of the Brazilian states. For data collection, performance indicators were extracted from the Justice in Numbers data provided by the CNJ for the period from 2004 to 2011, together with information obtained from the Strategic Planning Sector of the TJ/RN. The main results of this study are: insight into the current level of strategic planning in the Judiciary of Rio Grande do Norte, and the evolution of its performance indicators in comparison with the states of RS, CE, and SE and with the National Judiciary.

Relevância: 20.00%

Resumo:

The usual programs for load flow calculation were, in general, developed for the simulation of electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms used in their formulations were based mostly on the characteristics of transmission systems, which were the main concern of engineers and researchers. Yet the physical characteristics of these systems are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long. These aspects cause the capacitive and inductive effects of the system to have a considerable influence on the quantities of interest, which is why they must be taken into consideration. Also, in transmission systems the loads have a macro nature, such as cities, neighborhoods, or large industries. These loads are, in general, practically balanced, which reduces the need for a three-phase methodology in load flow calculation. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission levels, which almost nullifies the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. Thus, the use of three-phase methodologies takes on great importance. Moreover, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work, in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm has been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases; the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configurations according to their real operation. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to several feeders are presented, in order to give insight into their performance and accuracy.
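As an illustration of the backward/forward sweep idea underlying power-summation load flow, here is a minimal single-phase sketch for a chain-shaped radial feeder. The per-unit data in the test are hypothetical, and the method described in the text is the full three-phase formulation with coupled, Carson-reduced line models, which this sketch does not attempt.

```python
def backward_forward_sweep(loads, impedances, v_source=1.0 + 0j, tol=1e-8):
    """loads[i]: complex power (p.u.) drawn at bus i+1; impedances[i]: series
    impedance of the line feeding bus i+1 from bus i. Buses form a chain
    rooted at the source bus 0. Returns the converged bus voltages."""
    n = len(loads)
    v = [v_source] * (n + 1)            # initial flat voltage profile
    while True:
        # Backward sweep: accumulate branch currents from the feeder end.
        i_branch = [0j] * n
        for k in range(n - 1, -1, -1):
            i_load = (loads[k] / v[k + 1]).conjugate()   # I = conj(S / V)
            i_branch[k] = i_load + (i_branch[k + 1] if k + 1 < n else 0j)
        # Forward sweep: update voltages from the source downstream.
        v_new = [v_source] * (n + 1)
        for k in range(n):
            v_new[k + 1] = v_new[k] - impedances[k] * i_branch[k]
        if max(abs(v_new[k] - v[k]) for k in range(n + 1)) < tol:
            return v_new
        v = v_new
```

With inductive loads, the voltage magnitude drops monotonically along the feeder, as expected for a radial distribution line.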

Relevância: 20.00%

Resumo:

Furthered mainly by new technologies, the expansion of distance education has created a demand for tools and methodologies that enhance teaching techniques based on proven pedagogical theories, and such methodologies must also apply to the so-called Virtual Learning Environments. The aim of this work is to present a planning methodology, based on established pedagogical theories, that contributes to incorporating assessment into the teaching and learning process. With this in mind, the relevant literature was reviewed to identify the key pedagogical concepts needed to define the methodology, and a descriptive approach was used to establish the relations between this conceptual framework and distance education. As a result, two teaching tools, the Contents Map and the Dependence Map, were specified and implemented to support course planning that takes assessment into account from this early stage. Integrated into Moodle, the developed tools were tested in a distance learning course for practical observation of the concepts involved. It was verified that the methodology supported by these tools is indeed helpful for course planning and for strengthening educational assessment, placing the student as the central element of the teaching and learning process.
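One way to picture a dependence map of course contents is as a directed acyclic graph of topics ordered by prerequisites. The sketch below (a standard Kahn topological sort, with invented topic names; the actual tools are Moodle components whose data model is not given in the abstract) illustrates only this underlying idea.

```python
from collections import deque

def study_order(deps):
    """deps: {topic: [prerequisite topics]} -> a valid study order (Kahn's
    topological sort), or None if the dependencies contain a cycle."""
    topics = set(deps) | {p for reqs in deps.values() for p in reqs}
    indegree = {t: 0 for t in topics}
    followers = {t: [] for t in topics}
    for topic, reqs in deps.items():
        for p in reqs:
            indegree[topic] += 1
            followers[p].append(topic)
    ready = deque(sorted(t for t in topics if indegree[t] == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for f in followers[t]:          # releasing t unlocks its dependents
            indegree[f] -= 1
            if indegree[f] == 0:
                ready.append(f)
    return order if len(order) == len(topics) else None
```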


Relevância: 20.00%

Resumo:

This work presents localization and path planning systems for two robots: a non-instrumented humanoid and a slave wheeled robot. The localization of the wheeled robot is performed using odometry information and landmark detection, fused by an Extended Kalman Filter. The relative position of the humanoid is obtained by fusing (with another Kalman Filter) the wheeled robot's pose with the characteristics of a landmark placed on the humanoid's back. Knowing the wheeled robot's position and the humanoid's position relative to it, we obtain the absolute position of the humanoid. The path planning system was developed to provide cooperative movement of the two robots, incorporating the visibility restrictions of the robotic system.
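The predict/update cycle of the Kalman filtering used for this fusion can be illustrated in one dimension: odometry advances the position estimate and inflates its variance, and a landmark-derived measurement pulls it back with a gain weighted by the variances. All numeric values below are hypothetical; the systems described use full (extended) Kalman filters on the robot pose, not this scalar version.

```python
def kf_predict(x, p, u, q):
    """Motion step: move the estimate by odometry u, adding process
    variance q (uncertainty grows while driving blind)."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement step: fuse a landmark-derived position z with
    measurement variance r (uncertainty shrinks after an observation)."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p
```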

Relevância: 20.00%

Resumo:

This work introduces a new method for building environment maps with three-dimensional information from visual data, for accurate robot navigation. Many 3D mapping approaches based on occupancy grids typically require high computational effort to both build and store the map. We introduce a 2.5-D occupancy-elevation grid map, a discrete mapping approach in which each cell stores the occupancy probability, the height of the terrain at that place in the environment, and the variance of this height. This 2.5-dimensional representation allows a mobile robot to know whether a place in the environment is occupied by an obstacle and how high the obstacle is, so it can decide whether the obstacle can be traversed. The sensory information needed to construct the map is provided by a stereo vision system, modeled with a robust probabilistic approach that takes into account the noise present in stereo processing. The resulting maps support tasks such as decision making in autonomous navigation, exploration, localization, and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation.
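A single cell update of such an occupancy-elevation grid can be sketched as a log-odds occupancy update plus an inverse-variance (one-dimensional Kalman) fusion of the measured height. The sensor-model constants below are hypothetical assumptions, not those of the stereo model in the text.

```python
import math

def update_cell(cell, hit, z, z_var, l_occ=0.85, l_free=-0.4):
    """cell: dict with 'log_odds', 'height', 'height_var'. hit: whether the
    stereo measurement struck this cell; z, z_var: measured height and its
    variance. Occupancy accumulates in log-odds; height fuses by variance."""
    cell["log_odds"] += l_occ if hit else l_free
    if hit:
        if cell["height_var"] is None:      # first observation of this cell
            cell["height"], cell["height_var"] = z, z_var
        else:
            w = cell["height_var"] / (cell["height_var"] + z_var)
            cell["height"] += w * (z - cell["height"])
            cell["height_var"] *= (1 - w)   # fused variance always shrinks
    return cell

def occupancy_probability(cell):
    """Recover the occupancy probability from the stored log-odds."""
    return 1.0 / (1.0 + math.exp(-cell["log_odds"]))
```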

Relevância: 20.00%

Resumo:

Simulations based on cognitively rich agents can become a very computing-intensive task, especially when the simulated environment represents a complex system, and the situation becomes worse when time constraints are present. Such simulations would benefit from a mechanism that improves the way agents perceive and react to changes in these environments; in other words, an approach for improving the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation. In complex environments full of variables, not every piece of information available to the agent is necessarily needed for its decision-making process; what is needed depends on the task being performed. The agent therefore needs to filter incoming perceptions in the same way we do with our focus of attention. With a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed here structures cognitive agents in two parts: (1) a main part containing the reasoning/planning process, the knowledge, and the affective state of the agent; and (2) a set of behaviors triggered by the planner in order to achieve the agent's goals. Each of these behaviors has a dynamically adjustable focus of attention, tuned at runtime according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, responsible for the quality of the perceived data, and a quantitative focus, responsible for the quantity of the perceived data. Thus, a behavior can filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior currently running.
In addition to this human-like attention focus, the agent is also endowed with an affective state, based on theories of human emotion, mood, and personality. This model serves as the basis for the mechanism that continuously adjusts the agent's attention focus, both qualitative and quantitative. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior in order to become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can be kept fixed (neither the qualitative nor the quantitative focus changes), or different combinations of qualitative and quantitative focus variation can be used. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation affects only the agent's perception layer. To evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-spreading scenario. In the simulations, agents using the proposed architecture are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more efficient, since they can consider every possible option before making a decision. However, the experiments showed that attention-focus-based agents can be as efficient as omniscient ones, with the advantage of solving the same problems in significantly less time. The experiments thus indicate the effectiveness of the proposed architecture.
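The filtering performed by the two foci can be sketched as follows: the qualitative focus restricts which percept types reach the behavior, and the quantitative focus caps how many of them are processed. The percept format, field names, and salience criterion below are invented for illustration; the actual implementation lives in the perception layer of a BDI platform.

```python
def filter_percepts(percepts, relevant_types, quota):
    """Qualitative focus: keep only percept types relevant to the running
    behavior. Quantitative focus: cap how many percepts are processed,
    keeping the most salient ones first."""
    qualified = [p for p in percepts if p["type"] in relevant_types]
    qualified.sort(key=lambda p: p["salience"], reverse=True)
    return qualified[:quota]
```

A fire-fighting behavior with `relevant_types={"fire"}` and a small quota would then ignore smoke percepts entirely and attend only to the most salient fires.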

Relevância: 20.00%

Resumo:

This work presents the development of a coordination and cooperation method for a fleet of mini mobile robots. The scope of the development is robot soccer, a well-structured, dynamic platform developed worldwide. Robot soccer involves several fields of knowledge, including computer vision, control theory, microcontrolled circuit design, and cooperative planning, among others. For organizational purposes, the system was divided into five modules: robot, vision, localization, planning, and control. The focus of this work is limited to the planning module. To support its development, a system simulator was implemented. The simulator runs in real time and replaces the real robots, so the other modules remain practically unchanged between a simulation and an execution with real robots. To organize the robots' behavior and produce cooperation among them, a hierarchical architecture was adopted: at the highest level, the team's playing style is chosen; just below, the role each player should assume is decided; a specific action is associated with each role; and finally the robot's motion reference is computed. A robot's role dictates its behavior at a given moment. Roles are allocated dynamically during the game, so the same robot can assume different roles over the course of a match.
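A minimal sketch of dynamic role allocation might rank robots by distance to the ball at each control cycle; the role names and the purely distance-based criterion below are illustrative assumptions, not the thesis's actual decision rules.

```python
import math

ROLES = ["attacker", "defender", "goalkeeper"]

def assign_roles(robot_positions, ball_position):
    """Rank robots by distance to the ball and hand out roles in order:
    closest robot attacks, next defends, last keeps the goal."""
    def dist(p):
        return math.hypot(p[0] - ball_position[0], p[1] - ball_position[1])
    ranked = sorted(range(len(robot_positions)),
                    key=lambda i: dist(robot_positions[i]))
    return {i: ROLES[rank] for rank, i in enumerate(ranked)}
```

Re-running the assignment every cycle is what lets the same robot take different roles as the ball moves, as the text describes.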

Relevância: 20.00%

Resumo:

We propose in this work a software architecture for robotic boats intended to act in diverse aquatic environments fully autonomously, performing telemetry to a base station while accomplishing their mission. The proposal is intended for application within the N-Boat project of the NatalNet/DCA laboratory, which aims to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security, and surveillance modules, which make up the boat and base station systems. For validation, a simulator was developed in the C language using the resources of the OpenGL graphics API. The main results were obtained in the implementation of the memory, actuation, and strategy modules, more specifically data sharing, control of sails and rudder, and the planning of short routes based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of actual use of the developed software architecture and its application in the area of autonomous mobile robotics.

Relevância: 20.00%

Resumo:

This work deals with the drying kinetics of cajá (Spondias mombin L.) bagasse in a tray dryer, studied through experimental design. In order to add value to this product, a kinetic study was carried out. A central composite experimental design was used to evaluate the influence of the operational variables: inlet air temperature (55, 65, and 75 °C), drying air velocity (3.2, 4.6, and 6.0 m/s), and fixed bed thickness (0.8, 1.2, and 1.6 cm), with the moisture content (dry basis) as the response variable. The results showed that Fick's diffusional model fitted the experimental data quite well. The experimental design showed that both the main effects and the second-order effects were significant at the 95% confidence level. The best operational condition according to the statistical design was an inlet air temperature of 75 °C, a drying air velocity of 6.0 m/s, and a fixed bed thickness of 0.8 cm; in this case, the equilibrium moisture content (1.3%, dry basis) was reached at 220 minutes.
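Fick's diffusional model for thin-layer drying of a slab can be written as the series solution for the moisture ratio, MR(t) = (8/pi^2) * sum over n >= 0 of [1/(2n+1)^2] * exp(-(2n+1)^2 * pi^2 * D_eff * t / (4 L^2)), which the sketch below evaluates. The effective diffusivity and bed half-thickness used in the test are hypothetical, not the fitted values of this work.

```python
import math

def moisture_ratio(t, d_eff, half_thickness, n_terms=50):
    """Series solution of Fick's second law for a slab of half-thickness L:
    returns the dimensionless moisture ratio MR(t), which decays from ~1 at
    t=0 toward 0 as drying proceeds (t in s, d_eff in m^2/s, L in m)."""
    L = half_thickness
    total = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        total += (1.0 / k**2) * math.exp(
            -k**2 * math.pi**2 * d_eff * t / (4 * L**2))
    return (8.0 / math.pi**2) * total
```

Fitting this expression to the measured moisture-content curve is what yields the effective diffusivity for each experimental condition.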

Relevância: 20.00%

Resumo:

With the increase in water pollution in recent years, much progress has been made in research on the treatment of contaminated waters. For wastewaters containing highly toxic organic compounds, to which biological treatment cannot be applied, Advanced Oxidation Processes (AOP) are an alternative for the degradation of non-biodegradable and toxic organic substances, because these processes are based on the generation of the hydroxyl radical, a highly reactive species able to degrade practically all classes of organic compounds. In general, AOP require special ultraviolet (UV) lamps inside the reactors. These lamps have a high electric power demand, which constitutes one of the largest obstacles to the application of these processes at industrial scale. This work involves the development of a new photochemical reactor composed of 12 low-cost black light fluorescent lamps (SYLVANIA, black light, 40 W) as the UV radiation source. The studied process was the photo-Fenton system, a combination of ferrous ions, hydrogen peroxide, and UV radiation, employed for the degradation of a synthetic wastewater containing phenol, one of the main pollutants of the petroleum industry, as the model pollutant. Preliminary experiments were carried out to estimate the operational conditions of the reactor, as well as the effects of the intensity of the radiation source and of the lamp distribution inside the reactor. Samples were collected during the experiments and analyzed for dissolved organic carbon (DOC) content using a Shimadzu 5000A TOC analyzer. High Performance Liquid Chromatography (HPLC) was also used to identify the catechol and hydroquinone formed during the phenol degradation process. Actinometry indicated a photon flow of 9.06x10^18 photon/s for the 12 lamps switched on.
A factorial experimental design was elaborated, making it possible to evaluate the influence of the reactant concentrations (Fe2+ and H2O2) and to determine the most favorable experimental conditions ([Fe2+] = 1.6 mM and [H2O2] = 150.5 mM). It was verified that increasing the ferrous ion concentration favors the process until a limit is reached, beyond which a further increase has a negative effect. H2O2 exhibited a positive effect; however, at high concentrations a maximum degradation rate is reached. The mathematical modeling of the process was accomplished using the artificial neural network technique.
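The effect estimates of a 2x2 factorial design like the one used here (factors Fe2+ and H2O2 at two coded levels) are simple contrasts; the sketch below computes them, with invented response values, only to illustrate the calculation.

```python
def factorial_effects(runs):
    """runs: list of (a_level, b_level, response) with coded levels -1/+1
    for a full 2^2 design. Returns (mean, effect_A, effect_B, effect_AB),
    each effect being the contrast divided by half the number of runs."""
    n = len(runs)
    mean = sum(y for _, _, y in runs) / n
    eff_a = sum(a * y for a, b, y in runs) / (n / 2)
    eff_b = sum(b * y for a, b, y in runs) / (n / 2)
    eff_ab = sum(a * b * y for a, b, y in runs) / (n / 2)
    return mean, eff_a, eff_b, eff_ab
```

Comparing each effect against the experimental error (e.g. from replicated center points) is what establishes significance at the chosen confidence level.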

Relevância: 20.00%

Resumo:

The decontamination of materials has been the subject of several studies. One of the factors that increases pollution is the irresponsible discarding of toxic waste, for example the presence of PCBs (polychlorinated biphenyls) in the environment. Under Brazilian regulations, material contaminated with PCBs at concentrations higher than 50 ppm must be stored in special places or destroyed, usually by incineration in a dual-stage plasma furnace. Due to the high cost of this procedure, new methodologies for PCB removal have been studied. The objective of this study was to develop an experimental and analytical methodology for quantifying the removal of PCBs by supercritical fluid extraction and by the Soxhlet method, and to compare the efficiency of the two extraction processes in the treatment of PCB-contaminated materials. The materials studied were soil and wood, both with simulated contamination at concentrations of 6,000, 33,000, and 60,000 mg of PCB per kg of material. Soxhlet extractions were performed using 100 mL of hexane at a temperature of 180 °C. Supercritical fluid extractions were performed at 200 bar and 70 °C, with a supercritical CO2 flow rate of 3 g/min, for 1-3 hours. The extracts obtained were quantified by gas chromatography-mass spectrometry (GC/MS). The conventional extractions followed a 2^2 factorial experimental design, with the aim of studying the influence of two variables of the Soxhlet extraction process, contaminant concentration and extraction time, on obtaining maximum removal of PCBs from the materials. The Soxhlet extractions were efficient for the removal of PCBs from soil and wood with both solvents studied (hexane and ethanol). In the soil extractions, the removal efficiency was 81.3% using ethanol as solvent, against 95% using hexane, for the same extraction time. The wood extraction results showed that, statistically, there is no difference between the two solvents studied. Under the conditions studied, supercritical fluid extraction showed better efficiency for PCBs in the wood matrix than in soil: for two-hour extractions, 43.9 ± 0.5% of the total PCBs was extracted from the soil, against 95.1 ± 0.5% from the wood. The results demonstrated that the extractions were satisfactory for both techniques studied.