93 results for "Planejamento experimentos"


Relevance: 20.00%

Abstract:

This paper presents an analysis of the current strategic plan of the Judiciary of Rio Grande do Norte, emphasizing the evaluation of strategic indicators and verifying the effectiveness of its implementation since the adoption of the Balanced Scorecard as a performance-evaluation tool for strategic management. The research presents the strategy map and the strategic performance indices, reporting on their effectiveness. After a literature and documentary review, the indicators are measured and treated from an exploratory and descriptive standpoint within the strategic planning used by the judiciary of Rio Grande do Norte. The problem was evaluated qualitatively and quantitatively, using statistical techniques to analyze the data and compare them across the judiciaries of Brazilian states. For data collection, performance indicators were extracted from the Justice in Numbers reports provided by the CNJ for the period from 2004 to 2011, together with information obtained from the Strategic Planning Sector of TJ/RN. The main results of this study are the following: insight into the current level of the strategic planning of the judiciary of Rio Grande do Norte and into the evolution of its performance indicators, compared with the states of RS, CE and SE and with the National Judiciary

Relevance: 20.00%

Abstract:

The usual programs for load flow calculation were, in general, developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems alone, which were the main concern of engineers and researchers. The physical characteristics of these systems, though, are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Still in transmission systems, the loads have a macro nature, such as cities, neighborhoods or big industries. These loads are generally almost balanced, which reduces the need for a three-phase load flow methodology. Distribution systems, on the other hand, have different characteristics: voltage levels are low compared with transmission ones, which practically cancels the capacitive effects of the lines. The loads here are transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Besides, equipment like voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was first developed in this work, in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers for simulating radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing various types of configurations to be simulated according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting further optimization processes. These parameters are found by computing the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, to give insight into their performance and accuracy
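
The backward/forward sweep at the heart of the Power Summation Algorithm can be sketched in a minimal single-phase form (the thesis extends it to three phases with mutual coupling). Everything below — function names, node numbering convention, and the toy feeder data — is illustrative, not the thesis' actual formulation.

```python
# Single-phase sketch of the Power Summation (backward/forward sweep) method
# for a radial feeder. Node 0 is the substation; every other node i is fed by
# one branch from parents[i]. Nodes are assumed numbered so parents[i] < i.

def power_summation_flow(parents, z, s_load, v0=1.0, tol=1e-9, max_iter=100):
    """parents[i]: parent of node i (parents[0] = -1 for the root).
    z[i]: series impedance of the branch feeding node i (complex, p.u.).
    s_load[i]: complex power demanded at node i (p.u.).
    Returns the complex node voltages (p.u.)."""
    n = len(parents)
    v = [complex(v0)] * n
    for _ in range(max_iter):
        # Backward sweep: accumulate powers (loads + branch losses)
        # from the leaves toward the substation.
        s_branch = list(s_load)
        for i in range(n - 1, 0, -1):
            loss = z[i] * abs(s_branch[i] / v[i]) ** 2   # z * |I|^2
            s_branch[parents[i]] += s_branch[i] + loss
        # Forward sweep: update voltages from the substation outward.
        v_new = v[:]
        for i in range(1, n):
            current = (s_branch[i] / v_new[parents[i]]).conjugate()
            v_new[i] = v_new[parents[i]] - z[i] * current
        if max(abs(v_new[i] - v[i]) for i in range(n)) < tol:
            return v_new
        v = v_new
    return v

# Toy radial feeder: substation 0 -> node 1 -> node 2
voltages = power_summation_flow(
    parents=[-1, 0, 1],
    z=[0j, 0.01 + 0.02j, 0.01 + 0.02j],
    s_load=[0j, 0.1 + 0.05j, 0.1 + 0.05j],
)
```

For an inductive load, the voltage magnitude drops monotonically along the feeder, which is what the sensitivity and capacitor-placement studies later exploit.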

Relevance: 20.00%

Abstract:

Furthered mainly by new technologies, the expansion of distance education has created a demand for tools and methodologies that enhance teaching techniques based on proven pedagogical theories. Such methodologies must also be applicable in so-called Virtual Learning Environments. The aim of this work is to present a planning methodology, based on known pedagogical theories, that contributes to incorporating assessment into the teaching and learning process. With this in mind, the pertinent literature was reviewed to identify the key pedagogical concepts needed to define this methodology, and a descriptive approach was used to establish the relations between this conceptual framework and distance education. As a result, the Contents Map and the Dependence Map were specified and implemented: two teaching tools that promote course planning that takes assessment into account from this early stage. Integrated into Moodle, the developed tools were tested in a distance learning course for practical observation of the concepts involved. It was verified that the methodology supported by these tools is in fact helpful in course planning and in strengthening educational assessment, placing the student as the central element of the teaching and learning process
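
A dependence map of this kind can be read as a directed graph of prerequisite relations among course topics; a topological sort then yields a valid study/assessment order. The sketch below is a hypothetical illustration of that idea, not the thesis' actual data model.

```python
from collections import deque

# Hypothetical "Dependence Map": an edge A -> B means topic A must be
# covered and assessed before topic B. Topic names are invented.

def study_order(topics, prerequisites):
    """topics: list of topic names; prerequisites: list of (before, after) pairs.
    Returns one valid ordering, or raises if the dependences are cyclic."""
    indegree = {t: 0 for t in topics}
    successors = {t: [] for t in topics}
    for before, after in prerequisites:
        successors[before].append(after)
        indegree[after] += 1
    queue = deque(t for t in topics if indegree[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for nxt in successors[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(topics):
        raise ValueError("cyclic dependence: no valid order")
    return order

order = study_order(
    ["intro", "variables", "loops", "functions"],
    [("intro", "variables"), ("variables", "loops"), ("variables", "functions")],
)
```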

Relevance: 20.00%

Abstract:

This work presents the localization and path planning systems for two robots: a non-instrumented humanoid and a slave wheeled robot. The localization of the wheeled robot is computed from odometry information and landmark detection, fused with an Extended Kalman Filter. The relative position of the humanoid is obtained by fusing, with another Kalman Filter, the wheeled robot's pose with the characteristics of the landmark on the humanoid's back. Knowing the wheeled robot's position and the humanoid's position relative to it, we obtain the absolute position of the humanoid. The path planning system was developed to provide the cooperative movement of the two robots, incorporating the visibility restrictions of the robotic system
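
The predict/update cycle behind this kind of fusion can be illustrated in one dimension: odometry predicts the pose, a landmark observation corrects it. The real system is multivariate (an Extended Kalman Filter); the scalar equations and the numbers below are only an illustration of the fusion idea.

```python
# 1-D Kalman filter sketch: odometry prediction + landmark correction.

def kf_predict(x, p, u, q):
    """Motion update: displacement u from odometry, with noise variance q."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement update: landmark observation z with noise variance r."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1.0                         # initial pose estimate and variance
x, p = kf_predict(x, p, u=1.0, q=0.5)   # odometry says we moved 1.0 m
x, p = kf_update(x, p, z=1.2, r=0.5)    # landmark suggests we are at 1.2 m
```

Note that the corrected variance is smaller than either the predicted or the measurement variance: fusing the two sources always sharpens the estimate.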

Relevance: 20.00%

Abstract:

This work introduces a new method for mapping environments with three-dimensional information obtained from visual data, for accurate robot navigation. Many approaches to 3D mapping with occupancy grids typically require high computational effort both to build and to store the map. We introduce a 2.5-D occupancy-elevation grid: a discrete mapping approach in which each cell stores the occupancy probability, the height of the terrain at that place in the environment, and the variance of this height. This 2.5-dimensional representation lets a mobile robot know whether a place in the environment is occupied by an obstacle and how tall that obstacle is, so it can decide whether the obstacle can be traversed. The sensory information needed to build the map is provided by a stereo vision system, modeled with a robust probabilistic approach that accounts for the noise present in stereo processing. The resulting maps favor the execution of tasks such as decision making in autonomous navigation, exploration, localization and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation
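
One cell of such a grid can be sketched as follows: occupancy is accumulated in log-odds form, and heights from successive stereo readings are fused by a precision-weighted (1-D Gaussian product) average. The update rule here is a standard assumption, not necessarily the thesis' exact formulation, and the readings are invented.

```python
import math

# One cell of a 2.5-D occupancy-elevation grid: occupancy probability
# (log-odds), terrain height, and the variance of that height.

class Cell:
    def __init__(self):
        self.log_odds = 0.0               # occupancy in log-odds (p = 0.5)
        self.height = 0.0
        self.height_var = float("inf")    # no height observed yet

    def update(self, occupied_log_odds, h, h_var):
        self.log_odds += occupied_log_odds
        if self.height_var == float("inf"):
            self.height, self.height_var = h, h_var
        else:
            # Product of two Gaussians: precision-weighted average.
            w = self.height_var / (self.height_var + h_var)
            self.height += w * (h - self.height)
            self.height_var = self.height_var * h_var / (self.height_var + h_var)

    def occupancy(self):
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

cell = Cell()
cell.update(0.8, h=0.50, h_var=0.04)   # stereo reading: obstacle ~0.5 m tall
cell.update(0.8, h=0.54, h_var=0.04)   # second, consistent reading
```

With two equally uncertain readings the fused height is their average and its variance halves, which is how repeated stereo observations sharpen the elevation estimate.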

Relevance: 20.00%

Abstract:

Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. The situation becomes worse when time constraints are present. This kind of simulation would benefit from a mechanism that improves the way agents perceive and react to changes in such environments; in other words, an approach that improves the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, not all the information available to an agent is necessarily needed for its decision-making process; what matters depends on the task being performed. The agent then needs to filter incoming perceptions, just as we do with our focus of attention. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein structures cognitive agents in two parts: 1) a main part containing the reasoning/planning process, the knowledge and the affective state of the agent; and 2) a set of behaviors that are triggered by the planner in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, responsible for the quality of the perceived data, and a quantitative focus, responsible for the quantity of the perceived data. Thus, a behavior is able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior currently running.
Besides the human-inspired attention focus, the agent is also endowed with an affective state, based on theories of human emotion, mood and personality. This model serves as the basis for the mechanism of continuous adjustment of the agent's attention focus, in both its qualitative and quantitative components. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior, becoming more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can be kept fixed (neither the qualitative nor the quantitative focus changes), or different combinations of qualitative and quantitative focus variation can be used. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation affects only the agent's perception layer. To evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growing scenario. In the simulations, agents using the proposed architecture are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more efficient, since they can consider every possible option before making a decision. However, the experiments showed that attention-focus-based agents can be as efficient as the omniscient ones, with the advantage of solving the same problems in significantly less time. The experiments thus indicate the efficiency of the proposed architecture
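
The qualitative/quantitative split can be sketched as a perception filter: the qualitative focus keeps only percept types relevant to the running behavior, and the quantitative focus caps how many percepts get through. The coupling to the affective state is reduced here to a single "arousal" knob; this narrowing rule is an assumption standing in for the thesis' affective model.

```python
# Illustrative attention-focus filter for one behavior. All names and the
# percept data are hypothetical.

def focus_filter(percepts, relevant_types, arousal, max_capacity=10):
    """percepts: list of (type, salience) pairs; arousal in [0, 1]."""
    # Qualitative focus: keep only percept types relevant to this behavior.
    relevant = [p for p in percepts if p[0] in relevant_types]
    # Quantitative focus: assumed rule - capacity shrinks as arousal rises
    # (narrowing of attention under stress).
    capacity = max(1, round(max_capacity * (1.0 - 0.5 * arousal)))
    relevant.sort(key=lambda p: p[1], reverse=True)   # most salient first
    return relevant[:capacity]

percepts = [("fire", 0.9), ("tree", 0.2), ("smoke", 0.7), ("bird", 0.1)]
seen = focus_filter(percepts, {"fire", "smoke"}, arousal=1.0, max_capacity=4)
```

In this toy run the behavior cognitively processes only the two salient, relevant percepts instead of all four, which is the source of the time savings reported in the experiments.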

Relevance: 20.00%

Abstract:

This work presents the development of a coordination and cooperation method for a fleet of mobile mini-robots. The development scope is robot soccer, a well-structured, dynamic platform developed worldwide. Robot soccer involves several fields of knowledge, including computer vision, control theory, microcontrolled circuit design and cooperative planning, among others. For organization purposes, the system was divided into five modules: robot, vision, localization, planning and control. The focus of this work is limited to the planning module. To support its development, a system simulator was implemented. The simulator runs in real time and replaces the real robots, so the other modules remain practically unchanged whether running a simulation or the real robots. To organize the robots' behavior and produce cooperation among them, a hierarchical architecture was adopted: at the highest level, the team's playing style is chosen; below that, the role each player should assume is decided; each role is associated with a specific action; and finally the robot's motion reference is computed. A robot's role dictates its behavior at a given moment. Roles are allocated dynamically during the game, so the same robot may assume different roles throughout the match
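
Dynamic role allocation of this kind can be sketched as re-assigning roles every planning cycle based on a utility measure. The roles, the distance-to-ball utility and the data below are hypothetical simplifications, not the thesis' actual role set.

```python
import math

# Illustrative dynamic role allocation: the robot nearest to the ball becomes
# the "striker"; the others become "defenders". Re-run every planning cycle.

def assign_roles(robots, ball):
    """robots: dict name -> (x, y); ball: (x, y). Returns name -> role."""
    dist = lambda p: math.hypot(p[0] - ball[0], p[1] - ball[1])
    striker = min(robots, key=lambda name: dist(robots[name]))
    return {name: ("striker" if name == striker else "defender")
            for name in robots}

roles = assign_roles({"r1": (0.0, 0.0), "r2": (2.0, 1.0)}, ball=(2.0, 0.5))
```

Because the assignment is recomputed each cycle, the same robot naturally switches roles as the ball moves, matching the dynamic allocation described above.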

Relevance: 20.00%

Abstract:

We propose in this work a software architecture for robotic boats intended to act fully autonomously in diverse aquatic environments, performing telemetry to a base station while accomplishing their mission. This proposal is meant to be applied within the N-Boat project of the NatalNet/DCA laboratory, which aims to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security and surveillance modules, which make up the boat and base station systems. For validation, a simulator was developed in the C language using the OpenGL graphics API. The main results were obtained in the implementation of the memory, actuation and strategy modules: data sharing, control of sails and rudder, and short-route planning based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of actually using the developed software architecture and applying it in the area of autonomous mobile robotics

Relevance: 20.00%

Abstract:

This work deals with the drying kinetics of cajá (Spondias mombin L.) bagasse in a tray dryer, studied through an experimental design. In order to add value to this product, a kinetic study was carried out. A central composite experimental design was used to evaluate the influence of the operational variables: inlet air temperature (55, 65 and 75 °C), drying air velocity (3.2, 4.6 and 6.0 m/s) and fixed bed thickness (0.8, 1.2 and 1.6 cm), with the moisture content (dry basis) as the response variable. The results showed that Fick's diffusional model fitted the experimental data quite well. The experimental design showed that both the main effects and the second-order effects were significant at the 95% confidence level. The best operational condition according to the statistical design was an inlet air temperature of 75 °C, a drying air velocity of 6.0 m/s and a fixed bed thickness of 0.8 cm. In this condition, the equilibrium moisture content (1.3%, dry basis) was reached at 220 minutes
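
The slab solution of Fick's second law commonly used in thin-layer drying gives the dimensionless moisture ratio MR(t) as a series in the effective diffusivity D_eff and the bed half-thickness L. The sketch below evaluates that standard series; the D_eff value and geometry are illustrative, not fitted to the thesis' data.

```python
import math

# MR(t) = (8/pi^2) * sum_{n>=0} exp(-(2n+1)^2 pi^2 D_eff t / (4 L^2)) / (2n+1)^2
# for an infinite slab dried from both faces (half-thickness L).

def moisture_ratio(t, d_eff, half_thickness, terms=50):
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += math.exp(
            -(k ** 2) * math.pi ** 2 * d_eff * t / (4.0 * half_thickness ** 2)
        ) / k ** 2
    return (8.0 / math.pi ** 2) * total

# Illustrative values: D_eff = 1e-9 m^2/s, half-thickness 0.004 m (0.8 cm bed)
mr_start = moisture_ratio(0.0, 1e-9, 0.004)
mr_later = moisture_ratio(3600.0, 1e-9, 0.004)
```

Fitting D_eff is then a one-parameter least-squares problem against the measured moisture-ratio curve.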

Relevance: 20.00%

Abstract:

With the increase of water pollution in recent years, much progress has been made in research on the treatment of contaminated waters. For wastewaters containing highly toxic organic compounds, to which biological treatment cannot be applied, Advanced Oxidation Processes (AOP) are an alternative for the degradation of non-biodegradable and toxic organic substances, because these processes are based on the generation of the hydroxyl radical, a highly reactive species able to degrade practically all classes of organic compounds. In general, AOPs require special ultraviolet (UV) lamps inside the reactors. These lamps present a high electric power demand, which is one of the largest obstacles to applying these processes at industrial scale. This work involves the development of a new photochemical reactor composed of 12 low-cost black-light fluorescent lamps (SYLVANIA, black light, 40 W) as the UV radiation source. The studied process was the photo-Fenton system, a combination of ferrous ions, hydrogen peroxide and UV radiation, employed for the degradation of a synthetic wastewater containing phenol, one of the main pollutants of the petroleum industry, as a model pollutant. Preliminary experiments were carried out to establish the operational conditions of the reactor, as well as the effects of the intensity of the radiation source and of the lamp distribution inside the reactor. Samples were collected during the experiments and analyzed for dissolved organic carbon (DOC) content using a Shimadzu 5000A TOC analyzer. High Performance Liquid Chromatography (HPLC) was also used to identify the catechol and hydroquinone formed during the degradation of phenol. Actinometry indicated a photon flow of 9.06 x 10^18 photons/s with the 12 lamps switched on.
A factorial experimental design was elaborated, with which it was possible to evaluate the influence of the reactant concentrations (Fe2+ and H2O2) and to determine the most favorable experimental conditions ([Fe2+] = 1.6 mM and [H2O2] = 150.5 mM). It was verified that increasing the ferrous ion concentration favors the process up to a limit, beyond which it has a negative effect. H2O2 exhibited a positive effect, but at high concentrations the degradation rate reaches a maximum. The mathematical modeling of the process was accomplished using the artificial neural network technique
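
Effect estimation in a two-level factorial screening of this kind follows a standard contrast calculation. The sketch below shows it for a 2^2 design; the TOC-removal responses are invented for illustration only and are not data from this study.

```python
# Main effects and interaction in a 2^2 factorial design with coded levels.

def factorial_effects(y):
    """y: responses at the four runs, ordered (-,-), (+,-), (-,+), (+,+)
    in the coded levels of factors A and B. Returns (effect_A, effect_B, AB)."""
    mm, pm, mp, pp = y
    effect_a = ((pm + pp) - (mm + mp)) / 2.0   # main effect of A
    effect_b = ((mp + pp) - (mm + pm)) / 2.0   # main effect of B
    inter_ab = ((mm + pp) - (pm + mp)) / 2.0   # A x B interaction
    return effect_a, effect_b, inter_ab

# Invented TOC-removal responses (%) at the four corner runs:
a, b, ab = factorial_effects([55.0, 70.0, 65.0, 90.0])
```

A positive main effect means raising that factor's level raises the response on average, which is how conclusions such as "H2O2 exhibited a positive effect" are read off the design.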

Relevance: 20.00%

Abstract:

The decontamination of materials has been the subject of several studies. One factor that increases pollution is the irresponsible discarding of toxic waste, such as the presence of PCBs (Polychlorinated Biphenyls) in the environment. Under Brazilian regulations, material contaminated with PCBs at concentrations higher than 50 ppm must be stored in special places or destroyed, usually by incineration in a dual-stage plasma furnace. Due to the high cost of this procedure, new methodologies for PCB removal have been studied. The objective of this study was to develop an experimental and analytical methodology for quantifying the removal of PCBs by supercritical fluid extraction and by the Soxhlet method, and to compare the efficiency of the two extraction processes in the treatment of PCB-contaminated materials. The materials studied were soil and wood, both with simulated contamination at concentrations of 6,000, 33,000 and 60,000 mg of PCB per kg of material. Soxhlet extractions were performed using 100 mL of hexane at a temperature of 180 °C. Supercritical fluid extractions were performed at 200 bar and 70 °C, with a supercritical CO2 flow rate of 3 g/min for 1-3 hours. The extracts obtained were quantified by gas chromatography-mass spectrometry (GC/MS). The conventional extractions followed a 2^2 factorial experimental design, with the aim of studying the influence of two variables of the Soxhlet extraction process, contaminant concentration and extraction time, on the maximum removal of PCBs from the materials. The Soxhlet extractions were efficient for PCBs in soil and wood with both solvents studied (hexane and ethanol). In the soil extractions, the best PCB removal efficiency using ethanol as solvent was 81.3%, against 95% for hexane, for the same extraction time.
The results for wood showed statistically that there is no difference between the extractions with the two solvents. Under the conditions studied, supercritical fluid extraction was more efficient for PCBs in the wood matrix than in soil: for two-hour extractions, 43.9 ± 0.5% of the total PCBs were extracted from soil, against 95.1 ± 0.5% from wood. The results demonstrate that the extractions were satisfactory for both techniques studied

Relevance: 20.00%

Abstract:

The cashew nut processing industry often generates wastewaters with a high content of toxic organic compounds. The presence of these compounds is due mainly to the cashew nut shell liquid (CNSL), as it is commercially known in Brazil: a viscous, dark brown oil with a high toxicity index due to its chemical composition, i.e. phenolic compounds such as anacardic acid, cardol, 2-methylcardol and a monophenol (cardanol). These compounds resist conventional biological treatments, and the corresponding wastewaters present a high content of TOC (total organic carbon). Given this high degree of toxicity, it is very important to study and develop treatments for these wastewaters before discharge to the environment. This research aims to decompose these compounds using advanced oxidation processes (AOP) based on the photo-Fenton system. The advantage of this system is the fast and non-selective oxidation promoted by hydroxyl radicals (•OH), which under certain conditions can totally convert the organic pollutants to CO2 and H2O. In order to evaluate the decomposition of the organic load, samples of real wastewater were taken from a cashew nut processing industry located in the interior of the state of Rio Grande do Norte. The experiments were carried out in an annular photochemical reactor equipped with a UV (ultraviolet) lamp. Based on preliminary experiments, a Doehlert experimental design with a total of 13 runs was defined to optimize the concentrations of H2O2 and Fe(II). The experimental conditions were set to pH 3 and a temperature of 30 °C. Lamp powers of 80 W, 125 W and 250 W were applied. To evaluate the decomposition rate, TOC measurements were taken over 4 hours of experiment. According to the results, the organic removal in terms of TOC ranged from a minimum of 80% to a maximum of 95%.
Furthermore, a minimum time of 49 minutes was obtained for the removal of 30% of the initial TOC. Based on the experimental results, the photo-Fenton system presents a very satisfactory performance as a complementary treatment for the wastewater studied

Relevance: 20.00%

Abstract:

The present work aimed to apply an experimental design to improve the separation efficiency of a new type of mixer-settler for treating wastewater contaminated with oil. A laboratory-scale unit was installed at the Graduate Program in Chemical Engineering of UFRN, built in partnership with Petrobras S.A. This device, called Misturador-Decantador à Inversão de Fases (MDIF), combines features of a conventional mixer-settler and of a spray-column type. The equipment is composed of three main parts: a mixing chamber, a decantation chamber and a separation chamber. The separation efficiency is evaluated by comparing the oil concentrations in the water at the feed and at the output of the device, using the gravimetric method for oil and grease analysis (TOG). The system under study is formation water emulsified with oil. The extractant used is a mixture of turpentine-spirit hydrocarbons supplied by Petrobras. To optimize the separation efficiency of the equipment, a central composite experimental design was applied, having as its factorial portion a fractional factorial design 2^(5-2), augmented with star points and five replications at the central point. The following independent variables were studied: oil content in the feed of the device, volumetric ratio (O/A), total flow rate, agitation in the mixing chamber, and height of the organic bed. Minimum and maximum limits for the studied variables were fixed according to previous works. The analysis of variance for the equation of the empirical model proved statistically significant and useful for prediction purposes. The variance analysis also showed the errors to be normally distributed, and since the dispersions do not depend on the levels of the factors, the independence assumption could be verified.
The model explains 98.98% of the variation around the average, equal to the maximum attainable value, with a fit of 0.98981 of the model to the experimental points. The results show a strong interaction between the oil content in the feed and the agitation in the mixing chamber, with a large positive influence on the separation efficiency. Another variable with a large positive influence was the height of the organic bed. The best separation efficiencies were obtained at high flow rates associated with high oil concentrations and high agitation. The results of the present work show excellent agreement with results of previous works on the phase-inversion mixer-settler
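
A central composite design of the kind described above is built from three groups of coded runs: a (fractional) factorial portion at ±1, axial "star" points at ±alpha, and replicated center points. The sketch below generates such a design; since the generators of the 2^(5-2) fraction are not given here, a full factorial portion for two factors is shown instead, purely for illustration.

```python
from itertools import product

# Coded runs of a central composite design: factorial corners, star points
# on each axis at +/- alpha, and n_center replicated center points.

def central_composite(k, alpha, n_center):
    runs = [list(levels) for levels in product((-1.0, 1.0), repeat=k)]
    for i in range(k):                       # star points on each axis
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a
            runs.append(point)
    runs.extend([[0.0] * k for _ in range(n_center)])
    return runs

design = central_composite(k=2, alpha=1.414, n_center=5)
```

The replicated center points are what allow the pure experimental error, and hence the significance of the fitted model, to be estimated in the analysis of variance.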

Relevance: 20.00%

Abstract:

Among the pests that attack the corn crop in Brazil, Spodoptera frugiperda (J. E. Smith, 1797) (Lepidoptera: Noctuidae), known as the fall armyworm, stands out as the major corn pest. Due to the genetic instability of baculoviruses during serial passage in insect cell culture, developing the in vitro production of viral bioinsecticides is the greatest challenge for mass production of this bioproduct. Successive virus passages using extracellular viruses (BVs), necessary when scaling up viral bioinsecticide production, lead to the appearance of aberrant forms of the virus, a process known as the "passage effect". The main consequence of the passage effect is a decrease in the production of occlusion bodies (OB), preventing production by the in vitro process. In this study, serial passage of the baculovirus Spodoptera frugiperda multiple nucleopolyhedrovirus, isolate 18, was carried out using Sf21 cells. A decrease in the production of occlusion bodies from 170 to 92 between the third and fourth passages was observed. A 2^2 factorial experimental design was employed to verify the influence of two input variables, the concentration of the hormone 20-hydroxyecdysone (CH) and of cholesterol (CC), on the response variables of the process (volumetric and specific OB production), seeking to define the optimum operating ranges and to reverse or minimize the passage effect. The results indicated a negative influence of cholesterol addition and a positive effect of hormone supplementation; the optimum ranges found for the concentrations studied were 8 to 10 µg/mL for cholesterol and 5 to 6.5 mg/mL for the hormone. New experiments were performed with the addition of hormone and cholesterol in order to check the influence of these additives on OB production independently.
While the best result obtained in the factorial experiment was 9.4 x 10^7 OB/mL and a specific production of 128.4 OB/cell, with the addition of only 6 µg/mL of 20-hydroxyecdysone these values increased to 1.9 x 10^8 OB/mL and 182.9 OB/cell for volumetric and specific OB production, respectively. This result confirms that the addition of the hormone 20-hydroxyecdysone enhances the performance of the in vitro SfMNPV production process using Sf21 cells