980 results for linear energy
Abstract:
Graduate Program in Electrical Engineering - FEB
Abstract:
Nowadays, sugarcane cultivation plays an important role in Brazil, especially with respect to alternative energy sources. In 2009, an experiment with sugarcane was conducted in the municipality of Suzanápolis (SP), in the Brazilian Cerrado, on a eutrophic Red soil, with the aim of selecting, by means of Pearson correlation coefficients, simple and multiple linear regression modeling, and spatial correlation analysis, the technological and productive components that best explain the variability of sugarcane yield. A geostatistical grid with 120 sampling points was installed to collect the data over an area of 14.53 ha. Among the simple regressions, plant population is the yield component with the best quadratic fit to sugarcane yield, given by PRO = -0.553**POP^2 + 16.14*POP - 15.77. For multiple linear regression, however, the equation PRO = -21.11 + 4.92POP** + 0.76PUR** best estimates that yield. Spatially, plant population is also the production component with the best correlation with sugarcane yield.
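A minimal Python sketch, assuming the fitted coefficients reported above (significance markers dropped); the inputs for POP and for the second predictor PUR of the multiple model are hypothetical values in whatever scale the authors used, which the abstract does not specify.

    def yield_quadratic(pop):
        # PRO = -0.553*POP^2 + 16.14*POP - 15.77 (simple quadratic model from the abstract)
        return -0.553 * pop**2 + 16.14 * pop - 15.77

    def yield_multiple(pop, pur):
        # PRO = -21.11 + 4.92*POP + 0.76*PUR (multiple linear model from the abstract)
        return -21.11 + 4.92 * pop + 0.76 * pur

    # hypothetical inputs, for illustration only
    print(round(yield_quadratic(12.0), 2))
    print(round(yield_multiple(12.0, 85.0), 2))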
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Biofuels and their blends with fossil fuels are important energy resources, whose production and use have increased considerably worldwide. This study focuses on developing a correlation between apparent activation energy (Ea) and NOx emission for the thermal decomposition of three pure fuels, farnesane (renewable diesel from sugarcane), biodiesel and fossil diesel, and their blends. The apparent activation energy was determined using thermogravimetry and model-free kinetics. NOx emission was obtained from the European Stationary Cycle (ESC) with an OM 926LA CONAMA P7/Euro 5 engine. The results showed a linear correlation between apparent activation energy and NOx emission, with R² of 0.9667 for the pure fuels and their blends, given by: NOx = 2.2514Ea - 96.309. The average absolute error of this correlation is 2.96% with respect to the measured NOx value. Its main advantage is the ability to predict NOx emission when either a new pure fuel or a blend of fuels is proposed for use in engines.
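A minimal Python sketch, assuming the correlation coefficients reported above; the abstract does not give the units of Ea or NOx, so the numerical input below is a hypothetical value in whatever units the authors used.

    def predict_nox(ea):
        # Linear correlation from the abstract: NOx = 2.2514*Ea - 96.309
        return 2.2514 * ea - 96.309

    # hypothetical apparent activation energy of a candidate blend, for illustration only
    print(round(predict_nox(60.0), 2))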
Abstract:
Linear resonant harvesters have been the most common type of generator used to scavenge energy from mechanical vibrations. When subject to harmonic excitation, good performance is achieved once the device is tuned so that its natural frequency coincides with the excitation frequency. In such a situation, the average power harvested in a cycle is proportional to the cube of the excitation frequency and inversely proportional to the suspension damping, which is sought to be very low. However, a very low damping involves a relatively long transient in the system response, where the classical formulation adopted for steady-state regimes does not hold. This paper presents an investigation into the design of a linear resonant harvester to scavenge energy from time-limited harmonic excitations involving a transient response, which may be more representative of some practical situations. An application is presented considering train-induced vibrations.
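For reference, the classical steady-state result for a base-excited linear resonant harvester (a textbook relation consistent with the scaling described above, not necessarily the exact formulation adopted in the paper) reads:

    P_{\mathrm{avg}} = \frac{m\,\zeta_T\,Y^2\,(\omega/\omega_n)^3\,\omega^3}{\left[1-(\omega/\omega_n)^2\right]^2+\left[2\,\zeta_T\,\omega/\omega_n\right]^2}
    \quad\Rightarrow\quad
    P_{\mathrm{avg}}\big|_{\omega=\omega_n} = \frac{m\,Y^2\,\omega_n^3}{4\,\zeta_T},

where m is the seismic mass, Y the base-excitation amplitude, omega the excitation frequency, omega_n the natural frequency, and zeta_T the total damping ratio: at resonance the harvested power grows with the cube of the frequency and is inversely proportional to the damping, which is why low damping (and hence a long transient) is attractive.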
Abstract:
We evaluate the potential for searching for isosinglet neutral heavy leptons (N), such as right-handed neutrinos, in the next generation of e+e- linear colliders, paying special attention to contributions from the reaction γe→WN initiated by photons from beamstrahlung and laser back-scattering. We find that these mechanisms are both competitive with and complementary to the standard e+e-→νN annihilation process for producing neutral heavy leptons in these machines, and they greatly extend the search range beyond HERA and LEP200.
Abstract:
This study evaluated a nonlinear programming Excel workbook, PPFR (http://www.fmva.unesp.br/ppfr), for determining the optimum nutrient density and maximizing margins. Two experiments were conducted with 240 one-day-old female chicks and 240 one-day-old male chicks distributed in 48 pens (10 chicks per pen, 4 replicates) in a completely randomized design. The treatments consisted of the average broiler price history (2009 and 2010) increased or decreased by 25% or 50% (5 nonlinear feed formulation treatments) plus 1 linear feed formulation. Body weight gain, feed intake and feed conversion were measured at 21, 42 and 56 d of age. Chicks had ad libitum access to feed and water in floor pens with wood shavings as litter. The bio-economic energy conversion, BEC = (total energy intake × weighted feed cost per kg) / (weight gain × price per kg of live chicken), was the more sensitive measure of the bio-economic performance of broilers. By incorporating energy consumption, it allowed a better assessment of profitability and growth rate, rather than energy consumption alone. The BEC showed that nonlinear formulation significantly minimizes losses (P<0.05), especially under unfavorable market prices for chicken. Since the energy supply is the most expensive item of a formulation, it must necessarily be part of any proposed bio-economic index. There is therefore a need to evaluate more accurately not only the ingredients of a ration, but also the impact of nutrients on the stability of a solution, mainly with respect to the energy requirement. This strategy improves decision making under uncertainty by allowing alternative post-formulation solutions to be found. Consequently, weight gain and feed conversion, as traditional performance indicators, cannot by themselves evaluate or predict the performance of an increasingly intense and competitive production system. The energy concentration of the diet thus becomes a key decision for the feed formulator, since it directly affects profit through its interactions with nutrient density, and incorporating energy consumption into the formula gave the new index (BEC) greater sensitivity for evaluating profitability. These data show that nonlinear feed formulation is a tool that offers poultry production new opportunities for improved profitability.
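A minimal Python sketch of the BEC index defined above; the numerical inputs are hypothetical and their units (e.g. Mcal of energy intake, R$ per kg) are assumptions, since the abstract does not state them.

    def bec(total_energy_intake, feed_cost_per_kg, weight_gain, live_price_per_kg):
        # BEC = (total energy intake * weighted feed cost per kg) / (weight gain * price per kg of live chicken)
        return (total_energy_intake * feed_cost_per_kg) / (weight_gain * live_price_per_kg)

    # hypothetical values, for illustration only: a lower BEC means better bio-economic conversion
    print(round(bec(total_energy_intake=9.5, feed_cost_per_kg=1.20, weight_gain=3.0, live_price_per_kg=2.80), 3))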
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Optical transition radiation (OTR) plays an important role in beam diagnostics for high-energy particle accelerators. The linear dependence of its intensity on beam current is a great advantage compared with fluorescent screens, which are subject to saturation. Moreover, measuring the angular distribution of the emitted radiation enables many beam parameters to be determined at a single observation point. However, few works deal with the application of OTR to monitor low-energy beams. In this work we describe the design of an OTR-based beam monitor used to measure the transverse beam charge distribution of the 1.9-MeV electron beam of the linac injector of the IFUSP microtron using a standard machine-vision camera. The average beam current in pulsed operation mode is of the order of tens of nanoamperes. Low energy and low beam current make OTR observation difficult. To improve sensitivity, the beam incidence angle on the target was chosen to maximize the photon flux in the camera field of view. Measurements that confirm OTR observation (linearity with beam current, polarization, and spectrum shape) are presented, as well as a typical 1.9-MeV electron beam charge distribution obtained from OTR. Some aspects of emittance measurement using this device are also discussed. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4748519]
Abstract:
The design and implementation of a new control scheme for reactive power compensation, voltage regulation and transient stability enhancement for wind turbines equipped with fixed-speed induction generators (IGs) in large interconnected power systems are presented in this study. The low-voltage ride-through (LVRT) capability is provided by extending the operating range of the controlled system to include typical post-fault conditions. A systematic procedure is proposed to design decentralised multi-variable controllers for large interconnected power systems using the linear quadratic (LQ) output-feedback control design method, and the controller design procedure is formulated as an optimisation problem involving a rank-constrained linear matrix inequality (LMI). In this study, it is shown that a static synchronous compensator (STATCOM) with an energy storage system (ESS), controlled via a robust control technique, is an effective device for improving the LVRT capability of fixed-speed wind turbines.
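A minimal cvxpy sketch of the kind of LMI feasibility test underlying such designs (a basic Lyapunov stability LMI, not the paper's rank-constrained output-feedback procedure); the closed-loop state matrix A_cl below is a hypothetical stand-in for the interconnected power system model.

    import cvxpy as cp
    import numpy as np

    # hypothetical closed-loop state matrix, for illustration only
    A_cl = np.array([[0.0, 1.0],
                     [-2.0, -0.5]])
    n = A_cl.shape[0]

    # Lyapunov LMI: find P > 0 such that A_cl' P + P A_cl < 0 (closed loop stable)
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A_cl.T @ P + P @ A_cl << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print("LMI feasible (closed loop stable):", problem.status == cp.OPTIMAL)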
Abstract:
We propose an alternative, nonsingular, cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard model (ΛCDM) and also to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation dominated era, thereby ending the production of radiation as required by conformal invariance. Next, the radiation becomes subdominant, with the Universe entering the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter (CCDM model) particles accelerates the expansion and drives the Universe to a final de Sitter stage. The late time cosmic expansion history of the CCDM model is exactly that of the standard ΛCDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, the associated de Sitter Hubble scales, connected by ρ_I/ρ_f = (H_I/H_f)^2 ~ 10^122, a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. It is found that the CCDM growth index can be written as a function of the Λ growth index, γ_Λ ≃ 6/11. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model. Performing a χ² statistical test, we show that the CCDM model provides growth rates that match sufficiently well the observed growth rate of structure.
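For reference, the growth index mentioned above enters through the standard parametrization of the linear growth rate (a textbook relation, not a result specific to this paper):

    f(a) \equiv \frac{d\ln\delta_m}{d\ln a} \simeq \Omega_m(a)^{\gamma},
    \qquad \gamma_\Lambda \simeq \frac{6}{11}\ \text{for } \Lambda\mathrm{CDM};

the abstract's result expresses the CCDM growth index as a function of this γ_Λ.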
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range Z of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ||F_P||_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ||F_P||_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ||F_P||_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ||F_P||_q, q ∈ [1,∞), is identical to that for ||F_P||_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ||F_P||_1 minimization problem also solves the one for ||F_P||_q with q ∈ [1,∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ||F_P||_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ||F_P||_q minimization problems converge to a solution of the ||F_P||_∞ minimization problem (the fact that ||F_P||_∞ = lim_{q→∞} ||F_P||_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
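A minimal Python sketch, using networkx, of the reduction stated above: solving a ||F_P||_q problem with an ||F_P||_1 solver (min-cut/max-flow) by raising every edge weight to the power q before running the standard min-cut. The tiny graph and the seed nodes 's' and 't' are hypothetical, for illustration only.

    import networkx as nx

    def min_cut_q(G, s, t, q, weight="weight"):
        # Replace every weight w(e) by w(e)**q and run a standard min-cut,
        # i.e. solve the ||F_P||_q problem with the ||F_P||_1 solver.
        H = nx.DiGraph()
        for u, v, data in G.edges(data=True):
            cap = data[weight] ** q
            H.add_edge(u, v, capacity=cap)
            H.add_edge(v, u, capacity=cap)
        cut_value, (side_s, side_t) = nx.minimum_cut(H, s, t)
        return cut_value, side_s, side_t

    # hypothetical toy graph with seeds 's' (object) and 't' (background)
    G = nx.Graph()
    G.add_edge("s", "a", weight=3.0)
    G.add_edge("a", "t", weight=2.0)
    G.add_edge("s", "t", weight=1.0)
    print(min_cut_q(G, "s", "t", q=2))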
Abstract:
This work presents the application of linear matrix inequalities (LMIs) to the robust and optimal adjustment of power system stabilizers with a pre-defined structure. Results of some tests show that gain and zero adjustments are sufficient to guarantee robust stability and performance over various operating points. Making use of the flexible structure of LMIs, we propose an algorithm that minimizes the norm of the controller gain matrix while guaranteeing the damping factor specified for the closed-loop system, always using a controller with a flexible structure. The technique used here is pole placement, whose objective is to place the poles of the closed-loop system in a specific region of the complex plane. Results of tests with a nine-machine system are presented and discussed in order to validate the proposed algorithm. (C) 2012 Elsevier Ltd. All rights reserved.
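A minimal numpy sketch of the acceptance criterion behind such a regional pole-placement design: checking that every closed-loop eigenvalue has at least a specified damping factor, i.e. lies in the corresponding conic sector of the complex plane. The closed-loop matrix below is a hypothetical stand-in, not the nine-machine system of the paper.

    import numpy as np

    def meets_damping(A_cl, zeta_min):
        # Damping ratio of a pole lambda = sigma + j*omega is -sigma/|lambda|;
        # require it to be at least zeta_min for every closed-loop pole.
        poles = np.linalg.eigvals(A_cl)
        zeta = -poles.real / np.abs(poles)
        return bool(np.all(zeta >= zeta_min)), zeta

    # hypothetical closed-loop state matrix, for illustration only
    A_cl = np.array([[-0.5, 5.0],
                     [-5.0, -0.5]])
    print(meets_damping(A_cl, zeta_min=0.05))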
Abstract:
This study aimed to evaluate the relationship between the cost and the energy density of the diet consumed in Brazilian households. Data from the Brazilian Household Budget Survey (POF 2008/2009) were used to identify the main foods and their prices. Similar items were grouped, resulting in a basket of 67 products. Linear programming was applied to compose isoenergetic baskets, minimizing the deviation from the average household diet. Restrictions were imposed on the inclusion of items and on the energy contribution of the various food groups. Reductions in the average diet cost were applied in steps of R$0.15 down to the lowest possible cost. We identified an inverse association between energy density and diet cost (p < 0.05), and at the lowest possible cost we obtained the maximum value of energy density. Restrictions on the diet's cost resulted in the selection of diets with higher energy density, indicating that diet cost may lead to the adoption of inadequate diets in Brazil.
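A minimal scipy sketch of a diet linear program in the same spirit (a hypothetical 3-food toy basket that minimizes cost under an isoenergetic constraint, whereas the actual study used a 67-product basket and minimized the deviation from the average household diet subject to cost restrictions).

    import numpy as np
    from scipy.optimize import linprog

    # hypothetical toy data: price (R$ per 100 g) and energy (kcal per 100 g) of 3 foods
    price = np.array([0.40, 0.90, 1.80])
    energy = np.array([350.0, 120.0, 90.0])
    target_energy = 2000.0  # daily isoenergetic constraint, in kcal

    # minimize total cost subject to the energy equality and per-food upper bounds (100 g units)
    res = linprog(c=price,
                  A_eq=energy.reshape(1, -1), b_eq=[target_energy],
                  bounds=[(0.0, 8.0)] * 3, method="highs")
    print(res.x, round(res.fun, 2))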
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as using sufficiently low frequencies and low field amplitudes to inhibit harmful eddy currents inside the patient's body. These constraints motivate the need to optimize the heating efficiency of the nanoparticles, referred to as the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect where the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with varying amplitude, up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetization, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the best nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel, zinc and copper ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Furthermore, the analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during the nanoparticle synthesis. Finally, we discussed the effect of particle-particle interactions and its consequences, in particular regarding discrepancies between estimated parameters and expected theoretical predictions. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4739533]
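For context, in the linear (low-field) regime mentioned above the volumetric heating power of a superparamagnetic particle ensemble is commonly written with the standard linear-response expression (a textbook relation, not a result derived in this paper):

    P = \pi\,\mu_0\,\chi_0\,H_0^2\,f\;\frac{2\pi f\tau}{1+(2\pi f\tau)^2},
    \qquad \mathrm{SAR} = P/\rho,

where χ_0 is the equilibrium susceptibility, H_0 and f the field amplitude and frequency, τ the effective relaxation time, and ρ the nanoparticle mass density; the quadratic dependence on H_0 is what breaks down beyond the linear regime discussed above.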