831 results for Power and timing optimization
Abstract:
Increased professionalism in rugby has elicited rapid changes in the fitness profile of elite players. Recent research on the physiological and anthropometric characteristics of rugby players, and on the demands of competition, is reviewed. The paucity of research on contemporary elite rugby players is highlighted, along with the need for standardised testing protocols. Recent data reinforce the pronounced differences in the anthropometric and physical characteristics of the forwards and backs. Forwards are typically heavier, taller, and have a greater proportion of body fat than backs. These characteristics are changing, with forwards developing greater total mass and higher muscularity. The forwards demonstrate superior absolute aerobic and anaerobic power, and muscular strength. Results favour the backs when body mass is taken into account. The scaling of results to body mass can be problematic, and future investigations should present results using power function ratios. Recommended tests for elite players include body mass and skinfolds, vertical jump, speed, and the multi-stage shuttle run. Repeat sprint testing is a possible avenue for more specific evaluation of players. During competition, high-intensity efforts are often followed by periods of incomplete recovery. The total work over the duration of a game is lower in the backs than in the forwards; forwards spend more time in physical contact with the opposition, while the backs spend more time in free running, allowing them to cover greater distances. The intense efforts undertaken by rugby players place considerable stress on anaerobic energy sources, while the aerobic system provides energy during repeated efforts and for recovery. Training should focus on repeated brief high-intensity efforts with short rest intervals to condition players to the demands of the game.
Training for the forwards should emphasise the higher work rates of the game, while extended rest periods can be provided to the backs. Players should be prepared not only for the demands of competition, but also for the stress of travel and extreme environmental conditions. The greater professionalism of rugby union has increased scientific research in the sport; however, there is scope for significant refinement of investigations into the physiological demands of the game and of sport-specific testing procedures.
Abstract:
An approach based on a linear rate of increase in harvest index (HI) with time after anthesis has been used as a simple means to predict grain growth and yield in many crop simulation models. When applied to diverse situations, however, this approach has been found to introduce significant error in grain yield predictions. Accordingly, this study was undertaken to examine the stability of the HI approach for yield prediction in sorghum [Sorghum bicolor (L.) Moench]. Four field experiments were conducted under nonlimiting water and N conditions. The experiments were sown at times that ensured a broad range in temperature and radiation conditions. Treatments consisted of two population densities and three genotypes varying in maturity. Frequent sequential harvests were used to monitor crop growth, yield, and the dynamics of HI. Experiments varied greatly in yield and final HI. There was also a tendency for lower HI with later maturity. Harvest index dynamics also varied among experiments and, to a lesser extent, among treatments within experiments. The variation was associated mostly with the linear rate of increase in HI and the timing of cessation of that increase. The average rate of HI increase was 0.0198 d⁻¹, but this was reduced considerably (to 0.0147 d⁻¹) in one experiment that matured in cool conditions. The variation found in HI dynamics could be largely explained by differences in assimilation during grain filling and remobilization of preanthesis assimilate. We concluded that this level of variation in HI dynamics limits the general applicability of the HI approach in yield prediction, and we suggest a potential alternative for testing.
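The HI approach described above can be sketched in a few lines. This is a minimal illustration, not the study's model: it assumes a bilinear HI trajectory (linear rise after anthesis, then a plateau), uses the average rate of 0.0198 d⁻¹ reported in the abstract, and the `hi_max` cap of 0.50 is an invented placeholder.

```python
# Minimal sketch of the harvest-index (HI) approach to yield prediction.
# Assumptions (not from the study): bilinear HI trajectory; hi_max = 0.50.

def harvest_index(days_after_anthesis, rate=0.0198, hi_max=0.50):
    """HI rises linearly at `rate` per day after anthesis, capped at `hi_max`."""
    return min(rate * days_after_anthesis, hi_max)

def predicted_yield(total_biomass, days_after_anthesis, rate=0.0198, hi_max=0.50):
    """Grain yield = total above-ground biomass x HI."""
    return total_biomass * harvest_index(days_after_anthesis, rate, hi_max)

# Example: 10 t/ha of biomass, 20 days after anthesis, at the average rate
print(predicted_yield(10.0, 20))  # about 3.96 t/ha
```

The instability the study reports corresponds to `rate` and the plateau point varying across environments, which is exactly what a fixed-parameter version of this sketch cannot capture.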
Abstract:
It is perhaps no exaggeration to say that there is near consensus among practitioners of thermoeconomics that exergy, rather than enthalpy alone, is the thermodynamic magnitude best suited to being combined with the concept of cost in thermoeconomic modeling, since it accounts for Second Law aspects and allows irreversibilities to be identified. Often, however, thermoeconomic modeling uses exergy disaggregated into its components (chemical, thermal and mechanical), or includes negentropy, a fictitious flow, thereby allowing the system to be disaggregated into its components (or subsystems) with the aim of improving and detailing the model for local optimization, diagnosis, and the allocation of residues and dissipative equipment. Some authors also claim that disaggregating physical exergy into its thermal and mechanical components improves the accuracy of cost allocation results, despite increasing the complexity of the thermoeconomic model and, consequently, the computational cost involved. Recently, some authors have pointed out restrictions and possible inconsistencies in the use of negentropy and of this type of physical exergy disaggregation, proposing alternatives for the treatment of residues and dissipative equipment that still allow systems to be disaggregated into their components. These alternatives consist, essentially, of new proposals for disaggregating physical exergy in thermoeconomic modeling. This work therefore aims to evaluate the different physical exergy disaggregation methodologies for thermoeconomic modeling, taking into account aspects such as advantages, restrictions, inconsistencies, improvement in the accuracy of results, increases in complexity and computational effort, and the treatment of residues and dissipative equipment for the full disaggregation of the thermal system.
To this end, the different methodologies and levels of physical exergy disaggregation are applied to cost allocation for the final products (net power and useful heat) in different cogeneration plants, considering both ideal gas and real fluid as the working fluid. These plants include dissipative equipment (condenser or valve) or residues (exhaust gases from the heat recovery boiler). One of the cogeneration plants, however, had to include neither dissipative equipment nor a heat recovery boiler, so that the effect of physical exergy disaggregation on the accuracy of cost allocation to the final products could be assessed in isolation.
Abstract:
The central goal of this paper is to reflect on Brazilian military power and its link to the country's international ambitions in the 21st century. After a comparison with the other BRICs and a historical analysis of Brazil's strategic irrelevance, we aim to establish the minimum military capacity Brazil would need in order to meet the country's latest international interests. We also discuss whether the National Strategy of Defense, approved in 2008, and the recent strategic agreements signed with France represent a further step toward this minimum military capacity.
Abstract:
The Autonomy Doctrine, elaborated by Juan Carlos Puig, is a realist perspective on International Relations. It is an analysis, from the periphery, of the structure of world power, and a theoretical roadmap for the longed-for process of autonomization with respect to hegemonic power, for a country whose ruling class decides to overcome dependency. This text explains the elements its author took into account when analyzing his own context, and then reflects on the doctrine's relevance today. For that purpose, certain questions must be answered: which concepts and categories may explain its relevance, how applicable it is to regional integration and cooperation models and projects, and what analytical method would allow reality to be compared with ideas, among others. The methodological proposal for analyzing the relevance of Puig's doctrine is to compare it with the different visions of regionalism currently in effect in Latin America.
Abstract:
Exposure to a novel environment triggers the response of several brain areas that regulate emotional behaviors. Here, we studied theta oscillations within the hippocampus (HPC)-amygdala (AMY)-medial prefrontal cortex (mPFC) network in exploration of a novel environment and subsequent familiarization through repeated exposures to that same environment; in addition, we assessed how concomitant stress exposure could disrupt this activity and impair both behavioral processes. Local field potentials were simultaneously recorded from dorsal and ventral hippocampus (dHPC and vHPC respectively), basolateral amygdala (BLA) and mPFC in freely behaving rats while they were exposed to a novel environment, then repeatedly re-exposed over the course of 3 weeks to that same environment and, finally, on re-exposure to a novel unfamiliar environment. A longitudinal analysis of theta activity within this circuit revealed a reduction of vHPC and BLA theta power and vHPC-BLA theta coherence through familiarization which was correlated with a return to normal exploratory behavior in control rats. In contrast, a persistent over-activation of the same brain regions was observed in stressed rats that displayed impairments in novel exploration and familiarization processes. Importantly, we show that stress also affected intra-hippocampal synchrony and heightened the coherence between vHPC and BLA. In summary, we demonstrate that modulatory theta activity in the aforementioned circuit, namely in the vHPC and BLA, is correlated with the expression of anxiety in novelty-induced exploration and familiarization in both normal and pathological conditions.
Abstract:
The increasing need for starches with specific characteristics makes it important to study unconventional starches and their modifications in order to meet consumer demands. The aim of this work was to study the physicochemical characteristics of native and phosphate starch of S. lycocarpum. Native starch was phosphated with sodium tripolyphosphate (5-11%) added under stirring. Chemical composition, morphology, density, cold-water binding capacity, swelling power and solubility index, turbidity and syneresis, and rheological and calorimetric properties were determined. Phosphorus was not detected in the native sample, but the phosphating process produced modified starches with phosphorus contents of 0.015, 0.092 and 0.397%, with a greater capacity to absorb water, either cold or hot. Rheological data showed the strong influence of phosphorus content on the viscosity of phosphate starch, with a lower pasting temperature and a higher peak viscosity than those of native starch. Enthalpy was negatively correlated with phosphorus content, with 9.7, 8.5, 8.1 and 6.4 kJ g⁻¹ of energy required for the transition from the amorphous to the crystalline state for starch granules with phosphorus contents of 0, 0.015, 0.092 and 0.397%, respectively. Cluster analysis and principal component analysis showed that the starches with 0.015 and 0.092% phosphorus have similar characteristics and differ from the others. Our results show that phosphate-modified S. lycocarpum starch is well suited to raw-material demands that require greater paste consistency combined with low rates of retrogradation and syneresis.
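The reported negative correlation between phosphorus content and transition enthalpy can be checked directly from the four data pairs given in the abstract with a plain Pearson correlation; the pure-Python sketch below uses only those values.

```python
# Pearson correlation between phosphorus content and transition enthalpy,
# using the four data pairs reported in the abstract.
import math

phosphorus = [0.0, 0.015, 0.092, 0.397]   # % P in the starch samples
enthalpy   = [9.7, 8.5, 8.1, 6.4]         # kJ/g amorphous-to-crystalline transition

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(phosphorus, enthalpy)
print(round(r, 2))  # strongly negative, consistent with the abstract
```

With four points, r comes out close to -0.93, i.e. a strong negative association, which is what the abstract's "negatively correlated" claim expresses.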
Abstract:
This paper addresses the problem of short-term hydro scheduling, particularly for head-dependent cascaded hydro systems. We propose a novel mixed-integer quadratic programming approach, considering not only head-dependency but also discontinuous operating regions and discharge ramping constraints. Thus, an enhanced short-term hydro scheduling is provided due to the more realistic modeling presented in this paper. Numerical results from two case studies, based on Portuguese cascaded hydro systems, illustrate the proficiency of the proposed approach.
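The paper's full MIQP model is not reproduced here, but a toy sketch can show why head-dependency makes the problem quadratic rather than linear: generated power depends on the product of discharge and head, and head itself depends on storage. All numbers below (base head, head coefficient, efficiency) are invented for illustration.

```python
# Toy illustration (not the paper's model) of head-dependency in
# short-term hydro scheduling.

RHO_G = 9.81e3          # water density x gravity, N/m^3
EFFICIENCY = 0.9        # assumed combined turbine/generator efficiency

def head(volume_m3, h0=50.0, k=1e-6):
    """Assumed linearized head: base head plus a term rising with storage."""
    return h0 + k * volume_m3

def power_mw(discharge_m3s, volume_m3):
    """P = eta * rho * g * head(v) * q -> bilinear in (q, v)."""
    return EFFICIENCY * RHO_G * head(volume_m3) * discharge_m3s / 1e6

# With a fixed head, power would be linear in discharge; the head(volume)
# term couples the two decision variables, yielding a quadratic objective.
print(round(power_mw(100.0, 2e6), 1))  # MW for 100 m3/s at 2e6 m3 storage
```

Discontinuous operating regions and ramping constraints then add the binary variables that make the formulation mixed-integer.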
Abstract:
Several didactic modules for an electric machinery laboratory are presented. The modules are dedicated to DC machinery control and to obtaining their characteristic curves. The didactic modules have a front panel with power and signal connectors and can be configured for any DC motor type. The proposed three-phase bridge inverter is one of the most popular topologies and is commercially available in power package modules. The control techniques and power drives were designed to satisfy the static and dynamic performance requirements of DC machines. Each power section is internally self-protected against misconnections and short-circuits. Isolated output signals of current and voltage measurements are also provided, adding versatility for use in both didactic and research applications. The implementation of these modules allowed experimental confirmation of the expected performance.
Abstract:
All over the world, Distributed Generation is seen as a valuable aid in obtaining cleaner and more efficient electricity. In this context, distributed generators owned by different decentralized players can provide a significant share of electricity generation. To gain negotiation power and the advantages of economies of scale, these players can be aggregated, giving rise to a new concept: the Virtual Power Producer. Virtual Power Producers are multi-technology and multi-site heterogeneous entities. Virtual Power Producers should adopt organization and management methodologies so that they can make Distributed Generation a truly profitable activity, able to participate in the market. In this paper we address the integration of Virtual Power Producers into an electricity market simulator – MASCEM – as a coalition of distributed producers.
Abstract:
Metaheuristic performance is highly dependent on the respective parameters, which need to be tuned. Parameter tuning may allow greater flexibility and robustness but requires careful initialization. The process of defining which parameter settings should be used is not obvious: the values depend mainly on the problem, the instance to be solved, the search time available for solving the problem, and the required solution quality. This paper presents a learning module proposal for the autonomous parameterization of metaheuristics, integrated in a Multi-Agent System for the resolution of dynamic scheduling problems. The proposed learning module is inspired by the Autonomic Computing Self-Optimization concept, which states that systems must continuously and proactively improve their performance. The learning is implemented with Case-based Reasoning, which uses data from previous similar cases to solve new ones, under the assumption that similar cases have similar solutions. After a literature review of the topics involved, both the AutoDynAgents system and the Self-Optimization module are described. Finally, a computational study is presented in which the proposed module is evaluated, the results obtained are compared with previous ones, conclusions are drawn, and future work is outlined. It is expected that this proposal can be a significant contribution to the self-parameterization of metaheuristics and to the resolution of scheduling problems in dynamic environments.
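The retrieve-and-reuse step of Case-based Reasoning described above can be sketched as follows. This is a minimal illustration, not the AutoDynAgents implementation: the case base, the problem features, and the metaheuristic parameter names are all invented for the example.

```python
# Minimal sketch of CBR-based parameter retrieval: find the stored case most
# similar to a new scheduling problem and reuse its parameters, assuming
# that similar cases have similar solutions.
import math

# Hypothetical case base: (problem features, parameters that worked well)
case_base = [
    ({"jobs": 20,  "machines": 5},  {"population": 50,  "mutation": 0.10}),
    ({"jobs": 100, "machines": 10}, {"population": 200, "mutation": 0.05}),
]

def distance(a, b):
    """Euclidean distance over the shared numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(new_case):
    """Reuse the parameters of the nearest stored case."""
    features, params = min(case_base, key=lambda c: distance(c[0], new_case))
    return params

print(retrieve({"jobs": 90, "machines": 12}))  # nearest: the 100-job case
```

A full CBR cycle would then revise the retrieved parameters after running the metaheuristic and retain the new (features, parameters) pair, growing the case base over time.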
Abstract:
Designing electrical installation projects demands not only academic knowledge but also other kinds of knowledge not easily acquired through traditional instructional methodologies. Much empirical knowledge is otherwise missing, so academic instruction must be complemented with different kinds of knowledge, such as real-life practical examples and simulations. On the other hand, the practical knowledge held by the most experienced designers is not formalized in a way that is easily transmitted. To overcome these difficulties in engineers' training, we are developing an Intelligent Tutoring System (ITS) for training and support in the development of electrical installation projects, to be used by electrical engineers, technicians and students.
Abstract:
Artificial intelligence techniques are being widely used to face the new reality and to provide solutions that allow power systems to undergo all these changes while assuring high-quality power. In this way, the agents acting in the power industry are gaining access to a generation of more intelligent applications that make use of a wide set of AI techniques. Knowledge-based systems and decision-support systems have been applied in the power and energy industry. This article is intended to offer an updated overview of the application of artificial intelligence in power systems, and it is organized so that readers can easily understand the problems and the adequacy of the proposed solutions. Because of space constraints, this approach can be neither complete nor sufficiently deep to satisfy all readers' needs. As this is a multidisciplinary area, able to attract both software and computer engineering and power system people, this article tries to give an insight into the most important concepts involved in these applications. Complementary material can be found in the reference list, providing deeper and more specific approaches.
Abstract:
Electricity market players operating in a liberalized environment require access to an adequate decision support tool, allowing them to consider all the business opportunities and to take strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players, so the decision support tool must include ancillary services market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case based on California Independent System Operator (CAISO) data, concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services, is included in this paper.
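The paper formulates dispatch as Linear Programming and Genetic Algorithm problems; as a simplified illustration of what such a dispatch computes, the sketch below uses merit order, which for a single service with only capacity limits gives the same answer as the LP. The bid data is invented for the example.

```python
# Simplified merit-order sketch of single-service ancillary services dispatch
# (not the paper's LP/GA formulations; bids are hypothetical).

def dispatch(bids, requirement_mw):
    """bids: list of (name, price_per_mw, capacity_mw).
    Returns {name: accepted_mw} covering the requirement at least cost."""
    accepted = {}
    remaining = requirement_mw
    for name, price, capacity in sorted(bids, key=lambda b: b[1]):
        if remaining <= 0:
            break
        take = min(capacity, remaining)   # accept the cheapest bids first
        accepted[name] = take
        remaining -= take
    return accepted

bids = [("G1", 12.0, 40.0), ("G2", 8.0, 30.0), ("G3", 10.0, 50.0)]
print(dispatch(bids, 60.0))  # G2 fully accepted, then 30 MW from G3
```

An LP becomes necessary once couplings appear, e.g. a unit offering capacity into several reserve products at once, which is where GA approaches are also explored as an alternative.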
Abstract:
Identity is traditionally defined as an emission concept [1]. Yet some research points out that there are external factors that can influence it [2]; [3]; [4]. This subject is even more relevant when one considers corporate brands. According to Aaker [5], the number, power and credibility of corporate associations are greater in the case of corporate brands. The literature recognizes the influence of relationships between companies on identity management. Yet, given the increasingly important role of corporate brands, it is surprising that to date no attempt has been made to evaluate that influence in the management of corporate brand identity. Keller and Lehman [6] also highlight relationships and customer experience as two areas requiring more investigation. In line with this, the authors intend to develop empirical research to evaluate the influence of relationships between brands on corporate brand identity from an internal perspective, by interviewing internal stakeholders (brand managers and internal clients). This paper is organized by main contents: theoretical background, research methodology, data analysis and conclusions, and finally cues for future investigation.