978 results for Power train components.


Relevance:

30.00%

Publisher:

Abstract:

Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier: it enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension of creating a force-multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. Tactically, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives. Its function is to provide both training and instruction. At the operational level, instruction and understanding are critical functions. At the strategic level, doctrine provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.

Relevance:

30.00%

Publisher:

Abstract:

Although long regarded as a conduit for the degradation or recycling of cell surface receptors, the endosomal system is also an essential site of signal transduction. Activated receptors accumulate in endosomes, and certain signaling components are exclusively localized to endosomes. Receptors can continue to transmit signals from endosomes that are different from those that arise from the plasma membrane, resulting in distinct physiological responses. Endosomal signaling is widespread in metazoans and plants, where it transmits signals for diverse receptor families that regulate essential processes including growth, differentiation and survival. Receptor signaling at endosomal membranes is tightly regulated by mechanisms that control agonist availability, receptor coupling to signaling machinery, and the subcellular localization of signaling components. Drugs that target mechanisms that initiate and terminate receptor signaling at the plasma membrane are widespread and effective treatments for disease. Selective disruption of receptor signaling in endosomes, which can be accomplished by targeting endosome-specific signaling pathways or by selective delivery of drugs to the endosomal network, may provide novel therapies for disease.

Relevance:

30.00%

Publisher:

Abstract:

We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. This technique is also suitable when the manufacturing of precision terminations to sufficiently fine tolerances for the application of a TRL de-embedding technique is not possible. The technique is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the power coupled due to axial, rotational, and lateral alignment errors between the device under test and the instrument's test port. The analysis is based on the propagation of errors after assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instrument's test port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of S-parameters of arbitrary two-port junctions. The measurement technique proposed should prove most useful above 325 GHz where there is a lack of measurement standards.
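
As a rough illustration of the alignment-error analysis, the sketch below evaluates the standard small-misalignment mode-overlap formulas for the power coupled between two identical fundamental Gaussian beams; the waist size, repositioning error and the independent-error product form are illustrative assumptions, not the paper's full error-propagation model.

```python
import numpy as np

def coupling_efficiency(w0, lam, lateral=0.0, tilt=0.0, axial=0.0):
    """Power coupling between two identical fundamental Gaussian beams.

    Standard small-misalignment mode-overlap formulas (an illustrative
    sketch, not the paper's error-propagation analysis).
    w0: beam waist radius (m); lam: wavelength (m); lateral: transverse
    offset (m); tilt: angular error (rad); axial: waist separation (m).
    """
    eta_lat = np.exp(-(lateral / w0) ** 2)
    eta_tilt = np.exp(-(np.pi * w0 * tilt / lam) ** 2)
    z_r = np.pi * w0 ** 2 / lam                  # Rayleigh range
    eta_ax = 1.0 / (1.0 + (axial / (2.0 * z_r)) ** 2)
    return eta_lat * eta_tilt * eta_ax           # treat errors as independent

# Example at 125 GHz: assumed 8 mm waist, 0.5 mm repositioning error
lam = 3e8 / 125e9
print(coupling_efficiency(8e-3, lam, lateral=0.5e-3))
```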

Relevance:

30.00%

Publisher:

Abstract:

We present a statistical analysis of the time evolution of ground magnetic fluctuations in three (12–48 s, 24–96 s and 48–192 s) period bands during nightside auroral activations. We use an independently derived auroral activation list composed of both substorms and pseudo-breakups to provide an estimate of the activation times of nightside aurora during periods with comprehensive ground magnetometer coverage. One hundred eighty-one events in total are studied to demonstrate the statistical nature of the time evolution of magnetic wave power during the ∼30 min surrounding auroral activations. We find that the magnetic wave power is approximately constant before an auroral activation, starts to grow up to 90 s prior to the optical onset time, maximizes a few minutes after the auroral activation, then decays slightly to a new, and higher, constant level. Importantly, magnetic ULF wave power always remains elevated after an auroral activation, whether it is a substorm or a pseudo-breakup. We subsequently divide the auroral activation list into events that formed part of ongoing auroral activity and events that had little preceding geomagnetic activity. We find that the evolution of wave power in the ∼10–200 s period band essentially behaves in the same manner through auroral onset, regardless of event type. The absolute power across ULF wave bands, however, displays a power law-like dependency throughout a 30 min period centered on auroral onset time. We also find evidence of a secondary maximum in wave power at high latitudes ∼10 min following isolated substorm activations. Most significantly, we demonstrate that magnetic wave power levels persist after auroral activations for ∼10 min, which is consistent with recent findings of wave-driven auroral precipitation during substorms. This suggests that magnetic wave power and auroral particle precipitation are intimately linked and key components of the substorm onset process.
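
As a minimal sketch of how wave power in such period bands can be computed, the snippet below integrates a Welch power spectral density over each band; the 1 Hz cadence and the synthetic magnetometer trace are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

def band_power(b, fs, period_band):
    """Integrated PSD power in a period band (t_min, t_max), in seconds."""
    f, pxx = signal.welch(b, fs=fs, nperseg=1024)
    t_min, t_max = period_band
    mask = (f >= 1.0 / t_max) & (f <= 1.0 / t_min)
    return trapezoid(pxx[mask], f[mask])

fs = 1.0                        # 1 Hz ground magnetometer cadence (assumed)
rng = np.random.default_rng(0)
b = rng.normal(size=3600)       # synthetic 1 h H-component trace (nT)

for band in [(12, 48), (24, 96), (48, 192)]:
    print(band, band_power(b, fs, band))
```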

Relevance:

30.00%

Publisher:

Abstract:

This document outlines a practical strategy for achieving an observationally based quantification of direct climate forcing by anthropogenic aerosols. The strategy involves a four-step program for shifting the current assumption-laden estimates to an increasingly empirical basis using satellite observations coordinated with suborbital remote and in situ measurements and with chemical transport models. Conceptually, the problem is framed as a need for complete global mapping of four parameters: clear-sky aerosol optical depth δ, radiative efficiency per unit optical depth E, fine-mode fraction of optical depth f_f, and the anthropogenic fraction of the fine mode f_af. The first three parameters can be retrieved from satellites, but correlative, suborbital measurements are required for quantifying the aerosol properties that control E, for validating the retrieval of f_f, and for partitioning fine-mode δ between natural and anthropogenic components. The satellite focus is on the “A-Train,” a constellation of six spacecraft that will fly in formation from about 2005 to 2008. Key satellite instruments for this report are the Moderate Resolution Imaging Spectroradiometer (MODIS) and Clouds and the Earth's Radiant Energy System (CERES) radiometers on Aqua, the Ozone Monitoring Instrument (OMI) radiometer on Aura, the Polarization and Directionality of Earth's Reflectances (POLDER) polarimeter on the Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL), and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) lidar on the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO). This strategy is offered as an initial framework, subject to improvement over time, for scientists around the world to participate in the A-Train opportunity. It is a specific implementation of the Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) program, presented earlier in this journal, which identified the integration of diverse data as the central challenge to progress in quantifying global-scale aerosol effects. By designing a strategy around this need for integration, we develop recommendations for both satellite data interpretation and correlative suborbital activities that represent, in many respects, departures from current practice.
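
The four-parameter framing suggests a simple factorization in which the anthropogenic component of the clear-sky optical depth, δ · f_f · f_af, is scaled by the radiative efficiency E. A minimal sketch under that assumption, with placeholder values rather than results from this strategy document:

```python
def anthropogenic_direct_forcing(delta, efficiency, f_fine, f_anthro):
    """Clear-sky anthropogenic aerosol direct forcing (W m^-2).

    Illustrates the four-parameter factorization: the anthropogenic
    optical depth delta * f_fine * f_anthro is scaled by the radiative
    efficiency E per unit optical depth. Inputs below are placeholders.
    """
    return efficiency * delta * f_fine * f_anthro

# Hypothetical global means: delta = 0.12, E = -25 W m^-2 per unit
# optical depth, f_f = 0.5, f_af = 0.6
print(anthropogenic_direct_forcing(0.12, -25.0, 0.5, 0.6))  # -> -0.9 W m^-2
```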

Relevance:

30.00%

Publisher:

Abstract:

Studies that use prolonged periods of sensory stimulation report associations between regional reductions in neural activity and negative blood oxygenation level-dependent (BOLD) signaling. However, the neural generators of the negative BOLD response remain to be characterized. Here, we use single-impulse electrical stimulation of the whisker pad in the anesthetized rat to identify components of the neural response that are related to “negative” hemodynamic changes in the brain. Laminar multiunit activity and local field potential recordings of neural activity were performed concurrently with two-dimensional optical imaging spectroscopy measuring hemodynamic changes. Repeated measurements over multiple stimulation trials revealed significant variations in neural responses across session and animal datasets. Within this variation, we found robust long-latency decreases (between 300 and 2000 ms after stimulus presentation) in gamma-band power (30–80 Hz) in the middle-superficial cortical layers in regions surrounding the activated whisker barrel cortex. This reduction in gamma frequency activity was associated with corresponding decreases in the hemodynamic responses that drive the negative BOLD signal. These findings suggest a close relationship between BOLD responses and neural events that operate over time scales that outlast the initiating sensory stimulus, and provide important insights into the neurophysiological basis of negative neuroimaging signals.
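
As a minimal sketch of this kind of analysis, the snippet below band-passes a local field potential trace to the gamma band and compares envelope power in a pre-stimulus baseline with the 300-2000 ms post-stimulus window; the sampling rate, trial layout and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy import signal

fs = 1000.0                          # LFP sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
lfp = rng.normal(size=int(3 * fs))   # synthetic 3 s trial, stimulus at 0.5 s

# Band-pass the trace to the gamma band (30-80 Hz)
sos = signal.butter(4, [30, 80], btype="bandpass", fs=fs, output="sos")
gamma = signal.sosfiltfilt(sos, lfp)

# Instantaneous gamma power from the analytic-signal envelope
power = np.abs(signal.hilbert(gamma)) ** 2

# Compare a pre-stimulus baseline with the 300-2000 ms post-stimulus window
t = np.arange(lfp.size) / fs - 0.5
baseline = power[(t >= -0.4) & (t < 0)].mean()
late = power[(t >= 0.3) & (t <= 2.0)].mean()
print(f"late/baseline gamma power ratio: {late / baseline:.2f}")
```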

Relevance:

30.00%

Publisher:

Abstract:

Artificial neural networks (NNs) offer an alternative to conventional physically or chemically based modeling techniques for solving complex, ill-defined problems. Neural networks trained on historical data are able to handle nonlinear problems and to find the relationship between input and output data when there is no obvious one between them. Neural networks have been used successfully in control, robotics, pattern recognition and forecasting. This paper presents an application of neural networks to finding key factors, e.g. the heat loss factor, in the power station modeling process. In conventional power station modeling, factors such as heat loss are normally determined by experience or “rule of thumb”. Obtaining an accurate estimate of these factors requires special experiments, which is a very time-consuming process. In this paper the neural network technique is used to assist this difficult conventional modeling process. Historical data from an operating brown-coal power station in Victoria have been used to train the neural network model, and the outcomes of the trained NN model will be used to determine the factors in the conventional energy model of the power station, which is under development as part of an ongoing ARC Linkage project aimed at detailed modeling of the internal energy flows in the power station.
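
A minimal sketch of the approach described here, using a small multilayer-perceptron regressor on synthetic stand-in data (the feature set, network size and target relationship are hypothetical, not the project's actual plant records):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical plant records: [coal feed rate, feedwater temperature,
# ambient temperature, generated power]; target: heat loss factor.
rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 4))
y = 0.05 + 0.02 * X[:, 0] - 0.01 * X[:, 2] + 0.002 * rng.normal(size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])               # train on historical records
print(model.score(X[400:], y[400:]))      # R^2 on held-out records
```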

Relevance:

30.00%

Publisher:

Abstract:

Laser shock peening (LSP) is an innovative surface treatment method that can result in significant improvement in the fatigue life of many metallic components. The process produces very little or no surface profile modification while producing a considerably deeper compressive residual stress layer than traditional shot peening operations. The work discussed here was designed to: (a) quantify the fatigue life improvement achieved by LSP in a typical high strength aircraft aluminium alloy and (b) identify any technological risks associated with its use. It is shown that when LSP conditions are optimal for the material and specimen configuration, a three- to four-fold increase in fatigue life over the as-machined specimens could be achieved for a representative fighter aircraft loading spectrum applied at a representative load level. However, if the process parameters are not optimal for the material investigated here, the fatigue lives of LSP-treated specimens may be reduced rather than increased, owing to the occurrence of internal cracking. This paper details the effect of laser power density on the fatigue life of 7050-T7451 aluminium alloy by experimental and numerical analysis.

Relevance:

30.00%

Publisher:

Abstract:

An exercise in historical sociology, this paper investigates the association between training and health made by amateur athletes between about 1860 and WWI. It examines the idea that while exercise benefited a person’s health and well-being, excessive exertion caused potentially life-threatening ‘strain’. The paper sets out the interpretation of contemporary scientific knowledge about the body, which the author terms the ‘physiology of strain’, that underpinned the advice given to those undergoing a training program for amateur competition. The point is made that the imputed effects of exercise on health were deduced from this scientific knowledge; they did not derive from bio-medical investigations specifically addressing these issues. Amateur athletes included people drawn from the professionally educated elite, and medical practitioners figured significantly among them. Using insights from Bourdieu and Foucault, it is argued that their social power and professional connections served to legitimate their interpretation of the physiological effects of exercise (denying the value of the training practices of working class professional athletes) and cemented the physiology of strain as a ‘factual’ statement about exercise and health until well into the twentieth century. The data for the paper come from training manuals, medical journals and other contemporary publications.

Relevance:

30.00%

Publisher:

Abstract:

It has been demonstrated that incorporating a priori knowledge of the drive cycle into the PHEV control strategy can improve its performance. The concept of a power cycle, instead of a drive cycle, is introduced to account for the effect of noise factors in the prediction of future drivetrain power demand. To minimize the effect of noise factors, a practical solution for developing a power-cycle library is introduced. A control strategy is developed using the predicted power cycle, which inherently improves the optimal operation of the engine and consequently improves vehicle performance. Since the control strategy is formed individually for each PHEV, rather than being a preset strategy designed by the OEM, the effects of different environmental and geographic conditions, driver behavior, and aging of the battery and other components are considered for each PHEV. Simulation results show that a control strategy based on the driver's library of power cycles would improve both vehicle performance and battery health.
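
A minimal sketch of what a per-vehicle power-cycle library might look like; the trip signature used as a key and the averaging-based prediction are hypothetical choices, not the paper's actual method:

```python
import numpy as np

class PowerCycleLibrary:
    """Per-vehicle library of recorded drivetrain power traces.

    Traces are keyed by a trip signature (here just start hour and a
    trip-length class, both hypothetical choices); prediction returns
    the mean of the matching historical cycles.
    """

    def __init__(self):
        self._store = {}

    def record(self, start_hour, length_class, power_trace):
        key = (start_hour, length_class)
        self._store.setdefault(key, []).append(np.asarray(power_trace))

    def predict(self, start_hour, length_class):
        traces = self._store.get((start_hour, length_class))
        if not traces:
            return None                      # no history for this trip type
        n = min(len(t) for t in traces)      # align to the shortest trace
        return np.mean([t[:n] for t in traces], axis=0)

lib = PowerCycleLibrary()
lib.record(8, "short", [12.0, 18.0, 15.0])   # kW samples from past trips
lib.record(8, "short", [10.0, 20.0, 14.0])
print(lib.predict(8, "short"))               # -> [11. 19. 14.5]
```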

Relevance:

30.00%

Publisher:

Abstract:

Hybrid electric vehicles are powered by an electric system and an internal combustion engine. The components of a hybrid electric vehicle need to be coordinated in an optimal manner to deliver the desired performance. This paper presents an approach based on a direct method for optimal power management in hybrid electric vehicles with inequality constraints. The approach consists of reducing the optimal control problem to a set of algebraic equations by approximating the state variable, which is the energy of the electric storage, and the control variable, which is the power of fuel consumption. This approximation uses orthogonal functions with unknown coefficients. In addition, the inequality constraints are converted to equality constraints. The advantage of the developed method is that its computational complexity is less than that of dynamic and non-linear programming approaches. Moreover, to use dynamic or non-linear programming, the problem must be discretized, resulting in a loss of optimization accuracy. The proposed method, on the other hand, does not require discretization of the problem, producing more accurate results. An example is solved to demonstrate the accuracy of the proposed approach. The results of Haar wavelets, and Chebyshev and Legendre polynomials are presented and discussed. © 2011 The Korean Society of Automotive Engineers and Springer-Verlag Berlin Heidelberg.
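
A minimal sketch of the underlying idea, reducing a differential constraint to algebraic equations by expanding the state in Chebyshev polynomials and enforcing the dynamics at collocation points; the toy dynamics and least-squares solve stand in for the paper's constrained power-management problem:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate the state x(t) on [0, 1] by a Chebyshev series and reduce
# the toy dynamics x'(t) = -x(t), x(0) = 1 (a stand-in for the battery
# energy dynamics) to algebraic equations at collocation points.
N = 8                                  # series order
t = np.linspace(0.0, 1.0, 40)          # collocation points
s = 2.0 * t - 1.0                      # map [0, 1] -> Chebyshev domain [-1, 1]

V = C.chebvander(s, N)                 # T_k(s) evaluated at the nodes
dV = np.stack([C.chebval(s, C.chebder(np.eye(N + 1)[k]))
               for k in range(N + 1)], axis=1) * 2.0  # d/dt = 2 d/ds

A = np.vstack([dV + V,                 # dynamics residual rows: x' + x = 0
               C.chebvander(np.array([-1.0]), N)])    # boundary row: x(0) = 1
b = np.concatenate([np.zeros(t.size), [1.0]])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)          # algebraic solve

print(np.max(np.abs(C.chebval(s, coef) - np.exp(-t))))  # small residual
```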

Relevance:

30.00%

Publisher:

Abstract:

A passive deep brain stimulation (DBS) device can be equipped with a rectenna, consisting of an antenna and a rectifier, to harvest energy from electromagnetic fields for its operation. This paper presents the optimization of radio frequency rectifier circuits for wireless energy harvesting in a passive head-mountable DBS device. The aim is to achieve a rectifier with compact size, high conversion efficiency, and high output voltage. Four different rectifiers, based on the Delon doubler, Greinacher voltage tripler, Delon voltage quadrupler, and 2-stage charge pumped architectures, are designed, simulated, fabricated, and evaluated. The design and simulation are conducted using Agilent Genesys at an operating frequency of 915 MHz. An FR-4 dielectric substrate with a thickness of 1.6 mm and surface-mount device (SMD) components are used to fabricate the designed rectifiers. The performance of the fabricated rectifiers is evaluated using a 915 MHz radio frequency (RF) energy source. The maximum measured conversion efficiencies of the Delon doubler, Greinacher tripler, Delon quadrupler, and 2-stage charge pumped rectifiers are 78%, 75%, 73%, and 76%, respectively, at -5 dBm input power and for load resistances of 5-15 kΩ. The conversion efficiency of the rectifiers decreases significantly with increasing input power level. The Delon doubler rectifier provides the highest efficiency at both -5 and 5 dBm input power levels, whereas the Delon quadrupler rectifier gives the lowest efficiency for the same inputs. Considering both efficiency and DC output voltage, the charge pump rectifier outperforms the other three rectifiers. Accordingly, the optimised 2-stage charge pumped rectifier is used together with an antenna to harvest energy in our DBS device.
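
The reported efficiencies follow the standard RF-to-DC definition, which the snippet below evaluates; the voltage reading in the example is hypothetical, chosen only to reproduce a plausible figure:

```python
def conversion_efficiency(v_out, r_load, p_in_dbm):
    """RF-to-DC conversion efficiency of a rectifier.

    eta = P_dc / P_rf with P_dc = V_out^2 / R_load; the input power is
    converted from dBm to watts. Example values are illustrative, not
    the paper's measurements.
    """
    p_dc = v_out ** 2 / r_load
    p_rf = 1e-3 * 10 ** (p_in_dbm / 10.0)
    return p_dc / p_rf

# Hypothetical reading: 1.57 V across a 10 kOhm load at -5 dBm input
print(f"{conversion_efficiency(1.57, 10e3, -5.0):.0%}")  # -> 78%
```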

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and, as a result, are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique for protecting ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of the TMR for the case study circuit. Although TMR showed high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results in fault coverage, area and performance between the discussed techniques.
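
A minimal sketch of the TMR idea, with a bitwise majority voter and a simulated single-event upset (a software stand-in for the hardware voters; the 32-bit word model is an illustrative assumption):

```python
import random

def majority_vote(a, b, c):
    """Bitwise majority voter over three redundant 32-bit results."""
    return (a & b) | (a & c) | (b & c)

def flip_random_bit(x):
    """Model a single-event upset as one flipped bit in a 32-bit word."""
    return x ^ (1 << random.randrange(32))

random.seed(0)
correct = 0xCAFE1234
replicas = [correct, correct, correct]
replicas[random.randrange(3)] = flip_random_bit(correct)  # upset one copy

assert majority_vote(*replicas) == correct   # a single SEU is masked
print(hex(majority_vote(*replicas)))
```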

Relevance:

30.00%

Publisher:

Abstract:

This article presents a thermoeconomic analysis of cogeneration plants, applied as a rational technique to produce electric power and saturated steam. The aim of this new methodology is to minimize the exergetic manufacturing cost (EMC), based on the Second Law of Thermodynamics. The decision variables selected for the optimization are the pressure and the temperature of the steam leaving the boiler in the case of a steam turbine, and the pressure ratio, turbine exhaust temperature and mass flow in the case of gas turbines. The equations for calculating the capital costs of the components and products are formulated as functions of these decision variables. An application of the method using real data from a multinational chemical plant located in São Paulo state is presented. The conditions that establish the minimum cost are presented as final conclusions.
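
A minimal sketch of the optimization setup, minimizing a stand-in EMC over the steam-turbine decision variables (boiler pressure and temperature); the cost surface and its coefficients are entirely hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def emc(x):
    """Hypothetical exergetic manufacturing cost surface ($/GJ).

    Stand-in for the EMC model: the capital term rises with boiler
    pressure p (MPa) and temperature T (deg C), while efficiency gains
    reduce the fuel term. Coefficients are illustrative only.
    """
    p, T = x
    capital = 0.02 * p ** 1.2 + 0.004 * (T / 100.0) ** 2
    fuel = 8.0 / (0.25 + 0.05 * np.log(p) + 0.0008 * T)
    return capital + fuel

res = minimize(emc, x0=[4.0, 400.0],
               bounds=[(2.0, 12.0), (300.0, 550.0)])
print(res.x, res.fun)    # decision variables at the minimum EMC
```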