921 results for Motor Vehicles by Power Source.
Abstract:
The performance of an induction motor fed by PWM inverters is mainly determined by the harmonic content of the output voltage. This paper presents a method of numerically calculating the harmonics in the output voltage waveform. Equal pulse-width modulation and sinusoidal PWM are studied. The analysis covers single-phase and three-phase bridge inverters. A systematic procedure is given for computing the harmonics, and the results are tabulated.
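As a rough numerical illustration of this kind of harmonic computation (a sketch, not the paper's own procedure; the fundamental frequency, carrier ratio, and modulation index below are assumed values), one can synthesize one fundamental period of a two-level sinusoidal-PWM leg voltage and read the harmonic amplitudes from its FFT:

import numpy as np
from scipy.signal import sawtooth

f1, fc, m = 50.0, 1050.0, 0.8        # fundamental (Hz), carrier (Hz), modulation index
N = 2 ** 16                          # samples over one fundamental period
t = np.arange(N) / (N * f1)
reference = m * np.sin(2 * np.pi * f1 * t)         # sinusoidal reference
carrier = sawtooth(2 * np.pi * fc * t, width=0.5)  # triangular carrier
v = np.where(reference > carrier, 1.0, -1.0)       # switched leg voltage (p.u.)
spectrum = 2.0 * np.abs(np.fft.rfft(v)) / N        # amplitude of harmonic h at index h
for h in (1, 19, 21, 23):            # fundamental and sidebands around fc/f1 = 21
    print(h, round(spectrum[h], 3))

The fundamental amplitude comes out close to the modulation index m, and the dominant harmonics cluster around the carrier ratio, which is the structure such a procedure tabulates.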
Abstract:
It is being realized that the traditional closed-door, market-driven approach to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for these patients, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained in its scope for collaboration and for confidential sharing of resources, hampers opportunities for bringing in expertise from diverse fields, and these limitations hinder the possibility of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and the accrual of benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner by multiple participating companies, with majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
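A minimal sketch of the topology-from-samples idea (illustrative only; the chain circuit, sample count, and regularization weight are assumptions, and sklearn's GraphicalLasso stands in for the modified algorithm the dissertation proposes):

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth precision (inverse covariance) matrix of a 4-node chain 0-1-2-3.
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(K), size=500)
est = GraphicalLasso(alpha=0.05).fit(X)
edges = (np.abs(est.precision_) > 1e-2) & ~np.eye(4, dtype=bool)
print(edges.astype(int))   # off-diagonal nonzeros: the recovered adjacency pattern

Edges of the graph appear as nonzero off-diagonal entries of the estimated precision matrix; with a well-conditioned covariance, as here, the chain 0-1-2-3 is recovered.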
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users of the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
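The modeling point can be seen in a toy two-link tandem simulation (a sketch with made-up parameters, not the dissertation's model): the downstream link is fed by the upstream queue's departure rate rather than by the original source rate:

dt, T = 0.001, 40.0
c1, c2 = 0.8, 1.0          # link capacities; link 1 is the bottleneck
w, k = 0.5, 0.1            # utility weight and controller gain
x, q1, q2 = 0.1, 0.0, 0.0  # source rate and the two queue lengths
for _ in range(int(T / dt)):
    out1 = c1 if q1 > 0 else min(x, c1)         # departure rate of queue 1
    q1 = max(q1 + (x - c1) * dt, 0.0)
    q2 = max(q2 + (out1 - c2) * dt, 0.0)        # link 2 sees out1, not x
    price = q1 + q2                             # queueing-based congestion price
    x = max(x + k * (w - x * price) * dt, 0.0)  # primal rate update
print(round(x, 3), round(q1, 3), round(q2, 3))

Here q2 stays at zero because link 2 only ever observes the shaped rate out1 ≤ c1; a fluid model that assumed link 2 sees x directly would compute different prices, which is the discrepancy the abstract ties to stability.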
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
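In one common sign convention (assumed notation, not necessarily the dissertation's), the relaxation amounts to replacing each nodal balance equality by an inequality:

P_{G_i} - P_{D_i} = \sum_{j \in N(i)} P_{ij} \quad\longrightarrow\quad P_{G_i} - P_{D_i} \ge \sum_{j \in N(i)} P_{ij} \qquad \text{for every bus } i,

so a bus may be delivered more power than it demands; the claim above is that the resulting convex problem recovers the exact OPF solution on radial networks and on meshed networks with sufficiently many phase shifters.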
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
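A toy version of that least-cost linear program (illustrative numbers only; the three control measures, their costs, and their per-measure reductions are invented, not the study's data) can be set up with scipy:

from scipy.optimize import linprog

# x_j = adoption level in [0, 1] of control measure j (e.g., used cars,
# aircraft, stationary sources); all numbers below are invented.
cost = [30e6, 45e6, 80e6]            # annualized cost ($/yr) at full adoption
rhc_cut = [150, 220, 60]             # tons/day RHC removed at full adoption
nox_cut = [40, 10, 180]              # tons/day NOx removed at full adoption
base = {"RHC": 670, "NOx": 790}      # 1975 base emission levels (tons/day)
target = {"RHC": 400, "NOx": 600}    # desired emission levels (tons/day)

# Require total removal >= base - target, written as -removal <= -(base - target).
A_ub = [[-r for r in rhc_cut], [-n for n in nox_cut]]
b_ub = [-(base["RHC"] - target["RHC"]), -(base["NOx"] - target["NOx"])]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
print(res.x, res.fun)                # adoption levels and minimum annual cost

Sweeping target over a grid of emission levels traces out the control cost-emission level relationship that the study combines with its air quality models.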
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
Part I
The physical phenomena which will ultimately limit the packing density of planar bipolar and MOS integrated circuits are examined. The maximum packing density is obtained by minimizing the supply voltage and the size of the devices. The minimum size of a bipolar transistor is determined by junction breakdown, punch-through, and doping fluctuations. The minimum size of a MOS transistor is determined by gate oxide breakdown and drain-source punch-through. The packing density of fully active bipolar or static non-complementary MOS circuits becomes limited by power dissipation. The packing density of circuits that are not fully active, such as read-only memories, becomes limited by the area occupied by the devices, and the frequency is limited by the circuit time constants and by metal migration. The packing density of fully active dynamic or complementary MOS circuits is limited by the area occupied by the devices, and the frequency is limited by power dissipation and metal migration. It is concluded that read-only memories will reach approximately the same performance and packing density with MOS and bipolar technologies, while fully active circuits will reach the highest levels of integration with dynamic MOS or complementary MOS technologies.
Part II
Because the Schottky diode is a one-carrier device, it has both advantages and disadvantages with respect to the junction diode, which is a two-carrier device. The advantage is that there are practically no excess minority carriers which must be swept out before the diode blocks current in the reverse direction, i.e., a much faster recovery time. The disadvantage of the Schottky diode is that for a high-voltage device it is not possible to use conductivity modulation as in the p-i-n diode; since the charge carriers are of one sign, no charge cancellation can occur and the current becomes space-charge limited. The Schottky diode design is developed in Section 2 and the characteristics of an optimally designed silicon Schottky diode are summarized in Fig. 9. Design criteria and a quantitative comparison of junction and Schottky diodes are given in Table 1 and Fig. 10. Although somewhat approximate, the treatment allows a systematic quantitative comparison of the devices for any given application.
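For context on the space-charge-limited regime just mentioned, the trap-free unipolar current density follows the Mott-Gurney law (the standard textbook form, quoted here as background rather than as the expression of Section 2):

J = \frac{9}{8} \, \varepsilon \mu \frac{V^2}{L^3}

where \varepsilon is the permittivity, \mu the carrier mobility, V the applied voltage, and L the drift region length; the steep V^2/L^3 scaling is what penalizes high-voltage Schottky designs relative to conductivity-modulated p-i-n diodes.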
Part III
We interpret measurements of the permittivity of perovskite strontium titanate as a function of orientation, temperature, electric field, and frequency performed by Dr. Richard Neville. The free energy of the crystal is calculated as a function of polarization. The Curie-Weiss law and the LST (Lyddane-Sachs-Teller) relation are verified. A generalized LST relation is used to calculate the permittivity of strontium titanate from zero to optic frequencies. Two active optic modes are important. The lower-frequency mode is attributed mainly to motion of the strontium ions with respect to the rest of the lattice, while the higher-frequency active mode is attributed to motion of the titanium ions with respect to the oxygen lattice. An anomalous resonance which multi-domain strontium titanate crystals exhibit below 65 K is described, and a plausible mechanism which explains the phenomenon is presented.
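For reference, the LST relation invoked above, in its generalized multi-mode form (the standard statement, given here as background), links the static and optical permittivities to the longitudinal (LO) and transverse (TO) optic mode frequencies:

\frac{\varepsilon(0)}{\varepsilon(\infty)} = \prod_i \left( \frac{\omega_{\mathrm{LO},i}}{\omega_{\mathrm{TO},i}} \right)^2

With the two active optic modes identified above, the product runs over both, which is what permits computing the permittivity from zero to optic frequencies.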
Abstract:
Two 8-week growth trials were conducted to determine the effect of continuous (CF) versus two meals day⁻¹ (MF) feeding and of 30% starch versus 30% glucose diets on the carbohydrate utilization of 9.0-g white sturgeon and 0.56-g hybrid tilapia. The two trials were conducted under similar conditions, except that sturgeon were kept at 18.5 °C in a flow-through system and tilapia were kept at 26 °C in a recirculating system. Significantly (P ≤ 0.05) higher specific growth rate (SGR), feed efficiency (FE), protein efficiency ratio (PER), body lipid content, and liver glucose-6-phosphate dehydrogenase (G6PDH) and 6-phosphogluconate dehydrogenase (6PGDH) activities were observed in the CF than in the MF sturgeon. Only SGR, FE, and PER were higher in sturgeon fed the starch than the glucose diets. Only liver G6PDH and malic enzyme (ME) activities were higher in the CF than in the MF tilapia, but higher SGR, FE, PER, and liver G6PDH, 6PGDH, and ME activities were observed in tilapia fed the starch diet than in those fed the glucose diet. This suggests that carbohydrate utilization by sturgeon was more affected by feeding strategy, whereas that by tilapia was more affected by carbohydrate source. Furthermore, white sturgeon can utilize carbohydrates better than hybrid tilapia regardless of feeding strategy and carbohydrate source.
Abstract:
1.35 μm photoluminescence (PL) with a narrow linewidth of only 19.2 meV at room temperature has been achieved in an In0.5Ga0.5As island structure grown on a GaAs (1 0 0) substrate by solid-source molecular beam epitaxy. Atomic force microscopy (AFM) measurement reveals that the 16-ML-thick In0.5Ga0.5As islands show quite uniform InGaAs mound morphology along the [1 1̄ 0] direction with a periodicity of about 90 nm in the [1 1 0] direction. Compared with an In0.5Ga0.5As alloy quantum well (QW) of the same width, the In0.5Ga0.5As island structure always shows a lower PL peak energy and narrower full-width at half-maximum (FWHM), as well as a stronger PL intensity at low excitation power and more efficient confinement of the carriers. Our results provide important information for optimizing the epitaxial structures of 1.3 μm wavelength quantum dot devices. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. The segmentation is performed by three "copies" of the BCS and FCS, of small, medium, and large scales, wherein the "short-range" and "long-range" interactions within each scale occur over smaller or larger distances, corresponding to the size of the early filters of each scale. A diffusive filling-in operation within the segmented regions at each scale produces coherent surface representations. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
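A steady-state sketch of the shunting center-surround stage described above (an illustration assuming Gaussian center/surround kernels and made-up gains, not the authors' implementation):

import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(image, A=1.0, B=1.0, D=0.5,
                             sigma_c=1.0, sigma_s=4.0):
    """Equilibrium of dx/dt = -A*x + (B - x)*C - (x + D)*S, where C and S
    are center and surround Gaussian convolutions of the input image."""
    C = gaussian_filter(image, sigma_c)   # narrow excitatory center
    S = gaussian_filter(image, sigma_s)   # broad inhibitory surround
    return (B * C - D * S) / (A + C + S)  # divisive normalization

The divisive term in the denominator compresses dynamic range, while the center-minus-surround numerator performs the local contrast enhancement; the three BCS/FCS scales would correspond to three choices of kernel sizes.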
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
Abstract:
The Aircraft Accident Statistics and Knowledge (AASK) database is a repository of survivor accounts from aviation accidents. Its main purpose is to store observational and anecdotal data from the actual interviews of the occupants involved in aircraft accidents. The database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It is also key to the development of aircraft evacuation models such as airEXODUS, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. This paper describes recent developments with the database leading to AASK v3.0. These include a significant increase in the number of passenger accounts in the database, the introduction of cabin crew accounts, the introduction of fatality information, improved functionality through the seat plan viewer utility, and improved ease of access to the database via the internet. In addition, the paper demonstrates the use of the database by investigating a number of important issues associated with aircraft evacuation: social bonding and evacuation, the relationship between the number of crew and evacuation efficiency, the frequency of exit/slide failures in accidents, and possible relationships between seating location and chances of survival. Finally, the passenger behavioural trends described in analyses undertaken with the earlier database are confirmed with the wider data set.
Abstract:
The problem examined here is the fluctuating pressure distribution along the open cavity of the sun-roof at the top of a car compartment due to gusts passing over the sun-roof. The aim of this test is to investigate the capability of a typical commercial CFD package, PHOENICS, in resolving pressure fluctuations occurring in an important automotive industrial problem, and in particular to examine the accuracy with which pulsatory gusts travelling along the main flow are transported when finite volume methods with higher-order schemes are used in the numerical solution of the unsteady compressible Navier-Stokes equations. The Helmholtz equation is used to solve for the sound distribution inside the car compartment resulting from the externally induced fluctuations.
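For reference, the interior acoustics step mentioned above solves the Helmholtz equation for the complex pressure amplitude \hat{p} at angular frequency \omega (the standard form; the boundary forcing would come from the externally induced pressure fluctuations):

\nabla^2 \hat{p} + k^2 \hat{p} = 0, \qquad k = \omega / c,

where c is the speed of sound in the cabin air.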
Abstract:
The Aircraft Accident Statistics and Knowledge (AASK) database is a repository of passenger accounts from survivable aviation accidents/incidents compiled from interview data collected by agencies such as the US NTSB. Its main purpose is to store observational and anecdotal data from the actual interviews of the occupants involved in aircraft accidents. The database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It also plays a significant role in the development of the airEXODUS aircraft evacuation model, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. This paper describes the latest version of the database (Version 4.0) and includes some analysis of passenger behavior during actual accidents/incidents.
Abstract:
We report extensive observational data for five of the lowest-redshift Super-Luminous Type Ic Supernovae (SL-SNe Ic) discovered to date, namely PTF10hgi, SN2011ke, PTF11rks, SN2011kf, and SN2012il. Photometric imaging of the transients at +50 to +230 days after peak, combined with host galaxy subtraction, reveals a luminous tail phase for four of these SL-SNe. A high-resolution optical and near-infrared spectrum from X-shooter provides a detection of a broad He I λ10830 emission line in the spectrum (+50 days) of SN2012il, revealing that at least some SL-SNe Ic are not completely helium-free. At first sight, the tail luminosity decline rates that we measure are consistent with the radioactive decay of 56Co, and would require 1-4 M☉ of 56Ni to produce the luminosity. These 56Ni masses cannot be made consistent with the short diffusion times at peak, and indeed are insufficient to power the peak luminosity. We instead favor energy deposition by newborn magnetars as the power source for these objects. A semi-analytical diffusion model with energy input from the spin-down of a magnetar reproduces the extensive light curve data well. The ejecta velocities and temperatures required by the model are in reasonable agreement with those determined from our observations. We derive magnetar energies of 0.4 ≲ E/(10^51 erg) ≲ 6.9 and ejecta masses of 2.3 ≲ M_ej/M☉ ≲ 8.6. The sample of five SL-SNe Ic presented here, combined with SN 2010gx (the best-sampled SL-SN Ic so far), points toward an explosion driven by a magnetar as a viable explanation for all SL-SNe Ic.
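A compact sketch of such a magnetar-powered diffusion light curve (parameter values are assumed for illustration and are not the paper's fits): dipole spin-down input convolved with an Arnett-like diffusion kernel:

import numpy as np

day = 86400.0
E_p = 2e51                 # initial magnetar rotational energy (erg), assumed
t_p = 10.0 * day           # spin-down timescale, assumed
t_d = 30.0 * day           # effective diffusion timescale, assumed

t = np.linspace(1.0, 300.0 * day, 3000)
L_in = (E_p / t_p) / (1.0 + t / t_p) ** 2          # spin-down power input

# L(t) = (2/t_d^2) e^{-(t/t_d)^2} \int_0^t L_in(t') e^{(t'/t_d)^2} t' dt'
f = L_in * np.exp((t / t_d) ** 2) * t
F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
L = (2.0 / t_d ** 2) * np.exp(-(t / t_d) ** 2) * F
print(f"peak {L.max():.2e} erg/s at day {t[np.argmax(L)] / day:.0f}")

On the tail, the output tracks the spin-down input, which over a limited baseline can mimic a radioactive decline; that degeneracy is why the 56Ni interpretation survives "at first sight" before the peak constraints rule it out.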
Abstract:
Wind power is a stochastic energy source, in contrast to hydroelectric generation, which is easily scheduled. In this paper a scheme for coordinating a wind power plant and a hydroelectric power plant is presented, using phasor measurement units (PMUs) to measure and control the state of the wind and hydro plants. Hydroelectric generation is proposed as an energy reserve that compensates for wind power fluctuations, so that full or partial curtailment of wind generation is avoided, to the benefit of wind providers. The feasibility of the proposed scheme is investigated by power flow calculation and stability analysis using the IEEE 30-bus power system model.
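A minimal power-flow check in that spirit (a sketch assuming the pandapower package and its bundled 30-bus case; the generator roles and the 10 MW fluctuation are invented):

import pandapower as pp
import pandapower.networks as pn

net = pn.case30()                      # 30-bus test case shipped with pandapower
pp.runpp(net)                          # base-case AC power flow
v_base = net.res_bus.vm_pu.copy()

wind, hydro = net.gen.index[0], net.gen.index[1]   # pretend roles for two units
drop = 10.0                                        # MW lost in a wind lull
net.gen.at[wind, "p_mw"] -= drop
net.gen.at[hydro, "p_mw"] += drop                  # hydro reserve compensates
pp.runpp(net)
print((net.res_bus.vm_pu - v_base).abs().max())    # worst-case voltage shift

Repeating the solve over a time series of wind fluctuations, and checking that voltages and line loadings stay within limits, is the power-flow half of such a feasibility study; the stability analysis would need a dynamic model on top.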
Abstract:
Wearable devices performing advanced bio-signal analysis algorithms are poised to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, effectively helps to save substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help to protect the entire memory content; however, it incurs large area and energy overheads which may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in data that highly impact the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (statistical properties of the data in the application buffers) and their impact in determining the output. The proposed heterogeneous error protection scheme on real ECG signals allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, like ECC, in which the whole memory is protected against errors. At the same time, it results in negligible output-quality degradation in the evaluated power spectrum analysis application of ECG signals.
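A bit-level sketch of the significant-bits variant (illustrative only, not the paper's scheme): the high-order bits of each 16-bit sample, which dominate the reconstructed signal and its power spectrum, get triple-redundant copies with majority voting, while the low-order bits are left unprotected:

WIDTH, HI = 16, 4                      # sample width and number of protected high bits

def protect(sample):
    """Store the raw sample plus three copies of its significant bits."""
    hi = sample >> (WIDTH - HI)
    return sample, (hi, hi, hi)

def recover(stored):
    """Majority-vote the protected bits; trust them over the raw copy."""
    sample, (a, b, c) = stored
    hi = (a & b) | (a & c) | (b & c)   # bitwise majority of the three copies
    low = sample & ((1 << (WIDTH - HI)) - 1)
    return (hi << (WIDTH - HI)) | low

A single upset in any one copy of the significant bits is corrected, while an upset in the unprotected low bits perturbs the sample by at most 2^11 (the weight of the highest unprotected bit), which is the kind of bounded quality degradation such a scheme trades for its energy saving.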