954 results for Power sensitivity model
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, encoded in delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology rests solidly on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in how the cosmos evolved during a fraction of the first second after the Big Bang. -- This thesis is concerned with high precision CMB observations. It presents three selected topics along the analysis pipeline of a CMB experiment. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in a study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
Abstract:
Drop formation from single nozzles under pulsed flow conditions in non-Newtonian fluids following the power law model has been studied. An existing model has been modified to explain the experimental data. The flow conditions employed correspond to the mixer-settler type of operation in pulsed sieve-plate extraction columns. The modified model predicts the drop sizes satisfactorily. It has been found that consideration of non-Newtonian behaviour is important at low pulse intensities and that its significance decreases with increasing intensity of pulsation. Further, the proposed single-orifice model has been tested on a sieve-plate distributor having four holes, and it predicts the drop sizes fairly well in the absence of coalescence.
Abstract:
The analysis of lipid compositions from biological samples has become increasingly important. Lipids have a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. Moreover, the mechanisms regulating cell membrane lipid compositions are poorly understood, partially because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, differences in the instrument response of different lipids, and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of this thesis is the development of methods for mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentration affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids.
The fourth article applies the developed methods by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificities of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
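The theoretical peak distribution described in the second article can be illustrated by the standard approach of repeatedly convolving per-element isotope distributions. The sketch below uses natural-abundance values and an assumed truncation depth; it is a generic illustration of the technique, not a reproduction of the paper's algorithm.

```python
# Natural isotope abundances by nominal mass offset (+0, +1, +2 Da).
# Generic convolution sketch, not the paper's algorithm.
ISOTOPES = {
    "C": [0.9893, 0.0107],
    "H": [0.999885, 0.000115],
    "N": [0.99636, 0.00364],
    "O": [0.99757, 0.00038, 0.00205],
    "P": [1.0],
}

def convolve(a, b):
    """Convolve two abundance distributions indexed by mass offset."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, p in enumerate(a):
        for j, q in enumerate(b):
            out[i + j] += p * q
    return out

def isotope_pattern(formula, max_peaks=6):
    """Theoretical isotope pattern for a molecular formula given as a dict,
    e.g. roughly dipalmitoyl-PC: {"C": 40, "H": 80, "N": 1, "O": 8, "P": 1}."""
    dist = [1.0]
    for element, count in formula.items():
        for _ in range(count):
            # Convolve in one atom at a time; drop the negligible tail
            dist = convolve(dist, ISOTOPES[element])[:max_peaks]
    return dist
```

For a lipid-sized molecule the M+1 peak is dominated by 13C, which is why overlapping isotope patterns of closely related lipids must be deconvolved before quantification.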
Abstract:
The kinetics of carbon dioxide methanation over a Ru-SiO2 catalyst has been studied. In the temperature range 320-460 °C, a simple power law model is found to predict the experimental results in good agreement over the range of variables studied.
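A power law rate model of this form, r = k·C^n, is conveniently fitted by log-linear least squares. A minimal sketch follows; the rate constant, reaction order and data points are invented for illustration and are not the study's results.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = k * x**n via log-linear regression.

    Returns (k, n). Assumes all x, y > 0.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    m = len(lx)
    mx = sum(lx) / m
    my = sum(ly) / m
    n = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - n * mx)
    return k, n

# Hypothetical rate data following r = 0.8 * C**0.5 (illustrative values)
concentrations = [0.5, 1.0, 2.0, 4.0, 8.0]
rates = [0.8 * c ** 0.5 for c in concentrations]
k, n = fit_power_law(concentrations, rates)
```

Linearizing the model in log space turns parameter estimation into ordinary linear regression, which is why power law kinetics are popular as a first screening model.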
Abstract:
The generation of a 16 μm laser beam through cascading in a downstream-mixing CO2 gasdynamic laser is studied. To simulate actual lasing action, a generalized, two-dimensional, flow-radiation-coupled power extraction model for a gasdynamic laser is used. Also, to model the cascade process, a new four-mode CO2-N2 vibrational kinetic model has been proposed. The steady-state intensity obtained for an exclusive 9.4 μm transition is of the order of 5×10^7 W/m^2. In the cascade mode of operation, steady-state intensities of the order of 5×10^7 W/m^2 and 1.0×10^6 W/m^2 have been obtained for the 9.4 and 16 μm transitions, respectively.
Abstract:
We present a timing and broad-band pulse-phase-resolved spectral analysis of the transient Be X-ray binary pulsar 1A 1118-61 observed during its outburst in 2009 January using Suzaku observations. The Suzaku observations were made twice, once at the peak of the outburst and again 13 d later in its declining phase. Pulse profiles from both observations exhibit strong energy dependence, with several peaks at low energies and a single peak above ~10 keV. A weak, narrow peak is detected at the main dip of the pulse profiles from both observations in the energy bands below 3 keV, indicating the presence of a phase-dependent soft excess in the source continuum. The broad-band energy spectrum of the pulsar could be fitted well with a partial covering cut-off power-law model and a narrow iron fluorescence line. We also detect a broad cyclotron feature at ~50 keV in both observations, a feature common for accretion-powered pulsars with high magnetic field strength. The pulse-phase-resolved spectral analysis shows an increase in the absorption column density of the partial covering component, as well as variation in the covering fraction at the dips of the pulse profiles, which naturally explains their energy dependence. The cyclotron line parameters also show significant variation with pulse phase, with a ~10 keV variation in the cyclotron line energy and a variation in depth by a factor of 3. This can be explained either as the effect of different viewing angles of the dipole field at different pulse phases, or by a more complex underlying magnetic field geometry.
Abstract:
Many problems of state estimation in structural dynamics permit a partitioning of the system states into nonlinear and conditionally linear substructures. This enables a part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling based particle filtering with Kalman filtering for a fairly general form of process equations, and demonstrates the application of a substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also theoretically demonstrates that the sampling variance associated with the substructuring scheme does not exceed the sampling variance of Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long duration ageing experiments under accelerated stresses, and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress has been assumed to be based on a temperature-dependent inverse power law model.
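One common graphical method for such constant-stress failure data is Weibull probability plotting via median-rank regression. The sketch below uses Bernard's approximation for the median ranks; the shape and scale values in the comment are illustrative assumptions, not the RIP measurements.

```python
import math

def weibull_fit(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) from failure times
    by median-rank regression (Bernard's approximation).

    This is the classic probability-plotting approach: plot
    ln(-ln(1 - F)) against ln(t) and fit a straight line.
    """
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)                # Bernard's median rank
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))    # slope = shape parameter
    eta = math.exp(mx - my / beta)               # intercept gives scale
    return beta, eta
```

A shape parameter beta > 1 indicates wear-out (ageing) failures; combined with an inverse power law in stress, the fitted scale parameter at each accelerated stress level can be extrapolated to the operating stress.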
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
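The abstract does not reproduce Algorithm 1 itself; the sketch below conveys the flavour of gradient descent for deferrable load scheduling against an assumed quadratic "valley-filling" objective (flatten the aggregate demand profile). The step size, iteration count and omission of nonnegativity constraints are simplifications of this illustration, not details from the thesis.

```python
def schedule_deferrable(base, energies, iters=50, step=0.1):
    """Gradient descent toward a flat aggregate demand profile.

    base     : background demand per time slot
    energies : total energy each deferrable load must consume
    The equality constraint sum_t u[i][t] == energies[i] is maintained by
    projecting the gradient onto the zero-mean subspace; nonnegativity of
    u is deliberately omitted in this sketch.
    """
    T = len(base)
    u = [[e / T] * T for e in energies]   # feasible start: spread evenly
    for _ in range(iters):
        total = [base[t] + sum(ui[t] for ui in u) for t in range(T)]
        mean = sum(total) / T
        for ui in u:
            for t in range(T):
                # Projected gradient step: push consumption away from
                # peaks and into valleys without changing the total.
                ui[t] -= step * (total[t] - mean)
    return u
```

Because each load updates using only the aggregate profile, the iteration is naturally distributed: a coordinator broadcasts the current total demand, and every load adjusts its own schedule locally.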
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized, and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70× speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
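The linearized power flow behind the gradient estimates is not spelled out in the abstract; under the same two assumptions (negligible line losses, balanced voltages), a standard single-phase linearization is the LinDistFlow model, sketched below for a radial feeder. The topology and per-unit numbers are invented for illustration.

```python
def lindistflow_voltages(parent, r, x, p, q, v0=1.0):
    """Voltage magnitudes on a radial feeder under the LinDistFlow
    approximation: v_j^2 = v_i^2 - 2*(r_ij*P_ij + x_ij*Q_ij), with
    line losses neglected.

    Buses are numbered 1..n with parent[j-1] < j (bus 0 is the
    substation at voltage v0); p[j-1], q[j-1] are net loads at bus j
    in per unit; r, x are line parameters of the line into bus j.
    """
    n = len(p)
    P, Q = p[:], q[:]                  # power flow on the line into each bus
    for j in range(n, 1, -1):          # aggregate subtree loads upstream
        i = parent[j - 1]
        if i >= 1:
            P[i - 1] += P[j - 1]
            Q[i - 1] += Q[j - 1]
    v = [v0 ** 2] + [0.0] * n          # squared voltage magnitudes
    for j in range(1, n + 1):          # sweep down from the substation
        i = parent[j - 1]
        v[j] = v[i] - 2 * (r[j - 1] * P[j - 1] + x[j - 1] * Q[j - 1])
    return [vj ** 0.5 for vj in v]
```

Because the map from loads to squared voltages is affine in this approximation, gradients of voltage constraints with respect to load schedules are cheap to evaluate, which is what makes a gradient-based scheme practical at scale.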
Abstract:
We present a simple and practical method for single-ended distributed fiber temperature measurement using microwave (11-GHz) coherent detection and the instantaneous frequency measurement (IFM) technique to detect the spontaneous Brillouin backscattered signal, in which a specially designed RF bandpass filter at 11 GHz is used as a frequency discriminator to transform frequency shift into intensity fluctuation. A Brillouin temperature signal can be obtained at 11 GHz over a sensing length of 10 km. The power sensitivity to the temperature-induced frequency shift is measured as 2.66%/K. (c) 2007 Society of Photo-Optical Instrumentation Engineers.
Abstract:
Despite their widespread use, there is a paucity of information concerning the effect of storage on the rheological properties of pharmaceutical gels that contain organic and inorganic additives. Therefore, this study examined the effect of storage (1 month at either 4 or 37 degrees C) on the rheological and mechanical properties of gels composed of either hydroxypropylmethylcellulose (3-5% w/w, HPMC) or hydroxyethylcellulose (3-5% w/w, HEC) and containing or devoid of dispersed organic (tetracycline hydrochloride 2% w/w) or inorganic (iron oxide 0.1% w/w) agents. The mechanical properties were measured using texture profile analysis, whereas the rheological properties were analyzed using continuous shear rheometry and modeled using the power law model. All formulations exhibited pseudoplastic flow with minimal thixotropy. Increasing polymer concentration (3-5% w/w) significantly increased the consistency, hardness, compressibility, and adhesiveness of the formulations due to increased polymer chain entanglement. Following storage (1 month at 4 and 37 degrees C), the consistency and mechanical properties of additive-free HPMC gels (but not HEC gels) increased, due to the time-dependent development of polymer chain entanglements. Incorporation of tetracycline hydrochloride significantly decreased and increased the rheological and mechanical properties of HPMC and HEC gels, respectively. Conversely, the incorporation of iron oxide did not affect these properties. Following storage, the rheological and mechanical properties of HPMC and HEC formulations were markedly compromised. This effect was greater following storage at 37 than at 4 degrees C and, additionally, greater in the presence of tetracycline hydrochloride than iron oxide. It is suggested that the loss of rheological/mechanical structure was due to chain depolymerization, facilitated by the redox properties of tetracycline hydrochloride and iron oxide.
These observations have direct implications for the design and formulation of gels containing an active pharmaceutical ingredient. (c) 2005 Wiley Periodicals, Inc.
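For reference, the power law (Ostwald-de Waele) model fitted to such flow curves is tau = k * shear_rate**n, with apparent viscosity eta = k * shear_rate**(n-1). A minimal sketch follows; the consistency and flow-index values are assumed for illustration and are not the study's measurements.

```python
def apparent_viscosity(k, n, shear_rate):
    """Apparent viscosity eta = k * shear_rate**(n-1) for a power law
    (Ostwald-de Waele) fluid with consistency k (Pa*s^n) and flow index n."""
    return k * shear_rate ** (n - 1)

def flow_behaviour(n):
    """Classify the flow index: n < 1 pseudoplastic (shear-thinning),
    n == 1 Newtonian, n > 1 dilatant."""
    if n < 1:
        return "pseudoplastic"
    if n > 1:
        return "dilatant"
    return "Newtonian"

# Assumed illustrative parameters for a cellulose gel (not measured values)
k, n = 45.0, 0.45
etas = [apparent_viscosity(k, n, g) for g in (1.0, 10.0, 100.0)]
# Pseudoplastic: viscosity falls as shear rate rises
```

In this framework, the reported increase in "consistency" corresponds to a larger fitted k, while pseudoplastic flow corresponds to a fitted n below 1.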
Abstract:
Over-frequency generator tripping (OFGT) is used to cut off excess generation to balance power and loads in an isolated system. In this paper the impact of OFGT as a consequence of grid-connected wind farms and under-frequency load shedding (UFLS) is analysed. The paper uses a power system model to demonstrate that wind power fluctuations can readily cause OFGT and UFLS maloperation. Using combined hydro and wind generation, the paper proposes a coordinated strategy which resolves the problems associated with OFGT and UFLS and preserves system stability.
Abstract:
This paper introduces an algorithm that calculates the dominant eigenvalues (in terms of system stability) of a linear model while avoiding the exact computation of the non-dominant eigenvalues. The method estimates all of the eigenvalues using wavelet-based compression techniques. These estimates are used to find a suitable invariant subspace such that projection onto this subspace yields a model containing the eigenvalues of interest. The proposed algorithm is exemplified by application to a power system model.
Abstract:
This paper presents the measurement, frequency-response modeling and identification, and the corresponding impulse time response of the human respiratory impedance and admittance. The investigated adult patient groups were healthy, diagnosed with chronic obstructive pulmonary disease, and diagnosed with kyphoscoliosis, respectively. The investigated child patient groups were healthy, diagnosed with asthma, and diagnosed with cystic fibrosis, respectively. Fractional order (FO) models are identified from the measured impedance to quantify the respiratory mechanical properties. Two methods are presented for obtaining and simulating the time-domain impulse response from FO models of the respiratory admittance: (i) the classical pole-zero interpolation proposed by Oustaloup in the early 1990s, and (ii) the inverse discrete Fourier transform (DFT). The results of the identified FO models for the respiratory admittance are presented by means of their average values for each group of patients. Consequently, the impulse time response calculated from the frequency response of the averaged FO models is given by means of the two methods mentioned above. Our results indicate that both methods provide similar impulse response data. However, we suggest that the inverse DFT is a more suitable alternative to the high order transfer functions obtained using the classical Oustaloup filter. Additionally, a power law model is fitted to the impulse response data, emphasizing the intrinsic fractal dynamics of the respiratory system.