931 results for Discrete Time Domain
Abstract:
Heat transfer is considered one of the most critical issues in the design and implementation of large-scale microwave heating systems, in which improvement of the microwave absorption of materials and suppression of uneven temperature distribution are the two main objectives. The present work focuses on the analysis of heat transfer in microwave heating for achieving highly efficient microwave-assisted steelmaking through investigations of the following aspects: (1) characterization of microwave dissipation using the derived equations, (2) quantification of magnetic loss, (3) determination of microwave absorption properties of materials, (4) modeling of microwave propagation, (5) simulation of heat transfer, and (6) improvement of microwave absorption and heating uniformity. Microwave heating is attributed to the heat generation in materials, which depends on the microwave dissipation. To theoretically characterize microwave heating, simplified equations were derived for determining the transverse electromagnetic mode (TEM) power penetration depth, microwave field attenuation length, and half-power depth of microwaves in materials having both magnetic and dielectric responses. This was followed by the development of a simplified equation for quantifying magnetic loss in materials under microwave irradiation to demonstrate the importance of magnetic loss in microwave heating. Permittivity and permeability measurements of various materials, namely hematite, magnetite concentrate, wüstite, and coal, were performed, and microwave loss calculations for these materials were carried out. It is suggested that magnetic loss can play a major role in the heating of magnetic dielectrics. Microwave propagation in various media was predicted using the finite-difference time-domain method. For lossy magnetic dielectrics, the dissipation of microwaves in the medium is ascribed to the decay of both electric and magnetic fields.
The heat transfer process in microwave heating of magnetite, a typical magnetic dielectric, was simulated using an explicit finite-difference approach. It is demonstrated that the heat generation due to microwave irradiation dominates the initial temperature rise in the heating, and that heat radiation heavily affects the temperature distribution, giving rise to a hot spot in the predicted temperature profile. Microwave heating at 915 MHz exhibits better heating homogeneity than that at 2450 MHz owing to its larger penetration depth. To minimize or avoid temperature nonuniformity during microwave heating, the optimization of object dimensions should be considered. The calculated reflection loss over the temperature range of heating is found to be useful for obtaining a rapid optimization of absorber dimension, which increases microwave absorption and achieves relatively uniform heating. To further improve the heating effectiveness, a function for evaluating absorber impedance matching in microwave heating was proposed. It is found that the maximum absorption is associated with perfect impedance matching, which can be achieved by either selecting a reasonable sample dimension or modifying the microwave parameters of the sample.
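The three depths named above all follow from the complex propagation constant of a lossy magnetic dielectric. A minimal sketch (not the authors' derived equations; the e^{jωt} convention with eps_r = ε′ − jε″, mu_r = μ′ − jμ″ and illustrative material values are assumed):

```python
import numpy as np

C0 = 2.998e8  # speed of light in vacuum (m/s)

def penetration_depths(f, eps_r, mu_r):
    """Depths derived from the complex propagation constant
    gamma = j*(2*pi*f/c0)*sqrt(mu_r*eps_r)."""
    k0 = 2 * np.pi * f / C0
    gamma = 1j * k0 * np.sqrt(complex(mu_r) * complex(eps_r))
    alpha = gamma.real  # field attenuation constant (Np/m)
    return {
        "field_attenuation_length": 1 / alpha,        # |E| falls to 1/e
        "power_penetration_depth": 1 / (2 * alpha),   # power falls to 1/e
        "half_power_depth": np.log(2) / (2 * alpha),  # power falls to 1/2
    }
```

With eps_r = 5 − 1j and mu_r = 1 at 2450 MHz this gives centimeter-scale depths; the fixed ratios between the three depths (1 : 1/2 : ln 2/2) hold for any lossy medium.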
Abstract:
Wind power based generation has been growing rapidly world-wide during the recent past. In order to transmit large amounts of wind power over long distances, system planners may often add series compensation to existing transmission lines owing to several benefits such as improved steady-state power transfer limit, improved transient stability, and efficient utilization of transmission infrastructure. Application of series capacitors has posed resonant interaction concerns such as subsynchronous resonance (SSR) with conventional turbine-generators. Wind turbine-generators may also be susceptible to such resonant interactions. However, not much information is available in the literature, and even engineering standards are yet to address these issues. The motivating problem for this research is based on an actual system switching event that resulted in undamped oscillations in a 345-kV series-compensated, typical ring-bus power system configuration. Based on time-domain ATP (Alternative Transients Program) modeling, simulations, and analysis of system event records, the occurrence of subsynchronous interactions within the existing 345-kV series-compensated power system has been investigated. Effects of various small-signal and large-signal power system disturbances with both identical and non-identical wind turbine parameters (such as with a statistical spread) have been evaluated. The effect of parameter variations on subsynchronous oscillations has been quantified using 3D-DFT plots, and the oscillations have been identified as due to electrical self-excitation effects rather than torsional interaction. Further, the generator no-load reactance and the rotor-side converter inner-loop controller gains have been identified as having the greatest influence on either damping or exacerbating the self-excited oscillations.
A higher-order spectral analysis method based on modified Prony estimation has been successfully applied to the field records, identifying dominant 9.79-Hz subsynchronous oscillations. Recommendations have been made for exploring countermeasures.
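To illustrate the general idea of Prony-type modal identification (a classical least-squares Prony sketch, not the modified higher-order spectral variant used in the work), a damped subsynchronous mode can be extracted from a time record as follows; the 9.79 Hz signal below is synthetic:

```python
import numpy as np

def prony_freqs(x, fs, order=2):
    """Fit the linear recursion x[n] = a1*x[n-1] + ... + ap*x[n-p] by least
    squares; the roots of the characteristic polynomial are the signal modes,
    whose angles give the modal frequencies in Hz."""
    x = np.asarray(x, float)
    p, N = order, len(x)
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(np.angle(roots)) * fs / (2 * np.pi)  # one estimate per mode
```

A single damped cosine satisfies an order-2 recursion exactly, so `order=2` recovers its frequency; real records need higher orders and noise handling, which is where the modified estimators come in.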
Abstract:
The selective catalytic reduction (SCR) system is a well-established technology for NOx emissions control in diesel engines. A one-dimensional, single-channel SCR model was previously developed using Oak Ridge National Laboratory (ORNL) generated reactor data for an iron-zeolite catalyst system. Calibration of this model to fit the experimental reactor data collected at ORNL for a copper-zeolite SCR catalyst is presented. Initially, a test protocol was developed in order to investigate the different phenomena responsible for the SCR system response. An SCR model with two distinct types of storage sites was used. The calibration process was started with storage capacity calculations for the catalyst sample. Then the chemical kinetics occurring at each segment of the protocol were investigated. The reactions included in this model were adsorption, desorption, standard SCR, fast SCR, slow SCR, NH3 oxidation, NO oxidation, and N2O formation. The reaction rates were identified for each temperature using a time-domain optimization approach. Assuming an Arrhenius form of the reaction rates, activation energies and pre-exponential parameters were fitted to the reaction rates. The results indicate that the Arrhenius form is appropriate and that the reaction scheme used allows the model to fit the experimental data and also to be used in real-world engine studies.
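The final fitting step described above (rate constants identified per temperature, then activation energy and pre-exponential factor fitted to an Arrhenius form) can be sketched with a linearized least-squares fit; the numbers in the example are illustrative, not the ORNL calibration data:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def fit_arrhenius(T, k):
    """Linearized least-squares fit of ln k = ln A - Ea/(R*T).
    T: temperatures (K), k: rate constants at those temperatures.
    Returns (A, Ea): pre-exponential factor and activation energy (J/mol)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T, float), np.log(k), 1)
    return float(np.exp(intercept)), float(-slope * R)
```

Plotting ln k against 1/T (an Arrhenius plot) also gives a quick visual check of whether the Arrhenius form is appropriate: the points should fall on a straight line.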
Abstract:
Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from a one-time record, enabling battery diagnostics. The excitation current to the battery is a sum of equal-amplitude sine waves at frequencies that are octave harmonics spread over the range of interest. The sampling frequency is also octave-harmonically related to all frequencies in the sum. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave-harmonically related, a simple algorithm, FST, processes the time record by rectifying relative to the sine and cosine of each frequency. Another algorithm then yields real and imaginary components for each frequency.
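The excitation construction described above — equal-amplitude sines at octave harmonics, an octave-related sampling rate, and a record a few periods of the lowest tone long — can be sketched as follows (parameter names and defaults are assumptions, not the published FST implementation):

```python
import numpy as np

def octave_excitation(f0, n_octaves, periods=3, samples_per_cycle=32):
    """Sum of equal-amplitude sines at f0*2^k, k = 0..n_octaves-1, sampled at a
    rate that is an octave multiple of every tone; the record spans a few
    periods of the lowest frequency, so every tone completes whole cycles."""
    fs = f0 * 2 ** (n_octaves - 1) * samples_per_cycle
    t = np.arange(int(round(periods * fs / f0))) / fs
    freqs = f0 * 2.0 ** np.arange(n_octaves)
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return t, x, freqs, fs
```

Because each tone completes an integer number of cycles over the record, the tones are mutually orthogonal and each can be recovered independently by correlating the record with its sine and cosine.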
Abstract:
Compensated Synchronous Detection (CSD) acquires a real-time battery impedance spectrum from a one-time record. This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, is the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal, and the component of interest is synchronously detected. This process is repeated for each component.
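The compensation loop described above can be sketched as follows; the `sync_detect` helper and the (sine, cosine) amplitude bookkeeping are illustrative, not the published CSD implementation:

```python
import numpy as np

def sync_detect(x, t, f):
    """Synchronous detection: (sine, cosine) amplitudes of the tone at f,
    valid when the tone completes whole cycles over the record."""
    N = len(x)
    return (2.0 / N * np.dot(x, np.sin(2 * np.pi * f * t)),
            2.0 / N * np.dot(x, np.cos(2 * np.pi * f * t)))

def csd(x, t, freqs):
    """Compensated synchronous detection: for each tone, reassemble all the
    other detected tones in the time domain, subtract them from the record,
    then re-detect the tone of interest."""
    first = {f: sync_detect(x, t, f) for f in freqs}
    result = {}
    for f in freqs:
        others = sum(a * np.sin(2 * np.pi * g * t) + b * np.cos(2 * np.pi * g * t)
                     for g, (a, b) in first.items() if g != f)
        result[f] = sync_detect(x - others, t, f)
    return result
```

The subtraction step matters when record lengths are short and the tones are not perfectly orthogonal; in that case the first-pass estimates leak into each other and compensation reduces the leakage.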
Abstract:
OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model where the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient years. Metabolic syndrome was more likely to develop in patients exposed to stavudine, but was less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
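Discrete-time survival analysis, as used above, starts from a person-period expansion of the follow-up data: each subject contributes one row per interval at risk, with the outcome equal to 1 only in the interval where the event occurs. A minimal sketch of that generic data-preparation step (not the study's hierarchical model; the input format is an assumption):

```python
def person_period(follow_up, event):
    """Expand per-subject (intervals observed, event indicator) data into
    person-period rows (subject, interval, y): one row per interval at risk,
    with y = 1 only in the interval where the event occurs."""
    rows = []
    for subject, (T, d) in enumerate(zip(follow_up, event)):
        for t in range(1, T + 1):
            rows.append((subject, t, 1 if (d and t == T) else 0))
    return rows
```

A binary regression (e.g. logistic) on the expanded rows, with interval indicators and time-varying exposures as covariates, then estimates the discrete-time hazard.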
Abstract:
This paper treats the problem of setting the inventory level and optimizing the buffer allocation of closed-loop flow lines operating under the constant-work-in-process (CONWIP) protocol. We solve a very large but simple linear program that models an entire simulation run of a closed-loop flow line in discrete time to determine a production rate estimate of the system. This approach, introduced in Helber, Schimmelpfeng, Stolletz, and Lagershausen (2011) for open flow lines with limited buffer capacities, is extended to closed-loop CONWIP flow lines. Via this method, both the CONWIP level and the buffer allocation can be optimized simultaneously. The first part of a numerical study deals with the accuracy of the method. In the second part, we focus on the relationship between the CONWIP inventory level and the short-term profit. The accuracy of the method turns out to be best for those configurations that maximize production rate and/or short-term profit.
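For intuition about the CONWIP-level/production-rate relationship studied above, here is a toy discrete-event recursion (not the paper's linear-program formulation; exponential processing times and ample intermediate buffers are assumptions of the sketch):

```python
import numpy as np

def conwip_throughput(mean_times, W, n_jobs=2000, seed=0):
    """Estimate the production rate of a serial CONWIP line with exponential
    processing times and ample intermediate buffers: job j may be released
    only once job j-W has left the line (W = CONWIP level)."""
    rng = np.random.default_rng(seed)
    M = len(mean_times)
    done = np.zeros((n_jobs, M))  # completion time of job j at machine m
    for j in range(n_jobs):
        t = done[j - W, M - 1] if j >= W else 0.0  # wait for a free CONWIP card
        for m in range(M):
            start = max(t, done[j - 1, m]) if j > 0 else t  # FCFS at machine m
            t = start + rng.exponential(mean_times[m])
            done[j, m] = t
    return n_jobs / done[-1, -1]
```

Raising W increases throughput toward the bottleneck capacity with diminishing returns, which is exactly the trade-off against holding cost that makes the CONWIP level worth optimizing.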
Abstract:
Safe disposal of toxic wastes in geologic formations requires minimal water and gas movement in the vicinity of storage areas. Ventilation of repository tunnels or caverns built in solid rock can desaturate the near field up to a distance of meters from the rock surface, even when the surrounding geological formation is saturated and under hydrostatic pressures. A tunnel segment at the Grimsel test site located in the Aare granite of the Bernese Alps (central Switzerland) has been subjected to a resaturation and, subsequently, to a controlled desaturation. Using thermocouple psychrometers (TP) and time domain reflectometry (TDR), the water potentials psi and water contents theta were measured within the unsaturated granodiorite matrix near the tunnel wall at depths between 0 and 160 cm. During the resaturation the water potentials in the first 30 cm from the rock surface changed within weeks from values of less than -1.5 MPa to near saturation. They returned to the negative initial values during desaturation. The dynamics of this saturation-desaturation regime could be monitored very sensitively using the thermocouple psychrometers. The TDR measurements indicated that water contents changed close to the surface, but at deeper installation depths the observed changes were within the experimental noise. The field-measured data of the desaturation cycle were used to test the predictive capabilities of the hydraulic parameter functions that were derived from the water retention characteristics psi(theta) determined in the laboratory.
A depth-invariant saturated hydraulic conductivity k(s) = 3.0 x 10^-11 m s^-1 was estimated from the psi(t) data at all measurement depths, using the one-dimensional, unsaturated water flow and transport model HYDRUS (Vogel et al., 1996). For individual measurement depths, the estimated k(s) varied between 9.8 x 10^-12 and 6.1 x 10^-11 m s^-1. The fitted k(s) values fell within the range of previously estimated k(s) for this location and led to a satisfactory description of the data, even though the model did not include transport of water vapor.
Abstract:
A water desaturation zone develops around a tunnel in water-saturated rock when the evaporative water loss at the rock surface is larger than the water flow from the surrounding saturated region of restricted permeability. We describe the methods with which such water desaturation processes in rock materials can be quantified. The water retention characteristic theta(psi) of crystalline rock samples was determined with a pressure membrane apparatus. The negative water potential, identical to the capillary pressure psi, below the tensiometric range (psi < -0.1 MPa) can be measured with thermocouple psychrometers (TP), and the volumetric water contents theta by means of time domain reflectometry (TDR). These standard methods were adapted for measuring the water status in a macroscopically unfissured granodiorite with a total porosity of approximately 0.01. The measured water retention curve of granodiorite samples from the Grimsel test site (central Switzerland) exhibits a shape typical of bimodal pore size distributions. The measured bimodality is probably an artifact of a large solid-to-void surface ratio. The thermocouples were installed without a metallic screen, using the cavity drilled into the granodiorite as a measuring chamber. The water potentials observed in a cylindrical granodiorite monolith ranged between -0.1 and -3.0 MPa; those near the wall in a ventilated tunnel between -0.1 and -2.2 MPa. Two types of three-rod TDR probes were used, one as a depth probe inserted into the rock, the other as a surface probe using three copper strips attached to the surface for detecting water content changes at the rock-to-air boundary. The TDR signal was smoothed with a low-pass filter, and the signal length was determined based on the first derivative of the trace.
Despite the low porosity of crystalline rock, these standard methods are applicable for describing the unsaturated zone in solid rock and may also be used in other consolidated materials such as concrete.
Abstract:
This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting-toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study for intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles in estimating the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to the design based on a logistic regression model, the new design shows more safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on the DTMS model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for the difference in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those from a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within the patients' entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial.
The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.
Abstract:
PURPOSE Based on a nation-wide database, this study analysed the influence of methotrexate (MTX), TNF inhibitors and a combination of the two on uveitis occurrence in JIA patients. METHODS Data from the National Paediatric Rheumatological Database in Germany were used in this study. Between 2002 and 2013, data from JIA patients were annually documented at the participating paediatric rheumatological sites. Patients with JIA disease duration of less than 12 months at initial documentation and ≥2 years of follow-up were included in this study. The impact of anti-inflammatory treatment on the occurrence of uveitis was evaluated by discrete-time survival analysis. RESULTS A total of 3,512 JIA patients (mean age 8.3±4.8 years, female 65.7%, ANA-positive 53.2%, mean age at arthritis onset 7.8±4.8 years) fulfilled the inclusion criteria. Mean total follow-up time was 3.6±2.4 years. Uveitis developed in a total of 180 patients (5.1%) within one year after arthritis onset. Uveitis onset after the first year was observed in another 251 patients (7.1%). DMARD treatment in the year before uveitis onset significantly reduced the risk for uveitis: MTX (HR 0.63, p=0.022), TNF inhibitors (HR 0.56, p<0.001) and a combination of the two (HR 0.10, p<0.001). Patients treated with MTX within the first year of JIA had an even lower uveitis risk (HR 0.29, p<0.001). CONCLUSION The use of DMARDs in JIA patients significantly reduced the risk for uveitis onset. Early MTX use within the first year of disease and the combination of MTX with a TNF inhibitor had the highest protective effect.
Abstract:
BACKGROUND: Several parameters of heart rate variability (HRV) have been shown to predict the risk of sudden cardiac death (SCD) in cardiac patients. There is consensus that risk prediction is improved when measuring HRV during specific provocations such as orthostatic challenge. For the first time, we provide data on the reproducibility of such a test in patients with a history of acute coronary syndrome. METHODS: Sixty male patients (65 ± 8 years) with a history of acute coronary syndrome on stable medication were included. HRV was measured in supine (5 min) and standing (5 min) positions on two occasions separated by two weeks. The time-domain measures relevant for risk assessment [standard deviation of all R-R intervals (SDNN) and root mean square of successive differences between adjacent R-R intervals (RMSSD)], frequency-domain measures [low-frequency power (LF), high-frequency power (HF) and the LF/HF power ratio] and the short-term fractal scaling component (DF1) were computed. Absolute reproducibility was assessed with the standard errors of the mean (SEM) and 95% limits of random variation, and relative reproducibility by the intraclass correlation coefficient (ICC). RESULTS: We found comparable SEMs and ICCs in the supine position and after an orthostatic challenge test. All ICCs were good to excellent (ICCs between 0.636 and 0.869). CONCLUSIONS: Reproducibility of HRV parameters during orthostatic challenge is good and comparable with the supine position.
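The two time-domain measures named above are standard and straightforward to compute from an R-R interval series; the interval values in the test are illustrative, not study data:

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: standard deviation of all R-R intervals (sample SD, ms)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive differences between
    adjacent R-R intervals (ms)."""
    d = np.diff(rr_ms)
    return float(np.sqrt(np.mean(d ** 2)))
```

SDNN reflects overall variability over the recording, while RMSSD emphasizes beat-to-beat (high-frequency, vagally mediated) variability, which is why both are reported.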
Abstract:
MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor-quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified the same way every time by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency-domain skewness and kurtosis, as well as time-domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between the classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine.
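The most important features named above — skewness, kurtosis, and windowed time-domain SNR — can be sketched as follows; the exact feature definitions used in the article may differ, and `window_snr` with its noise-tail convention is an assumption of this sketch:

```python
import numpy as np

def skewness(x):
    """Third standardized moment (population form)."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

def window_snr(fid, t, t0, t1, noise_tail=0.25):
    """Peak |signal| inside the time window [t0, t1) divided by the standard
    deviation of the final `noise_tail` fraction of the record, where the
    decaying time-domain signal has died away and only noise remains."""
    fid, t = np.asarray(fid), np.asarray(t)
    seg = np.abs(fid[(t >= t0) & (t < t1)])
    noise = fid[int(len(fid) * (1 - noise_tail)):]
    return float(seg.max() / np.std(noise))
```

Such scalar features of each spectrum form the input vector from which a random forest classifier can be trained on the expert labels.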
The method presented in this article was implemented in jMRUI's SpectrIm plugin.
Abstract:
The discrete-time Markov chain is commonly used to describe changes of health states for chronic diseases in a longitudinal study. Statistical inferences on comparing treatment effects or on finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have a categorical outcome with missing observations or (2) that use complete or incomplete surrogate observations to analyze the categorical latent outcome. For (1), different missing mechanisms were considered for empirical studies using methods that include the EM algorithm, Monte Carlo EM, and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation. This method was also extended to cover the computation of standard errors. The proposed methods were demonstrated with the schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research were also discussed.
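The forward pass of the forward-backward procedure computes the observation likelihood by recursively propagating scaled state probabilities through the chain. A minimal sketch for a generic HMM (not the dissertation's surrogate-variable model):

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward pass: log-likelihood of an observation sequence.
    pi: initial state probabilities, A[i, j]: transition probability i -> j,
    B[i, o]: probability that hidden state i emits observed symbol o."""
    pi, A, B = map(np.asarray, (pi, A, B))
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik, alpha = np.log(c), alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate and reweight by emission
        c = alpha.sum()
        loglik += np.log(c)             # accumulate log of scaling factor
        alpha /= c                      # rescale to avoid underflow
    return float(loglik)
```

The backward pass is the mirror-image recursion; together they give the state posteriors needed for EM-type parameter estimation.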
Abstract:
Statistical methods are developed which assess survival data for two attributes: (1) prolongation of life and (2) quality of life. Health state transition probabilities correspond to prolongation of life and are modeled as a discrete-time semi-Markov process. Embedded within the sojourn time of a particular health state are the quality-of-life transitions. They reflect events which differentiate perceptions of pain and suffering over a fixed time period. Quality-of-life transition probabilities are derived from the assumptions of a simple Markov process. These probabilities depend on the health state currently occupied and the next health state to which a transition is made. Utilizing the two forms of attributes, the model has the capability to estimate the distribution of expected quality-adjusted life years (in addition to the distribution of expected survival times). The expected quality of life can also be estimated within the health state sojourn time, making the assessment of utility preferences more flexible. The methods are demonstrated on a subset of follow-up data from the Beta Blocker Heart Attack Trial (BHAT). This model contains the structure necessary to make inferences when assessing a general survival problem with a two-dimensional outcome.
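The expected quality-adjusted life years of the simpler, plain Markov special case can be sketched by accumulating utility-weighted state occupancies over cycles (a minimal sketch with assumed utilities, not the semi-Markov model developed above):

```python
import numpy as np

def expected_qalys(P, utilities, start, horizon):
    """Expected quality-adjusted time accrued over `horizon` cycles of a
    discrete-time Markov chain: each cycle, every state contributes its
    occupancy probability times its utility (an absorbing 'dead' state
    carries utility 0)."""
    P = np.asarray(P, float)
    dist = np.asarray(start, float)
    u = np.asarray(utilities, float)
    total = 0.0
    for _ in range(horizon):
        total += float(dist @ u)  # utility accrued this cycle
        dist = dist @ P           # advance the state distribution one cycle
    return total
```

With utilities of 1 for every living state this reduces to expected survival time, which is how the quality-adjusted and unadjusted outcomes sit in one framework.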