967 results for Conditional personal cure rate
Abstract:
A better-performing product code vector quantization (VQ) method, referred to as sequential split vector quantization (SeSVQ), is proposed for coding the line spectrum frequency (LSF) parameters. The split sub-vectors of the full LSF vector are quantized in sequence, each using the conditional distribution derived from the previously quantized sub-vectors. Unlike the traditional split vector quantization (SVQ) method, SeSVQ exploits the inter-sub-vector correlation and thus provides improved rate-distortion performance, but at the expense of higher memory. We investigate the quantization performance of SeSVQ against traditional SVQ and transform-domain split VQ (TrSVQ). Compared to SVQ, SeSVQ saves 1 bit for telephone-band and nearly 3 bits for wide-band speech coding applications.
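The abstract does not spell out how the conditional distribution is realized. A minimal Python sketch, assuming it takes the form of per-context codebooks selected by the index chosen for the preceding sub-vector (which would also account for the higher memory cost), could look as follows; the split sizes, codebook shapes and function names are illustrative only.

```python
import numpy as np

def quantize(x, codebook):
    """Return (index, codeword) of the nearest codeword to x."""
    d = np.sum((codebook - x) ** 2, axis=1)
    i = int(np.argmin(d))
    return i, codebook[i]

def sesvq_encode(lsf, first_codebook, cond_codebooks, split=(3, 3, 4)):
    """Sequentially quantize LSF sub-vectors.

    first_codebook:  (K, split[0]) array for the first sub-vector.
    cond_codebooks:  cond_codebooks[s][j] is the codebook for sub-vector s
                     conditioned on the index j chosen for sub-vector s-1
                     (one possible realization of the conditional
                     distribution; this detail is an assumption).
    """
    bounds = np.cumsum((0,) + split)
    subs = [lsf[bounds[s]:bounds[s + 1]] for s in range(len(split))]

    indices = []
    prev_idx, _ = quantize(subs[0], first_codebook)
    indices.append(prev_idx)
    for s in range(1, len(split)):
        cb = cond_codebooks[s][prev_idx]      # conditional codebook
        prev_idx, _ = quantize(subs[s], cb)
        indices.append(prev_idx)
    return indices

# toy usage with random codebooks (8 entries each)
rng = np.random.default_rng(0)
split = (3, 3, 4)
first_cb = rng.normal(size=(8, split[0]))
cond_cbs = {s: [rng.normal(size=(8, split[s])) for _ in range(8)]
            for s in range(1, len(split))}
print(sesvq_encode(rng.normal(size=10), first_cb, cond_cbs, split))
```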
Abstract:
A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reasons, searching for related explanatory variables has attracted many enthusiasts. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the products of a complex psychological and economic process and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments hence makes it an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variations in skewness. The second paper decomposes the conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility (6 to 9 percent in the UK and 2 to 3 percent in Germany). The third paper looks at the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that stock returns exhibit several forms of heteroskedasticity, including deterministic changes in variances due to seasonal factors, random adjustments in variances due to market and macro factors, and ARCH processes driven by past information. The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures and that they are costly to diversify, due either to the possible elimination of their desirable parts or to the unsustainability of diversification strategies based on them.
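As a rough illustration of the first paper's mean-variance question, the sketch below estimates a Gaussian ARCH(1)-in-mean model by maximum likelihood using plain NumPy/SciPy. The single Gaussian distribution, the ARCH(1) order and the placeholder return series are assumptions; the book itself compares three conditional distributions and allows time-varying skewness.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r):
    """Gaussian ARCH(1)-in-mean: r_t = c + lam*h_t + e_t, h_t = w + a*e_{t-1}^2."""
    c, lam, w, a = params
    if w <= 0 or a < 0 or a >= 1:
        return np.inf
    n = len(r)
    h = np.empty(n)
    e = np.empty(n)
    h[0] = np.var(r)                    # initialize with the sample variance
    e[0] = r[0] - c - lam * h[0]
    ll = 0.0
    for t in range(1, n):
        h[t] = w + a * e[t - 1] ** 2
        e[t] = r[t] - c - lam * h[t]
        ll += -0.5 * (np.log(2 * np.pi * h[t]) + e[t] ** 2 / h[t])
    return -ll

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 1000)          # placeholder return series
start = np.array([np.mean(r), 0.0, np.var(r) * 0.5, 0.3])
res = minimize(neg_loglik, start, args=(r,), method="Nelder-Mead")
c_hat, lam_hat, w_hat, a_hat = res.x    # lam_hat is the mean-variance ratio
print("estimated mean-variance ratio:", lam_hat)
```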
Abstract:
The increasing variability in device leakage has made the design of keepers for wide OR structures a challenging task. Conventional feedback keepers (CONV) can no longer improve the performance of wide dynamic gates for future technologies. In this paper, we propose an adaptive keeper technique called the rate sensing keeper (RSK) that enables faster switching and tracks variation across different process corners. It can switch up to 1.9x faster (for 20 legs) than CONV and can scale up to 32 legs, as against 20 legs for CONV, in a 130-nm 1.2-V process. The delay tracking is within 8% across the different process corners. We demonstrate the circuit operation of RSK using a 32 x 8 register file implemented in an industrial 130-nm 1.2-V CMOS process. The performance of individual dynamic logic gates is also evaluated on chip for various keeper techniques. We show that the RSK technique gives superior performance compared to alternatives such as the conditional keeper (CKP) and the current mirror-based keeper (LCR).
Abstract:
This paper considers the problem of power management and throughput maximization for energy-neutral operation when using Energy Harvesting Sensors (EHS) to send data over wireless links. It is assumed that the EHS are designed to transmit data at a constant rate (using a fixed modulation and coding scheme) but are power-controlled. A framework is developed under which the system designer can optimize the performance of EHS when the channel is Rayleigh fading. For example, the highest average data rate that can be supported over a Rayleigh fading channel is derived, given the energy harvesting capability, the battery power storage efficiency and the maximum allowed transmit energy per slot. Furthermore, the optimum transmission scheme that guarantees a particular data throughput is derived. The usefulness of the framework is illustrated through simulation results for specific examples.
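The optimal policy derived in the paper is not reproduced here; the Monte Carlo sketch below merely illustrates the kind of trade-off involved, using a simple truncated channel-inversion scheme over Rayleigh fading with a per-slot energy cap and an energy-neutrality budget. All numeric values (harvest rate, storage efficiency, energy cap) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def throughput(R, H, E_max, eta, E_harvest):
    """Average throughput of fixed rate R with truncated channel inversion.

    H: channel power gains (Rayleigh fading -> exponentially distributed).
    A slot is used only if the energy needed to hit rate R fits under the
    per-slot cap E_max; the scheme is declared infeasible if the average
    spent energy exceeds the usable harvested energy per slot.
    (Illustrative scheme only; the paper derives the optimal policy.)
    """
    E_req = (2.0 ** R - 1.0) / H               # energy needed to reach rate R
    usable = E_req <= E_max                     # per-slot energy cap
    spent = np.where(usable, E_req, 0.0)
    if spent.mean() > eta * E_harvest:          # energy-neutrality budget
        return 0.0
    return R * usable.mean()                    # rate times fraction of used slots

H = rng.exponential(1.0, 100_000)               # unit-mean Rayleigh power gains
rates = np.linspace(0.1, 4.0, 40)
best = max(rates, key=lambda R: throughput(R, H, E_max=5.0, eta=0.8, E_harvest=1.0))
print("best fixed rate (bits/s/Hz):", best)
```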
Abstract:
A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include the decoding technique of Sirianunpiboon et al. as a special case. We show that both the ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all five of these codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and the best error performance among all known codes for these numbers of transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
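As a generic illustration of the conditional zero-forcing idea (not the code-specific structure that ACZF/ACZF-SIC exploit), the sketch below enumerates candidates for a conditioned subset of symbols, zero-forces the remaining symbols, and keeps the candidate with the smallest residual. The 4-QAM constellation and the random effective channel are toy assumptions.

```python
import numpy as np
from itertools import product

def conditional_zf_decode(y, H, const, n_cond):
    """Decode y = H x + n by conditioning on the first n_cond symbols.

    For every candidate of the conditioned symbols, the remaining symbols
    are zero-forced and sliced to the constellation; the candidate with
    the smallest residual wins.  Generic conditional-ZF illustration only.
    """
    H_a, H_b = H[:, :n_cond], H[:, n_cond:]
    H_b_pinv = np.linalg.pinv(H_b)
    best_metric, best_x = np.inf, None
    for cand in product(const, repeat=n_cond):
        x_a = np.array(cand)
        r = y - H_a @ x_a                       # cancel conditioned symbols
        x_b_soft = H_b_pinv @ r                 # zero-forcing estimate
        # slice each entry to the nearest constellation point
        x_b = const[np.argmin(np.abs(x_b_soft[:, None] - const[None, :]), axis=1)]
        metric = np.linalg.norm(y - H_a @ x_a - H_b @ x_b) ** 2
        if metric < best_metric:
            best_metric, best_x = metric, np.concatenate([x_a, x_b])
    return best_x

# toy example with 4-QAM and a random 4x4 effective channel
rng = np.random.default_rng(2)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = const[rng.integers(0, 4, size=4)]
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.allclose(conditional_zf_decode(y, H, const, n_cond=2), x))
```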
Abstract:
A multi-dimensional combustion code implementing the Conditional Moment Closure (CMC) turbulent combustion model, interfaced with a well-established RANS two-phase flow field solver, has been employed to study a broad range of operating conditions for a heavy-duty direct-injection common-rail Diesel engine. These conditions include different loads (25%, 50%, 75% and full load) and engine speeds (1250 and 1830 RPM) and, with respect to the fuel path, different injection timings and rail pressures. A total of nine cases have been simulated. Excellent agreement with experimental data has been found for the pressure traces and the heat release rates, without adjusting any model constants. The chemical mechanism used contains a detailed NOx sub-mechanism. The predicted emissions agree reasonably well with the experimental data, considering the range of operating points and given that no adjustments of any rate constants have been made. In an effort to identify CPU cost reduction potential, various dimensionality reduction strategies have been assessed. Furthermore, the sensitivity of the predictions to resolution, in particular the CMC grid resolution, has been investigated. Overall, the results suggest that the presented modelling strategy has considerable predictive capability for Diesel engine combustion without requiring model constant calibration against experimental data. This is true particularly for the heat release rate predictions and, to a lesser extent, for NOx emissions, where further progress is still necessary. © 2009 SAE International.
Abstract:
A numerical method to estimate the temperature distribution during the cure of an epoxy-terminated poly(phenylene ether ketone) (E-PEK)-based composite is suggested. The effect of the temperature distribution on the selection of the cure cycle is evaluated using a proposed alternation criterion. The effects of varying heating rate and thickness on the temperature distribution, the viscosity distribution and the distribution of the extent of cure reaction are discussed, based on combining the temperature distribution model established here with the previously established curing kinetics and chemorheological models. It is found that, for a thin composite (≤10 mm) and a low heating rate (≤2.5 K/min), the effect of the temperature distribution on the cure cycle and on the processing window for pressure application can be neglected. A low heating rate helps to reduce the temperature gradient. The processing window for pressure application becomes narrower with increasing composite sheet thickness. The validity of the temperature distribution model and the modified processing window is evaluated through the characterization of the mechanical and physical properties of E-PEK-based composites fabricated under different temperature distribution conditions.
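The paper's kinetics and chemorheological models are not given in the abstract; the sketch below only shows the general structure of such a through-thickness calculation, using an explicit 1-D finite-difference scheme, an assumed nth-order Arrhenius cure law and placeholder material constants, driven by a 2.5 K/min surface ramp.

```python
import numpy as np

# Minimal 1-D explicit finite-difference sketch of the through-thickness
# temperature during cure.  Material constants and the nth-order kinetics
# are placeholders, not the E-PEK parameters used in the paper.
L, N = 0.01, 51                      # thickness [m], grid points
dx = L / (N - 1)
k, rho, cp = 0.4, 1500.0, 1200.0     # W/mK, kg/m3, J/kgK (assumed values)
diff = k / (rho * cp)                # thermal diffusivity
dH = 3.0e5                           # total heat of reaction [J/kg] (assumed)
A, Ea, n = 1.0e5, 7.0e4, 1.5         # Arrhenius kinetics (assumed)
R = 8.314
ramp = 2.5 / 60.0                    # heating rate: 2.5 K/min in K/s

dt = 0.4 * dx**2 / diff              # stability limit for the explicit scheme
T = np.full(N, 298.15)               # initial temperature [K]
a = np.zeros(N)                      # degree of cure

t = 0.0
while t < 3600.0:                    # simulate one hour of the ramp
    T_wall = 298.15 + ramp * t       # oven follows the heating rate
    T[0] = T[-1] = T_wall
    da = A * np.exp(-Ea / (R * T)) * (1.0 - a) ** n * dt
    a = np.minimum(a + da, 1.0)
    lap = np.zeros(N)
    lap[1:-1] = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
    T = T + dt * (diff * lap + dH * (da / dt) / cp)
    t += dt

print("midplane temperature excess over wall [K]:", T[N // 2] - (298.15 + ramp * t))
```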
Abstract:
Assuming that daily spot exchange rates follow a martingale process, we derive the implied time series process for the vector of 30-day forward rate forecast errors using weekly data. The conditional second moment matrix of this vector is modelled as a multivariate generalized ARCH process. The estimated model is used to test the hypothesis that the risk premium is a linear function of the conditional variances and covariances, as suggested by the standard asset pricing theory literature. Little support is found for this theory; instead, lagged changes in the forward rate appear to be correlated with the 'risk premium'. © 1990.
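A minimal stand-in for the test described, assuming an EWMA (RiskMetrics-style) conditional covariance in place of the paper's multivariate generalized ARCH model and placeholder forecast-error series: regress the errors on the fitted conditional variances and covariances and inspect the coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
e = rng.normal(size=(500, 2))        # placeholder forecast errors, 2 currencies
lam = 0.94                            # EWMA decay (assumed, RiskMetrics-style)

# EWMA conditional covariance as a simple stand-in for multivariate GARCH
H = np.empty((len(e), 2, 2))
H[0] = np.cov(e.T)
for t in range(1, len(e)):
    H[t] = lam * H[t - 1] + (1 - lam) * np.outer(e[t - 1], e[t - 1])

# Is the risk premium linear in conditional (co)variances?
# Regress e_{1,t} on [1, h11_t, h22_t, h12_t] by least squares.
X = np.column_stack([np.ones(len(e)), H[:, 0, 0], H[:, 1, 1], H[:, 0, 1]])
beta, *_ = np.linalg.lstsq(X, e[:, 0], rcond=None)
print("risk-premium coefficients on h11, h22, h12:", beta[1:])
```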
Abstract:
Microwave curing of conductive adhesives and underfills can save considerable time and offer cost benefits for the microsystems and electronics packaging industry. In contrast to conventional ovens, curing by microwave energy generates heat internally within each individual component of an assembly. The rate at which heat is generated differs for each of the components and depends on the material properties as well as the oven power and frequency. This leads to a very complex and transient thermal state, which is extremely difficult to measure experimentally. Conductive adhesives need to be raised to a minimum temperature to initiate the cross-linking of the resin polymers, whilst some advanced packaging materials currently under investigation impose a maximum temperature constraint to avoid damage. Thermal imaging equipment integrated with the microwave oven can offer some information on the thermal state, but such data are based on the surface temperatures. This paper describes computational models that can simulate the internal temperatures within each component of an assembly, including the critical region between the chip and substrate. The results obtained demonstrate that, due to the small mass of adhesive used in the joints, the temperatures reached are highly dependent on the material properties of the adjacent chip and substrate.
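A lumped-capacitance sketch of the effect described, with assumed absorbed powers, heat capacities and inter-component thermal resistances: because the adhesive's own heat capacity is tiny, its temperature is governed largely by exchange with the adjacent chip and substrate rather than by its own microwave absorption.

```python
import numpy as np

# Lumped-capacitance sketch of microwave heating of a chip / adhesive /
# substrate stack.  Absorbed powers, masses and thermal resistances are
# illustrative assumptions, not measured values.
names = ["chip", "adhesive", "substrate"]
m_cp = np.array([2e-4 * 700.0, 5e-6 * 1500.0, 1e-3 * 900.0])   # mass*cp [J/K]
P    = np.array([0.05, 0.8, 0.3])          # absorbed microwave power [W]
R_chip_adh, R_adh_sub = 20.0, 15.0         # thermal resistances [K/W]

T = np.full(3, 25.0)                       # start at room temperature [C]
dt, t_end = 0.01, 120.0
for _ in range(int(t_end / dt)):
    q_ca = (T[0] - T[1]) / R_chip_adh      # chip -> adhesive heat flow
    q_as = (T[1] - T[2]) / R_adh_sub       # adhesive -> substrate heat flow
    dT = np.array([P[0] - q_ca,
                   P[1] + q_ca - q_as,
                   P[2] + q_as]) / m_cp
    T = T + dt * dT

for comp, temp in zip(names, T):
    print(f"{comp:9s} {temp:6.1f} C")
```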
Abstract:
A review of polymer cure models used in microelectronics packaging applications reveals no clear consensus on the chemical rate constants for the cure reactions, or even on an effective model. The problem lies in the contrast between the actual cure process, which involves a sequence of distinct chemical reactions, and the models, which typically assume only one (or two, with some restrictions on the independence of their characteristic constants). The standard techniques for determining the model parameters are based on differential scanning calorimetry (DSC), which cannot distinguish between the reactions and hence yields results useful only under the same conditions, which completely misses the point of modeling. The obvious solution is for manufacturers to provide the modeling parameters; failing that, an alternative experimental technique is required to determine the individual reaction parameters, e.g. Fourier transform infra-red spectroscopy (FTIR).
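To illustrate the point about lumped models, the sketch below generates conversion data from an assumed two-reaction process and then fits a single nth-order Arrhenius model to one DSC-style ramp; the fitted constants reproduce that ramp but carry no guarantee away from it. All kinetic parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314

def cure_rate(alpha, T, A, Ea, n):
    """Single nth-order cure model: d(alpha)/dt = A exp(-Ea/RT) (1-alpha)^n."""
    return A * np.exp(-Ea / (R * T)) * np.clip(1.0 - alpha, 0.0, None) ** n

def simulate(rate_fn, T_of_t, t):
    """Euler-integrate a cure-rate law along a prescribed temperature ramp."""
    alpha = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        alpha[i] = alpha[i - 1] + rate_fn(alpha[i - 1], T_of_t[i - 1]) * dt
    return np.clip(alpha, 0.0, 1.0)

# "True" process: two distinct reactions (assumed parameters).
def true_rate(a, T):
    return (5e4 * np.exp(-6.5e4 / (R * T)) * (1 - a)
            + 2e6 * np.exp(-8.5e4 / (R * T)) * (1 - a) ** 2)

t = np.linspace(0.0, 3600.0, 2000)
T_ramp = 300.0 + (5.0 / 60.0) * t                 # 5 K/min DSC-style ramp
alpha_true = simulate(true_rate, T_ramp, t)

# Fit a single-reaction model to this one ramp (what DSC-based fitting does).
def resid(p):
    A, Ea, n = p
    return simulate(lambda a, T: cure_rate(a, T, A, Ea, n), T_ramp, t) - alpha_true

fit = least_squares(resid, x0=[1e5, 7e4, 1.5],
                    bounds=([1.0, 1e4, 0.1], [1e12, 2e5, 4.0]))
print("lumped A, Ea, n:", fit.x)   # valid only near this heating rate
```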