930 results for Bit error rate algorithm
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault at the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a way to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales that are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted against holding time. Simulations with a flat fault without fault gouge were performed to verify the implementation; these showed close agreement with comparable laboratory experiments. The simulations performed with a fault containing fault gouge demonstrated a strong dependence of the critical slip distance D_c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
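For reference, the rate- and state-dependent framework such contact laws are usually built on is the Dieterich-Ruina formulation with the aging law (a sketch of the canonical form; the exact law applied at the particle contacts may differ):

```latex
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

Here \mu_0 is the friction coefficient at the reference sliding velocity V_0, a and b are material constants, and \theta is the state variable. In a slide-hold-slide cycle the state variable grows during the hold, which is why peak friction on re-slide increases roughly with the logarithm of holding time, as observed by DIETERICH and KILGORE (1994).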
Abstract:
This paper analyses the exchange rate exposure displayed by a sample of Australian international equity trusts (IETs). Exchange rate exposure is also examined in the context of differing economic climates, with particular emphasis on the Asian crisis in mid-1997. Evidence of exchange rate exposure is found, particularly in the context of a multiple exchange rate model. Exposure varies substantially across three alternative time periods, with different exposure apparent after the Asian crisis than before it.
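A typical specification for this kind of study is an augmented market model with one or several exchange rate factors (a sketch of the standard Jorion-style regression; the paper's exact variables are not given in the abstract):

```latex
R_{i,t} = \alpha_i + \beta_i R_{m,t} + \sum_{j=1}^{k} \gamma_{i,j}\,\Delta s_{j,t} + \varepsilon_{i,t}
```

where R_{i,t} is the return on trust i, R_{m,t} the market return, and \Delta s_{j,t} the percentage change in exchange rate j. The multiple exchange rate model corresponds to k > 1 individual currencies rather than a single trade-weighted index, and the \gamma_{i,j} are the exposure coefficients compared across sub-periods.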
Abstract:
The effect of heating and cooling on heart rate in the estuarine crocodile Crocodylus porosus was studied in response to different heat transfer mechanisms and heat loads. Three heating treatments were investigated. C. porosus were: (1) exposed to a radiant heat source under dry conditions; (2) heated via radiant energy while half-submerged in flowing water at 23°C and (3) heated via convective transfer by increasing water temperature from 23°C to 35°C. Cooling was achieved in all treatments by removing the heat source and with C. porosus half-submerged in flowing water at 23°C. In all treatments, the heart rate of C. porosus increased markedly in response to heating and decreased rapidly with the removal of the heat source. Heart rate during heating was significantly faster than during cooling at any given body temperature, i.e. there was a significant heart rate hysteresis. There were two identifiable responses to heating and cooling. During the initial stages of applying or removing the heat source, there was a dramatic increase or decrease in heart rate ('rapid response'), respectively, indicating a possible cardiac reflex. This rapid change in heart rate with only a small change or no change in body temperature (
Abstract:
This paper investigates the robustness of a range of short-term interest rate models. We examine the robustness of these models over different data sets, time periods, sampling frequencies, and estimation techniques. We examine a range of popular one-factor models that allow the conditional mean (drift) and conditional variance (diffusion) to be functions of the current short rate. We find that parameter estimates are highly sensitive to all of these factors in the eight countries that we examine. Since parameter estimates are not robust, these models should be used with caution in practice.
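The one-factor family described is often written in the CKLS form, in which both drift and diffusion depend on the current short rate (an assumed parameterization; the abstract does not name the specific models tested):

```latex
dr_t = (\alpha + \beta r_t)\,dt + \sigma\, r_t^{\gamma}\, dW_t
```

Parameter restrictions recover the popular special cases, e.g. \gamma = 0 gives the Vasicek model and \gamma = 1/2 the Cox-Ingersoll-Ross model, which is why estimates of \gamma in particular are a natural target for robustness checks across data sets and estimation techniques.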
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol.
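The two components described can be written out as follows (a sketch of the standard specification; the notation is illustrative):

```latex
\Pr(J = j \mid \mathbf{x}) = \frac{\exp(\mathbf{x}^{\top}\boldsymbol{\alpha}_j)}{\sum_{l}\exp(\mathbf{x}^{\top}\boldsymbol{\alpha}_l)},
\qquad
\lambda_j(t \mid \mathbf{x}) = \lambda_{0j}(t)\exp(\mathbf{x}^{\top}\boldsymbol{\beta}_j)
```

where J indexes the failure type, the logistic component gives the probability of failing from cause j, and the proportional hazards component gives the hazard conditional on that cause, with the component-baseline hazards \lambda_{0j}(t) left completely unspecified. The ECM algorithm alternates between updating the parametric coefficients (\boldsymbol{\alpha}_j, \boldsymbol{\beta}_j) and the nonparametric baselines.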
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector: either the basis vectors need to be kept, or the Lanczos process needs to be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
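The key observation such an algorithm can exploit is that for a dot product u^T f(A) v only the scalars q_j^T u are needed, not the basis vectors q_j themselves. Below is a minimal sketch of that idea (illustrative only, not the paper's actual algorithm; names and the choice f = exp are assumptions):

```python
# Sketch: approximate u^T f(A) v for symmetric A without storing the
# Lanczos basis, by accumulating only the scalars c_j = q_j^T u.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_dot(A, u, v, f, m=30):
    """Approximate u^T f(A) v via m Lanczos steps started from v."""
    beta0 = np.linalg.norm(v)
    q_prev = np.zeros_like(v)
    q = v / beta0
    alphas, betas, cs = [], [], []
    for _ in range(m):
        cs.append(q @ u)                       # the only trace of q_j we keep
        w = A @ q
        alpha = q @ w
        w -= alpha * q + (betas[-1] if betas else 0.0) * q_prev
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:                       # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    # f(T_m) e_1 via the eigendecomposition of the tridiagonal T_m
    theta, S = eigh_tridiagonal(np.array(alphas), np.array(betas[:len(alphas) - 1]))
    fT_e1 = S @ (f(theta) * S[0, :])
    # u^T f(A) v  ~=  beta0 * c^T f(T_m) e_1
    return beta0 * np.asarray(cs) @ fT_e1

# Toy check with f = exp on a random symmetric matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
u, v = rng.standard_normal(200), rng.standard_normal(200)
approx = lanczos_dot(A, u, v, np.exp, m=60)
```

The storage cost is O(m) scalars rather than O(mn) for the basis, which is the advantage the abstract refers to.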
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small-timestep updates are performed by interpolating the displacement at neighbouring large-timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to a lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability; however, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized-alpha method, it is possible to gain stability by dissipating the high-frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface.
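For orientation, the explicit central difference scheme that these subcycling methods extend has the standard form below; the second expression is a sketch of the impulse-summation idea described in the abstract, not the paper's exact formulation:

```latex
\mathbf{u}^{n+1} = 2\mathbf{u}^{n} - \mathbf{u}^{n-1} + \Delta t^{2}\,\mathbf{M}^{-1}\mathbf{f}^{n},
\qquad
\Delta\mathbf{v} = \mathbf{M}^{-1}\sum_{e} \mathbf{f}_{e}\,\Delta t_{e}
```

Here \mathbf{M} is the (lumped) mass matrix and \mathbf{f}^{n} the net nodal force; the velocity change \Delta\mathbf{v} at an interface node is predicted by accumulating the impulses of the element internal forces \mathbf{f}_{e}, each evaluated over its own local timestep \Delta t_{e}, which is what restores momentum balance across the timestep interface.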
Abstract:
The Crim1 gene is predicted to encode a transmembrane protein containing six von Willebrand-like cysteine-rich repeats (CRRs) similar to those in the BMP-binding antagonist Chordin (Chrd). In this study, we verify that CRIM1 is a glycosylated, Type I transmembrane protein and demonstrate that the extracellular CRR-containing domain can also be secreted, presumably via processing at the membrane. We have previously demonstrated Crim1 expression at sites consistent with an interaction with bone morphogenetic proteins (BMPs). Here we show that CRIM1 can interact with both BMP4 and BMP7 via the CRR-containing portion of the protein and in so doing acts as an antagonist in three ways. CRIM1 binding of BMP4 and -7 occurs when these proteins are co-expressed within the Golgi compartment of the cell and leads to (i) a reduction in the production and processing of preprotein to mature BMP, (ii) tethering of pre-BMP to the cell surface, and (iii) an effective reduction in the secretion of mature BMP. Functional antagonism was verified by examining the effect of coexpression of CRIM1 and BMP4 on metanephric explant culture. The presence of CRIM1 reduced the effective BMP4 concentration of the media, thereby acting as a BMP4 antagonist. Hence, CRIM1 modulates BMP activity by affecting its processing and delivery to the cell surface.
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance; however, they are known to carry significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations, but there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvement over the existing techniques in certain classes of networks. It also provides reliability bounds with little overhead.
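The hybrid schemes discussed are variance-reduction layers on top of estimators like the crude Monte Carlo baseline sketched below (the graph, edge reliability, and terminal choice are illustrative assumptions, not from the paper):

```python
# Crude Monte Carlo baseline for two-terminal network reliability:
# sample each edge up/down independently, then test s-t connectivity
# with a small union-find.  Illustrative sketch only.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def connected(n, up_edges, s, t):
    parent = list(range(n))
    for a, b in up_edges:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    return find(parent, s) == find(parent, t)

def reliability_mc(n, edges, p, s, t, trials=50_000, seed=1):
    """Estimate P(s and t connected) when each edge works independently w.p. p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() < p]
        if connected(n, up, s, t):
            hits += 1
    return hits / trials

# Example: small 2x3 grid with edge reliability 0.9
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(reliability_mc(6, edges, p=0.9, s=0, t=5))
```

For highly reliable networks the failure probability is tiny, so this naive estimator needs enormous sample sizes, which is exactly the regime where graph-evolution and hybrid schemes pay off.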
Abstract:
For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) where genotypic data were available for zygosity confirmation, only a single case was identified of incorrect zygosity assignment by the latent class algorithm. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
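A 2-class latent class model of this kind can be fitted with a short EM loop. The sketch below is a minimal illustration with simulated binary items (the item set, starting values, and data are invented, not the study's):

```python
# Minimal EM for a 2-class latent class analysis of binary zygosity
# items (1 = "alike", 0 = "unlike").  Illustrative sketch only.
import numpy as np

def lca_em(X, n_iter=200, seed=0):
    """X: (n_subjects, n_items) binary matrix.  Returns class
    prevalences, item-endorsement probabilities, and posteriors."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    pi = np.array([0.5, 0.5])                   # class prevalences
    theta = rng.uniform(0.3, 0.7, size=(2, k))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior P(class | responses) per subject
        logp = np.log(pi) + X @ np.log(theta.T) + (1 - X) @ np.log(1 - theta.T)
        logp -= logp.max(axis=1, keepdims=True)
        post = np.exp(logp)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate prevalences and item probabilities
        pi = post.mean(axis=0)
        theta = ((post.T @ X) / post.sum(axis=0)[:, None]).clip(1e-6, 1 - 1e-6)
    return pi, theta, post

# Toy data: two simulated classes ("MZ-like" vs "DZ-like") on 4 items
rng = np.random.default_rng(1)
z = rng.random(500) < 0.5
truth = np.where(z[:, None], [0.95, 0.9, 0.9, 0.85], [0.1, 0.2, 0.15, 0.25])
X = (rng.random((500, 4)) < truth).astype(float)
pi, theta, post = lca_em(X)
# post[i] is the probability of each zygosity class for respondent i
```

The per-respondent posteriors are what make the method useful here: they quantify the probability of misassignment for a single informant, such as the 76% case flagged in the abstract.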
Abstract:
The nitrogen removal capacity of a suspended culture system treating mature landfill leachate was investigated. Leachate containing high ammonium levels of 300-900 mg N/L was nitrified in a bench-scale sequencing batch reactor. Leachate from four different landfills was treated over a two-year period for the removal of nitrogen. In this time, a highly specific nitrifying culture was attained that delivered exceptionally high rates of ammonia removal. No sludge was wasted from the system, to increase the throughput, and up to 13 g/L of MLSS was obtained. Settleability of the purely nitrifying biomass was excellent, with SVI less than 40 mL/g even at the high sludge concentrations. Nitrification rates up to 246 mg N/(L h) (5.91 g N/(L d)) and specific nitrification rates of 36 mg N/(gVSS h) (880 mg N/(gVSS d)) were obtained. The loading to the system at this time allowed complete nitrification of the leachate with a hydraulic retention time of only 5 hours. Following these successful treatability studies, a full-scale plant was designed and built at one of the landfills investigated.
Abstract:
Objective: To develop a 'quality use of medicines' coding system for the assessment of pharmacists' medication reviews and to apply it to an appropriate cohort. Method: A 'quality use of medicines' coding system was developed based on findings in the literature. These codes were then applied to 216 (111 intervention, 105 control) veterans' medication profiles by an independent clinical pharmacist, supported by a clinical pharmacologist, with the aim of assessing the appropriateness of pharmacy interventions. The profiles were provided for veterans participating in a randomised, controlled trial in private hospitals evaluating the effect of medication review and discharge counselling. The reliability of the coding was tested by two independent clinical pharmacists in a random sample of 23 veterans from the study population. Main outcome measure: Interrater reliability was assessed by applying Cohen's kappa score to aggregated codes. Results: The coding system based on the literature consisted of 19 codes. The results from the three clinical pharmacists suggested that the original coding system had two major problems: (a) a lack of discrimination between certain recommendations, e.g. adverse drug reactions, toxicity and mortality may be seen as variations in degree of a single effect, and (b) certain codes, e.g. essential therapy, had low prevalence. The interrater reliability for an aggregation of all codes into positive, negative and clinically non-significant codes ranged from 0.49-0.58 (good to fair). The interrater reliability increased to 0.72-0.79 (excellent) when all negative codes were excluded. Analysis of the sample of 216 profiles showed that the most prevalent recommendations from the clinical pharmacists were a positive impact in reducing adverse responses (31.9%), an improvement in good clinical pharmacy practice (25.5%) and a positive impact in reducing drug toxicity (11.1%). Most medications were assigned the clinically non-significant code (96.6%). The interventions led to a statistically significant difference in pharmacist recommendations in the categories adverse response, toxicity and good clinical pharmacy practice, as measured by the quality use of medicines coding system. Conclusion: It was possible to use the quality use of medicines coding system to rate the quality and potential health impact of pharmacists' medication reviews, and the system did pick up differences between intervention and control patients. The interrater reliability for the summarised coding system was fair, but a larger sample of medication regimens is needed to assess the non-summarised quality use of medicines coding system.
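Cohen's kappa, the agreement statistic used here, is straightforward to compute from two raters' label sequences. A small self-contained check (the aggregated labels and counts below are invented for illustration only):

```python
# Cohen's kappa for two raters' aggregated codes
# (positive / negative / clinically non-significant).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e) for two equal-length label lists."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "pos", "neg", "ns", "pos", "ns", "neg", "pos", "ns", "ns"]
b = ["pos", "ns",  "neg", "ns", "pos", "pos", "neg", "pos", "ns", "ns"]
print(round(cohens_kappa(a, b), 2))
```

Because kappa corrects observed agreement for chance agreement, it penalises rare categories heavily, which is consistent with the reliability improving once the low-prevalence negative codes were excluded.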