Abstract:
Blood-feeding parasites, including schistosomes, hookworms, and malaria parasites, employ aspartic proteases to make initial or early cleavages in ingested host hemoglobin. To better understand the substrate affinity of these aspartic proteases, sequences were aligned and/or three-dimensional molecular models were constructed for the cathepsin D-like aspartic proteases of schistosomes and hookworms and for the plasmepsins of Plasmodium falciparum and Plasmodium vivax, using the structure of human cathepsin D bound to the inhibitor pepstatin as the template. The catalytic subsites S5 through S4' were determined for the modeled parasite proteases. Subsequently, the crystal structure of mouse renin complexed with the nonapeptidyl inhibitor t-butyl-CO-His-Pro-Phe-His-Leu[CHOHCH2]Leu-Tyr-Tyr-Ser-NH2 (CH-66) was used to build homology models of the hemoglobin-degrading peptidases docked with a series of octapeptide substrates. The modeled octapeptides included representative sites in hemoglobin known to be cleaved by both Schistosoma japonicum cathepsin D and human cathepsin D, as well as sites cleaved by one but not the other of these enzymes. The peptidase-octapeptide substrate models revealed that differences in cleavage sites were generally attributable to the influence of a single amino acid change among the P5 to P4' residues that would either enhance or diminish the enzymatic affinity. The difference in cleavage sites appeared to be more profound than might be expected from sequence differences in the enzymes and hemoglobins. The findings support the notion that selective inhibitors of the hemoglobin-degrading peptidases of blood-feeding parasites at large could be developed as novel anti-parasitic agents.
Abstract:
New designs for force-minimized compact high-field clinical MRI magnets are described. The design method is a modified simulated annealing (SA) procedure which includes Maxwell forces in the error function to be minimized. This permits an automated force reduction in the magnet designs while controlling the overall dimensions of the system. As SA optimization requires many iterations to achieve a final design, it is important that each iteration in the procedure is rapid. We have therefore developed a rapid force calculation algorithm. Novel designs for short 3- and 4-T clinical MRI systems are presented in which force reduction has been invoked. The final designs provide large homogeneous regions and reduced stray fields in remarkably short magnets. A shielded 4-T design that is approximately 30% shorter than current designs is presented. This novel magnet generates a full 50-cm diameter homogeneous region.
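The SA procedure described above can be sketched generically. The objective, neighbour move, and geometric cooling schedule below are illustrative assumptions, not the authors' actual magnet-design error function, which combines field-homogeneity and Maxwell-force terms:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=0):
    """Generic SA loop: accept worse states with probability exp(-dE/T)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(n_iter):
        x_new = neighbor(x, rng)
        e_new = energy(x_new)
        # Metropolis acceptance: always take improvements, sometimes take
        # uphill moves while the temperature t is still high.
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy objective standing in for the magnet error function.
best_x, best_e = simulated_annealing(
    lambda x: (x - 3.0) ** 2,
    lambda x, rng: x + rng.gauss(0, 0.5),
    0.0)
```

In the magnet-design setting, `energy` would be a weighted sum of field-inhomogeneity, stray-field, and Maxwell-force terms, which is why a fast force calculation matters: it sits inside every iteration of this loop.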
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and both the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce accelerating release of seismic energy with time prior to a large earthquake.
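The stick-slip dynamics behind such models can be illustrated with a minimal cellular-automaton analogue (uniform loading plus threshold slips that pass stress to neighbours). This is a simplified discrete caricature, not the paper's Burridge-Knopoff or continuum model; the lattice size, transfer fraction `alpha`, and threshold are illustrative assumptions:

```python
import random

def run_slider_model(n=50, steps=2000, alpha=0.2, threshold=1.0, seed=0):
    """Minimal stick-slip lattice: load all blocks uniformly until one
    reaches the failure threshold, then relax by cascading slips, passing
    a fraction `alpha` of each dropped stress to each neighbour."""
    rng = random.Random(seed)
    stress = [rng.uniform(0, threshold) for _ in range(n)]
    event_sizes = []
    for _ in range(steps):
        # Uniform "tectonic" loading up to the next failure.
        j = max(range(n), key=stress.__getitem__)
        increment = threshold - stress[j]
        stress = [s + increment for s in stress]
        stress[j] = threshold  # guard against floating-point round-off
        # Relax until all blocks are below threshold; count slipped blocks.
        size = 0
        unstable = [i for i, s in enumerate(stress) if s >= threshold]
        while unstable:
            i = unstable.pop()
            if stress[i] < threshold:
                continue
            drop = stress[i]
            stress[i] = 0.0
            size += 1
            for k in (i - 1, i + 1):
                if 0 <= k < n:
                    stress[k] += alpha * drop
                    if stress[k] >= threshold:
                        unstable.append(k)
        event_sizes.append(size)
    return event_sizes

sizes = run_slider_model()
```

Because each slip dissipates a fixed fraction of the dropped stress (only `2 * alpha` is redistributed), every cascade terminates, and the resulting `event_sizes` sequence can be examined for the accelerating-release and stress-correlation diagnostics the abstract describes.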
Abstract:
Loss networks have long been used to model various types of telecommunication network, including circuit-switched networks. Such networks often use admission controls, such as trunk reservation, to optimize revenue or stabilize the behaviour of the network. Unfortunately, an exact analysis of such networks is not usually possible, and reduced-load approximations such as the Erlang Fixed Point (EFP) approximation have been widely used. The performance of these approximations is typically very good for networks without controls, under several regimes. There is evidence, however, that in networks with controls, these approximations will in general perform less well. We propose an extension to the EFP approximation that gives marked improvement for a simple ring-shaped network with trunk reservation. It is based on the idea of considering pairs of links together, thus making greater allowance for dependencies between neighbouring links than does the EFP approximation, which only considers links in isolation.
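The single-link reduced-load iteration that the paper extends can be sketched in a few lines. The symmetric-ring setting below (each call uses two adjacent links, each link carries two routes, no trunk reservation) is an illustrative assumption; the paper's pair-based refinement, which tracks pairs of neighbouring links jointly, is not reproduced here:

```python
def erlang_b(load, capacity):
    """Erlang B blocking probability via the standard stable recursion."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = load * b / (c + load * b)
    return b

def efp_symmetric_ring(route_load, capacity, tol=1e-12, max_iter=10_000):
    """Erlang Fixed Point for a symmetric ring: each link carries two
    two-link routes, each thinned by blocking on its other link, so the
    per-link blocking B solves B = E(2 * route_load * (1 - B), capacity)."""
    b = 0.0
    for _ in range(max_iter):
        b_new = erlang_b(2.0 * route_load * (1.0 - b), capacity)
        if abs(b_new - b) < tol:
            break
        b = b_new
    return b

b = efp_symmetric_ring(4.0, 10)
```

The fixed point treats links as independent, which is exactly the approximation that degrades under controls such as trunk reservation; the pair-of-links extension replaces this scalar fixed point with one over joint states of neighbouring links.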
Abstract:
This is a reply to the comment by P Schlottmann and A A Zvyagin.
Abstract:
Many business-oriented software applications are subject to frequent changes in requirements. This paper shows that, ceteris paribus, increases in the volatility of system requirements decrease the reliability of software. Further, systems that exhibit high volatility during the development phase are likely to have lower reliability during their operational phase. In addition to the typically higher volatility of requirements, end-users who specify the requirements of business-oriented systems are usually less technically oriented than people who specify the requirements of compilers, radar tracking systems or medical equipment. Hence, the characteristics of software reliability problems for business-oriented systems are likely to differ significantly from those of more technically oriented systems.
Abstract:
Genetic research on risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk-factors for initiation of use, and risk-factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk-factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases to estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, under certain distributional assumptions, recovery of simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.
Abstract:
This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space. We illustrate the method with reference to several models, including birth-death processes and the birth, death and catastrophe process. (C) 2002 Elsevier Science Inc. All rights reserved.
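For a birth-death chain, the expected path integral h(i) = E_i[∫ f(X_t) dt until absorption] satisfies, by first-step analysis, the tridiagonal system (b_i + d_i) h_i = f_i + b_i h_{i+1} + d_i h_{i-1} with h_0 = 0. The sketch below solves this with the Thomas algorithm; the finite truncation at state n with birth(n) = 0 and absorption at 0 are assumptions of the sketch, not the paper's general countable-state treatment:

```python
def expected_path_integral(birth, death, f, n):
    """Solve for h(i) = E_i[ integral of f(X_t) dt until absorption at 0 ]
    on states {0, ..., n}, assuming birth(n) == 0 and h(0) == 0.
    The system is diagonally dominant, so no pivoting is needed."""
    # Tridiagonal coefficients for states 1..n:
    # (b_i + d_i) h_i - d_i h_{i-1} - b_i h_{i+1} = f_i.
    a = [-death(i) for i in range(1, n + 1)]          # sub-diagonal
    b = [birth(i) + death(i) for i in range(1, n + 1)]  # diagonal
    c = [-birth(i) for i in range(1, n + 1)]          # super-diagonal
    d = [f(i) for i in range(1, n + 1)]
    # Forward elimination (a[0] multiplies h_0 = 0, so it drops out).
    for k in range(1, n):
        m = a[k] / b[k - 1]
        b[k] -= m * c[k - 1]
        d[k] -= m * d[k - 1]
    # Back substitution.
    x = [0.0] * n
    x[n - 1] = d[n - 1] / b[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return [0.0] + x

# Sanity check: pure death at unit rate with f = 1 gives the expected
# absorption time, which is simply h(i) = i.
h = expected_path_integral(lambda i: 0.0, lambda i: 1.0, lambda i: 1.0, 5)
```

Taking f = 1 recovers expected hitting times; other choices of f recover, for example, the expected sojourn cost before extinction in the birth, death and catastrophe setting.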
Abstract:
The splitting method is a simulation technique for the estimation of very small probabilities. In this technique, the sample paths are split into multiple copies, at various stages in the simulation. Of vital importance to the efficiency of the method is the Importance Function (IF). This function governs the placement of the thresholds or surfaces at which the paths are split. We derive a characterisation of the optimal IF and show that for multi-dimensional models the natural choice for the IF is usually not optimal. We also show how nearly optimal splitting surfaces can be derived or simulated using reverse time analysis. Our numerical experiments illustrate that by using the optimal IF, one can obtain a significant improvement in simulation efficiency.
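As a concrete illustration of multilevel splitting, the sketch below estimates the probability that a down-drifting random walk started at 1 reaches a high level before hitting 0, with the current height as the importance function and a splitting threshold at every integer level. The event, walk parameters, and stage sizes are illustrative assumptions; because the entrance state at each threshold is deterministic here, restarting each stage at the threshold is exact:

```python
import random

def stage_probability(start, target, p_up, n_runs, rng):
    """Fraction of walks from `start` that reach `target` before 0."""
    hits = 0
    for _ in range(n_runs):
        x = start
        while 0 < x < target:
            x += 1 if rng.random() < p_up else -1
        if x == target:
            hits += 1
    return hits / n_runs

def splitting_estimate(level, p_up=0.4, n_per_stage=5000, seed=1):
    """Estimate P(reach `level` before 0 | start at 1) as a product of
    conditional stage probabilities, splitting at heights 1, ..., level-1."""
    rng = random.Random(seed)
    prob = 1.0
    for k in range(1, level):
        prob *= stage_probability(k, k + 1, p_up, n_per_stage, rng)
    return prob

est = splitting_estimate(10)
```

For this gambler's-ruin walk the answer is known in closed form, (1 - r) / (1 - r**level) with r = (1 - p_up) / p_up, which makes the example a convenient check; in the multi-dimensional settings the abstract discusses, the entrance distribution at each surface is no longer a single point and the choice of importance function becomes the critical design decision.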
Abstract:
Comparative phylogeography has proved useful for investigating biological responses to past climate change and is strongest when combined with extrinsic hypotheses derived from the fossil record or geology. However, the rarity of species with sufficient, spatially explicit fossil evidence restricts the application of this method. Here, we develop an alternative approach in which spatial models of predicted species distributions under serial paleoclimates are compared with a molecular phylogeography, in this case for a snail endemic to the rainforests of North Queensland, Australia. We also compare the phylogeography of the snail to those from several endemic vertebrates and use consilience across all of these approaches to enhance biogeographical inference for this rainforest fauna. The snail mtDNA phylogeography is consistent with predictions from paleoclimate modeling in relation to the location and size of climatic refugia through the late Pleistocene-Holocene and broad patterns of extinction and recolonization. There is general agreement between quantitative estimates of population expansion from sequence data (using likelihood and coalescent methods) vs. distributional modeling. The snail phylogeography represents a composite of both common and idiosyncratic patterns seen among vertebrates, reflecting the geographically finer scale of persistence and subdivision in the snail. In general, this multifaceted approach, combining spatially explicit paleoclimatological models and comparative phylogeography, provides a powerful approach to locating historical refugia and understanding species' responses to them.