974 results for Mean linear intercept
Abstract:
Recent X-ray observations have revealed that early-type galaxies (which usually produce extended double radio sources) generally have hot gaseous haloes extending up to ≈10^2 kpc [1,2]. Moreover, much of the cosmic X-ray background radiation is probably due to a hotter, but extremely tenuous, intergalactic medium (IGM) [3]. We have presented [4-7] an analytical model for the propagation of relativistic beams from galactic nuclei, in which the beams' crossing of the pressure-matched interface between the gaseous halo and the IGM plays an important role. The hotspots at the ends of the beams fade quickly once their advance becomes subsonic with respect to the IGM. This model has successfully predicted (for typical double radio sources) the observed [8] current mean linear size (2D ≃ 350 kpc) [4,5], the observed [8-11] decrease in linear size with cosmological redshift [4-6], and the slope of the linear-size versus radio-luminosity relation [6,10,12-14]. We have also been able to predict the redshift dependence of the observed numbers and radio luminosities of giant radio galaxies [7,15]. Here, we extend this model to include the propagation of somewhat weaker beams. We show that the observed flattening of the local radio luminosity function (LRLF) [16-20] at radio luminosity P ≈ 10^24 W Hz^-1 at 1 GHz can be explained without invoking an ad hoc break in the beam power function Φ(L_b): the heads of beams with L_b < 10^25 W Hz^-1 are decelerated to sonic velocity within the halo itself, which leads to a rapid decay of radio luminosity and a reduced contribution of these intrinsically weaker sources to the observed LRLF.
Abstract:
We study dynamical properties of quantum entanglement in the Dicke model with and without the rotating-wave approximation. Specifically, we investigate the maximal entanglement and the mean entanglement, which reflect the underlying chaos in the system, and find a good classical–quantum correspondence. We also show that the maximal linear entropy can be more sensitive to chaos than the mean linear entropy.
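For readers unfamiliar with the diagnostic: the linear entropy of a subsystem is S_L = 1 − Tr(ρ_A²), and the abstract's two summary statistics are its maximum and mean over time. Below is a minimal sketch of computing both, using a toy two-qubit Hamiltonian as a stand-in; the Dicke model itself and any parameter values are not reproduced here.

```python
# A minimal sketch (not the authors' code) of the two diagnostics named in the
# abstract: the time series of linear entropy S_L = 1 - Tr(rho_A^2) of a
# subsystem, summarized by its maximum and its time average. The two-qubit
# Hamiltonian below is a stand-in, not the Dicke model itself.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.kron(sx, sx)                              # toy coupling Hamiltonian (assumption)

psi0 = np.kron([1, 0], [1, 0]).astype(complex)   # initial state |00>
times = np.linspace(0.0, 10.0, 400)

def linear_entropy(psi):
    """S_L of qubit A for a pure two-qubit state psi."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)      # partial trace over qubit B
    return 1.0 - np.real(np.trace(rho_a @ rho_a))

S = [linear_entropy(expm(-1j * H * t) @ psi0) for t in times]
print(f"max S_L = {max(S):.4f}, mean S_L = {np.mean(S):.4f}")
```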
Abstract:
An unfolding method for linear intercept distributions and section area distributions was implemented for structures with spherical grains. Although the unfolding routine depends on the grain shape, structures with spheroidal grains can also be treated by this routine; grains of non-spheroidal shape can be treated only approximately. A software package was developed in two parts: the first part calculates the probability matrix, and the second part uses this matrix and minimizes the chi-square. The results are presented with any number of size classes, as required. The probability matrix was determined by means of linear intercept and section area distributions created by computer simulation. Using curve fitting, the probability matrix for spheres of any size could be determined. Two kinds of tests were carried out to prove the efficiency of the technique. The theoretical tests represent ideal cases, and the software was able to recover the proposed grain size distribution exactly. In the second test, a structure was simulated on a computer and images of its slices were used to produce the corresponding linear intercept and section area distributions, which were then unfolded. This test is a better simulation of reality, and the results show deviations from the real size distribution caused by statistical fluctuations. The unfolding of the linear intercept distribution works perfectly, but the unfolding of the section area distribution does not, owing to a failure in the chi-square minimization: the minimization method uses a matrix inversion routine, and the matrix generated by this procedure cannot be inverted. Another minimization method must be used; one possible alternative is sketched below.
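Since the abstract leaves the replacement minimization method open, one inversion-free possibility is non-negative least squares. A minimal sketch under that assumption follows; the probability matrix and size distribution are illustrative placeholders, not values produced by the software described above.

```python
# A minimal sketch, not the thesis software: unfold a measured linear-intercept
# histogram h into a 3D grain-size distribution g by solving  min ||P g - h||^2
# subject to g >= 0. NNLS needs no explicit inversion of P (or of P^T P), which
# is the step reported to fail in the chi-square minimization.
import numpy as np
from scipy.optimize import nnls

# Illustrative 4x4 probability matrix (rows: intercept classes, columns: true
# sphere-size classes); a real P comes from the simulated intercept statistics.
P = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.30, 0.45, 0.25, 0.15],
    [0.08, 0.22, 0.45, 0.30],
    [0.02, 0.08, 0.20, 0.50],
])
g_true = np.array([10.0, 40.0, 35.0, 15.0])   # hypothetical true distribution
h = P @ g_true                                 # ideal (noise-free) measurement

g_est, residual = nnls(P, h)
print("recovered size classes:", np.round(g_est, 2), "residual:", residual)
```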
Abstract:
The mature dentinoenamel junction (DEJ) is viewed by some investigators, including the current authors, not as a fossilized, sharp transition between enamel and dentin, but as a relatively broad structural transition zone comprising the mantle dentin and the inner aprismatic enamel. In this study, the DEJ structure in bovine incisors was studied with synchrotron micro-computed tomography (microCT) using small cubes cut parallel to the tooth surface. The reconstructions revealed a zone of highly variable punctate contrast between bulk dentin and enamel; the mean linear attenuation coefficients and their standard deviations demonstrated that this zone averaged less mineral than dentin or enamel but had a more highly variable structure than either. The region with the punctate contrast is, therefore, the mantle dentin. The thickness of the mantle dentin seen in a typical data set was about 30 μm, and the mantle dentin-enamel interface deviated ±15 μm from the average plane over a distance of 520 μm. In the highest-resolution data (~1.5 μm isotropic voxels, volume elements), tubules in the dentin could be discerned in the vicinity of the DEJ. Contrast sensitivity was high enough to detect differences in mineral content between near-surface and near-DEJ volumes of the enamel. Reconstructions made before and after two cubes were compressed to failure revealed that cracks formed only in the enamel and did not propagate across the mantle dentin, regardless of whether loading was parallel or perpendicular to the DEJ.
Abstract:
Mineralogical analysis is often used to assess the liberation properties of particles. A direct method of estimating liberation is to actually break particles and then obtain liberation information by applying mineralogical analysis to each size class of the product. Another technique is to artificially apply random breakage to the feed particle sections to estimate the resultant distribution of product particle sections; this provides a useful alternative estimation method. Because this technique is applied to particle sections, the actual liberation properties of particles can only be estimated by applying a stereological correction. A recent stereological technique has been developed that allows the discrepancy between the linear intercept composition distribution and the particle section composition distribution to be used as a guide for estimating the particle composition distribution. The paper shows results validating this new technique using numerical simulation.
Abstract:
Near-surface air temperature is an important determinant of the surface energy balance of glaciers and is often represented in models by a constant linear temperature gradient (TG). Spatiotemporal variability in 2 m air temperature was measured across the debris-covered Miage Glacier, Italy, over an 89 d period during the 2014 ablation season using a network of 19 stations. Air temperature was found to be strongly dependent upon elevation for most stations, even under varying meteorological conditions and at different times of day, and its spatial variability was well explained by a locally derived mean linear TG (MG–TG) of −0.0088 °C m⁻¹. However, local temperature depressions occurred over areas of very thin or patchy debris cover. The MG–TG, together with other air TGs extrapolated from both on- and off-glacier sites, was applied in a distributed energy-balance model. Compared with piecewise air temperature extrapolation from all on-glacier stations, modelled ablation using the MG–TG increased by <1%, rising to >4% using the environmental 'lapse rate'. Ice melt under thick debris was relatively insensitive to air temperature, while the effects of different temperature extrapolation methods were strongest at high-elevation sites with thin and patchy debris cover.
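As a concrete illustration of the extrapolation step, a minimal sketch follows, using the MG–TG value quoted above; the reference-station reading and target elevations are hypothetical.

```python
# A minimal sketch of distributed air-temperature extrapolation with a constant
# linear temperature gradient, as applied on Miage Glacier. Reference-station
# values and target elevations below are made up; only the gradient is the
# MG-TG quoted in the abstract.
MG_TG = -0.0088  # degC per metre of elevation

def extrapolate_temperature(t_ref, z_ref, z_target, gradient=MG_TG):
    """Air temperature at z_target given a reading t_ref at elevation z_ref."""
    return t_ref + gradient * (z_target - z_ref)

t_aws, z_aws = 8.5, 2030.0          # hypothetical station reading (degC) and elevation (m)
for z in (1800.0, 2200.0, 2600.0):  # hypothetical grid-cell elevations (m)
    print(f"z = {z:6.0f} m  ->  T = {extrapolate_temperature(t_aws, z_aws, z):5.2f} degC")
```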
Abstract:
The paper deals with a linearization technique for non-linear oscillations in systems governed by second-order non-linear ordinary differential equations. The method is based on approximating the non-linear function by a linear function such that the error is least in the weighted mean-square sense. The method has been applied to cubic, sine, hyperbolic sine, and odd-polynomial non-linearities, and the results obtained are more accurate than those given by existing linearization methods.
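To make the mean-square criterion concrete: with uniform weighting over an assumed harmonic response x = A cos θ, the optimal linear coefficient is k = ⟨x f(x)⟩ / ⟨x²⟩, which for the cubic case f(x) = x³ gives the classical k = 3A²/4. A minimal numeric check follows; the amplitude and the uniform weight are assumptions, not taken from the paper.

```python
# A minimal numeric check (not the paper's code) of mean-square linearization:
# choose k to minimize the average of (f(x) - k*x)^2 over one cycle of an
# assumed harmonic response x = A*cos(theta). The optimum is k = <x f>/<x^2>,
# which for f(x) = x^3 equals 3*A^2/4.
import numpy as np

A = 1.5                                    # assumed response amplitude
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
x = A * np.cos(theta)
f = x**3                                   # cubic (Duffing-type) nonlinearity

k = np.mean(x * f) / np.mean(x * x)        # least-squares slope, uniform weight
print(f"numeric k = {k:.6f}, closed form 3A^2/4 = {0.75 * A**2:.6f}")
```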
Abstract:
We compare a number of models of post-war US output growth in terms of the degree and pattern of non-linearity they impart to the conditional mean, where we condition on either the previous period's growth rate or the previous two periods' growth rates. The conditional means are estimated non-parametrically using a nearest-neighbour technique on data simulated from the models. In this way, we condense the complex dynamic responses that may be present into graphical displays of the implied conditional mean.
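A minimal sketch of the nearest-neighbour step described above: simulate a series from a model, then estimate the conditional mean at a point as the average outcome over the k nearest lagged values. The AR(1) generator and the value of k below are placeholders for the fitted output-growth models and tuning used by the authors.

```python
# A minimal sketch (not the authors' estimator) of a nearest-neighbour estimate
# of the conditional mean E[y_t | y_{t-1}]: for each target lag value, average
# y_t over the k simulated observations whose y_{t-1} is closest.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 100
y = np.zeros(n)
for t in range(1, n):                      # toy AR(1) "growth rate" series
    y[t] = 0.35 * y[t - 1] + rng.normal(scale=0.9)

lag, cur = y[:-1], y[1:]

def knn_conditional_mean(x0, k=k):
    """Average of y_t over the k points with y_{t-1} nearest to x0."""
    idx = np.argsort(np.abs(lag - x0))[:k]
    return cur[idx].mean()

for x0 in (-2.0, 0.0, 2.0):                # expect roughly 0.35 * x0
    print(f"E[y_t | y_(t-1) = {x0:+.1f}] ~ {knn_conditional_mean(x0):+.3f}")
```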
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for the problems, generalizing previous results in the literature. We conclude the paper by presenting a numerical example of a multi-period portfolio selection problem with regime switching, in which it is desired to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio greater than some minimum values specified by the investor.
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can, in most situations, only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation largely depend on assumptions of discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections happen only at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies; optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB, and case studies are conducted using data from an accelerated life test of a gearbox and from the liquefied natural gas industry. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. They also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed POSMDP-based maintenance strategy optimisation method is more flexible than one that assumes a predetermined strategy structure and uses renewal theory, and that it obtains more cost-effective strategies than a recently published method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
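A minimal sketch of the modelling idea, not the thesis algorithms: a latent state with Gamma-distributed (hence monotone and non-Gaussian) increments observed through noise, tracked with a bootstrap particle filter, with the remaining-useful-life (RUL) distribution obtained by Monte Carlo forward simulation. All parameter values are illustrative assumptions.

```python
# Sketch of a partially observable Gamma-increment degradation model:
# latent x_t = x_{t-1} + Gamma(a, b), observed y_t = x_t + N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma, threshold = 0.8, 0.25, 0.15, 8.0  # shape, scale, obs noise, failure level

# Simulate one asset's hidden degradation path and noisy indicator readings.
T = 30
x = np.cumsum(rng.gamma(a, b, size=T))
y = x + rng.normal(scale=sigma, size=T)

# Bootstrap particle filter over the latent state.
n = 2000
particles = np.zeros(n)
for t in range(T):
    particles = particles + rng.gamma(a, b, size=n)        # Gamma transition
    w = np.exp(-0.5 * ((y[t] - particles) / sigma) ** 2)   # Gaussian likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)         # resample

# Monte Carlo RUL: steps until each particle's forward path crosses the threshold.
rul = np.zeros(n)
alive = particles.copy()
active = np.ones(n, dtype=bool)
steps = 0
while active.any() and steps < 1000:
    steps += 1
    alive[active] += rng.gamma(a, b, size=active.sum())
    newly_failed = active & (alive >= threshold)
    rul[newly_failed] = steps
    active &= ~newly_failed

print(f"true state {x[-1]:.2f}, filtered mean {particles.mean():.2f}")
print(f"RUL mean {rul.mean():.1f} steps, 90% interval "
      f"[{np.quantile(rul, 0.05):.0f}, {np.quantile(rul, 0.95):.0f}]")
```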
Abstract:
Background: There has been increasing interest in assessing the impacts of temperature on mortality. However, few studies have used a case–crossover design to examine the non-linear and distributed lag effects of temperature on mortality. Additionally, little evidence is available on the temperature–mortality relationship in China, or on which temperature measure is the best predictor of mortality. Objectives: To use a distributed lag non-linear model (DLNM) as part of a case–crossover design; to examine the non-linear and distributed lag effects of temperature on mortality in Tianjin, China; and to explore which temperature measure is the best predictor of mortality. Methods: The DLNM was applied within a case–crossover design to assess the non-linear and delayed effects of temperature (maximum, mean and minimum) on deaths (non-accidental, cardiopulmonary, cardiovascular and respiratory). Results: A U-shaped relationship was consistently found between temperature and mortality. Cold effects (significantly increased mortality associated with low temperatures) were delayed by 3 days and persisted for 10 days. Hot effects (significantly increased mortality associated with high temperatures) were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. Conclusions: In Tianjin, extreme cold and hot temperatures increased the risk of mortality. The results suggest that the effects of cold last longer than the effects of heat. It is possible to combine the case–crossover design with DLNMs; this allows the case–crossover design to flexibly estimate the non-linear and delayed effects of temperature (or air pollution) whilst controlling for season.
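A conceptual sketch of the cross-basis construction at the heart of a DLNM (this is not the dlnm software or the paper's fitted model): a basis in the temperature dimension is combined with a basis in the lag dimension, so that a single coefficient vector captures effects that are non-linear in temperature and distributed over lags. The quadratic bases, 10-day lag window, simulated data, and ordinary least squares stand-in for the real conditional regression are all assumptions.

```python
# Conceptual DLNM cross-basis: for day t, sum over lags l of the outer product
# of a temperature basis evaluated at temp[t-l] and a lag basis evaluated at l.
import numpy as np

rng = np.random.default_rng(2)
n_days, max_lag = 1000, 10
temp = 15 + 10 * np.sin(np.arange(n_days) * 2 * np.pi / 365) + rng.normal(0, 3, n_days)

def var_basis(v):                 # basis in the temperature dimension (quadratic)
    return np.array([v, v**2])

def lag_basis(l):                 # basis in the lag dimension (quadratic)
    return np.array([1.0, l, l**2])

rows = []
for t in range(max_lag, n_days):
    xb = np.zeros((2, 3))
    for l in range(max_lag + 1):
        xb += np.outer(var_basis(temp[t - l]), lag_basis(l))
    rows.append(xb.ravel())
X = np.array(rows)                # (n_days - max_lag) x 6 cross-basis matrix

# Toy outcome and OLS stand in for the mortality series and the conditional
# regression used with a real case-crossover design.
beta_true = rng.normal(0, 0.01, X.shape[1])
outcome = X @ beta_true + rng.normal(0, 1, X.shape[0])
beta_hat, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("recovered cross-basis coefficients:", np.round(beta_hat, 4))
```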
Abstract:
Background: Catheter ablation for atrial fibrillation (AF) is more efficacious than antiarrhythmic therapy. Post-ablation recurrences reduce ablation effectiveness and are contributed to by lesion discontinuity in the fibrotic linear ablation lesions. The anti-fibrotic role of statins in reducing AF is being assessed in current trials. By reducing the chronic pathological fibrosis that occurs in AF, they may reduce AF. However, if statins also have an effect on the acute therapeutic fibrosis of an ablation, this could exacerbate lesion discontinuity and AF recurrence. We tested the hypothesis that statins attenuate ablation lesion continuity in a recognised pig atrial linear ablation model. Aims: To assess whether atorvastatin diminishes the bidirectional conduction block produced by a linear atrial ablation lesion. Methods: Sixteen pigs were randomised to statin (n=8) or placebo (n=8), with drug pre-treatment for 3 days and a further 4 weeks of treatment. At the initial electrophysiological study (EPS1), 3D right atrial (RA) mapping was performed and a vertical linear ablation lesion with bidirectional conduction block was created in the posterior RA (Gepstein, Circ 1999). Follow-up electrophysiological assessment (EPS2) at 28 days assessed maintenance of bidirectional conduction block. Results: Data from 15/16 (statin = 7) pigs were analysed. Mean lesion length was 3.7 ± 0.8 cm, with a mean of 17.9 ± 5.7 lesion applications. Bidirectional conduction block was confirmed in 15/15 pigs (100%) at EPS1 and EPS2. Conclusions: Atorvastatin did not affect ablation lesion continuity in this pig atrial linear ablation model. If patients are on long-term statins for AF reduction, peri-ablation cessation is probably not necessary.
Abstract:
In 1991, McNabb introduced the concept of mean action time (MAT) as a finite measure of the time required for a diffusive process to effectively reach steady state. Although this concept was initially adopted by others within the Australian and New Zealand applied mathematics community, it appears to have had little use outside this region until very recently, when in 2010 Berezhkovskii and coworkers rediscovered the concept of MAT in their study of morphogen gradient formation. All previous work in this area has been limited to studying single-species differential equations, such as the linear advection–diffusion–reaction equation. Here we generalise the concept of MAT by showing how the theory can be applied to coupled linear processes. We begin by studying coupled ordinary differential equations and extend our approach to coupled partial differential equations. Our new results have broad applications, including the analysis of models describing coupled chemical decay and cell differentiation processes, amongst others.
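For concreteness, a sketch of the standard MAT construction consistent with the abstract's description (notation assumed, not quoted from the paper):

```latex
% Normalise the transient so it behaves like a cumulative distribution in t:
\[
  F(x,t) \;=\; 1 \;-\; \frac{C(x,t) - C_\infty(x)}{C(x,0) - C_\infty(x)},
  \qquad F(x,0)=0,\quad \lim_{t\to\infty}F(x,t)=1 .
\]
% The mean action time is the mean of this distribution:
\[
  T(x) \;=\; \int_0^\infty t\,\frac{\partial F}{\partial t}\,\mathrm{d}t
        \;=\; \int_0^\infty \bigl[\,1 - F(x,t)\,\bigr]\,\mathrm{d}t .
\]
% Example: for linear decay to steady state, dC/dt = -k (C - C_inf),
% one has 1 - F = e^{-kt}, giving the finite timescale T = 1/k.
```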