700 results for ZATSEPIN-KUZMIN CUTOFF
Abstract:
Background Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot and to determine the urgency of treatment in case of diagnosed infection or a red-hot swollen foot. Materials and Methods The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. Average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values using clinical assessment as the reference and determined the sensitivity and specificity for the optimal cutoff temperature value. Mean temperature difference between feet was analyzed using the Kruskal–Wallis test. Results The optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2°C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The optimal cutoff skin temperature value for determining urgency of treatment was a 1.35°C difference between the mean temperatures of the left and right foot (sensitivity, 89%; specificity, 78%). Conclusions Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic values. The mean temperature difference between the two feet may be an adequate marker for determining urgency of treatment.
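A minimal sketch of the kind of ROC-based cutoff selection described above, using hypothetical contralateral temperature-difference data and Youden's J statistic to pick the operating point; it is not the study's actual analysis pipeline or data.

```python
# ROC-based cutoff selection sketch with hypothetical data (not the study's data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical contralateral temperature differences (deg C) and labels
# (1 = foot complication present, 0 = absent).
delta_t = np.concatenate([rng.normal(2.5, 1.0, 34), rng.normal(1.0, 1.0, 20)])
label   = np.concatenate([np.ones(34), np.zeros(20)])

auc = roc_auc_score(label, delta_t)
fpr, tpr, thresholds = roc_curve(label, delta_t)
best = np.argmax(tpr - fpr)          # Youden's J statistic picks the "optimal" cutoff
print(f"AUC = {auc:.2f}")
print(f"optimal cutoff = {thresholds[best]:.2f} deg C, "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```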
Abstract:
A randomised and population-based screening design with new technologies has been applied to the organised cervical cancer screening programme in Finland. In this experiment the women invited to routine five-yearly screening are individually randomised to be screened with automation-assisted cytology, a human papillomavirus (HPV) test or conventional cytology. By using the randomised design, the ultimate aim is to assess and compare the long-term outcomes of the different screening regimens. The primary aim of the current study was to evaluate, based on the material collected during the implementation phase of the Finnish randomised screening experiment, the cross-sectional performance and validity of automation-assisted cytology (Papnet system) and primary HPV DNA testing (Hybrid Capture II assay for 13 oncogenic HPV types) within service screening, in comparison to conventional cytology. The parameters of interest were test positivity rate, histological detection rate, relative sensitivity, relative specificity and positive predictive value. The effect of variation in performance by screening laboratory on age-adjusted cervical cancer incidence was also assessed. Based on the cross-sectional results, almost no differences were observed in the performance of conventional and automation-assisted screening. Instead, primary HPV screening found 58% (95% confidence interval 19-109%) more cervical lesions than conventional screening. However, this was mainly due to overrepresentation of mild- and moderate-grade lesions and is thus likely to result in overtreatment, since many of these lesions would never progress to invasive cancer. Primary screening with an HPV DNA test alone caused a substantial loss in specificity in comparison to cytological screening. With the use of a cytology triage test, the specificity of HPV screening improved to close to the level of conventional cytology. The specificity of primary HPV screening was also increased by raising the test positivity cutoff above the level recommended for clinical use, but the increase was more modest than that gained with cytology triage. The performance of the cervical cancer screening programme varied widely between the screening laboratories, but the variation in overall programme effectiveness between the respective populations was more marginal from the very beginning of the organised screening activity. Thus, conclusive interpretations of the quality or success of screening should not be based on performance parameters only. In the evaluation of cervical cancer screening, the outcome should be selected as closely as possible to the true measure of programme effectiveness, which is the number of invasive cervical cancers and subsequent deaths prevented in the target population. The benefits and adverse effects of each newly suggested screening technology should be evaluated before the technology becomes an accepted routine in the existing screening programme. Ideally, the evaluation is performed in a randomised manner, within the population and screening programme in question, which makes the results directly applicable to routine use.
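As a small worked illustration of the kind of relative detection rate quoted above, the following sketch computes a rate ratio and its 95% confidence interval from hypothetical lesion counts with the standard log-normal approximation; the counts are invented and are not the study's data.

```python
# Relative detection rate with a 95% CI (log-normal approximation),
# using hypothetical lesion counts and numbers screened.
import math

lesions_hpv, screened_hpv = 158, 20000     # hypothetical HPV-arm counts
lesions_cyt, screened_cyt = 100, 20000     # hypothetical conventional-arm counts

rr = (lesions_hpv / screened_hpv) / (lesions_cyt / screened_cyt)
se_log_rr = math.sqrt(1 / lesions_hpv + 1 / lesions_cyt)
lo = rr * math.exp(-1.96 * se_log_rr)
hi = rr * math.exp(+1.96 * se_log_rr)
print(f"relative detection rate = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```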
Abstract:
This study views each protein structure as a network of noncovalent connections between amino acid side chains. Each amino acid in a protein structure is a node, and the strength of the noncovalent interactions between two amino acids is evaluated for edge determination. The protein structure graphs (PSGs) for 232 proteins have been constructed as a function of the cutoff of the amino acid interaction strength at a few carefully chosen values. Analysis of such PSGs constructed on the basis of edge weights has shown the following. (1) The PSGs exhibit complex topological network behavior, which depends on the interaction cutoff chosen for PSG construction. (2) A transition is observed at a critical interaction cutoff in all the proteins, as monitored by the size of the largest cluster (giant component) in the graph. Remarkably, this transition occurs within a narrow range of interaction cutoffs for all the proteins, irrespective of size or fold topology. (3) The amino acid preferences to be highly connected (hub frequency) have been evaluated as a function of the interaction cutoff. We observe that the aromatic residues along with arginine, histidine, and methionine act as strong hubs at high interaction cutoffs, whereas the hydrophobic leucine and isoleucine residues are added to these hubs at low interaction cutoffs, forming weak hubs. The hubs identified are found to play a role in bringing together different secondary structural elements in the tertiary structure of the proteins. They are also found to contribute to the additional stability of thermophilic proteins when compared to their mesophilic counterparts and hence could be crucial for the folding and stability of the unique three-dimensional structure of proteins. Based on these results, we also predict a few residues in the thermophilic and mesophilic proteins that can be mutated to alter their thermal stability.
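A minimal sketch of the cutoff-dependent graph construction and giant-component analysis described above, using a random matrix as a hypothetical stand-in for the residue-residue interaction strengths and networkx for the graph operations.

```python
# Build a protein-structure-graph-like network at several interaction cutoffs
# and track the size of the largest connected cluster (giant component).
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_res = 150
strength = rng.random((n_res, n_res)) * 10   # hypothetical pairwise interaction strengths
strength = np.triu(strength, k=1)            # keep each residue pair once

def giant_component_size(cutoff):
    """Construct the graph at a given interaction cutoff and return the size
    of its largest connected cluster."""
    g = nx.Graph()
    g.add_nodes_from(range(n_res))
    i, j = np.nonzero(strength >= cutoff)
    g.add_edges_from(zip(i, j))
    return max(len(c) for c in nx.connected_components(g))

for cutoff in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"interaction cutoff = {cutoff:.1f}  giant component = {giant_component_size(cutoff)}")
```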
Abstract:
General relativity has very specific predictions for the gravitational waveforms from inspiralling compact binaries obtained using the post-Newtonian (PN) approximation. We investigate the extent to which the measurement of the PN coefficients, possible with the second generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and the third generation gravitational-wave detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized-post-Einstein theories which differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN and thus the leading tail term. Future observations of stellar mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range of 11-44 M⊙. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is indicated.
Abstract:
In this article, ultrasonic wave propagation in a graphene sheet is studied using nonlocal elasticity theory, incorporating small-scale effects. The graphene sheet is modeled as a one-atom-thick isotropic plate. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of wave propagation model is also derived for the graphene sheet. The nonlocal scale parameter introduces a band-gap region in the in-plane and flexural wave modes where no wave propagation occurs. This is manifested in the wavenumber plots as the region where the wavenumber tends to infinity or the wave speed tends to zero. The frequency at which this phenomenon occurs is called the escape frequency. Explicit expressions for the cutoff frequencies and escape frequencies are derived. The escape frequencies arise mainly because of the nonlocal elasticity and are therefore a function of the nonlocal scaling parameter. They are also found to be independent of the y-directional wavenumber, which means that, for any type of nanostructure, the escape frequencies are purely a function of the nonlocal scaling parameter and are independent of the geometry of the structure. The cutoff frequencies, in contrast, are found to be a function of both the nonlocal scaling parameter (e0a) and the y-directional wavenumber (k_y). For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching the results from molecular dynamics (MD) simulations with the nonlocal elasticity calculations; at that value of the nonlocal scale coefficient, the waves will propagate in the nanostructure at the corresponding cutoff frequency. In the present paper, different values of e0a are used. The exact e0a for a given graphene sheet can be obtained by matching MD simulation results for graphene with the results presented in this paper.
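A minimal sketch of how an escape frequency appears, assuming the standard in-plane (axial) nonlocal dispersion relation omega = c*k / sqrt(1 + (e0*a*k)^2) rather than the full plate equations of the paper; the wave speed and e0a values below are hypothetical.

```python
# Escape-frequency illustration under an assumed nonlocal dispersion relation.
import numpy as np

c   = 10_000.0        # hypothetical wave speed, m/s
e0a = 0.5e-9          # hypothetical nonlocal scale parameter e0*a, m

k = np.linspace(1e6, 5e10, 6)                   # wavenumbers, 1/m
omega = c * k / np.sqrt(1.0 + (e0a * k) ** 2)   # frequency saturates as k grows

f_escape = c / (2 * np.pi * e0a)                # asymptote as k -> infinity
print("frequencies (THz):", np.round(omega / (2 * np.pi) / 1e12, 3))
print(f"escape frequency ~ {f_escape / 1e12:.2f} THz")
```

The horizontal asymptote of omega(k) is the plain-text analogue of the wavenumber tending to infinity at a finite frequency, as described in the abstract.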
Abstract:
Questions of the small size of non-industrial private forest (NIPF) holdings in Finland are considered and factors affecting their partitioning are analyzed. This work arises out of Finnish forest policy statements in which the small average size of holdings has been seen to have a negative influence on the economics of forestry. A survey of the literature indicates that the size of holdings is an important factor determining the costs of logging and silvicultural operations, while its influence on the timber supply is slight. The empirical data are based on a sample of 314 holdings collected by interviewing forest owners in the years 1980-86. In 1990-91 the same holdings were resurveyed by means of a postal inquiry and partly by interviewing forest owners. The principal objective in compiling the data is to assist in quantifying ownership factors that influence partitioning among different kinds of NIPF holdings. Thus the mechanisms of partitioning were described and a maximum likelihood logistic regression model was constructed using seven independent holding and ownership variables. One out of four holdings had undergone partitioning in conjunction with a change in ownership: one fifth among family-owned holdings and nearly a half among jointly owned holdings. The results of the logistic regression model indicate, for instance, that the odds of partitioning are about three times greater for jointly owned holdings than for family-owned ones. The probabilities of partitioning were also estimated, and the impact of the independent dichotomous variables on the probability of partitioning ranged between 0.02 and 0.10. The low value of the Hosmer-Lemeshow test statistic indicates a good fit of the model, and the rate of correct classification was estimated to be 88 per cent with a cutoff point of 0.5. The average size of holdings undergoing ownership changes decreased from 29.9 ha to 28.7 ha over the approximate interval 1983-90. In addition, the transition probability matrix showed that the shifts towards smaller size categories mostly involved the small size categories of less than 20 ha. The results of the study can be used to assess the effects of small holding size on forestry and to inform attempts to influence partitioning through forest or rural policy.
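A minimal sketch of a logistic partitioning model classified at a 0.5 probability cutoff, using simulated data; the covariates, coefficients, and outcome below are hypothetical stand-ins for the seven holding and ownership variables used in the study.

```python
# Logistic regression of partitioning with a 0.5 classification cutoff (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 314
jointly_owned = rng.integers(0, 2, n)        # 1 = jointly owned holding, 0 = family owned
area_ha       = rng.gamma(2.0, 15.0, n)      # holding size, ha
# Simulated outcome: jointly owned holdings partition more often.
logit = -2.0 + 1.1 * jointly_owned - 0.01 * area_ha
partitioned = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([jointly_owned, area_ha])
model = LogisticRegression().fit(X, partitioned)

odds_ratio = np.exp(model.coef_[0][0])        # joint vs. family ownership
pred = model.predict_proba(X)[:, 1] >= 0.5    # classification cutoff of 0.5
print(f"odds ratio (joint ownership) = {odds_ratio:.1f}")
print(f"correct classification rate  = {(pred == partitioned).mean():.0%}")
```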
Abstract:
It is observed that general explicit guidance schemes exhibit numerical instability close to the injection point. This difficulty is normally attributed to the demand for exact injection, which, in turn, calls for finite corrections to be enforced in a relatively short time. The deviations in vehicle state which need corrective maneuvers are caused by the off-nominal operating conditions. Hence, the onset of terminal instability depends on the type of off-nominal conditions encountered. The proposed separate terminal guidance scheme overcomes this difficulty by minimizing a quadratic penalty on injection errors rather than demanding an exact injection. The terminal phase also places a special demand on faster guidance computations, which permit more frequent guidance updates and thereby an accurate terminal thrust cutoff. The objective of faster computations is realized in the terminal guidance scheme by employing realistic assumptions that are accurate enough for a short terminal trajectory. It is observed from simulations that one of the guidance parameters (P), related to the thrust steering angular rates, can indicate the onset of terminal instability due to different off-nominal operating conditions. Therefore, the terminal guidance scheme can be dynamically invoked based on monitoring deviations in the single parameter P.
Abstract:
Maintaining quantum coherence is a crucial requirement for quantum computation; hence protecting quantum systems against their irreversible corruption due to environmental noise is an important open problem. Dynamical decoupling (DD) is an effective method for reducing decoherence with a low control overhead. It also plays an important role in quantum metrology, where, for instance, it is employed in multiparameter estimation. While a sequence of equidistant control pulses [the Carr-Purcell-Meiboom-Gill (CPMG) sequence] has been ubiquitously used for decoupling, Uhrig recently proposed that a nonequidistant pulse sequence [the Uhrig dynamic decoupling (UDD) sequence] may enhance DD performance, especially for systems where the spectral density of the environment has a sharp frequency cutoff. On the other hand, equidistant sequences outperform UDD for soft cutoffs. The relative advantage provided by UDD for intermediate regimes is not clear. In this paper, we analyze the relative DD performance in this regime experimentally, using solid-state nuclear magnetic resonance. Our system qubits are C-13 nuclear spins and the environment consists of a H-1 nuclear spin bath whose spectral density is close to a normal (Gaussian) distribution. We find that in the presence of such a bath, the CPMG sequence outperforms the UDD sequence. An analogy between dynamical decoupling and interference effects in optics provides an intuitive explanation as to why the CPMG sequence performs better than any nonequidistant DD sequence in the presence of this kind of environmental noise.
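For reference, the two pulse schedules compared above can be written down directly. The sketch below generates the equidistant CPMG timings and Uhrig's nonequidistant UDD timings for n pi-pulses over a total evolution time T; it only illustrates the pulse placement, not the NMR experiment itself.

```python
# CPMG versus UDD pulse timings for n pi-pulses over a total evolution time T.
import numpy as np

def cpmg_times(n, T):
    """CPMG: equidistant pulses at t_j = T * (2j - 1) / (2n)."""
    j = np.arange(1, n + 1)
    return T * (2 * j - 1) / (2 * n)

def udd_times(n, T):
    """UDD: nonequidistant pulses at t_j = T * sin^2(pi * j / (2n + 2))."""
    j = np.arange(1, n + 1)
    return T * np.sin(np.pi * j / (2 * n + 2)) ** 2

n, T = 6, 1.0
print("CPMG:", np.round(cpmg_times(n, T), 3))
print("UDD :", np.round(udd_times(n, T), 3))
```

The UDD pulses crowd toward the ends of the interval, which is what gives the sequence its advantage for sharp spectral cutoffs.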
Abstract:
We show that the recently proposed Dirac-Born-Infeld extension of new massive gravity emerges naturally as a counterterm in four-dimensional anti-de Sitter space (AdS(4)). The resulting on-shell Euclidean action is independent of the cutoff at zero temperature. We also find that the same choice of counterterm gives the usual area law for the AdS(4) Schwarzschild black hole entropy in a cutoff-independent manner. The parameter values of the resulting counterterm action correspond to a c = 0 theory in the context of the duality between AdS(3) gravity and two-dimensional conformal field theory. We rewrite this theory in terms of the gauge field that is used to recast 3D gravity as a Chern-Simons theory.
Abstract:
We have studied the power spectral density, S(f) = γ/f^α, of universal conductance fluctuations (UCFs) in heavily doped single crystals of Si, when the scatterers themselves act as the primary source of dephasing. We observed that the scatterers, with internal dynamics like two-level systems, produce a significant, temperature-dependent reduction in the spectral slope α when T ≲ 10 K, as compared to the bare 1/f (α ≈ 1) spectrum at higher temperatures. It is further shown that an upper cutoff frequency f_m in the UCF spectrum is necessary in order to restrict the magnitude of the conductance fluctuations, ⟨(δG_φ)²⟩, per phase-coherent region (L_φ³) to ⟨(δG_φ)²⟩^(1/2) ≲ e²/h. We find that f_m ≈ τ_D⁻¹, where τ_D = L²/D is the time scale of the diffusive motion of the electron along the active length (L) of the sample (D is the electron diffusivity).
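A small worked example of the cutoff estimate f_m ≈ τ_D⁻¹ with τ_D = L²/D, using hypothetical values for the diffusivity and active sample length.

```python
# Worked example of the UCF spectral cutoff estimate f_m ~ 1/tau_D, tau_D = L^2/D.
D = 1e-3          # hypothetical electron diffusivity, m^2/s
L = 1e-3          # hypothetical active sample length, m

tau_D = L**2 / D  # diffusion time along the sample
f_m = 1 / tau_D   # upper cutoff frequency of the UCF spectrum
print(f"tau_D = {tau_D:.3g} s, f_m ~ {f_m:.3g} Hz")
```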
Abstract:
We derive the computational cutoff rate, R_0, for coherent trellis-coded modulation (TCM) schemes on independent, identically distributed (i.i.d.) Rayleigh fading channels with (K, L) generalized selection combining (GSC) diversity, which combines the K paths with the largest instantaneous signal-to-noise ratios (SNRs) among the L available diversity paths. The cutoff rate is shown to be a simple function of the moment generating function (MGF) of the SNR at the output of the (K, L) GSC receiver. We also derive the union bound on the bit error probability of TCM schemes with (K, L) GSC in the form of a simple, finite integral. The effectiveness of this bound is verified through simulations.
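A minimal Monte Carlo sketch of the MGF of the (K, L) GSC output SNR over i.i.d. Rayleigh fading, i.e., the sum of the K largest of L exponentially distributed branch SNRs; the cutoff rate and union bound above are stated to be functions of this MGF, but their exact expressions are not reproduced here.

```python
# Monte Carlo estimate of the MGF of the output SNR of (K, L) GSC
# over i.i.d. Rayleigh fading (exponentially distributed branch SNRs).
import numpy as np

def gsc_mgf(s, K, L, mean_snr=1.0, n_samples=200_000, seed=0):
    """Estimate M(s) = E[exp(s * gamma_GSC)] by simulation (use s <= 0)."""
    rng = np.random.default_rng(seed)
    branch_snr = rng.exponential(mean_snr, size=(n_samples, L))
    gsc_snr = np.sort(branch_snr, axis=1)[:, -K:].sum(axis=1)  # K strongest paths
    return np.exp(s * gsc_snr).mean()

print(gsc_mgf(-0.5, K=2, L=4))   # MGF evaluated at s = -0.5, for example
```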
Abstract:
Conventional hardware implementation techniques for FIR filters require the filter coefficients to be computed in software and stored in memory. This approach is static in the sense that any further fine-tuning of the filter requires computation of new coefficients in software. In this paper, we propose an alternative technique for implementing FIR filters in hardware. We store a considerably large number of impulse response coefficients of the ideal filter (having a box-type frequency response) in memory. We then perform the windowing on these coefficients in hardware, using integer sequences as window functions. The integer sequences are also generated in hardware. This approach offers flexibility in fine-tuning the filter, such as varying the transition bandwidth around a particular cutoff frequency.
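A minimal software sketch of the windowing approach described above: an ideal (brick-wall) low-pass impulse response is multiplied by a window to obtain the FIR coefficients. An integer triangular sequence stands in for the hardware-generated integer window sequences; the values are illustrative, not the paper's design.

```python
# Windowed-sinc FIR design with an integer window sequence (illustrative values).
import numpy as np

N  = 63            # filter length (odd)
fc = 0.2           # normalized cutoff frequency (cycles/sample)
n  = np.arange(N) - (N - 1) / 2

h_ideal = 2 * fc * np.sinc(2 * fc * n)          # ideal low-pass impulse response
window  = np.minimum(np.arange(1, N + 1),       # integer triangular window sequence
                     np.arange(N, 0, -1))
h = h_ideal * window / window.max()             # windowed FIR coefficients

print(np.round(h[:8], 4))
```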
Abstract:
An energy-insensitive explicit guidance design is proposed in this paper by combining the newly developed nonlinear model predictive static programming technique with dynamic inversion, which yields a closed-form solution of the necessary guidance command update. The closed-form nature of the proposed optimal guidance scheme avoids computational difficulties and facilitates a real-time solution. The guidance law is successfully verified in a solid-motor-propelled long-range flight vehicle, for which developing an effective guidance law is more difficult than for a liquid-engine-propelled vehicle, mainly because of the absence of a thrust cutoff facility. Assuming the starting point of the second stage to be a deterministic point beyond the atmosphere, the scheme guides the vehicle appropriately so that it completes the mission within a tight error bound. The simulation results demonstrate its ability to intercept the target, even with an uncertainty of greater than 10% in the burnout time.
Abstract:
Combining the newly developed nonlinear model predictive static programming technique with the null range direction concept, a novel explicit energy-insensitive guidance design method is presented in this paper for long-range flight vehicles, which leads to a closed-form solution of the necessary guidance command update. Owing to this closed-form nature, it does not lead to computational difficulties and the proposed optimal guidance algorithm can be implemented online. The guidance law is verified in a solid-motor-propelled long-range flight vehicle, for which coming up with an effective guidance law is more difficult than for a liquid-engine-propelled vehicle (mainly because of the absence of a thrust cutoff facility). Assuming the starting point of the second stage to be a deterministic point beyond the atmosphere, the scheme guides the vehicle properly so that it completes the mission within a tight error bound. The simulation results demonstrate its ability to intercept the target, even with an uncertainty of greater than 10% in the burnout time.
Abstract:
This letter deals with a three‐dimensional analysis of circular sectors and annular segments resulting from the partitioning of a round (cylindrical) duct for use in an active noise control system. The relevant frequency equations are derived for stationary medium and solved numerically to arrive at the cut‐on frequencies of the first few modes. The resultant table indicates among other things that azimuthal partitioning does not raise the cutoff frequency (the smallest cut‐on frequency) beyond a particular value, and that radial partitioning is counterproductive in that respect.
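A minimal sketch, under standard rigid-wall duct acoustics assumptions, of how the smallest cut-on frequency of a circular-sector duct saturates as the sector angle shrinks. It uses f_cut = c * x'_{nu,1} / (2*pi*a) with nu = m*pi/theta0 for a sector of angle theta0, where x'_{nu,1} is the first positive zero of the derivative of the Bessel function J_nu; the sound speed and radius are hypothetical, and this is an illustration rather than the letter's three-dimensional analysis.

```python
# Smallest cut-on frequency of rigid-walled circular-sector ducts versus sector angle.
import numpy as np
from scipy.special import jvp
from scipy.optimize import brentq

def first_jprime_zero(nu):
    """First positive zero of d/dx J_nu(x), located by scanning for a sign change."""
    x = np.linspace(0.1, nu + 10.0, 4000)
    y = jvp(nu, x)
    i = np.nonzero(np.sign(y[:-1]) != np.sign(y[1:]))[0][0]
    return brentq(lambda t: jvp(nu, t), x[i], x[i + 1])

def sector_cutoff(theta0, c, a, m_max=4):
    """Smallest nonzero cut-on frequency of a sector duct of angle theta0,
    taken over azimuthal orders nu = m*pi/theta0, m = 0..m_max."""
    roots = [first_jprime_zero(m * np.pi / theta0) for m in range(m_max + 1)]
    return c * min(roots) / (2 * np.pi * a)

c, a = 343.0, 0.15   # hypothetical sound speed (m/s) and duct radius (m)
for theta0 in (np.pi, np.pi / 2, np.pi / 4, np.pi / 8):
    print(f"sector angle {np.degrees(theta0):5.1f} deg -> cutoff ~ {sector_cutoff(theta0, c, a):.0f} Hz")
```

With these assumed values, the cutoff rises as the sector narrows but levels off once the m = 0 radial mode becomes the lowest nonplanar mode, consistent with the observation above that azimuthal partitioning cannot raise the cutoff frequency beyond a particular value.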