987 results for entropy measure-valued solutions


Relevance:

40.00%

Publisher:

Abstract:

Emission line fluxes from cool stars are widely used to establish an apparent emission measure distribution, EmdApp(Te), between temperatures characteristic of the low transition region and the low corona. The true emission measure distribution, EmdTrue(Te), is determined by the energy balance and geometry adopted and, with a numerical model, can be used to predict EmdApp(Te), to guide further modelling. The scaling laws that exist between coronal parameters arise from the dimensions of the terms in the energy balance equation. Here, analytical approximations to numerical solutions for EmdTrue(Te) are presented, which show how the constants in the coronal scaling laws are determined. The apparent emission measure distributions show a minimum value at some T0 and a maximum at the mean coronal temperature Tc (although in some stars, emission from active regions can contribute). It is shown that, for the energy balance and geometry adopted, the analytical values of the emission measure and electron pressure at T0 and Tc depend on only three parameters: the stellar surface gravity and the values of T0 and Tc. The results are tested against full numerical solutions for ε Eri (K2 V) and are applied to Procyon (α CMi, F5 IV/V). The analytical approximations can be used to restrict the required range of full numerical solutions, to check the assumed geometry and to show where the adopted energy balance may not be appropriate. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.
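
For orientation, the best-known examples of the coronal scaling laws referred to here are the Rosner-Tucker-Vaiana (RTV) relations for a static loop, quoted from the general literature rather than from this abstract (cgs units assumed; p is the pressure, L the loop half-length and E_H the volumetric heating rate):

    T_{\max} \simeq 1.4\times10^{3}\,(pL)^{1/3}, \qquad
    E_{H} \simeq 9.8\times10^{4}\,p^{7/6}\,L^{-5/6}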

Relevance:

40.00%

Publisher:

Abstract:

Background: The identification of pre-clinical microvascular damage in hypertension by non-invasive techniques has proved frustrating for clinicians. This proof-of-concept study investigated whether entropy, a novel summary measure for characterizing blood velocity waveforms, is altered in participants with hypertension and may therefore be useful in risk stratification.

Methods: Doppler ultrasound waveforms were obtained from the carotid and retrobulbar circulation in 42 participants with uncomplicated grade 1 hypertension (mean systolic/diastolic blood pressure (BP) 142/92 mmHg), and 26 healthy controls (mean systolic/diastolic BP 116/69 mmHg). Mean wavelet entropy was derived from flow-velocity data and compared with traditional haemodynamic measures of microvascular function, namely the resistive and pulsatility indices.
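
A minimal sketch of how such waveform measures can be computed from a sampled velocity trace over one cardiac cycle (illustrative only; the wavelet family, decomposition level and the approximation of end-diastolic velocity by the cycle minimum are assumptions, not details taken from the study):

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_entropy(v, wavelet="db4", level=4):
        """Shannon entropy of the relative wavelet energy across decomposition levels."""
        coeffs = pywt.wavedec(np.asarray(v, dtype=float), wavelet, level=level)
        energy = np.array([np.sum(c ** 2) for c in coeffs])
        p = energy / energy.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def resistive_index(v):
        """RI = (peak systolic - end diastolic) / peak systolic velocity."""
        v = np.asarray(v, dtype=float)
        return (v.max() - v.min()) / v.max()

    def pulsatility_index(v):
        """PI = (peak systolic - end diastolic) / mean velocity."""
        v = np.asarray(v, dtype=float)
        return (v.max() - v.min()) / v.mean()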

Results: Entropy was significantly higher in control participants in the central retinal artery (CRA) (differential mean 0.11 (standard error 0.05 cm s-1), CI 0.009 to 0.219, p = 0.017) and ophthalmic artery (0.12 (0.05), CI 0.004 to 0.215, p = 0.04). In comparison, the resistive index (0.12 (0.05), CI 0.005 to 0.226, p = 0.029) and pulsatility index (0.96 (0.38), CI 0.19 to 1.72, p = 0.015) showed significant differences between groups in the CRA alone. Regression analysis indicated that entropy was significantly influenced by age and systolic blood pressure (r values 0.4-0.6). None of the measures were significantly altered in the larger conduit vessel.

Conclusion: This is the first application of entropy to human blood velocity waveform analysis and shows that this new technique has the ability to discriminate health from early hypertensive disease, thereby promoting the early identification of cardiovascular disease in a young hypertensive population.

Relevance:

40.00%

Publisher:

Abstract:

We present a new approach for defining similarity measures for Atanassov's intuitionistic fuzzy sets (AIFS), in which a similarity measure has two components indicating the similarity and hesitancy aspects. We argue that there are at least two facets of uncertainty in an AIFS, one related to fuzziness and the other related to lack of knowledge or non-specificity. We propose a set of axioms and build families of similarity measures that avoid the counterintuitive examples often used to justify one similarity measure over another. We also investigate a relation to entropies of AIFS, and outline possible applications of our method in decision making and image segmentation. © 2014 Elsevier Inc. All rights reserved.
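
As a hedged illustration of the two-component idea only (not the authors' axiomatic construction; the particular distances and averaging below are assumptions), one can report a membership/non-membership comparison alongside a separate hesitancy comparison:

    import numpy as np

    def aifs_two_component_similarity(mu_a, nu_a, mu_b, nu_b):
        """Compare two AIFS given as arrays of membership (mu) and non-membership (nu)
        degrees over the same universe. Returns (similarity_part, hesitancy_part)."""
        mu_a, nu_a, mu_b, nu_b = map(lambda z: np.asarray(z, dtype=float),
                                     (mu_a, nu_a, mu_b, nu_b))
        pi_a = 1.0 - mu_a - nu_a  # hesitancy margins of A
        pi_b = 1.0 - mu_b - nu_b  # hesitancy margins of B
        similarity = 1.0 - 0.5 * np.mean(np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b))
        hesitancy = 1.0 - np.mean(np.abs(pi_a - pi_b))
        return similarity, hesitancy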

Relevance:

40.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

40.00%

Publisher:

Abstract:

Heart rate complexity analysis is a powerful non-invasive means of diagnosing several cardiac ailments. Non-linear tools of complexity measurement are indispensable for bringing out the complete non-linear behavior of physiological signals. The most popular non-linear tools for measuring signal complexity are entropy measures such as approximate entropy (ApEn) and sample entropy (SampEn). However, these methods can become unreliable and inaccurate, particularly for short data lengths. Recently, a novel method of complexity measurement called distribution entropy (DistEn) was introduced, which showed reliable performance in capturing the complexity of both short synthetic and short physiological data. This study aims to i) examine the competence of DistEn in discriminating arrhythmia from normal sinus rhythm (NSR) subjects using RR interval time series data; ii) explore the consistency of DistEn with data length N; and iii) compare the performance of DistEn with ApEn and SampEn. Sixty-six RR interval time series belonging to two groups of cardiac conditions, 'Arrhythmia' and 'NSR', were used for the analysis. The data length N was varied from 50 to 1000 beats with embedding dimension m = 2 for all entropy measurements. The maximum ROC areas obtained using ApEn, SampEn and DistEn were 0.83, 0.86 and 0.94 for data lengths of 1000, 1000 and 500 beats, respectively. The results show that DistEn exhibits consistently high performance as a classification feature in comparison with ApEn and SampEn. DistEn therefore shows promise as a biomarker for detecting arrhythmia from short RR interval data.
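
A minimal sketch of the DistEn computation described above, following the commonly used recipe of histogramming pairwise Chebyshev distances between embedding vectors (the bin count and other defaults are assumptions, not values from the study):

    import numpy as np

    def distribution_entropy(x, m=2, bins=512):
        """Distribution entropy (DistEn) of a 1-D series: embed with dimension m,
        take all distinct pairwise Chebyshev distances, histogram them, and return
        the normalised Shannon entropy of that histogram (a value in [0, 1])."""
        x = np.asarray(x, dtype=float)
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])                 # embedding vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        dist = dist[np.triu_indices(n, k=1)]                           # distinct pairs only
        p, _ = np.histogram(dist, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) / np.log2(bins)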

Relevance:

30.00%

Publisher:

Abstract:

Background: The aim is to derive preference-based measures from various condition-specific descriptive health-related quality of life (HRQOL) measures. A general 2-stage method has evolved: 1) an item from each domain of the HRQOL measure is selected to form a health state classification system (HSCS); 2) a sample of health states is valued and an algorithm derived for estimating the utility of all possible health states. The aim of this analysis was to determine whether confirmatory or exploratory factor analysis (CFA, EFA) should be used to derive a cancer-specific utility measure from the EORTC QLQ-C30. Methods: Data were collected with the QLQ-C30v3 from 356 patients receiving palliative radiotherapy for recurrent or metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter based on a conceptual model (the established domain structure of the QLQ-C30: physical, role, emotional, social and cognitive functioning, plus several symptoms) and clinical considerations (views of both patients and clinicians about issues relevant to HRQOL in cancer). The dimensions determined by each method were then subjected to item response theory analysis, including Rasch analysis. Results: CFA results generally supported the proposed conceptual model, with residual correlations requiring only minor adjustments (namely, introduction of two cross-loadings) to improve model fit (increment χ2(2) = 77.78, p < .001). Although EFA revealed a structure similar to the CFA, some items had loadings that were difficult to interpret. Further assessment of dimensionality with Rasch analysis aligned the EFA dimensions more closely with the CFA dimensions. Three items exhibited floor effects (>75% of observations at the lowest score), 6 exhibited misfit to the Rasch model (fit residual > 2.5), none exhibited disordered item response thresholds, and 4 exhibited differential item functioning (DIF) by gender or cancer site. Upon inspection of the remaining items, three were considered less clinically important than the other nine. Conclusions: CFA appears more appropriate than EFA, given the well-established structure of the QLQ-C30 and its clinical relevance. Further, the confirmatory approach produced more interpretable results than the exploratory approach. Other aspects of the general method remain largely the same. The revised method will be applied to a large number of data sets as part of the international and interdisciplinary project to develop a multi-attribute utility instrument for cancer (MAUCa).

Relevance:

30.00%

Publisher:

Abstract:

In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because many uncertainties are involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy that reasonably allocates the purchased electricity amount among different electricity markets such as the spot market, the bilateral contract market, and the options market. Because risks originate from uncertainties, an approach is presented to address the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets, subject to the constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, an example is used to illustrate the basic features of the developed model and method.
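
A plausible sketch of the two ingredients only (the target profit, moment order and binning below are illustrative assumptions; the exact way the paper combines the two terms is not reproduced here):

    import numpy as np

    def lower_partial_moment(profits, target, order=2):
        """LPM_n(target) = E[max(target - profit, 0)^n]: size and probability of the shortfall."""
        shortfall = np.maximum(target - np.asarray(profits, dtype=float), 0.0)
        return np.mean(shortfall ** order)

    def shortfall_entropy(profits, target, bins=20):
        """Shannon entropy of the shortfall distribution: uncertainty of the loss."""
        shortfall = np.maximum(target - np.asarray(profits, dtype=float), 0.0)
        p, _ = np.histogram(shortfall, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))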

Relevance:

30.00%

Publisher:

Abstract:

Aims: To establish a model to measure bidirectional flow of water from a glucose oral rehydration solution (G-ORS) and a newly developed rice-based oral rehydration solution (R-ORS) using a dual isotope tracer technique in a rat perfusion model, and to measure net water, sodium and potassium absorption from the ORS. Methods: In vivo steady-state perfusion studies were carried out in normal and secreting (induced by cholera toxin) rat small intestine (n = 11 in each group). To determine bidirectional flow of water from the ORS, the animals were initially labelled with tritium, and deuterium was added to the perfusion solution. Sequential perfusate and blood samples were collected after attainment of steady-state conditions and analysed for water and electrolyte content. Results: There was a significant increase in net water absorption from the R-ORS compared with the G-ORS in both the normal (P < 0.02) and secreting intestine (P < 0.05). Water efflux was significantly reduced in the R-ORS group compared with the G-ORS group in both the normal (P < 0.01) and the secreting intestine (P < 0.01). There was an increase in sodium absorption in the R-ORS group compared with the G-ORS group. The G-ORS produced a significantly greater blood glucose level at 75 min compared with the R-ORS (P < 0.03) in the secreting intestine. Conclusions: This study demonstrates improved water absorption from a rice-based ORS in both the normal and secreting intestine. Evidence that the absorption of water may be influenced by the osmolality of the ORS was also demonstrated.

Relevance:

30.00%

Publisher:

Abstract:

The correlation dimension D2 and the correlation entropy K2 are both important quantifiers in nonlinear time series analysis. However, D2 has been used more commonly than K2 as a discriminating measure, partly because D2 is a static measure that can be easily evaluated from a time series. In many cases, however, especially those involving coloured noise, K2 is regarded as a more useful measure. Here we present an efficient algorithmic scheme to compute K2 directly from time series data and show that K2 can be a more effective measure than D2 for analysing practical time series involving coloured noise.
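
For concreteness, a naive Grassberger-Procaccia style estimate of K2 from correlation sums is sketched below; this is not the efficient algorithmic scheme referred to in the abstract, and the embedding delay, dimension and radius are assumptions:

    import numpy as np

    def correlation_sum(x, m, r, delay=1):
        """C_m(r): fraction of pairs of m-dimensional delay vectors within distance r (max norm)."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (m - 1) * delay
        emb = np.array([x[i:i + (m - 1) * delay + 1:delay] for i in range(n)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        d = d[np.triu_indices(n, k=1)]
        return np.mean(d < r)

    def k2_estimate(x, m, r, delay=1, dt=1.0):
        """K2 ~ (1/dt) * ln(C_m(r) / C_{m+1}(r)) for suitable m and small r."""
        return np.log(correlation_sum(x, m, r, delay) /
                      correlation_sum(x, m + 1, r, delay)) / dt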

Relevance:

30.00%

Publisher:

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classic examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measures, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm is well-scalable, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
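
As a hedged sketch of the scoring step only (not the search and pruning machinery described above), the significance of a candidate rule X -> A on binary data can be read off a 2x2 contingency table with Fisher's exact test:

    import numpy as np
    from scipy.stats import fisher_exact

    def rule_p_value(data, x_cols, a_col):
        """One-sided Fisher's exact p-value for the rule X -> A on a 0/1 matrix
        whose rows are observations; x_cols index the antecedent attributes."""
        data = np.asarray(data, dtype=bool)
        covers_x = data[:, x_cols].all(axis=1)
        has_a = data[:, a_col]
        table = [[int(np.sum(covers_x & has_a)), int(np.sum(covers_x & ~has_a))],
                 [int(np.sum(~covers_x & has_a)), int(np.sum(~covers_x & ~has_a))]]
        _, p = fisher_exact(table, alternative="greater")  # X increases P(A)
        return p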

Relevance:

30.00%

Publisher:

Abstract:

Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that, to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main objective of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems in both the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele. This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.

Relevance:

30.00%

Publisher:

Abstract:

The problem of determining optimal power spectral density models for earthquake excitation which satisfy constraints on total average power and zero-crossing rate and which produce the highest response variance in a given linear system is considered. The solution to this problem is obtained using linear programming methods. The resulting solutions are shown to display a highly deterministic structure and therefore fail to capture the stochastic nature of the input. A modification to the definition of critical excitation is proposed which takes into account the entropy rate as a measure of uncertainty in the earthquake loads. The resulting problem is solved using the calculus of variations and also within a linear programming framework. Illustrative examples on specifying seismic inputs for a nuclear power plant and a tall earth dam are considered, and the resulting solutions are shown to be realistic.
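
For a stationary Gaussian input, the uncertainty measure invoked here is usually quantified by the entropy rate, which depends on the power spectral density S(ω); the standard discrete-time form below is quoted from the general literature and is not necessarily the exact definition adopted in the paper:

    h = \frac{1}{2}\ln(2\pi e) + \frac{1}{4\pi}\int_{-\pi}^{\pi}\ln S(\omega)\,\mathrm{d}\omega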

Relevance:

30.00%

Publisher:

Abstract:

A solid oxide galvanic cell and a gas-solid equilibration technique have been used to measure the activities of the solutes in the α-solid solutions of silver with indium and tin. The results are consistent with the information now available for the corresponding liquid alloys, the phase diagram and the heats of mixing of the solid alloys. When the results of this study are taken together with published data for the α-solid solutions in the Ag + Cd system, it is found that the variation of the excess partial free energy of the solute with mole fraction can be correlated to the electron/atom ratio. The significant thermodynamic parameter that explains the Hume-Rothery findings in these alloys appears to be the rate of change of the excess partial free energy with composition near the phase boundary, and this in turn reflects the value of the solute-solute interaction energy.
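
The basic relation behind such galvanic-cell measurements is the Nernst-type expression linking the open-circuit EMF E to the activity being probed (standard textbook form; the specific cell arrangements used in the study are not reproduced here):

    RT \ln a_i = -\,nFE

where n is the number of electrons transferred and a_i is the activity of the probed species at the working electrode relative to the reference state.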

Relevance:

30.00%

Publisher:

Abstract:

The chemical potentials of tin in its α-solid solutions with Cu, Au and Cu + Au alloys have been measured using a gas-solid equilibration technique. The variation of the excess chemical potential of tin with its composition in the alloy is related to the solute-solute repulsive interaction, while the excess chemical potential at infinite dilution of the solute is a measure of the solvent-solute interaction energies. It is shown that the solute-solute interaction is primarily determined by the concentration of (s + p) electrons in the conduction band, although the interaction energies are smaller than those predicted by either the rigid band model or calculations based on Friedel oscillations in the potential function. Finally, the variation of the solvent-solute interaction with solvent composition in the ternary system can be accounted for in terms of a quasi-chemical treatment which takes into account the clustering of the solvent atoms around the solute.
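
The dilute-solution language used here is commonly written with the Wagner expansion of the solute activity coefficient, a standard form rather than a quotation from the paper:

    RT\ln\gamma_{\mathrm{Sn}} \approx RT\ln\gamma_{\mathrm{Sn}}^{\circ} + RT\,\varepsilon_{\mathrm{Sn}}^{\mathrm{Sn}}\,X_{\mathrm{Sn}}

with ln γ°_Sn (the infinite-dilution value) reflecting the solvent-solute interaction and the self-interaction parameter ε_Sn^Sn reflecting the solute-solute interaction.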

Relevance:

30.00%

Publisher:

Abstract:

The activity of Cr2O3 in Cr2O3-Al2O3 solid solution has been determined in the temperature range 800° to … from electromotive force measurements on the solid oxide galvanic cell Pt, Cr + Cr2O3 / Y2O3-ThO2 / Cr + CrxAl2-xO3, Pt. The activities of Cr2O3 and Al2O3 in the solid solution show both positive and negative deviations from Raoult's law. The heat and entropy of mixing of the solid solution obtained from the temperature dependence of the emf can be expressed as ΔH = X_Cr2O3 X_Al2O3 [31,700 X_Cr2O3 + 37,470 X_Al2O3] J mol^-1 and ΔS = -1.8 R [X_Cr2O3 ln X_Cr2O3 + X_Al2O3 ln X_Al2O3]. The entropy of mixing is 10% lower than that predicted by the Temkin model. The large positive heat of mixing in the Cr2O3-Al2O3 solid solution, however, suggests that this apparent entropy discrepancy originates with the clustering of positive ions on the cation sublattice. The asymmetric miscibility gap exhibited in the Cr2O3-Al2O3 system below 900° is consistent with the thermodynamic data trends recorded at the more elevated temperatures.