842 results for consistency in indexing
Abstract:
OBJECTIVE: Theoretical and empirical analysis of the items and internal consistency of the Portuguese-language version of the Social Phobia and Anxiety Inventory (SPAI-Portuguese). METHODS: Social phobia experts conducted a content analysis of the 45 items of the SPAI-Portuguese, which was administered to a sample of 1,014 university students. Item discrimination was evaluated by Student's t test; inter-item, mean, and item-to-total correlations by Pearson's coefficient; reliability was estimated by Cronbach's alpha. RESULTS: There was 100% agreement among experts concerning the 45 items. On the SPAI-Portuguese, 43 items were discriminative (p < 0.05). A few inter-item correlations between the two subscales were below 0.2. The mean inter-item correlations were 0.41 on the social phobia subscale, 0.32 on the agoraphobia subscale, and 0.32 on the SPAI-Portuguese. Item-to-total correlations were all higher than 0.3 (p < 0.001). Cronbach's alphas were 0.95 on the SPAI-Portuguese, 0.96 on the social phobia subscale, and 0.85 on the agoraphobia subscale. CONCLUSION: The content analysis of the 45 items revealed appropriateness concerning the underlying construct of the SPAI-Portuguese (social phobia, agoraphobia), with good discriminative capacity on 43 items. The mean inter-item correlations and reliability coefficients demonstrated the internal consistency and multidimensionality of the SPAI-Portuguese and its subscales. No item was suppressed in the SPAI-Portuguese, but the authors suggest that a shortened SPAI, in its different versions, could be an even more useful tool for research settings in social phobia.
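For illustration, the two statistics at the core of this analysis, Cronbach's alpha and corrected item-to-total correlations, can be computed as in the following sketch (the 1,014 x 45 score matrix here is randomly generated, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Corrected item-to-total correlations: each item vs. the total
    score with that item removed."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical data: 1,014 respondents x 45 items scored 1-5.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(1014, 45)).astype(float)
print(cronbach_alpha(scores))
print(item_total_correlations(scores)[:3])
```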
Abstract:
Any inconsistent theory whose underlying logic is classical encompasses all the sentences of its own language. As it denies everything it asserts, it is useless for explaining or predicting anything. Nevertheless, paraconsistent logic has shown that it is possible to live with contradictions and still avoid the collapse of the theory. The main point of this paper is to show that even if it is formally possible to isolate the contradictions and to live with them, this cohabitation is neither desired by working scientists nor desirable for the progress of science. Several cases from the recent history of physics and cosmology are analyzed.
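The opening claim is the classical principle of explosion (ex contradictione quodlibet): from a contradiction, every sentence of the language follows. A standard derivation, for reference:

```latex
% Principle of explosion in classical logic: from A and \neg A,
% any sentence B whatsoever follows.
\begin{align*}
  &1.\quad A         && \text{(the theory proves $A$)}\\
  &2.\quad \neg A    && \text{(the theory also proves $\neg A$)}\\
  &3.\quad A \lor B  && \text{(disjunction introduction, from 1)}\\
  &4.\quad B         && \text{(disjunctive syllogism, from 2 and 3)}
\end{align*}
```

Paraconsistent logics avoid the collapse precisely by blocking one of these steps, typically disjunctive syllogism.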
Abstract:
A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary-scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and the mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum are understood using wave-activity conservation laws that are derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities in the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which represents a generalization of the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
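For orientation, the generic form of a wave-activity conservation law and the nonacceleration statement it yields (a schematic summary; the paper's generalized theorems are derived within its multiple-scale framework):

```latex
% Generic wave-activity conservation law: A is a wave activity,
% \mathbf{F} its flux, and D the nonconservative source/sink term.
\[ \frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = D \]
% For steady (\partial A/\partial t = 0), conservative (D = 0) waves
% this reduces to the nonacceleration statement
\[ \nabla \cdot \mathbf{F} = 0 , \]
% i.e., the waves exert no net force on the resolved flow.
```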
Abstract:
We compare hypothetical and observed (experimental) willingness to pay (WTP) for a gradual improvement in the environmental performance of a marketed good (an office table). First, following usual practices in marketing research, subjects’ stated WTP for the improvement is obtained. Second, the same subjects participate in a real reward experiment designed to replicate the scenario valued in the hypothetical question. Our results show that, independently of the degree of the improvement, there are no significant median differences between stated and experimental data. However, subjects reporting extreme values of WTP (low or high) exhibit a more moderate behavior in the experiment.
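The abstract does not name its statistical test, but a paired comparison of the medians of stated versus experimental WTP would typically be run along these lines (a sketch with made-up data; the Wilcoxon signed-rank test is an assumption):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired WTP values (same subjects in both conditions).
rng = np.random.default_rng(1)
stated = rng.gamma(shape=2.0, scale=10.0, size=100)    # stated WTP
experimental = stated + rng.normal(0, 2.0, size=100)   # observed WTP

# Wilcoxon signed-rank test: H0 = no median difference between
# the paired stated and experimental values.
stat, p = wilcoxon(stated, experimental)
print(f"W={stat:.1f}, p={p:.3f}")
```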
Abstract:
Flood forecasting increasingly relies on numerical weather prediction forecasts to achieve longer lead times. One of the key difficulties emerging in constructing a decision framework for these flood forecasts is what to do when consecutive forecasts are so different that they lead to different conclusions regarding the issuing of warnings or the triggering of other action. In this opinion paper we explore some of the issues surrounding such forecast inconsistency (also known as "jumpiness", "turning points", "continuity", or the number of "swings"). We define forecast inconsistency; discuss the reasons why forecasts might be inconsistent; consider how we should analyse inconsistency, what we should do about it, and how we should communicate it; and ask whether it is a totally undesirable property. The property of consistency is increasingly emerging as a hot topic in many forecasting environments.
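One simple way to quantify such jumpiness, sketched below with hypothetical numbers, is to count how often consecutive forecasts of the same event flip across a decision threshold (this index is illustrative, not one the paper prescribes):

```python
import numpy as np

def flip_flop_count(forecasts: np.ndarray, threshold: float) -> int:
    """Count how often consecutive forecasts for the SAME event flip
    across a decision threshold (e.g., a warning level in m3/s)."""
    above = forecasts >= threshold
    return int(np.sum(above[1:] != above[:-1]))

# Hypothetical: five successive forecasts of peak discharge for one event.
runs = np.array([950.0, 1100.0, 980.0, 1250.0, 1020.0])
print(flip_flop_count(runs, threshold=1000.0))  # -> 3 swings
```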
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools: the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, and random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analysis. Statistical analysis confirmed the consistency of the method.
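A minimal sketch of the idea follows. The apen function implements the standard ApEn definition; the vapen aggregation over the (m, r) grid is an assumption, since the abstract does not specify how the "volume" is computed:

```python
import numpy as np

def apen(x: np.ndarray, m: int, r: float) -> float:
    """Approximate entropy with window length m and tolerance r."""
    def phi(mm: int) -> float:
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])  # (n, mm) windows
        # Chebyshev distance between all pairs of windows (self-matches kept).
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        counts = np.sum(dist <= r, axis=1) / n
        return float(np.mean(np.log(counts)))
    return phi(m) - phi(m + 1)

def vapen(x: np.ndarray) -> float:
    """Volumetric ApEn: ApEn aggregated over a grid of (m, r) combinations.
    The plain mean used here is an assumption."""
    sd = float(np.std(x))
    grid = [(m, f * sd) for m in (1, 2, 3) for f in np.linspace(0.1, 0.5, 9)]
    return float(np.mean([apen(x, m, r) for m, r in grid]))

t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(2)
print(vapen(np.sin(t)))                 # ordered series: low entropy
print(vapen(rng.standard_normal(400)))  # random series: higher entropy
```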
Abstract:
In this paper we have quantified the consistency of word usage in written texts by representing them as complex networks, with words taken as nodes, and measuring the degree of preservation of each node's neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law, which applies to the frequency of use. Consistency correlated positively with familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied to other tasks, such as emotion recognition.
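A toy version of the neighborhood-preservation idea is sketched below; the Jaccard overlap and the co-occurrence window are assumptions, as the abstract does not define the exact index:

```python
from collections import defaultdict

def neighborhoods(tokens: list[str], window: int = 2) -> dict[str, set]:
    """Co-occurrence network: a word's neighbors are the words that
    appear within `window` positions of it."""
    nbrs = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                nbrs[w].add(tokens[j])
    return nbrs

def consistency(text_a: list[str], text_b: list[str]) -> dict[str, float]:
    """Per-word Jaccard overlap of neighborhoods across two texts."""
    na, nb = neighborhoods(text_a), neighborhoods(text_b)
    shared = set(na) & set(nb)
    return {w: len(na[w] & nb[w]) / len(na[w] | nb[w]) for w in shared}

a = "the cat sat on the mat".split()
b = "the cat lay on the mat".split()
print(consistency(a, b))
```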
Abstract:
The self-consistency of a thermodynamical theory for hadronic systems based on non-extensive statistics is investigated. We show that it is possible to obtain a self-consistent theory according to the asymptotic bootstrap principle if the mass spectrum and the energy density increase q-exponentially. A direct consequence is the existence of a limiting effective temperature for the hadronic system. We show that this result is in agreement with experiments.
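For reference, the Tsallis q-exponential that replaces the ordinary exponential here, and the Hagedorn-style reading of the limiting temperature (schematic; the paper's q-generalized bootstrap condition is more involved):

```latex
% Tsallis q-exponential, which recovers e^x in the limit q -> 1:
\[ e_q(x) = \bigl[\, 1 + (1-q)\,x \,\bigr]^{\frac{1}{1-q}} \]
% Hagedorn-style reading: if the hadronic mass spectrum grows like
% \rho(m) \sim e_q(m/T_0), the partition-function integrand behaves as
% \rho(m)\, e^{-m/T} and converges only for T < T_0, so T_0 plays the
% role of a limiting effective temperature.
```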
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, which elastically scales up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
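The abstract does not detail Harmony's estimation model, but the control loop it describes might look like the following toy sketch, in which the stale-read estimate (Poisson writes, independent replicas) is entirely an assumption:

```python
import math

def stale_read_prob(write_rate: float, sync_delay: float, k: int) -> float:
    """Toy estimate of a stale read: all k replicas contacted by the read
    still miss the latest write (Poisson writes, independent replicas)."""
    p_unsynced = 1.0 - math.exp(-write_rate * sync_delay)
    return p_unsynced ** k

def choose_read_quorum(tolerable: float, write_rate: float,
                       sync_delay: float, n_replicas: int = 3) -> int:
    """Smallest read quorum whose estimated stale-read rate is tolerable;
    fall back to reading all replicas (strong consistency) otherwise."""
    for k in range(1, n_replicas + 1):
        if stale_read_prob(write_rate, sync_delay, k) <= tolerable:
            return k
    return n_replicas

# E.g., 5 writes/s and a 20 ms replica sync delay -> a quorum of 2 suffices.
print(choose_read_quorum(tolerable=0.01, write_rate=5.0, sync_delay=0.02))
```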
Abstract:
Increased variability in performance has been associated with the emergence of several neurological and psychiatric pathologies. However, whether and how the consistency of neuronal activity may also be indicative of an underlying pathology is still poorly understood. Here we propose a novel method for evaluating consistency from non-invasive brain recordings. We evaluate the consistency of the cortical activity recorded with magnetoencephalography in a group of subjects diagnosed with Mild Cognitive Impairment (MCI), a condition sometimes prodromal of dementia, during the execution of a memory task. We use metrics from nonlinear dynamics to evaluate the consistency of cortical regions. A representation known as a parenclitic network is constructed, in which atypical features are endowed with a network structure whose topological properties can be studied at various scales. Pathological conditions correspond to strongly heterogeneous networks, whereas typical or normative conditions are characterized by sparsely connected networks with homogeneous nodes. The analysis of this kind of network makes it possible to identify the extent to which consistency is affected in the MCI group and the focal points where MCI is especially severe. To the best of our knowledge, these results represent the first attempt at evaluating the consistency of brain functional activity using complex network theory.
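The abstract does not detail the network construction; a common parenclitic recipe, sketched here with hypothetical data, fits a normative relation between each pair of features on a control cohort and draws an edge where a subject deviates strongly from it:

```python
import numpy as np

def parenclitic_network(controls: np.ndarray, subject: np.ndarray,
                        z_thresh: float = 2.0) -> np.ndarray:
    """Parenclitic adjacency: an edge links features i, j when the subject
    deviates strongly from the normative linear relation fitted on controls."""
    n_feat = controls.shape[1]
    adj = np.zeros((n_feat, n_feat))
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            # Normative model: feature j as a linear function of feature i.
            slope, intercept = np.polyfit(controls[:, i], controls[:, j], 1)
            residuals = controls[:, j] - (slope * controls[:, i] + intercept)
            z = (subject[j] - (slope * subject[i] + intercept)) / residuals.std()
            if abs(z) > z_thresh:
                adj[i, j] = adj[j, i] = abs(z)
    return adj

rng = np.random.default_rng(3)
controls = rng.normal(size=(40, 6))   # hypothetical normative cohort
subject = rng.normal(size=6) + 3.0    # hypothetical atypical subject
print(parenclitic_network(controls, subject))
```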