888 results for Multiple scales methods
Abstract:
In multilevel analyses, problems may arise when using Likert-type scales at the lowest level of analysis. Specifically, increases in variance should lead to greater censoring for the groups whose true scores fall at either end of the distribution. The current study used simulation methods to examine the influence of single-item Likert-type scale usage on ICC(1), ICC(2), and group-level correlations. Results revealed substantial underestimation of ICC(1) when using Likert-type scales with common response formats (e.g., 5 points). ICC(2) and group-level correlations were also underestimated, but to a lesser extent. Finally, the magnitude of underestimation was driven in large part by an interaction between Likert-type scale usage and the amounts of within- and between-group variance. © Sage Publications.
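A minimal simulation sketch of the censoring mechanism described above, in Python with illustrative variance parameters (not the study's actual design): latent scores are rounded onto a 5-point response format, and the one-way ANOVA estimate of ICC(1) shrinks relative to the continuous case.

```python
import numpy as np

rng = np.random.default_rng(42)

def icc1(scores, groups, k):
    """One-way ANOVA estimator of ICC(1); equal group size k assumed."""
    ids = np.unique(groups)
    means = np.array([scores[groups == g].mean() for g in ids])
    msb = k * ((means - scores.mean()) ** 2).sum() / (len(ids) - 1)
    msw = sum(((scores[groups == g] - m) ** 2).sum()
              for g, m in zip(ids, means)) / (len(scores) - len(ids))
    return (msb - msw) / (msb + (k - 1) * msw)

n_groups, k = 100, 10
groups = np.repeat(np.arange(n_groups), k)
latent = np.repeat(rng.normal(0, 1.0, n_groups), k)   # between-group variance 1
latent = latent + rng.normal(0, 2.0, n_groups * k)    # within-group variance 4

# Censor onto a 1-5 Likert format: groups whose true scores sit near either
# end of the distribution pile up at the endpoints.
likert = np.clip(np.round(latent + 3), 1, 5)

print(f"population ICC(1) = {1 / (1 + 4):.2f}")       # sb^2 / (sb^2 + sw^2)
print(f"continuous estimate: {icc1(latent, groups, k):.3f}")
print(f"5-point Likert estimate: {icc1(likert, groups, k):.3f}")
```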
Abstract:
The evaluation and selection of industrial projects before an investment decision is customarily done using marketing, technical and financial information. Subsequently, environmental impact assessment and social impact assessment are carried out mainly to satisfy the statutory agencies. Because of stricter environmental regulations in developed and developing countries, impact assessment quite often suggests alternative sites, technologies, designs, and implementation methods as mitigating measures. This causes considerable delay in completing project feasibility analysis and selection, as the complete analysis must be repeated until the statutory regulatory authority approves the project. Moreover, project analysis through the above process often results in a sub-optimal project, as financial analysis may eliminate better options; the more environmentally friendly alternative will usually be more cost intensive. In this circumstance, this study proposes a decision support system that analyses projects with respect to market, technicalities, and social and environmental impact in an integrated framework using the analytic hierarchy process (AHP), a multiple-attribute decision-making technique. This not only reduces the duration of project evaluation and selection, but also helps select the optimal project for the organization for sustainable development. The entire methodology has been applied to a cross-country oil pipeline project in India and its effectiveness has been demonstrated. © 2005 Elsevier B.V. All rights reserved.
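As a sketch of the AHP step, criterion weights can be obtained from a pairwise comparison matrix via its principal eigenvector, with a consistency check; the matrix below is hypothetical, not taken from the pipeline case study.

```python
import numpy as np

# Hypothetical pairwise comparisons of four criteria (market, technical,
# social, environmental) on Saaty's 1-9 scale; values are illustrative only.
A = np.array([[1,   3,   5,   4],
              [1/3, 1,   3,   2],
              [1/5, 1/3, 1,   1/2],
              [1/4, 1/2, 2,   1]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                            # priority vector (criterion weights)

n = A.shape[0]
ci = (eigvals[i].real - n) / (n - 1)    # consistency index
cr = ci / 0.90                          # Saaty's random index RI = 0.90 for n = 4
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below 0.10 is conventionally taken to mean the pairwise judgements are acceptably consistent; alternatives are then scored against each weighted criterion in the same way.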
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems, by use of digital signal processing techniques, are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
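One generic DSP-style inversion, in the spirit of the thesis but not necessarily its exact algorithm, samples a probability generating function at roots of unity and applies an inverse FFT; the M/M/1 queue, whose inverse is known in closed form, serves as a check.

```python
import numpy as np

def invert_pgf(pgf, n_terms, N=4096):
    """Recover p_0..p_{n_terms-1} from a PGF by an IFFT on the unit circle."""
    z = np.exp(2j * np.pi * np.arange(N) / N)
    return np.fft.ifft(pgf(z)).real[:n_terms]

rho = 0.7
mm1_pgf = lambda z: (1 - rho) / (1 - rho * z)   # M/M/1 queue-length PGF
p = invert_pgf(mm1_pgf, 10)
exact = (1 - rho) * rho ** np.arange(10)        # known inverse: (1-rho) rho^n
print(np.abs(p - exact).max())                  # aliasing error ~ rho^N, negligible
```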
Abstract:
The rapid global loss of biodiversity has led to a proliferation of systematic conservation planning methods. In spite of their utility and mathematical sophistication, these methods only provide approximate solutions to real-world problems where there is uncertainty and temporal change. The consequences of errors in these solutions are seldom characterized or addressed. We propose a conceptual structure for exploring the consequences of input uncertainty and oversimplified approximations to real-world processes for any conservation planning tool or strategy. We then present a computational framework based on this structure to quantitatively model species representation and persistence outcomes across a range of uncertainties. These include factors such as land costs, landscape structure, species composition and distribution, and temporal changes in habitat. We demonstrate the utility of the framework using several reserve selection methods including simple rules of thumb and more sophisticated tools such as Marxan and Zonation. We present new results showing how outcomes can be strongly affected by variation in problem characteristics that are seldom compared across multiple studies. These characteristics include number of species prioritized, distribution of species richness and rarity, and uncertainties in the amount and quality of habitat patches. We also demonstrate how the framework allows comparisons between conservation planning strategies and their response to error under a range of conditions. Using the approach presented here will improve conservation outcomes and resource allocation by making it easier to predict and quantify the consequences of many different uncertainties and assumptions simultaneously. Our results show that without more rigorously generalizable results, it is very difficult to predict the amount of error in any conservation plan. These results imply the need for standard practice to include evaluating the effects of multiple real-world complications on the behavior of any conservation planning method.
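A simple "rule of thumb" of the kind such a framework can stress-test is greedy reserve selection: repeatedly add the site that covers the most not-yet-represented species per unit cost. The data below are synthetic, and the rule is a sketch rather than Marxan's or Zonation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

occ = rng.random((200, 30)) < 0.08       # 200 candidate sites x 30 species
cost = rng.uniform(1.0, 5.0, 200)        # e.g. land cost per site

covered = np.zeros(30, dtype=bool)
reserve = []
while not covered.all():
    gain = (occ & ~covered).sum(axis=1) / cost   # new species per unit cost
    gain[reserve] = -1.0                         # never pick a site twice
    best = int(np.argmax(gain))
    if gain[best] <= 0:
        break                                    # remaining species occur nowhere
    reserve.append(best)
    covered |= occ[best]

print(f"{len(reserve)} sites, {covered.sum()}/30 species, "
      f"total cost {cost[reserve].sum():.1f}")
```

Rerunning such a rule while perturbing the inputs (costs, occurrences, habitat quality) is exactly the kind of uncertainty analysis the framework is meant to make routine.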
Abstract:
1. Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test. A goodness-of-fit test of the line should always be carried out: r² estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test whether the slope of the line is significantly different from zero. 2. Always check whether the data collected fit the assumptions for regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary. 3. If the regression line is to be used for prediction, it is important to determine whether the prediction involves an individual y value or a mean. Care should be taken if predictions are made close to the extremities of the data; they are subject to considerable error if x falls beyond the range of the data. Multiple predictions require correction of the P values. 4. If several individual regression lines have been calculated from a number of similar sets of data, consider whether they should be combined to form a single regression line. 5. If the data exhibit a degree of curvature, then fitting a higher-order polynomial curve may provide a better fit than a straight line. In this case, a test of whether the data depart significantly from a linear regression should be carried out.
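A short sketch of this workflow on toy data (values are illustrative), covering the r² estimate, the t-test of the slope, and a prediction interval for an individual y value:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3, 11.9, 14.2, 15.8])

res = stats.linregress(x, y)
print(f"y = {res.intercept:.2f} + {res.slope:.2f}x")
print(f"r^2 = {res.rvalue ** 2:.3f}")          # strength of the relationship
print(f"slope t-test p = {res.pvalue:.2e}")    # H0: slope = 0

# 95% prediction interval for an individual y at x0 (wider than the interval
# for a mean response, and widest near the extremities of the data).
x0, n = 4.5, len(x)
resid = y - (res.intercept + res.slope * x)
s = np.sqrt(resid @ resid / (n - 2))
se = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
t = stats.t.ppf(0.975, n - 2)
print(f"individual y at x = {x0}: {res.intercept + res.slope * x0:.2f} ± {t * se:.2f}")
```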
Abstract:
PCA/FA is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities such as patients with particular disorders, and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of 'factors' or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the 'validation' of questionnaires that attempt to relate subjective aspects of a patient's experience to more objective measures of vision.
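A minimal sketch of the PCA step on synthetic questionnaire data, assuming two latent factors each driving four items; the loadings on the first two components recover the grouping, which is the pattern looked for when 'validating' an instrument.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical questionnaire: 100 patients x 8 items; items 0-3 share one
# latent factor and items 4-7 another, plus item-level noise.
f1, f2 = rng.normal(size=(2, 100))
X = np.column_stack([f1 + 0.5 * rng.normal(size=100) for _ in range(4)] +
                    [f2 + 0.5 * rng.normal(size=100) for _ in range(4)])

pca = PCA().fit(X)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))

# Item loadings on the first two components: the two item groups separate,
# supporting a two-factor structure for the instrument.
loadings = pca.components_[:2].T * np.sqrt(pca.explained_variance_[:2])
print(np.round(loadings, 2))
```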
Abstract:
This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
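For concreteness, a generic implementation of Dempster's rule of combination over a small frame of discernment; the water-quality classes and mass values below are illustrative, not the thesis's elicited knowledge base.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozensets."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y                 # mass assigned to the empty set
    k = 1.0 - conflict                        # normalisation constant
    return {s: v / k for s, v in raw.items()}, conflict

theta = frozenset({"good", "fair", "poor"})   # frame of discernment
# Two "sensor" taxa lending (partly conflicting) support to the classes:
m1 = {frozenset({"good"}): 0.6, theta: 0.4}   # a simple support function
m2 = {frozenset({"poor"}): 0.5, frozenset({"good", "fair"}): 0.3, theta: 0.2}

m, conflict = combine(m1, m2)
print("conflict:", round(conflict, 2))
for s, v in sorted(m.items(), key=lambda kv: -kv[1]):
    print(set(s), round(v, 3))
```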
Abstract:
This thesis set out to develop an objective analysis programme that correlates with subjective grades but has improved sensitivity and reliability in its measures, so that the possibility of early detection and reliable monitoring of changes in anterior ocular surfaces (bulbar hyperaemia, palpebral redness, palpebral roughness and corneal staining) could be increased. The sensitivity of the program was 20× greater than subjective grading by optometrists. The reliability was found to be optimal (r = 1.0), with subjective grading up to 144× more variable (r = 0.08). Objective measures were used to create formulae for an overall 'objective grade' (per surface) equivalent to those displayed by the CCLRU or Efron scales. The correlation between the formulated objective versus subjective grades was high, with adjusted r² up to 0.96. Baseline levels of objective grade were determined across four age groups (5–85 years, n = 120) so that in practice a comparison against 'normal limits' could be made. Differences between the age groups were found for bulbar hyperaemia (p < 0.001), and also for palpebral redness and roughness (p < 0.001). The objective formulae were then applied to the investigation of diurnal variation in order to account for any change that may affect the baseline. Increases in bulbar hyperaemia and palpebral redness were found between examinations in the morning and evening, and correction factors were recommended. The program was then applied to clinical situations in the form of a contact lens trial and an investigation into iritis and keratoconus, where it successfully recognised various surface changes. This programme could become a valuable tool, greatly improving the chances of early detection of anterior ocular abnormalities and facilitating reliable monitoring of disease progression in clinical as well as research environments.
Abstract:
In previous Statnotes, the application of correlation and regression methods to the analysis of two variables (X, Y) was described. These methods can be used to determine whether there is a linear relationship between the two variables, whether the relationship is positive or negative, to test the degree of significance of the linear relationship, and to obtain an equation relating Y to X. This Statnote extends the methods of linear correlation and regression to situations where there are two or more X variables, i.e., 'multiple linear regression'.
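A minimal sketch of multiple linear regression by ordinary least squares, with two X variables and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 50
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(0, 0.5, n)   # true model plus noise

X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ beta
r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("coefficients (b0, b1, b2):", np.round(beta, 2))
print("R^2:", round(r2, 3))
```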
Abstract:
Objective: To investigate the dynamics of communication within the primary somatosensory neuronal network. Methods: Multichannel EEG responses evoked by median nerve stimulation were recorded from six healthy participants. We investigated the directional connectivity of the evoked responses by assessing the Partial Directed Coherence (PDC) among five neuronal nodes (brainstem, thalamus and three in the primary sensorimotor cortex), which had been identified by using the Functional Source Separation (FSS) algorithm. We analyzed directional connectivity separately in the low (1–200 Hz, LF) and high (450–750 Hz, HF) frequency ranges. Results: LF forward connectivity showed peaks at 16, 20, 30 and 50 ms post-stimulus. An estimate of the strength of connectivity was modulated by feedback involving cortical and subcortical nodes. In HF, forward connectivity showed peaks at 20, 30 and 50 ms, with no apparent feedback-related strength changes. Conclusions: In this first non-invasive study in humans, we documented directional connectivity across the subcortical and cortical somatosensory pathway, discriminating transmission properties within the LF and HF ranges. Significance: The combined use of FSS and PDC in a simple protocol such as median nerve stimulation sheds light on how high and low frequency components of the somatosensory evoked response are functionally interrelated in sustaining somatosensory perception in healthy individuals. Thus, these components may potentially be explored as biomarkers of pathological conditions. © 2012 International Federation of Clinical Neurophysiology.
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor only in isolation. This is important since the data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks, which are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared to two components of landscape pattern: spatial heterogeneity and the proportion of land cover class area. The interactions and their effect on the final land cover map were described using landscape metrics to measure landscape pattern and classification accuracy (response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response of those variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they affected only the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution. The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to make generalisations about the impact of processing on landscape pattern: only two processing methods were tested here, and untested processing methods will potentially result in even greater spatial uncertainty. © 2013 Elsevier B.V.
Abstract:
Purpose – This paper aims to focus on developing critical understanding in human resource management (HRM) students in Aston Business School, UK. The paper reveals that innovative teaching methods encourage deep approaches to study, an indicator of students reaching their own understanding of material and ideas. This improves student employability and satisfies employer need. Design/methodology/approach – Student response to two second-year business modules, matched for high student approval rating, was collected through focus group discussion. One module was taught using enquiry-based learning (EBL) and the story method, whilst the other used traditional teaching methods. Transcripts were analysed and compared using the structure of the ASSIST measure. Findings – Critical understanding and transformative learning can be developed through the innovative teaching methods of EBL and the story method. Research limitations/implications – The limitation is that this is a single case study comparing and contrasting two business modules. The implication is that the study should be replicated and developed in different learning settings, so that there are multiple data sets to confirm the research finding. Practical implications – Future curriculum development, especially in terms of HE, still needs to encourage students and lecturers to understand more about the nature of knowledge and how to learn. The application of EBL and the story method is described in a module case study – "Strategy for Future Leaders". Originality/value – This is a systematic and comparative study to improve understanding of how students and lecturers learn and of the context in which the learning takes place.
Abstract:
Gene expression is frequently regulated by multiple transcription factors (TFs). Thermostatistical methods allow for a quantitative description of the interactions between TFs, RNA polymerase and DNA, and of their impact on transcription rates. We illustrate three different scales of the thermostatistical approach: the microscale of TF molecules, the mesoscale of promoter energy levels and the macroscale of transcriptionally active and inactive cells in a cell population. We demonstrate the versatility of combinatorial transcriptional activation by exemplifying logic functions, such as AND and OR gates. We discuss a metric for cell-to-cell transcriptional activation variability known as Fermi entropy. The suitability of thermostatistical modeling is illustrated by describing experimental data on the transcriptional induction of NF-κB and the c-Fos protein.
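A toy thermostatistical sketch of the combinatorial activation described above, using illustrative statistical weights rather than fitted parameters: promoter states are weighted by effective TF "concentrations", and AND- versus OR-like behaviour follows from which bound states recruit polymerase.

```python
def p_and(tf1, tf2, coop=5.0):
    """P(active) when polymerase is recruited only with both TFs bound."""
    w00, w10, w01, w11 = 1.0, tf1, tf2, tf1 * tf2 * coop
    return w11 / (w00 + w10 + w01 + w11)

def p_or(tf1, tf2):
    """P(active) when any bound TF suffices to recruit polymerase."""
    w00, w10, w01, w11 = 1.0, tf1, tf2, tf1 * tf2
    return (w10 + w01 + w11) / (w00 + w10 + w01 + w11)

for tf1, tf2 in [(0.1, 0.1), (10.0, 0.1), (10.0, 10.0)]:
    print(f"TF1={tf1:5}, TF2={tf2:5}: "
          f"AND={p_and(tf1, tf2):.2f}, OR={p_or(tf1, tf2):.2f}")
```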
Abstract:
This paper describes the knowledge elicitation and knowledge representation aspects of a system being developed to help with the design and maintenance of relational databases. The domain is large and includes algorithmic components. In addition, it contains multiple experts, but any given expert's knowledge of this large domain is only partial. The paper discusses the methods and techniques used for knowledge elicitation, which was based on a "broad and shallow" approach at first, moving to a "narrow and deep" one later, and describes the models used for knowledge representation, which were based on a layered "generic and variants" approach. © 1995.
Abstract:
With its low-power operation and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). It is expected that, with an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs could coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can suffer significant degradation under the 802.15.4 channel access scheme, which has a large impact on system performance. In this thesis we are motivated to investigate the effectiveness of 802.15.4 networks supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov chain based analytic model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can sufficiently support most WSN applications with their various functionalities. After the investigation of a single network, the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapped communication ranges is investigated in the next part of the thesis. Both non-sleep and sleep modes are investigated under different channel conditions by analytic and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytic model is validated by simulations and could be used to obtain optimal parameter settings before WSN deployment to eliminate interference risks.