883 results for Analytic Hierarchy Process (AHP)


Relevance:

30.00%

Publisher:

Abstract:

The Study Process Questionnaire (SPQ; Biggs, 1987) has been widely used in studies investigating learning behaviours in tertiary education. Many of the studies that have used the instrument have investigated the construct validity of the SPQ using a variety of factor analytic methods and techniques in an atheoretical way. In contrast to this practice, Burnett and Dart (1997) argued that the hypothesised structure of a scale should be used when assessing the construct validity of an existing instrument. This study investigated the factor structure of the SPQ using a theoretical approach and found strong support for the instrument's three 'approaches to learning' structure.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on the modelling and computation of normalization constants arose from the pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded: these may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this.

Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics.

Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7.

Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.

Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
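The path-sampling idea behind an IMCS-style estimator can be sketched in a few lines of Python. The sketch below is a minimal illustration under simplifying assumptions: it uses a one-parameter autologistic (Ising-like) model on a small lattice rather than the three-parameter model studied in the thesis, and the function names and settings are ours, not the author's. It estimates a log NC ratio by integrating the Monte Carlo mean of the canonical statistic over a grid of parameter values, the identity underlying the path sampling framework of Gelman & Meng (1998).

```python
import numpy as np

def gibbs_sweep(field, beta, rng):
    """One Gibbs sweep over a binary (0/1) lattice under a one-parameter
    autologistic model with interaction strength beta (free boundaries)."""
    n, m = field.shape
    for i in range(n):
        for j in range(m):
            s = 0.0  # sum of the four nearest neighbours
            if i > 0: s += field[i - 1, j]
            if i < n - 1: s += field[i + 1, j]
            if j > 0: s += field[i, j - 1]
            if j < m - 1: s += field[i, j + 1]
            p = 1.0 / (1.0 + np.exp(-beta * s))  # conditional P(x_ij = 1)
            field[i, j] = rng.random() < p

def log_nc_ratio(beta0, beta1, shape=(20, 20), n_grid=11,
                 n_burn=200, n_samples=200, seed=0):
    """Path-sampling estimate of log Z(beta1) - log Z(beta0): the mean of
    the canonical statistic (count of like neighbour pairs), estimated by
    MCMC at each grid point, is integrated over beta."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta0, beta1, n_grid)
    means = []
    for beta in betas:
        field = (rng.random(shape) < 0.5).astype(float)
        for _ in range(n_burn):
            gibbs_sweep(field, beta, rng)
        stats = []
        for _ in range(n_samples):
            gibbs_sweep(field, beta, rng)
            stats.append((field[:-1, :] * field[1:, :]).sum()
                         + (field[:, :-1] * field[:, 1:]).sum())
        means.append(np.mean(stats))
    return np.trapz(means, betas)  # trapezoidal rule along the path

print(log_nc_ratio(0.0, 0.5))
```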

Relevance:

30.00%

Publisher:

Abstract:

User-Web interaction has emerged as an important research area in the field of information science. In this study, we examine in depth the Web searching performed by general users. Our goal is to investigate the effects of users' cognitive styles on their Web search behaviour in relation to two broad components: information searching and information processing approaches. We used questionnaires, a measure of cognitive style, Web session logs and think-aloud protocols as the data collection instruments. Our findings show that wholistic Web users tend to adopt a top-down approach to Web searching, in which they search for a generic topic and then reformulate their queries to find specific information; they tend to prefer reading as a way of processing information. Analytic users tend to prefer a bottom-up approach to information searching, and they process information by scanning search result pages.

Relevance:

30.00%

Publisher:

Abstract:

We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.
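The commutator identity at the heart of such an analysis can be checked in a toy setting. The numpy sketch below is our illustration, not the authors' implementation: for a single spin-1/2 and an ideal pulse U(θ) = exp(-iθSx), the derivative of the propagated density matrix with respect to the flip angle is the commutator -i[Sx, UρU†], which a finite-difference computation confirms.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def pulse(rho, theta):
    """Apply an ideal RF pulse of flip angle theta about x."""
    U = expm(-1j * theta * Sx)
    return U @ rho @ U.conj().T

rho0 = Sz.copy()        # equilibrium density matrix (up to scaling)
theta = np.pi / 2       # nominal 90-degree pulse

# Analytic sensitivity: d/dtheta (U rho U†) = -i [Sx, U rho U†].
rho = pulse(rho0, theta)
d_analytic = -1j * (Sx @ rho - rho @ Sx)

# Finite-difference check, the costly alternative an analytic formalism avoids.
eps = 1e-6
d_numeric = (pulse(rho0, theta + eps) - pulse(rho0, theta - eps)) / (2 * eps)
print(np.allclose(d_analytic, d_numeric, atol=1e-8))  # True
```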

Relevance:

30.00%

Publisher:

Abstract:

The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high-quality international forum in which to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.

Relevance:

30.00%

Publisher:

Abstract:

The publication of the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric Association, 1994) introduced the notion that a life-threatening illness can be a stressor and catalyst for Posttraumatic Stress Disorder (PTSD). Since then, a solid body of research has been established investigating the post-diagnosis experience of cancer. These studies have identified a number of short- and long-term life changes resulting from a diagnosis of cancer and associated treatments. In this chapter, we discuss the psychosocial response to the cancer experience and the potential for cancer-related distress. Cancer can represent a life-threatening diagnosis that may be associated with aggressive treatments and result in physical and psychological changes. The potential for future trauma through the lasting effects of the disease and treatment, and the possibility of recurrence, can be a source of continued psychological distress. In addition to the documented adverse repercussions of cancer, we also outline the recent shift that has occurred in the psycho-oncology literature regarding positive life change, or posttraumatic growth, that is commonly reported after a diagnosis of cancer. Adopting a salutogenic framework acknowledges that the cancer experience is a dynamic psychosocial process with both negative and positive repercussions. Next, we describe the situational and individual factors that are associated with posttraumatic growth and the types of positive life change that are prevalent in this context. Finally, we discuss the implications of this research in a therapeutic context and the directions of future posttraumatic growth research with cancer survivors. This chapter presents both quantitative and qualitative research that indicates the potential for personal growth from adversity, rather than mere survival and a return to pre-diagnosis functioning. It is important to emphasise, however, that the presence of growth and the prevalence of resilience do not negate the extremely distressing nature of a cancer diagnosis for patients and their families, or the suffering that can accompany treatment regimes. Indeed, it will be explained that for growth to occur, the experience must be one that shatters previously held schemas in order to act as a catalyst for change.

Relevance:

30.00%

Publisher:

Abstract:

This research develops a new framework to be used as a tool for analysing and designing walkable communities. The literature review draws on the work of other researchers, combining their findings with the theory of activity nodes, and considers how a framework might be applied on a more global basis. The methodology develops a set of criteria through the analysis of noted successful case studies, and these criteria are then tested against an area with very low walking rates in Brisbane, Australia. Results of the study suggest that, in addition to the accepted criteria of connectivity, accessibility, safety, security, and path quality, further criteria in the form of planning hierarchy, activity nodes and climate mitigation could be added to allow the framework to cover a broader context. Of particular note is the development of the nodal approach, which allows simple and effective analysis of existing conditions and may also prove effective as a tool for the planning and design of walkable communities.

Relevance:

30.00%

Publisher:

Abstract:

Traumatic experiences can have a powerful impact on individuals and communities, but the relationship between perceptions of beneficial and pathological outcomes is not known. This meta-analysis therefore examined both the strength and the linearity of the relationship between symptoms of posttraumatic stress disorder (PTSD) and perceptions of posttraumatic growth (PTG), as well as identifying the potential moderating roles of trauma type and age. Literature searches across all languages were conducted using the ProQuest, Wiley Interscience, ScienceDirect, Informaworld and Web of Science databases. Linear and quadratic (curvilinear) rs as well as βs were analysed. Forty-two studies (N = 11,469) that examined both PTG and symptoms of PTSD were included in the meta-analytic calculations. The combined studies yielded a significant linear relationship between PTG and PTSD symptoms (r = .315, CI = 0.299, 0.331), but also a significantly stronger (as tested by Fisher's transformation) curvilinear relationship (r = .372, CI = 0.353, 0.391). The strength and linearity of these relationships differed according to trauma type and age. The results remind those working with traumatised people that positive and negative post-trauma outcomes can co-occur. A focus on PTSD symptoms alone may limit or slow recovery and mask the potential for growth.
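For readers unfamiliar with the test mentioned above, Fisher's r-to-z transformation can be sketched as follows. This is a simplified illustration: it compares two correlations as if they were independent, whereas comparing linear and curvilinear coefficients estimated on the same pooled sample requires a dependent-correlation test; the values are merely the point estimates quoted above.

```python
import numpy as np

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return np.arctanh(r)

def z_difference(r1, n1, r2, n2):
    """z statistic for the difference between two independent
    correlations after Fisher transformation."""
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r2) - fisher_z(r1)) / se

# Point estimates from the abstract, treated as independent for illustration.
print(z_difference(0.315, 11469, 0.372, 11469))
```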

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither available analytically nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, together with the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
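The design objective itself (though not the indirect-inference machinery) can be illustrated with a tractable death process. The sketch below is entirely our own toy construction, with hypothetical names and parameter values: the utility is the negative posterior variance of the death rate, its expectation is approximated by prior predictive simulation, and the observation time is chosen by a crude grid search. The paper replaces the tractable posterior step with an II posterior when the likelihood is intractable.

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 50  # initial population size (assumed)

def draw_prior(size):
    """Prior draws for the death rate mu (hypothetical Gamma prior)."""
    return rng.gamma(2.0, 0.5, size)

def simulate(mu, t):
    """Death process: survivors at time t are Binomial(N0, exp(-mu t))."""
    return rng.binomial(N0, np.exp(-mu * t))

def posterior_var(y, t, n_draws=2000):
    """Importance-sampling approximation to Var(mu | y); tractable here,
    replaced by an indirect-inference posterior in the intractable case."""
    mu = draw_prior(n_draws)
    p = np.exp(-mu * t)
    logw = y * np.log(p) + (N0 - y) * np.log1p(-p)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    m = np.sum(w * mu)
    return np.sum(w * (mu - m) ** 2)

def expected_utility(t, n_outer=200):
    """Mean of -posterior variance over prior predictive simulations."""
    return np.mean([-posterior_var(simulate(draw_prior(1)[0], t), t)
                    for _ in range(n_outer)])

times = np.linspace(0.1, 3.0, 10)  # candidate observation times
best = max(times, key=expected_utility)
print(f"approximately optimal observation time: {best:.2f}")
```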

Relevance:

30.00%

Publisher:

Abstract:

In this chapter we describe a critical fairytales unit taught to 4.5- to 5.5-year-olds in a context of intensifying pressure to raise literacy achievement. The unit was infused with lessons on reinterpreted fairytales, followed by process drama activities built around a sophisticated picture book, Beware of the Bears (MacDonald, 2004). The latter entailed a text-analytic approach to critical literacy derived from systemic functional linguistics (Halliday, 1978; Halliday & Matthiessen, 2004). This approach provides a way of analysing how words and discourse are used to represent the world in a particular way and to shape reader relations with the author in a particular field (Janks, 2010).

Relevance:

30.00%

Publisher:

Abstract:

Existing techniques for automated discovery of process models from event logs generally produce flat process models. Thus, they fail to exploit the notion of subprocess, as well as the error handling and repetition constructs, provided by contemporary process modeling notations such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of hierarchical BPMN models containing interrupting and non-interrupting boundary events and activity markers. The technique employs functional and inclusion dependency discovery techniques in order to elicit a process-subprocess hierarchy from the event log. Given this hierarchy and the projected logs associated with each node in the hierarchy, parent process and subprocess models are then discovered using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. By employing approximate dependency discovery techniques, it is possible to filter out noise in the event log arising, for example, from data entry errors or missing events. A validation with one synthetic and two real-life logs shows that process models derived by the proposed technique are more accurate and less complex than those derived with flat process discovery techniques. Meanwhile, a validation on a family of synthetically generated logs shows that the technique is resilient to varying levels of noise.
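The dependency-discovery step can be pictured on a toy event log. The sketch below is illustrative only; the attribute names and checks are our assumptions, not the paper's implementation. It tests a functional dependency (each subprocess case references exactly one parent case) and an inclusion dependency (every referenced parent occurs as a case), the kind of cues used to elicit a process-subprocess hierarchy.

```python
# Toy event log: each event is a dict with a case id, an activity, and
# optionally an attribute referencing a parent case (names are assumed).
log = [
    {"case": "c1", "activity": "Receive order"},
    {"case": "s1", "activity": "Check item", "parent": "c1"},
    {"case": "s2", "activity": "Check item", "parent": "c1"},
    {"case": "c1", "activity": "Ship order"},
]

def functional_dependency(events, lhs, rhs):
    """True if attribute lhs determines rhs wherever both are present."""
    seen = {}
    for e in events:
        if lhs in e and rhs in e:
            if seen.setdefault(e[lhs], e[rhs]) != e[rhs]:
                return False
    return True

def inclusion_dependency(events, child_attr, parent_attr):
    """True if every child_attr value also occurs as a parent_attr value."""
    parents = {e[parent_attr] for e in events if parent_attr in e}
    children = {e[child_attr] for e in events if child_attr in e}
    return children <= parents

print(functional_dependency(log, "case", "parent"))  # True
print(inclusion_dependency(log, "parent", "case"))   # True
```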

Relevance:

30.00%

Publisher:

Abstract:

Recently, we demonstrated a very general route to monolithic macroporous materials prepared without the use of templates (Rajamathi et al. J. Mater. Chem. 2001, 11, 2489). The route involves finding a precursor containing two metals, A and B, whose oxides are largely immiscible. Firing of the precursor followed by suitable sintering results in a monolith from which one of the oxide phases can be chemically leached out to yield a macroporous mass of the other oxide phase. The metals A and B that we employed in the demonstration were Ni and Zn. From the NiO-ZnO monolith that was obtained by decomposing the precursor, ZnO could be leached out at high pH to yield macroporous NiO. In the present work, we show that combustion-chemical (also called self-propagating) decomposition of a mixture of Ni and Zn nitrates with urea as a fuel yields an intimate mixture of the oxides that can be sintered and leached with alkali to form a macroporous NiO monolith. The new process that we present here thereby avoids the need for a crystalline single-source precursor. A novel and unanticipated aspect of the present work is that the combination of high temperatures and rapid quenching associated with combustion synthesis results in an intimate mixture of wurtzite ZnO and the metastable rock-salt Ni₁₋ₓZnₓO, where x is about 0.3. Leaching this monolith with alkali gives a macroporous mass of rock-salt Ni₁₋ₓZnₓO, which upon reduction in H₂/Ar forms macroporous Ni and ZnO. There are thus two stages in the process that lead to two modes of pore formation. The first is associated with the leaching of ZnO by alkali. The second is associated with the reduction of porous Ni₁₋ₓZnₓO to give porous Ni and ZnO.

Relevance:

30.00%

Publisher:

Abstract:

We show that as n changes, the characteristic polynomial of the n x n random matrix with i.i.d. complex Gaussian entries can be described recursively through a process analogous to Polya's urn scheme. As a result, we get a random analytic function in the limit, which is given by a mixture of Gaussian analytic functions. This suggests another reason why the zeros of Gaussian analytic functions and the Ginibre ensemble exhibit similar local repulsion, but different global behavior. Our approach gives new explicit formulas for the limiting analytic function.
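For context, the ensemble in question is easy to simulate. The short sketch below (an illustration of the ensemble only, not of the urn-scheme recursion) samples a scaled Ginibre matrix and inspects its eigenvalues, which are the zeros of the characteristic polynomial discussed above; by the circular law they fill the unit disk.

```python
import numpy as np

# Sample an n x n matrix with i.i.d. complex Gaussian entries, scaled so
# that the eigenvalues (zeros of the characteristic polynomial) converge
# to the unit disk.
n = 500
rng = np.random.default_rng(0)
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
eig = np.linalg.eigvals(G)
print(np.mean(np.abs(eig) <= 1.0))  # close to 1 by the circular law
```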

Relevance:

30.00%

Publisher:

Abstract:

A new analytic solution has been obtained to the complete Fokker-Planck equation for solar flare particle propagation, including the effects of convection, energy change, corotation, and diffusion with κr = constant and κθ ∝ r². It is assumed that the particles are injected impulsively at a single point in space, and that a boundary exists beyond which the particles are free to escape. Several solar flare particle events have been observed with the Caltech Solar and Galactic Cosmic Ray Experiment aboard OGO-6. Detailed comparisons of the predictions of the new solution with these observations of 1-70 MeV protons show that the model adequately describes both the rise and decay times, indicating that κr = constant is a better description of conditions inside 1 AU than is κr ∝ r. With an outer boundary at 2.7 AU, a solar wind velocity of 400 km/sec, and a radial diffusion coefficient κr ≈ 2-8 × 10²⁰ cm²/sec, the model gives reasonable fits to the time profiles of 1-10 MeV protons from "classical" flare-associated events. It is not necessary to invoke a scatter-free region near the sun in order to reproduce the fast rise times observed for directly-connected events. The new solution also yields a time evolution for the vector anisotropy which agrees well with previously reported observations.

In addition, the new solution predicts that, during the decay phase, a typical convex spectral feature initially at energy T₀ will move to lower energies at an exponential rate given by TKINK = T₀ exp(-t/τKINK). Assuming adiabatic deceleration and a boundary at 2.7 AU, the solution yields τKINK ≈ 100 h, which is faster than the measured ~200 h time constant and slower than the adiabatic rate of ~78 h at 1 AU. Two possible explanations are that the boundary is at ~5 AU or that some other energy-change process is operative.
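A crude numerical caricature of the transport model helps make the quoted parameter values concrete. The sketch below keeps only diffusion with constant κr and convection (omitting energy change, corotation and the κθ term) and uses the boundary distance, solar wind speed and diffusion coefficient quoted above; the explicit finite-difference scheme, injection profile and boundary handling are our simplifications, not the paper's analytic solution.

```python
import numpy as np

AU = 1.496e13                         # cm per AU
kappa = 4e20 / AU**2 * 3600.0         # radial diffusion coefficient, AU^2/hr
V = 4.0e7 / AU * 3600.0               # 400 km/s solar wind, in AU/hr

dr, r_max, t_max = 0.01, 2.7, 100.0   # grid step (AU), boundary (AU), hours
r = np.arange(dr, r_max + dr, dr)
U = np.exp(-((r - 0.05) ** 2) / (2 * 0.02**2)) / r**2  # impulsive injection near the Sun
r_half = 0.5 * (r[1:] + r[:-1])

dt = 0.2 * dr**2 / kappa              # explicit-scheme stability margin
i_obs = int(round(1.0 / dr)) - 1      # observer at 1 AU

times, counts, t = [], [], 0.0
while t < t_max:
    # Flux F = kappa dU/dr - V U on a staggered grid, then the spherical
    # divergence (1/r^2) d(r^2 F)/dr for the interior points.
    F = kappa * (U[1:] - U[:-1]) / dr - V * 0.5 * (U[1:] + U[:-1])
    U[1:-1] += dt * (r_half[1:]**2 * F[1:] - r_half[:-1]**2 * F[:-1]) / (dr * r[1:-1]**2)
    U[0] = U[1]                       # reflecting inner boundary
    U[-1] = 0.0                       # free-escape boundary at 2.7 AU
    t += dt
    times.append(t)
    counts.append(U[i_obs])

i_peak = int(np.argmax(counts))
print(f"rise to peak intensity at 1 AU after ~{times[i_peak]:.0f} h")
```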