903 results for single test electron model


Relevance: 100.00%

Publisher:

Abstract:

Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) in the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group, single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers.
However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
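The core of the simulation design above, estimating a Type I error rate as the proportion of null-true replications in which a test rejects, can be illustrated with a minimal sketch. This is not the study's multiple-group CFA design; it uses a simple two-sided two-sample z-test under a true null purely to show the Monte Carlo error-rate logic, and all parameter values are illustrative.

```python
import math
import random

def type1_error_rate(n=50, reps=2000, seed=1):
    """Monte Carlo estimate of the Type I error rate of a two-sided
    two-sample z-test (known unit variances) when the null is true."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(reps):
        # both groups drawn from the SAME distribution, so the null holds
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n)]
        se = math.sqrt(1.0 / n + 1.0 / n)
        z = (sum(a) / n - sum(b) / n) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / reps
```

With the test correctly calibrated, the estimated rate should hover around the nominal 5%; the study's point is that categorization and censoring can push the analogous rate for chi-square tests well away from this nominal level.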

Relevance: 100.00%

Publisher:

Abstract:

This paper is part of an extensive body of work on the technological development, experimental analysis and numerical modeling of steel fibre reinforced concrete pipes. The first part ("Steel fibre reinforced concrete pipes. Part 1: technological analysis of the mechanical behavior") dealt with the technological development of the experimental campaign, the test procedure and the discussion of the structural behavior obtained for each fibre dosage used. This second part deals with the numerical modeling. A numerical model called MAP, which simulates the behavior of fibre reinforced concrete pipes of small to medium diameter, is introduced, and the bases of the numerical model are described. Subsequently, the experimental results are contrasted with those produced by the numerical model, with excellent correlation. It was concluded that the numerical model is a useful design tool for this type of pipe, which represents an important step toward establishing structural fibres as reinforcement for concrete pipes. Finally, the design of the optimal amount of fibres for a pipe with a diameter of 400 mm is presented as an illustrative example of strategic interest.

Relevance: 100.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are thought to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points above. For these reasons we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations against the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH–RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a powerful new investigative tool, and we show that all the observed correlations (PR–RH, PR–MH, PR–T, PR–LX, . . . ) are now well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
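The spectral cut-off argument above, the balance between turbulent acceleration and radiative losses setting a maximum electron energy and hence a maximum synchrotron frequency, can be sketched numerically. This is a rough back-of-the-envelope illustration using standard synchrotron/inverse-Compton loss formulae, not the thesis's calibrated model; the acceleration rate χ and field strength chosen below are assumed example values.

```python
import math

# Standard cgs constants (illustrative precision)
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]

def cutoff_lorentz_factor(chi, B_gauss, z):
    """Balance systematic turbulent acceleration (dγ/dt = χ γ) against
    synchrotron + inverse-Compton losses (dγ/dt = -β γ²): γ_c = χ/β."""
    U_B = B_gauss**2 / (8.0 * math.pi)      # magnetic energy density [erg/cm^3]
    U_CMB = 4.2e-13 * (1.0 + z)**4          # CMB photon energy density [erg/cm^3]
    beta = 4.0 * SIGMA_T * (U_B + U_CMB) / (3.0 * M_E * C)
    return chi / beta

def synchrotron_char_freq(gamma, B_gauss):
    """Characteristic synchrotron frequency ν ≈ 4.2e6 γ² B [Hz] (pitch angle π/2)."""
    return 4.2e6 * gamma**2 * B_gauss
```

For an assumed acceleration rate χ ~ 1e-16 s⁻¹ (acceleration time ~0.3 Gyr), B = 1 µG and z = 0, this gives γ_c of a few thousand and a characteristic frequency of order 100 MHz, which is the sense in which low-frequency surveys should catch less efficient (and hence more common) phenomena.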

Relevance: 100.00%

Publisher:

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface and, depending on the breaker characteristics, different mechanisms for air entrainment are usually observed. Several laboratory experiments have been performed to investigate the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. The recent advancement of numerical models has given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). Single-phase numerical models, in which the constitutive equations are solved only for the liquid phase, neglect the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing into the air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures, 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998).
In the first part of the work both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical resolution of the governing equations of COBRAS2 are derived and some model-experiment comparisons are shown. In particular, validation tests are performed in order to prove model stability and accuracy. The simulation of the rise of a large air bubble in an otherwise quiescent water pool reveals the model's capability to reproduce the physics of the process in a realistic way. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam-break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed through comparisons with experimental data and other numerical models in order to investigate the influence of air on wave breaking mechanisms and to underline the model's capability and accuracy. Finally, the modelling of air compressibility is included in the newly developed model and validated, revealing an accurate reproduction of the processes. Some preliminary tests on wave impact on vertical walls are performed: since air flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of impact pressure values on breaker shape and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
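Dam-break validation cases like those mentioned above are commonly checked against the classical Ritter (1892) analytical solution for a dam break on a dry, frictionless bed. A minimal sketch of that reference solution (not part of COBRAS2 itself):

```python
import math

def ritter_solution(x, t, h0, g=9.81):
    """Ritter dam-break solution on a dry, frictionless horizontal bed.
    Dam at x = 0, initial still-water depth h0 for x < 0, t > 0.
    Returns (depth, depth-averaged velocity) at position x and time t."""
    c0 = math.sqrt(g * h0)        # celerity of the initial rarefaction wave
    if x <= -c0 * t:              # undisturbed reservoir
        return h0, 0.0
    if x >= 2.0 * c0 * t:         # ahead of the wet/dry front
        return 0.0, 0.0
    # self-similar rarefaction fan between the two characteristics
    h = (2.0 * c0 - x / t) ** 2 / (9.0 * g)
    u = (2.0 / 3.0) * (x / t + c0)
    return h, u
```

At the dam section (x = 0) the solution predicts a constant depth of 4h0/9, a standard check for numerical dam-break runs on a dry bed.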

Relevance: 100.00%

Publisher:

Abstract:

PURPOSE This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). METHODS As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed, employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. RESULTS The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. CONCLUSIONS MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed, intuitive and efficient forward planning strategy that employs an MC-based electron beam model for pMLC-shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
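The final automated segment weight optimization can be sketched as a non-negative least-squares fit of segment weights to a per-voxel prescription. This is a generic projected-gradient illustration, not the optimizer actually used in the planning system described; the voxel doses and function names are hypothetical.

```python
def optimize_weights(dose_per_segment, prescription, iters=5000, lr=0.01):
    """Fit non-negative segment weights w_s so that the summed segment
    doses sum_s w_s * D[s][v] approximate the prescription p[v] per voxel,
    by projected gradient descent on the squared residual."""
    n_seg = len(dose_per_segment)
    n_vox = len(prescription)
    w = [1.0 / n_seg] * n_seg                  # uniform starting weights
    for _ in range(iters):
        # residual r_v = sum_s w_s D_sv - p_v
        r = [sum(w[s] * dose_per_segment[s][v] for s in range(n_seg))
             - prescription[v] for v in range(n_vox)]
        for s in range(n_seg):
            grad = sum(r[v] * dose_per_segment[s][v] for v in range(n_vox))
            w[s] = max(0.0, w[s] - lr * grad)  # project onto w >= 0
    return w
```

Real planning systems add dose-volume and OAR constraints on top of this least-squares core, but the weight update itself follows the same pattern.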

Relevance: 100.00%

Publisher:

Abstract:

This chapter explains a functional integral approach to impurities in the Tomonaga–Luttinger model. The Tomonaga–Luttinger model of one-dimensional (1D) strongly correlated electrons gives a striking example of non-Fermi-liquid behavior. For simplicity, the chapter considers only a single-mode Tomonaga–Luttinger model, with one species of right- and left-moving electrons, thus omitting spin indices and eventually considering the simplest linearized model of a single-valley parabolic electron band. The standard operator bosonization is one of the most elegant methods developed in theoretical physics. The main advantage of bosonization, in either its standard or functional form, is that including the quartic electron–electron interaction does not substantially change the free action. The chapter demonstrates how to develop the formalism of bosonization based on the functional integral representation of observable quantities within the Keldysh formalism.
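For orientation, the standard Abelian bosonization identity underlying this argument can be written as follows. Conventions differ between texts, and this is one common choice rather than necessarily the chapter's own:

```latex
\psi_{R/L}(x) \sim \frac{\eta_{R/L}}{\sqrt{2\pi a}}\,
  e^{\pm i k_F x}\, e^{-i\left[\pm\phi(x)-\theta(x)\right]},
\qquad
H = \frac{v}{2\pi}\int dx \left[ K\,(\partial_x\theta)^2
  + \frac{1}{K}\,(\partial_x\phi)^2 \right]
```

Here $a$ is a short-distance cutoff, $\eta_{R/L}$ are Klein factors, and $(\phi,\theta)$ are the dual bosonic fields. The point referenced in the abstract is visible here: the density-density interaction, quartic in the fermions, only renormalizes the velocity $v$ and the Luttinger parameter $K$, leaving the bosonic action quadratic.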

Relevance: 100.00%

Publisher:

Abstract:

Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests for global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise-comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches.
We provide a publicly available R library nparcomp that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
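The idea of a maximum test over inheritance modes can be sketched with the usual genotype scorings. This is a deliberately simplified correlation-based trend statistic, not the relative-effect Marcus-type procedure implemented in nparcomp; the scores and function names are illustrative only.

```python
import math

# Genotype (0/1/2 minor-allele count) scores for each inheritance mode
SCORES = {
    "additive":  {0: 0, 1: 1, 2: 2},
    "dominant":  {0: 0, 1: 1, 2: 1},
    "recessive": {0: 0, 1: 0, 2: 1},
}

def trend_statistic(genotypes, trait, score):
    """Standardized correlation between numeric genotype scores and trait."""
    x = [score[g] for g in genotypes]
    n = len(x)
    mx, my = sum(x) / n, sum(trait) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, trait))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in trait)
    r = sxy / math.sqrt(sxx * syy)
    return abs(r) * math.sqrt(n - 1)       # roughly |z| under the null

def max_test(genotypes, trait):
    """Take the maximum statistic over mode-specific scorings; the winning
    mode indicates the best-fitting inheritance model."""
    stats = {m: trend_statistic(genotypes, trait, s) for m, s in SCORES.items()}
    return max(stats, key=stats.get), stats
```

A trait that jumps only in the homozygous-rare group yields its largest statistic under the recessive scoring, which is the power gain over a purely additive regression score that the abstract describes (the real method additionally supplies multiplicity-adjusted inference).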

Relevance: 100.00%

Publisher:

Abstract:

It has been suggested that the Internet is the most significant driver of international trade in recent years, to the extent that the term 'internetalisation' has been coined (Bell, Deans, Ibbotson & Sinkovics, 2001; Buttriss & Wilkinson, 2003). This term is used to describe the Internet's effect on the internationalisation process of the firm. Consequently, researchers have argued that the internationalisation process of the firm has been altered by the Internet and hence is in need of further investigation. However, as there is limited research and understanding, ambiguity remains about how the Internet has influenced international market growth. Thus, the purpose of this study was to explore how the Internet influences firms' internationalisation process and, specifically, international market growth. To this end, Internet marketing and international market growth theories are used to illuminate this ambiguity in the body of knowledge. Thus, the research problem 'How and why does the Internet influence international market growth of the firm?' is justified for investigation. To explore the research question a two-stage approach is used. Firstly, twelve case studies were used to evaluate key concepts, generate hypotheses and develop a model of internetalisation for testing. The participants held key positions within their firms, so that rich data could be drawn from international market growth decision makers. Secondly, a quantitative confirmation process analysed the identified themes or constructs, using two hundred and twenty-four valid responses. Constructs were evaluated through an exploratory factor analysis, confirmatory factor analysis and structural equation modelling process.
Structural equation modelling was used to test the model of internetalisation, examining the interrelationships between the internationalisation process components: information availability, information usage, interaction communication, international mindset, business relationship usage, psychic distance, the Internet intensity of the firm and international market growth. This study found that the Internet intensity of the firm mediates information availability, information usage, international mindset, and business relationships when firms grow in international markets. Therefore, these results provide empirical evidence that the Internet has a positive influence on international information, knowledge, entrepreneurship and networks, and these in turn influence international market growth. The theoretical contributions are threefold. Firstly, the study identifies a holistic model of the impact the Internet has had on the outward internationalisation of the firm. This contribution extends the body of knowledge pertaining to Internet international marketing by mapping and confirming interrelationships between the Internet, internationalisation and growth concepts. Secondly, the study highlights the broad scope and accelerated rate of international market growth of firms. Evidence that the Internet influences the traditional and virtual networks for the pursuit of international market growth extends the current understanding. Thirdly, this study confirms that international information, knowledge, entrepreneurship and network concepts are valid in a single model. Thus, these three contributions identify constructs, measure constructs in a multi-item capacity, map interrelationships and confirm a single holistic model of 'internetalisation'. The main practical contribution is that the findings identified information, knowledge and entrepreneurial opportunities for firms wishing to maximise international market growth.
To capitalise on these opportunities, suggestions are offered to assist firms to develop greater Internet intensity and internationalisation capabilities. From a policy perspective, educational institutions and government bodies need to promote more applied programs for Internet international marketing. The study provides future researchers with a platform of identified constructs and interrelationships related to internetalisation with which to investigate. However, a single study has limitations of generalisability; thus, future research should replicate this study. Such replication or cross-validation will assist in the verification of the scales used in this research and enhance the validity of causal predictions. Furthermore, this study was undertaken in the Australian outward-bound context. Research in other nations, as well as research into inbound internationalisation, would be fruitful.

Relevance: 100.00%

Publisher:

Abstract:

Purpose – The purpose of this study is to examine and extend Noer’s theoretical model of the new employment relationship. Design/methodology/approach – Case study methodology is used to scrutinise the model. The results of a literature-based survey on the elements underpinning the five values in the model are analysed from dual perspectives of individual and organization using a multi-source assessment instrument. A schema is developed to guide and inform a series of focus group discussions from an analysis of the survey data. Using content analysis, the transcripts from the focus group discussions are evaluated using the model’s values and their elements. The transcripts are also reviewed for implicit themes. The case studied is Flight Centre Limited, an Australian-based international retail travel company. Findings – Using this approach, some elements of the five values in Noer’s model are identified as characteristic of the company’s psychological contract. Specifically, to some extent, the model’s values of flexible deployment, customer focus, performance focus, project-based work, and human spirit and work can be applied in this case. A further analysis of the transcripts validates three additional values in the psychological contract literature: commitment; learning and development; and open information. As a result of the findings, Noer’s model is extended to eight values. Research limitations/implications – The study offers a research-based model of the new employment relationship. Since generalisations from the case study findings cannot be applied directly to other settings, the opportunity to test this model in a variety of contexts is open to other researchers. Originality/value – In practice, the methodology used is a unique process for benchmarking the psychological contract. The process may be applied in other business settings. 
By doing so, organization development professionals have a consulting framework for comparing an organization’s dominant psychological contract with the extended model presented here.

Relevance: 100.00%

Publisher:

Abstract:

Background: Trauma resulting from traffic crashes poses a significant problem in highly motorised countries. Over a million people worldwide are killed annually and 50 million are critically injured as a result of traffic collisions. In Australia, road crashes cost an average of $17 billion annually in personal loss of income and quality of life, organisational losses in productivity and workplace quality, and health care costs. Driver aggression has been identified as a key factor contributing to crashes, and many motorists report experiencing mild forms of aggression (e.g., rude gestures, horn honking). However, despite this concern, driver aggression has received relatively little attention in empirical research, and existing research has been hampered by a number of methodological and conceptual shortcomings. Specifically, there has been substantial disagreement regarding what constitutes aggressive driving, and a failure to examine both the situational factors and the emotional and cognitive processes underlying driver aggression. To enhance current understanding of aggressive driving, a model of driver aggression that highlights the cognitive and emotional processes at play in aggressive driving incidents is proposed. Aims: The research aims to improve current understanding of the complex nature of driver aggression by testing and refining a model of aggressive driving that incorporates the person-related and situational factors and the cognitive and emotional appraisal processes fundamental to driver aggression. In doing so, the research will help to provide a clear definition of what constitutes aggressive driving, help to identify on-road incidents that trigger driver aggression, and identify the emotional and cognitive appraisal processes that underlie driver aggression. Methods: The research involves three studies.
Firstly, to contextualise the model and explore the cognitive and emotional aspects of driver aggression, a diary-based study using self-reports of aggressive driving events will be conducted with a general population of drivers. These data will be supplemented by in-depth follow-up interviews with a sub-sample of participants. Secondly, to test the generalisability of the model, a large sample of drivers will be asked to respond to video-based scenarios depicting driving contexts derived from incidents identified in Study 1 as inciting aggression. Finally, to further operationalise and test the model, an advanced driving simulator will be used with a sample of drivers. These drivers will be exposed to various driving scenarios that would be expected to trigger negative emotional responses. Results: Work on the project has commenced, and progress on the first study will be reported.

Abstract:

AIMS: To test a model that delineates advanced practice nursing from the practice profile of other nursing roles and titles. BACKGROUND: There is extensive literature on advanced practice reporting the importance of this level of nursing to contemporary health service and patient outcomes. The literature also reports confusion and ambiguity associated with advanced practice nursing. Several countries have regulation and delineation for the nurse practitioner, but there is less clarity in the definition and service focus of other advanced practice nursing roles. DESIGN: A statewide survey. METHODS: Using the modified Strong Model of Advanced Practice Role Delineation tool, a survey was conducted in 2009 with a random sample of registered nurses/midwives from government facilities in Queensland, Australia. Analysis of variance compared total and subscale scores across groups according to grade. Linear, stepwise multiple regression analysis examined factors influencing advanced practice nursing activities across all domains. RESULTS: There were important differences according to grade in mean scores for total activities in all domains of advanced practice nursing. Nurses working in advanced practice roles (excluding nurse practitioners) performed more activities across most advanced practice domains. Regression analysis indicated that working in a clinical advanced practice nursing role and having a higher level of education were strong predictors of advanced practice activities overall. CONCLUSION: Essential and appropriate use of advanced practice nurses requires clarity in defining roles and practice levels. This research delineated nursing work according to grade and level of practice, further validating the tool for the Queensland context and providing operational information for assigning innovative nursing services.
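The delineation analysis described above compares mean activity scores across grades and regresses activity on predictors such as education. As a minimal sketch of that analysis shape only, the toy example below uses invented records and a single predictor; the actual study used the modified Strong Model survey with ANOVA and stepwise multiple regression.

```python
# Hedged sketch: invented survey records, not the study's data.
# Each record: (grade, education_level, total_activity_score)
from statistics import mean

records = [
    ("RN",  1, 2.1), ("RN",  1, 2.4), ("RN",  2, 2.6),
    ("APN", 3, 3.8), ("APN", 4, 4.2), ("APN", 4, 4.5),
]

def group_means(rows):
    """Mean total activity score per grade (the ANOVA-style comparison)."""
    groups = {}
    for grade, _, score in rows:
        groups.setdefault(grade, []).append(score)
    return {g: mean(scores) for g, scores in groups.items()}

def slope(rows):
    """Least-squares slope of activity score on education level (one predictor)."""
    xs = [edu for _, edu, _ in rows]
    ys = [score for _, _, score in rows]
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

means = group_means(records)
print(means["APN"] > means["RN"])  # advanced-role nurses report more activities
print(slope(records) > 0)          # education positively predicts activity
```

On this invented sample, both checks print `True`, mirroring the direction of the reported findings; the real analysis additionally tests statistical significance, which this sketch omits.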

Abstract:

Self-efficacy has two cognitive components, efficacy expectations and outcome expectations, and their influence on behavior change is synergistic. Efficacy expectations are affected by four main sources of information provided by direct and indirect experiences: performance accomplishments, vicarious experience, verbal persuasion, and self-appraisal. How to measure self-efficacy and develop interventions is an important current issue. This article analyzes the relationships between the variables of the self-efficacy model and explains the implementation of self-efficacy-enhancing interventions and the instruments used to test the model. By working through the theory and its feasibility in clinical practice, professional medical care personnel are expected first to familiarize themselves with the self-efficacy model and its concepts, and then to apply it flexibly in professional fields such as clinical practice, chronic disease care, and health promotion.

Abstract:

We examine which capabilities technologies provide to support collaborative process modeling. We develop a model that explains how technology capabilities impact cognitive group processes, and how they lead to improved modeling outcomes and positive technology beliefs. We test this model through a free simulation experiment of collaborative process modelers structured around a set of modeling tasks. With our study, we provide an understanding of the process of collaborative process modeling, and detail implications for research and guidelines for the practical design of collaborative process modeling.

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and delivering Value for Money (VfM). As part of the background to this challenge, a critique is given of current practice in selecting the approach used to procure major public sector infrastructure in Australia, which is akin to the Multi-Attribute Utility Approach (MAUA). To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is presented. The model addresses the make-or-buy decision (risk allocation), the bundling decision (property rights incentives), and the exchange relationship decision (relational to arms-length exchange) in its novel approach to articulating a procurement strategy designed to yield superior VfM across the whole life of the asset. The aim of this paper is to report on the development of this decision-making model in terms of the procedural tasks to be followed and the method being used to test the model. The planned approach to testing the model uses a sample of 87 Australian major infrastructure projects with a combined value of AUD 32 billion, and deploys expressions of interest, as an indicator of competition, as a key proxy for VfM.
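The abstract's proxy treats the number of expressions of interest (EOIs) a project attracts as an indicator of competition, and hence of likely VfM. A minimal sketch of how such a proxy might be tabulated is shown below; the procurement modes, project counts, and EOI figures are invented for illustration and are not taken from the paper's 87-project sample.

```python
# Hedged sketch: hypothetical projects, each tagged with a procurement
# mode and the number of EOIs it attracted. Averaging EOIs per mode
# gives a crude competition (and hence VfM) comparison.
projects = [
    ("PPP", 6), ("PPP", 4), ("PPP", 5),
    ("traditional", 3), ("traditional", 2), ("traditional", 4),
]

def mean_eoi_by_mode(rows):
    """Average EOI count per procurement mode (the competition proxy)."""
    by_mode = {}
    for mode, eoi in rows:
        by_mode.setdefault(mode, []).append(eoi)
    return {mode: sum(eois) / len(eois) for mode, eois in by_mode.items()}

proxy = mean_eoi_by_mode(projects)
print(round(proxy["PPP"], 1))          # 5.0
print(round(proxy["traditional"], 1))  # 3.0
```

In this invented example the "PPP" mode averages more EOIs, which the proxy would read as greater competition; the paper's actual test relates EOIs to its richer first-order decision variables.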

Abstract:

Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subjects' behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data that describe multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Due to the conserved nature of fear circuitry across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe the equipment and techniques needed to perform fear conditioning and to analyze the resulting data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment. © 2012 Springer Science+Business Media, LLC.
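Automated scoring software of the kind the abstract mentions typically quantifies freezing: a per-frame motion index is thresholded, and runs of near-immobility lasting at least a minimum duration are counted as freezing bouts. The sketch below illustrates that general idea only; the frame rate, thresholds, and sample trace are invented, not parameters from the paper or any specific scoring package.

```python
# Hedged sketch of threshold-based freezing detection (invented parameters).
FPS = 10                  # hypothetical video frame rate (frames per second)
MOTION_THRESHOLD = 5      # motion-index units below which the animal is "still"
MIN_FREEZE_FRAMES = FPS   # require >= 1 s of stillness to count as freezing

def percent_freezing(motion_trace):
    """Percent of frames spent in freezing bouts (stillness lasting >= 1 s)."""
    freeze_frames = 0
    run = 0
    # Append a sentinel "moving" frame so the final run is flushed.
    for m in motion_trace + [MOTION_THRESHOLD]:
        if m < MOTION_THRESHOLD:
            run += 1
        else:
            if run >= MIN_FREEZE_FRAMES:
                freeze_frames += run
            run = 0
    return 100.0 * freeze_frames / len(motion_trace)

# 30 frames (3 s): 1 s moving, 1.5 s still (one freezing bout), 0.5 s moving
trace = [20] * 10 + [1] * 15 + [20] * 5
print(percent_freezing(trace))  # 50.0
```

The minimum-duration rule is what distinguishes freezing from momentary pauses; commercial scoring systems apply the same idea with calibrated motion indices and validated thresholds.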