306 results for MIB Data Analysis


Relevance: 80.00%

Abstract:

This paper reports on the analysis of qualitative and quantitative data concerning Australian teachers’ motivations for taking up, remaining in, or leaving teaching positions in rural and regional schools. The data were collected from teachers (n = 2940) as part of the SiMERR National Survey, though the results of the qualitative data analysis were not published with the survey report in 2006. The teachers’ comments provide additional insight into their career decisions, complementing the quantitative findings. Content and frequency analyses of the teachers’ comments reveal individual and collective priorities which, together with the statistical evidence, can be used to inform policies aimed at addressing the staffing needs of rural schools.

Relevance: 80.00%

Abstract:

Background: This paper describes research conducted with Big hART, Australia's most awarded participatory arts company. It considers three projects, LUCKY, GOLD and NGAPARTJI NGAPARTJI, across separate sites in Tasmania, Western NSW and the Northern Territory, respectively, in order to understand project impact from the perspective of project participants, Arts workers, community members and funders. Methods: Semi-structured interviews were conducted with 29 respondents. The data were coded thematically and analysed using the constant comparative method of qualitative data analysis. Results: Seven broad domains of change were identified: psychosocial health; community; agency and behavioural change; the Art; economic effect; learning; and identity. Conclusions: Experiences of participatory arts are interrelated in an ecology of practice that is iterative, relational, developmental, temporal and contextually bound. This means that questions of impact are contingent, and there is no one path that participants travel or single measure that can adequately capture the richness and diversity of experience. Consequently, it is the productive tensions between the domains of change, and the way they are animated through Arts practice, that provide signposts towards the impact of Big hART projects.

Relevance: 80.00%

Abstract:

Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by the application to a simple hydrological model.
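The GASP-style conditioning described above can be sketched in a few lines. The following is a minimal illustration, not the paper's method: a toy deterministic simulator, a squared-exponential prior covariance (an assumed choice, with an assumed length scale), and the GP predictive mean conditioned on a small design data set.

```python
import math

def simulator(x):
    # cheap stand-in for an expensive deterministic simulation model
    return math.sin(3 * x) + 0.5 * x

def sq_exp(a, b, ell=0.5):
    # squared-exponential covariance, a common GASP prior choice (ell is assumed)
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (keeps the sketch dependency-free)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# design data set: a handful of "expensive" model runs
design_x = [0.0, 0.5, 1.0, 1.5, 2.0]
design_y = [simulator(x) for x in design_x]

# condition the zero-mean GP prior on the design data (small jitter for stability)
K = [[sq_exp(a, b) + (1e-9 if i == j else 0.0)
      for j, b in enumerate(design_x)]
     for i, a in enumerate(design_x)]
alpha = solve(K, design_y)

def emulate(x):
    # predictive mean k(x, X) K^{-1} y: cheap interpolation between model runs
    return sum(sq_exp(x, xi) * a for xi, a in zip(design_x, alpha))
```

By construction the emulator reproduces the design outputs (up to the jitter) and interpolates smoothly between them, which is exactly the property that lets it stand in for the simulator inside sensitivity analysis or inference loops.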

Relevance: 80.00%

Abstract:

The use of hierarchical Bayesian spatial models in the analysis of ecological data is increasingly prevalent. The implementation of these models has heretofore been limited to specially written software that required extensive programming knowledge to create. The advent of WinBUGS provides access to Bayesian hierarchical models for those without the programming expertise to create their own, and allows for the more rapid implementation of new models and data analyses. This facility is demonstrated here using data collected by the Missouri Department of Conservation for the Missouri Turkey Hunting Survey of 1996. Three models are considered: the first uses the collected data to estimate the success rate for individual hunters at the county level and incorporates a conditional autoregressive (CAR) spatial effect. The second model builds upon the first by simultaneously estimating the success rate and harvest at the county level, while the third estimates the success rate and hunting pressure at the county level. These models, their implementation in WinBUGS, and the issues arising therein are discussed in detail. Future areas of application and the latest developments in WinBUGS are also discussed.
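The conditional structure of a CAR spatial effect can be illustrated without WinBUGS. The sketch below is a hypothetical toy, not the Missouri models: a four-county adjacency matrix, the proper-CAR full conditional mean, and repeated conditional-mean updates as the deterministic skeleton of a Gibbs sweep.

```python
# A CAR prior says each county's effect, given all the others, is centred on a
# weighted average of its neighbours. Toy adjacency: four counties in a row.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

def car_conditional_mean(u, i, alpha=0.9):
    # E[u_i | u_-i] = alpha * mean of neighbouring effects (proper CAR, |alpha| < 1)
    nbrs = [j for j in range(len(W)) if W[i][j]]
    return alpha * sum(u[j] for j in nbrs) / len(nbrs)

def gibbs_smooth(u, alpha=0.9, sweeps=50):
    # repeated conditional-mean updates: the deterministic skeleton of a Gibbs
    # sampler, shrinking county effects toward their neighbours
    u = u[:]
    for _ in range(sweeps):
        for i in range(len(u)):
            u[i] = car_conditional_mean(u, i, alpha)
    return u
```

In a full model each update would also add noise and be weighted against the county's own data (e.g. hunter successes), but the shrinkage toward neighbouring counties shown here is the essence of the CAR effect.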

Relevance: 80.00%

Abstract:

The use of graphical processing unit (GPU) parallel processing is becoming a part of mainstream statistical practice. The reliance of Bayesian statistics on Markov chain Monte Carlo (MCMC) methods makes the applicability of parallel processing not immediately obvious. It is illustrated that substantial gains in computational time for MCMC and other methods of evaluation can be achieved by computing the likelihood using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010, with a likelihood based on the explicit convolution of two negative-binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
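The likelihood structure described, the explicit convolution of two negative-binomial processes, can be sketched directly. This serial Python version only illustrates why the computation parallelises so well: every convolution term and every per-observation term is independent, so they map naturally onto GPU threads. The parameterisation (k failures before the r-th success, success probability p) is an assumption.

```python
import math
from math import comb

def nb_pmf(k, r, p):
    # P(K = k): k failures before the r-th success, success probability p
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

def conv_pmf(n, r1, p1, r2, p2):
    # pmf of the sum of two independent negative binomials by explicit convolution;
    # each term in the sum is independent of the others
    return sum(nb_pmf(k, r1, p1) * nb_pmf(n - k, r2, p2) for k in range(n + 1))

def log_likelihood(counts, r1, p1, r2, p2):
    # observations are independent, so these terms can be evaluated in parallel;
    # on a GPU each thread would handle one (observation, convolution-term) pair
    return sum(math.log(conv_pmf(n, r1, p1, r2, p2)) for n in counts)
```

A useful sanity check: when both components share the same p, the convolution collapses analytically to a single negative binomial with r = r1 + r2.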

Relevance: 80.00%

Abstract:

Structural investigations of large biomolecules in the gas phase are challenging. Herein, it is reported that action spectroscopy taking advantage of facile carbon-iodine bond dissociation can be used to examine the structures of large molecules, including whole proteins. Iodotyrosine serves as the active chromophore, which yields distinctive spectra depending on the solvation of the side chain by the remainder of the molecule. Isolation of the chromophore yields a double-featured peak at ∼290 nm, which becomes a single peak with increasing solvation. Deprotonation of the side chain also leads to reduced apparent intensity and broadening of the action spectrum. The method can be successfully applied to both negatively and positively charged ions in various charge states, although electron detachment becomes a competitive channel for multiply charged anions. In all other cases, loss of iodine is by far the dominant channel, which leads to high sensitivity and simple data analysis. The action spectra for iodotyrosine, the iodinated peptides KGYDAKA and DAYLDAG, and the small protein ubiquitin are reported in various charge states. © 2012 American Chemical Society.

Relevance: 80.00%

Abstract:

This paper presents a summary of the key findings of the TTF TPACK Survey developed and administered for the Teaching the Teachers for the Future (TTF) Project implemented in 2011. The TTF Project, funded by an Australian Government ICT Innovation Fund grant, involved all 39 Australian Higher Education Institutions which provide initial teacher education. TTF data collections were undertaken at the end of Semester 1 (T1) and at the end of Semester 2 (T2) in 2011. A total of 12881 participants completed the first survey (T1) and 5809 participants completed the second survey (T2). Groups of like-named items from the T1 survey were subjected to a battery of complementary data analysis techniques. The psychometric properties of the four scales (Confidence - teacher items; Usefulness - teacher items; Confidence - student items; Usefulness - student items) were confirmed at both T1 and T2. Among the key findings summarised, at the national level, the scale Confidence to use ICT as a teacher showed measurable growth across the whole scale from T1 to T2, and the scale Confidence to facilitate student use of ICT also showed measurable growth across the whole scale from T1 to T2. Additional key TTF TPACK Survey findings are summarised.

Relevance: 80.00%

Abstract:

Obtaining attribute values of non-chosen alternatives in a revealed preference context is challenging because non-chosen alternative attributes are unobserved by choosers, chooser perceptions of attribute values may not reflect reality, existing methods for imputing these values suffer from shortcomings, and obtaining non-chosen attribute values is resource intensive. This paper presents a unique Bayesian (multiple) Imputation Multinomial Logit model that imputes unobserved travel times and distances of non-chosen travel modes based on random draws from the conditional posterior distribution of missing values. The calibrated Bayesian (multiple) Imputation Multinomial Logit model imputes non-chosen time and distance values that convincingly replicate observed choice behavior. Although network skims were used for calibration, more realistic data such as supplemental geographically referenced surveys or stated preference data may be preferred. The model is ideally suited for imputing variation in intrazonal non-chosen mode attributes and for assessing the marginal impacts of travel policies, programs, or prices within traffic analysis zones.
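The two ingredients of the approach described above can be sketched in a hedged way: the multinomial logit choice probability, and a sampling-importance-resampling draw as a stand-in for sampling the conditional posterior of a missing attribute. The two-mode setup, the time coefficient `beta_time`, and the uniform prior over candidate travel times are all hypothetical.

```python
import math
import random

def mnl_probs(utilities):
    # multinomial logit choice probabilities (softmax of the systematic utilities)
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def impute_nonchosen_time(chosen_time, beta_time=-0.1, chosen_idx=0,
                          prior=(10.0, 60.0), n_draws=2000, seed=1):
    # sampling-importance-resampling stand-in for a conditional-posterior draw:
    # candidate times for the non-chosen mode are weighted by how well they
    # explain the observed choice of mode `chosen_idx`
    rng = random.Random(seed)
    cands = [rng.uniform(*prior) for _ in range(n_draws)]
    weights = [mnl_probs([beta_time * chosen_time, beta_time * t])[chosen_idx]
               for t in cands]
    total = sum(weights)
    u, acc = rng.random() * total, 0.0
    for t, w in zip(cands, weights):
        acc += w
        if acc >= u:
            return t
    return cands[-1]
```

Because longer non-chosen travel times make the observed choice more likely (with a negative time coefficient), the resampled draws lean toward times above the chosen mode's, which is the qualitative behaviour an imputation of non-chosen attributes should show.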

Relevance: 80.00%

Abstract:

Ethnographic methods have been widely used for requirements elicitation purposes in systems design, especially when the focus is on understanding users’ social, cultural and political contexts. Designing an on-line search engine for peer-reviewed papers could be a challenge considering the diversity of its end users coming from different educational and professional disciplines. This poster describes our exploration of academic research environments based on different in situ methods such as contextual interviews, diary-keeping, job-shadowing, etc. The data generated from these methods are analysed using qualitative data analysis software and subsequently used for developing personas that could serve as a requirements specification tool.

Relevance: 80.00%

Abstract:

In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, in which we develop simulation-based optimal design methods to search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
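Simulation-based design search of this kind can be sketched as a grid search over candidate sampling times under a cost constraint. The one-parameter elimination model, the lognormal prior on its parameter, and the pseudo-Bayesian D-optimality utility below are illustrative assumptions, not the horse study's actual pharmacokinetic model.

```python
import math
import random
from itertools import combinations

def utility(times, k_draws):
    # pseudo-Bayesian D-optimality for a one-parameter model C(t) = exp(-k t):
    # the Fisher information for k is sum_t (t * exp(-k t))^2, here averaged
    # over prior draws of k (the Monte Carlo part of the design evaluation)
    return sum(sum((t * math.exp(-k * t)) ** 2 for t in times)
               for k in k_draws) / len(k_draws)

rng = random.Random(0)
# prior draws for the elimination-rate parameter (assumed lognormal prior)
k_draws = [rng.lognormvariate(math.log(0.5), 0.2) for _ in range(200)]

grid = [0.5 * i for i in range(1, 17)]   # candidate sampling times (hours)
max_samples = 3                          # cost constraint: at most 3 samples

# exhaustive search over the discrete design space of size-3 time subsets
best = max(combinations(grid, max_samples), key=lambda d: utility(d, k_draws))
```

In the paper's setting the search would also range over the number of subjects and samples per subject, but the pattern is the same: simulate from the prior, score each candidate design, and keep the best one that satisfies the cost constraint.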

Relevance: 80.00%

Abstract:

Background: Heatwaves can cause excess deaths in a local population ranging from tens to thousands within a couple of weeks. The excess mortality due to a special event (e.g., a heatwave or an epidemic outbreak) is estimated by subtracting the mortality figure under ‘normal’ conditions from the historical daily mortality records. Calculating the excess mortality is a scientific challenge because of the stochastic temporal pattern of the daily mortality data, which is characterised by (a) long-term changing mean levels (i.e., non-stationarity) and (b) the non-linear temperature-mortality association. The Hilbert-Huang Transform (HHT) algorithm is a novel method originally developed for analysing non-linear and non-stationary time series data in the field of signal processing; however, it has not been applied in public health research. This paper aimed to demonstrate the applicability and strength of the HHT algorithm in analysing health data. Methods: Special R functions were developed to implement the HHT algorithm and decompose the daily mortality time series into trend and non-trend components in terms of the underlying physical mechanism. The excess mortality is calculated directly from the resulting non-trend component series. Results: The Brisbane (Queensland, Australia) and Chicago (United States) daily mortality time series data were utilized for calculating the excess mortality associated with heatwaves. The HHT algorithm estimated 62 excess deaths related to the February 2004 Brisbane heatwave. To calculate the excess mortality associated with the July 1995 Chicago heatwave, the HHT algorithm needed to handle the mode-mixing issue; it estimated 510 excess deaths for the 1995 Chicago heatwave event.
To exemplify potential applications, the HHT decomposition results were used as the input data for a subsequent regression analysis, using the Brisbane data, to investigate the association between excess mortality and different risk factors. Conclusions The HHT algorithm is a novel and powerful analytical tool in time series data analysis. It has a real potential to have a wide range of applications in public health research because of its ability to decompose a nonlinear and non-stationary time series into trend and non-trend components consistently and efficiently.
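A minimal sketch of the excess-mortality calculation, with a centred moving average standing in for the HHT trend component (the HHT/EMD decomposition itself is too long for a short example): excess deaths over the event days are the observed counts minus the trend.

```python
def moving_average_trend(series, window=7):
    # centred moving average as a simple stand-in for the HHT trend component;
    # the window shrinks near the boundaries of the series
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    return trend

def excess_deaths(series, event_days, window=7):
    # excess mortality = observed deaths minus the trend, summed over event days
    trend = moving_average_trend(series, window)
    return sum(series[i] - trend[i] for i in event_days)
```

For a flat baseline of 10 daily deaths with a single-day spike to 30, the calculation attributes most, but not all, of the spike to excess mortality, because the spike itself pulls the local trend upward; the HHT approach is designed to separate trend from event signal more cleanly than this crude baseline.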

Relevance: 80.00%

Abstract:

The thesis is a country-level study of the institutional and human capital determinants of growth-aspiration entrepreneurial activity. Using country-level panel-data analysis, the study is, to our knowledge, the first to test to what extent country-level human capital accumulation is associated with the prevalence of growth-aspiration entrepreneurship. The overall findings suggest that the institutional determinants affect the prevalence of growth-aspiration entrepreneurship differently in developing and developed countries. The study also found that country-level human capital moderates the effects of the institutional environment.

Relevance: 80.00%

Abstract:

A new test of hypothesis for classifying stationary time series based on the bias-adjusted estimators of the fitted autoregressive model is proposed. It is shown theoretically that the proposed test has desirable properties. Simulation results show that when time series are short, the size and power estimates of the proposed test are reasonably good, and thus this test is reliable in discriminating between short-length time series. As the length of the time series increases, the performance of the proposed test improves, but the benefit of bias-adjustment reduces. The proposed hypothesis test is applied to two real data sets: the annual real GDP per capita of six European countries, and quarterly real GDP per capita of five European countries. The application results demonstrate that the proposed test displays reasonably good performance in classifying relatively short time series.
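The role of bias adjustment for short series can be illustrated for AR(1). The sketch below uses a least-squares coefficient estimate and a Kendall-style first-order correction, E[rho_hat] ≈ rho - (1 + 3*rho)/T; it is an assumed illustration of the bias-adjustment idea, not the paper's actual test statistic.

```python
import math

def ar1_ols(x):
    # least-squares estimate of the AR(1) coefficient from a demeaned series
    xm = sum(x) / len(x)
    num = sum((x[t] - xm) * (x[t - 1] - xm) for t in range(1, len(x)))
    den = sum((v - xm) ** 2 for v in x)
    return num / den

def ar1_bias_adjusted(x):
    # Kendall-style first-order correction: the raw estimator is biased
    # downward by roughly (1 + 3*rho)/T, so add the estimated bias back;
    # the correction matters most for short series and fades as T grows
    rho = ar1_ols(x)
    return rho + (1 + 3 * rho) / len(x)
```

Note the correction term shrinks like 1/T, which mirrors the abstract's observation that the benefit of bias adjustment fades as the series gets longer.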

Relevance: 80.00%

Abstract:

Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
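The polarization idea, the extent to which two forecast densities overlap, can be sketched with a Gaussian kernel density estimate and a discretised overlap integral of min(f, g). The bandwidth and integration grid below are assumed values, and the bias-corrected bootstrap step that would produce the forecast replicates is skipped.

```python
import math

def kde(data, x, h=0.3):
    # Gaussian kernel density estimate at point x (bandwidth h is assumed)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) \
        / (len(data) * h * math.sqrt(2 * math.pi))

def overlap(sample_a, sample_b, lo=-5.0, hi=5.0, n=400):
    # overlap coefficient: the integral of min(f, g) over a midpoint grid;
    # 1 means identical forecast densities, 0 means completely disjoint ones
    dx = (hi - lo) / n
    return sum(min(kde(sample_a, lo + (i + 0.5) * dx),
                   kde(sample_b, lo + (i + 0.5) * dx))
               for i in range(n)) * dx
```

In the classification setting, `sample_a` and `sample_b` would be bootstrap forecast replicates of two series; a pairwise matrix of these overlap values then feeds the discriminant rule or the clustering method.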

Relevance: 80.00%

Abstract:

Uniform DNA distribution in tumors is a prerequisite for high transfection efficiency in solid tumors. To improve the transfection efficiency of electrically assisted gene delivery to solid tumors in vivo, we explored how tumor histological properties affected transfection efficiency. In four different tumor types (B16F1, EAT, SA-1 and LPB), proteoglycan and collagen content was morphometrically analyzed, and cell size and cell density were determined in paraffin-embedded tumor sections under a transmission microscope. To demonstrate the influence of the histological properties of solid tumors on electrically assisted gene delivery, the correlation between histological properties and transfection efficiency with regard to the time interval between DNA injection and electroporation was determined. Our data demonstrate that soft tumors with larger spherical cells, low proteoglycan and collagen content, and low cell density are more effectively transfected (B16F1 and EAT) than rigid tumors with high proteoglycan and collagen content, small spindle-shaped cells and high cell density (LPB and SA-1). Furthermore, an optimal time interval for increased transfection exists only in soft tumors, this being in the range of 5-15 min. Therefore, knowledge about the histology of tumors is important in planning electrogene therapy with respect to the time interval between DNA injection and electroporation.