864 results for HVAC data analysis
Abstract:
Urban road dust comprises a range of potentially toxic metal elements and plays a critical role in degrading urban receiving water quality. Hence, assessing the metal composition and concentration of urban road dust is a high priority. This study investigated the variability of metal composition and concentrations in road dust across four different urban land uses in Gold Coast, Australia. Samples from 16 road sites were collected and tested for 12 selected metal species. The data set was analyzed using both univariate and multivariate techniques. Outcomes of the data analysis revealed that metal concentrations in road dust differ considerably within and between land uses. Iron, aluminum, magnesium and zinc are the most abundant metals across urban land uses. It was also noted that metal species such as titanium, nickel, copper and zinc have their highest concentrations in industrial land use. The study outcomes identified soil- and traffic-related sources as the key sources of metals deposited on road surfaces.
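The multivariate step in studies like this is often a principal component analysis used to group metals by likely common source (soil-derived versus traffic-derived). A minimal sketch in Python, assuming a sites-by-metals concentration matrix; the data, metal list, and scikit-learn usage are illustrative, not taken from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 16 road sites x 12 metal concentrations (mg/kg).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(16, 12))
metals = ["Fe", "Al", "Mg", "Zn", "Ti", "Ni", "Cu", "Pb", "Cr", "Mn", "Cd", "Ba"]

# Standardise so abundant metals (e.g., Fe) do not dominate the components.
Z = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)
scores = pca.fit_transform(Z)   # site scores per component
loadings = pca.components_      # metal loadings per component

print("explained variance ratio:", pca.explained_variance_ratio_)
# Metals loading together on one component suggest a common source
# (e.g., a soil-related versus a traffic-related grouping).
for i, comp in enumerate(loadings):
    top = [metals[j] for j in np.argsort(np.abs(comp))[::-1][:4]]
    print(f"PC{i + 1} dominated by:", top)
```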
Abstract:
Flows of cultural heritage in textual practices are vital to sustaining Indigenous communities. Indigenous heritage, whether passed on by oral tradition or ubiquitous social media, can be seen as a “conversation between the past and the future” (Fairclough, 2012, xv). Indigenous heritage involves appropriating memories within a cultural flow to pass on a spiritual legacy. This presentation reports ethnographic research of social media practices in a small independent Aboriginal school in Southeast Queensland, Australia, that is presided over by Yugambeh elders and an Aboriginal principal. The purpose was to rupture existing notions of white literacies in schools, and to deterritorialize the uses of digital media by dominant cultures in the public sphere. Examples of learning experiences included the following: i. Integrating Indigenous language and knowledge into media text production; ii. Using conversations with Indigenous elders and material artifacts as an entry point for storytelling; iii. Dadirri – spiritual listening in the yarning circle to develop storytelling (Ungunmerr-Baumann, 2002); and iv. Writing and publicly sharing oral histories through digital scrapbooking shared via social media. The program aligned with the Australian National Curriculum English (ACARA, 2012), which mandates the teaching of multimodal text creation. Data sources included a class set of digital scrapbooks collaboratively created in a multi-age primary classroom. The digital scrapbooks combined digitally encoded words, images of material artifacts, and digital music files. A key feature of the writing and digital design task was to retell and digitally display and archive a cultural narrative of significance to the Indigenous Australian community and its memories and material traces of the past for the future. Data analysis of the students’ digital stories involved the application of key themes of negotiated, material, and digitally mediated forms of heritage practice. It drew on Australian Indigenous research by Keddie et al. (2013) to guard against the homogenizing of culture that can arise from a focus on a static view of culture. The interpretation of findings located Indigenous appropriation of social media within broader racialized politics that enables Indigenous literacy to be understood as dynamic, negotiated, and transgenerational flows of practice. The findings demonstrate that Indigenous children’s use of media production reflects “shifting and negotiated identities” in response to changing media environments that can function to sustain Indigenous cultural heritages (Appadurai, 1996, xv). It demonstrated how the children’s experiences of culture are layered over time, as successive generations inherit, interweave, and hear others’ cultural stories or maps. It also demonstrated how the children’s production of narratives through multimedia can provide a platform for the flow and reconstruction of performative collective memories and “lived traces of a common past” (Giaccardi, 2012). It disrupts notions of cultural reductionism and racial incommensurability that fix and homogenize Indigenous practices within and against a dominant White norm. Recommendations are provided for an approach to appropriating social media in schools that explicitly attends to the dynamic nature of Indigenous practices, negotiated through intercultural constructions and flows, and opening space for a critical anti-racist approach to multimodal text production.
Abstract:
This paper reports on the analysis of qualitative and quantitative data concerning Australian teachers’ motivations for taking up, remaining in, or leaving teaching positions in rural and regional schools. The data were collected from teachers (n = 2940) as part of the SiMERR National Survey, though the results of the qualitative data analysis were not published with the survey report in 2006. The teachers’ comments provide additional insight into their career decisions, complementing the quantitative findings. Content and frequency analyses of the teachers’ comments reveal individual and collective priorities which together with the statistical evidence can be used to inform policies aimed at addressing the staffing needs of rural schools.
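The content and frequency analyses described above boil down to counting theme-relevant terms across coded comments. A minimal standard-library sketch; the example comments and stop-word list are placeholders, not survey data:

```python
from collections import Counter
import re

# Placeholder comments standing in for the survey's open-ended responses.
comments = [
    "Housing and isolation made staying difficult",
    "Professional development opportunities kept me in the school",
    "Isolation from family was the main reason for leaving",
]

stop = {"and", "the", "was", "for", "in", "me", "from", "of", "a"}
tokens = []
for c in comments:
    tokens += [w for w in re.findall(r"[a-z]+", c.lower()) if w not in stop]

# Most frequent terms become candidate codes for the content analysis.
print(Counter(tokens).most_common(5))
```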
Abstract:
Background: This paper describes research conducted with Big hART, Australia's most awarded participatory arts company. It considers three projects, LUCKY, GOLD and NGAPARTJI NGAPARTJI, across separate sites in Tasmania, Western NSW and the Northern Territory, respectively, in order to understand project impact from the perspective of project participants, Arts workers, community members and funders. Methods: Semi-structured interviews were conducted with 29 respondents. The data were coded thematically and analysed using the constant comparative method of qualitative data analysis. Results: Seven broad domains of change were identified: psychosocial health; community; agency and behavioural change; the Art; economic effect; learning; and identity. Conclusions: Experiences of participatory arts are interrelated in an ecology of practice that is iterative, relational, developmental, temporal and contextually bound. This means that questions of impact are contingent: there is no one path that participants travel, nor a single measure that can adequately capture the richness and diversity of experience. Consequently, it is the productive tensions between the domains of change, and the way they are animated through Arts practice, that provide signposts towards the impact of Big hART projects.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are many closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
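As a rough illustration of the conditioning step, the following sketch runs a scalar Kalman filter and Rauch-Tung-Striebel smoother over a stand-in simulator output; the linear dynamics and noise variances are assumed for illustration, and the GASP dependence of the innovation terms on model parameters and inputs is omitted:

```python
import numpy as np

# Design data: outputs of the (expensive) dynamic model at one input setting.
T = 50
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.1, 0.3, T))   # stand-in for simulator output

# Simplified linear state-space prior: x_t = a*x_{t-1} + innovation,
# observed as y_t = x_t + small error (the emulator's discrepancy term).
a, q, r = 0.95, 0.2**2, 0.05**2

# Kalman filter (forward pass).
m = np.zeros(T); P = np.zeros(T)
m_pred = np.zeros(T); P_pred = np.zeros(T)
m_prev, P_prev = 0.0, 1.0
for t in range(T):
    m_pred[t] = a * m_prev
    P_pred[t] = a**2 * P_prev + q
    K = P_pred[t] / (P_pred[t] + r)
    m[t] = m_pred[t] + K * (y[t] - m_pred[t])
    P[t] = (1 - K) * P_pred[t]
    m_prev, P_prev = m[t], P[t]

# Rauch-Tung-Striebel smoother (backward pass): conditions the whole
# state trajectory on all design outputs, i.e. the Kalman smoothing step.
ms, Ps = m.copy(), P.copy()
for t in range(T - 2, -1, -1):
    G = P[t] * a / P_pred[t + 1]
    ms[t] = m[t] + G * (ms[t + 1] - m_pred[t + 1])
    Ps[t] = P[t] + G**2 * (Ps[t + 1] - P_pred[t + 1])

print("smoothed emulator mean, first 5 steps:", ms[:5])
```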
Abstract:
The use of hierarchical Bayesian spatial models in the analysis of ecological data is increasingly prevalent. Implementation of these models has heretofore been limited to purpose-written software requiring extensive programming knowledge to create. The advent of WinBUGS provides access to Bayesian hierarchical models for those without the programming expertise to create their own, and allows more rapid implementation of new models and data analyses. This facility is demonstrated here using data collected by the Missouri Department of Conservation for the Missouri Turkey Hunting Survey of 1996. Three models are considered. The first uses the collected data to estimate the success rate for individual hunters at the county level and incorporates a conditional autoregressive (CAR) spatial effect. The second model builds on the first by simultaneously estimating the success rate and harvest at the county level, while the third estimates the success rate and hunting pressure at the county level. These models are discussed in detail, as are their implementation in WinBUGS and the issues arising therein. Future areas of application and the latest developments in WinBUGS are also discussed.
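For readers unfamiliar with the CAR prior used in the first model, it is a multivariate normal whose precision matrix is built from county adjacency. A minimal Python sketch of evaluating that prior as a stand-in for what WinBUGS handles internally; a proper-CAR variant is used here so the covariance is invertible, and the adjacency matrix and hyperparameters are toy values:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy adjacency for 4 counties (1 = shares a border).
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))

tau, alpha = 2.0, 0.9           # precision and spatial-dependence parameter
Q = tau * (D - alpha * W)       # proper CAR precision matrix

# Log-density of county-level spatial effects under the CAR prior.
phi = np.array([0.1, -0.2, 0.05, 0.0])
logp = multivariate_normal(mean=np.zeros(4), cov=np.linalg.inv(Q)).logpdf(phi)
print("CAR log-density:", logp)

# County-level success rates on the logit scale would then be
# logit(p_i) = beta0 + phi_i, completing the hierarchical model.
```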
Abstract:
The use of graphics processing unit (GPU) parallel processing is becoming part of mainstream statistical practice. The reliance of Bayesian statistics on Markov chain Monte Carlo (MCMC) methods makes the applicability of parallel processing not immediately obvious. It is illustrated here that substantial gains in computational time for MCMC and other methods of evaluation can be obtained by computing the likelihood using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010 and a likelihood based on the explicit convolution of two negative-binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
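The convolution likelihood takes the form P(Y = k) = sum_j P(X1 = j) * P(X2 = k - j) for two negative-binomial components. A minimal CPU sketch with NumPy/SciPy; the parameters and counts are illustrative, and the vectorised PMF evaluation is the part that would be moved to the GPU:

```python
import numpy as np
from scipy.stats import nbinom

# Illustrative parameters for the two negative-binomial components.
r1, p1 = 3.0, 0.4
r2, p2 = 5.0, 0.6

def convolved_pmf(max_k):
    """PMF of X1 + X2 via explicit convolution of the two NB PMFs."""
    ks = np.arange(max_k + 1)
    pmf1 = nbinom.pmf(ks, r1, p1)
    pmf2 = nbinom.pmf(ks, r2, p2)
    return np.convolve(pmf1, pmf2)[: max_k + 1]

# Log-likelihood of observed incident counts (toy data).
counts = np.array([2, 0, 5, 3, 7, 1])
pmf = convolved_pmf(counts.max())
loglik = np.log(pmf[counts]).sum()
print("log-likelihood:", loglik)
```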
Abstract:
Structural investigations of large biomolecules in the gas phase are challenging. Herein, it is reported that action spectroscopy taking advantage of facile carbon-iodine bond dissociation can be used to examine the structures of large molecules, including whole proteins. Iodotyrosine serves as the active chromophore, which yields distinctive spectra depending on the solvation of the side chain by the remainder of the molecule. Isolation of the chromophore yields a double-featured peak at ∼290 nm, which becomes a single peak with increasing solvation. Deprotonation of the side chain also leads to reduced apparent intensity and broadening of the action spectrum. The method can be successfully applied to both negatively and positively charged ions in various charge states, although electron detachment becomes a competitive channel for multiply charged anions. In all other cases, loss of iodine is by far the dominant channel, which leads to high sensitivity and simple data analysis. The action spectra for iodotyrosine, the iodinated peptides KGYDAKA and DAYLDAG, and the small protein ubiquitin are reported in various charge states.
Abstract:
This paper presents a summary of the key findings of the TTF TPACK Survey developed and administered for the Teaching Teachers for the Future (TTF) Project implemented in 2011. The TTF Project, funded by an Australian Government ICT Innovation Fund grant, involved all 39 Australian higher education institutions that provide initial teacher education. TTF data collections were undertaken at the end of Semester 1 (T1) and at the end of Semester 2 (T2) in 2011. A total of 12,881 participants completed the first survey (T1) and 5,809 participants completed the second survey (T2). Groups of like-named items from the T1 survey were subjected to a battery of complementary data analysis techniques. The psychometric properties of the four scales (Confidence: teacher items; Usefulness: teacher items; Confidence: student items; Usefulness: student items) were confirmed at both T1 and T2. Among the key findings summarised, at the national level the scale Confidence to use ICT as a teacher showed measurable growth across the whole scale from T1 to T2, as did the scale Confidence to facilitate student use of ICT. Additional key TTF TPACK Survey findings are summarised.
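One routine check of a scale's psychometric properties is internal consistency. A minimal Cronbach's alpha sketch with a made-up response matrix; the TTF survey's actual battery of techniques is not specified here, so this is only indicative:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy responses: 6 respondents x 4 Likert items on one confidence scale.
rng = np.random.default_rng(2)
base = rng.integers(2, 6, size=(6, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(6, 4)), 1, 5)
print("alpha:", round(cronbach_alpha(responses), 3))
```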
Abstract:
Obtaining attribute values of non-chosen alternatives in a revealed preference context is challenging: non-chosen alternative attributes are unobserved by choosers, chooser perceptions of attribute values may not reflect reality, existing methods for imputing these values suffer from shortcomings, and obtaining non-chosen attribute values directly is resource-intensive. This paper presents a unique Bayesian (multiple) Imputation Multinomial Logit model that imputes unobserved travel times and distances of non-chosen travel modes based on random draws from the conditional posterior distribution of the missing values. The calibrated model imputes non-chosen time and distance values that convincingly replicate observed choice behavior. Although network skims were used for calibration, more realistic data such as supplemental geographically referenced surveys or stated preference data may be preferred. The model is ideally suited for imputing variation in intrazonal non-chosen mode attributes and for assessing the marginal impacts of travel policies, programs, or prices within traffic analysis zones.
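A minimal sketch of the data-augmentation idea behind such a model: draw the missing non-chosen travel times from a conditional distribution (here a simple normal, as an assumption; the paper conditions on observed data) and evaluate a multinomial logit likelihood at the imputed values. The coefficient and distributions are illustrative, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy choice data: 3 modes, time observed for the chosen mode only.
n, J = 200, 3
beta_time = -0.08                       # illustrative time coefficient
times = rng.normal(30, 8, size=(n, J))  # "true" times (unknown for non-chosen)
utils = beta_time * times + rng.gumbel(size=(n, J))
choice = utils.argmax(axis=1)

def mnl_loglik(times_filled, beta):
    v = beta * times_filled
    v -= v.max(axis=1, keepdims=True)            # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return np.log(p[np.arange(n), choice]).sum()

# One augmentation sweep: impute non-chosen times from an assumed
# conditional normal, then evaluate the choice likelihood at the draws.
imputed = times.copy()
mask = np.ones((n, J), bool)
mask[np.arange(n), choice] = False               # non-chosen entries missing
imputed[mask] = rng.normal(30, 8, size=mask.sum())
print("log-likelihood with imputed times:", mnl_loglik(imputed, beta_time))
```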
Abstract:
Ethnographic methods have been widely used for requirements elicitation in systems design, especially when the focus is on understanding users' social, cultural and political contexts. Designing an online search engine for peer-reviewed papers can be a challenge given the diversity of its end users, who come from different educational and professional disciplines. This poster describes our exploration of academic research environments based on different in situ methods such as contextual interviews, diary-keeping and job-shadowing. The data generated from these methods are analysed using qualitative data analysis software and subsequently used to develop personas that can serve as a requirements specification tool.
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, developing simulation-based optimal design methods that search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that requires searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
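A minimal sketch of the simulation-based flavour of such designs, comparing two candidate sampling schedules by a Monte Carlo D-optimality proxy under an assumed one-compartment pharmacokinetic model; the model, priors, and utility are illustrative stand-ins for the paper's methods:

```python
import numpy as np

rng = np.random.default_rng(4)

def conc(t, ka, ke):
    """One-compartment oral model (unit dose/volume), a standard PK form."""
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

def utility(times, n_draws=500):
    """Monte Carlo D-optimality proxy: E[log det J'J] over the prior,
    where J holds numerical sensitivities of conc() at the sampling times."""
    total, h = 0.0, 1e-5
    for _ in range(n_draws):
        ka, ke = rng.lognormal(0.0, 0.3), rng.lognormal(-1.5, 0.3)
        J = np.column_stack([
            (conc(times, ka + h, ke) - conc(times, ka, ke)) / h,
            (conc(times, ka, ke + h) - conc(times, ka, ke)) / h,
        ])
        _, logdet = np.linalg.slogdet(J.T @ J)
        total += logdet
    return total / n_draws

# Compare two candidate urine-sampling schedules (hours after dosing).
early = np.array([0.5, 1.0, 2.0, 4.0])
spread = np.array([1.0, 4.0, 8.0, 24.0])
print("early :", utility(early))
print("spread:", utility(spread))
```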
Abstract:
Background: Heatwaves can cause excess deaths ranging from tens to thousands in a local area within a couple of weeks. The excess mortality due to a special event (e.g., a heatwave or an epidemic outbreak) is estimated by subtracting the mortality expected under 'normal' conditions from the historical daily mortality records. Calculating excess mortality is a scientific challenge because of the stochastic temporal pattern of daily mortality data, which is characterised by (a) long-term changes in mean levels (i.e., non-stationarity) and (b) the non-linear temperature-mortality association. The Hilbert-Huang Transform (HHT) algorithm is a novel method originally developed for analysing non-linear and non-stationary time series data in the field of signal processing; however, it has not been applied in public health research. This paper aims to demonstrate the applicability and strength of the HHT algorithm in analysing health data. Methods: Special R functions were developed to implement the HHT algorithm, decomposing the daily mortality time series into trend and non-trend components in terms of the underlying physical mechanism. The excess mortality is calculated directly from the resulting non-trend component series. Results: The Brisbane (Queensland, Australia) and Chicago (United States) daily mortality time series were used to calculate the excess mortality associated with heatwaves. The HHT algorithm estimated 62 excess deaths related to the February 2004 Brisbane heatwave. To calculate the excess mortality associated with the July 1995 Chicago heatwave, the HHT algorithm needed to handle the mode-mixing issue; it estimated 510 excess deaths for that event. To exemplify potential applications, the HHT decomposition results were used as input data for a subsequent regression analysis, using the Brisbane data, to investigate the association between excess mortality and different risk factors. Conclusions: The HHT algorithm is a novel and powerful analytical tool for time series data analysis. It has real potential for a wide range of applications in public health research because of its ability to decompose a non-linear and non-stationary time series into trend and non-trend components consistently and efficiently.
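A minimal sketch of the decomposition step in Python, assuming the third-party PyEMD package as a stand-in for the paper's custom R functions: decompose the series with empirical mode decomposition, take the lowest-frequency components as the trend, and read the excess deaths off the non-trend remainder. The data and the choice of how many components form the trend are illustrative:

```python
import numpy as np
from PyEMD import EMD   # assumption: the third-party EMD-signal (PyEMD) package

# Toy daily mortality: slow seasonal trend plus noise and a heatwave spike.
t = np.arange(365)
deaths = 40 + 5 * np.sin(2 * np.pi * t / 365) \
    + np.random.default_rng(5).poisson(3, 365).astype(float)
deaths[30:37] += 12   # simulated heatwave week

imfs = EMD().emd(deaths)

# Treat the last two (lowest-frequency) components as the trend,
# everything else as the non-trend component.
trend = imfs[-2:].sum(axis=0)
non_trend = deaths - trend

# Excess mortality over the event window = sum of non-trend values there.
print("estimated excess deaths:", non_trend[30:37].sum().round(1))
```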
Abstract:
This thesis is a country-level study of the institutional and human capital determinants of growth-aspiration entrepreneurial activity. Using country-level panel-data analysis, the study is, to our knowledge, the first to test the extent to which country-level human capital accumulation is associated with the prevalence of growth-aspiration entrepreneurship. The overall findings suggest that the institutional determinants affect the prevalence of growth-aspiration entrepreneurship differently in developing and developed countries. The study also found that country-level human capital moderates the effects of the institutional environment.
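The moderation finding corresponds to an interaction term in a country-level panel regression with country fixed effects. A minimal sketch with statsmodels; the toy panel and all variable names are hypothetical stand-ins for the thesis's measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Hypothetical country-year panel.
countries, years = [f"c{i}" for i in range(20)], range(2006, 2014)
df = pd.DataFrame([(c, y) for c in countries for y in years],
                  columns=["country", "year"])
df["inst_quality"] = rng.normal(size=len(df))
df["human_capital"] = rng.normal(size=len(df))
df["growth_asp_rate"] = (0.3 * df.inst_quality
                         + 0.2 * df.human_capital
                         + 0.15 * df.inst_quality * df.human_capital
                         + rng.normal(0, 0.5, len(df)))

# Country fixed effects via dummies; the interaction term carries the
# moderation hypothesis (human capital x institutional environment).
fit = smf.ols("growth_asp_rate ~ inst_quality * human_capital + C(country)",
              data=df).fit()
print(fit.params[["inst_quality", "human_capital",
                  "inst_quality:human_capital"]])
```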
Abstract:
A new hypothesis test for classifying stationary time series, based on bias-adjusted estimators of the fitted autoregressive model, is proposed. It is shown theoretically that the proposed test has desirable properties. Simulation results show that when time series are short, the size and power estimates of the proposed test are reasonably good, so the test is reliable in discriminating between short time series. As the length of the time series increases, the performance of the test improves, but the benefit of bias adjustment diminishes. The proposed test is applied to two real data sets: the annual real GDP per capita of six European countries, and the quarterly real GDP per capita of five European countries. The application results demonstrate that the proposed test performs reasonably well in classifying relatively short time series.
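A minimal sketch of the idea for the AR(1) case, using Kendall's first-order bias approximation (E[rho_hat] is roughly rho - (1 + 3*rho)/n in short series) and a simple z-type comparison; the paper's actual test statistic and bias adjustment are more general than this:

```python
import numpy as np

def ar1_fit(x):
    """Least-squares AR(1) coefficient and its approximate variance."""
    x = np.asarray(x, float) - np.mean(x)
    rho = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    n = len(x)
    var = (1 - rho**2) / n          # asymptotic variance of rho_hat
    return rho, var, n

def bias_adjusted(rho, n):
    # Kendall's approximation: E[rho_hat] ~ rho - (1 + 3*rho)/n.
    return rho + (1 + 3 * rho) / n

rng = np.random.default_rng(7)
def simulate_ar1(rho, n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

# Two short series from different AR(1) processes.
a, b = simulate_ar1(0.3, 40), simulate_ar1(0.7, 40)
ra, va, na = ar1_fit(a)
rb, vb, nb = ar1_fit(b)
z = (bias_adjusted(ra, na) - bias_adjusted(rb, nb)) / np.sqrt(va + vb)
print("z statistic for 'same process' hypothesis:", round(z, 2))
```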