914 results for Australian PhD data
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
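As a concrete illustration of the expression-level downgrading described above, the following minimal Python sketch (an illustrative assumption, not the authors' toolkit; the function name bit_taint_and is hypothetical) models per-bit taint propagation through a C expression of the form `expr & mask`: only the bit positions admitted by the mask can still carry classified data, so a zero mask fully downgrades its classified input and the corresponding data flow path can be discarded as a false positive.

```python
# Hypothetical, minimal model of per-bit taint propagation through `expr & mask`.
# Not the authors' toolkit; for illustration only.

def bit_taint_and(taint_bits, mask, width=8):
    """Return the bit positions that remain tainted after `expr & mask`.

    taint_bits: set of bit positions of the classified operand that carry
                classified data before the expression is evaluated.
    mask:       the constant mask applied by the C expression.
    A result bit can only depend on a classified input bit if the mask keeps
    that bit position, so every masked-out position is downgraded.
    """
    return {b for b in taint_bits if b < width and (mask >> b) & 1}


if __name__ == "__main__":
    classified = set(range(8))                      # all 8 bits of a classified byte
    print(sorted(bit_taint_and(classified, 0x0F)))  # [0, 1, 2, 3]: high nibble blocked
    print(bit_taint_and(classified, 0x00))          # set(): the expression fully downgrades
                                                    # its input, a false-positive flow path
```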
Abstract:
This paper presents a practical framework to synthesize multi-sensor navigation information for localization of a rotary-wing unmanned aerial vehicle (RUAV) and estimation of unknown ship positions when the RUAV approaches the landing deck. The estimation performance of the visual tracking sensor can also be improved through integrated navigation. Three different sensors (inertial navigation, Global Positioning System, and visual tracking sensor) are utilized complementarily to perform the navigation tasks for the purpose of automatic landing. An extended Kalman filter (EKF) is developed to fuse data from the various navigation sensors and provide reliable navigation information. The performance of the fusion algorithm has been evaluated using real ship motion data. Simulation results suggest that the proposed method can be used to construct a practical navigation system for UAV-ship landing.
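The abstract describes EKF-based fusion of inertial, GPS and visual-tracking data. The sketch below is a generic, assumed illustration of the standard EKF predict/update cycle in Python; the one-dimensional state and the matrices are placeholders, not the paper's actual RUAV model.

```python
# Minimal, assumed EKF predict/update sketch (placeholder model, not the paper's).
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state x and covariance P through the (linearised) motion model F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a sensor measurement z (e.g. GPS or vision)."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    dt = 0.1
    x = np.array([0.0, 1.0])                         # state: [position, velocity], one axis
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity motion model
    Q = 0.01 * np.eye(2)
    H = np.array([[1.0, 0.0]])                       # position-only sensor
    R = np.array([[0.25]])
    x, P = ekf_predict(x, P, F, Q)
    x, P = ekf_update(x, P, np.array([0.12]), H, R)  # fuse one position measurement
    print(x)
```

Each additional sensor would contribute its own update step with its own H and R, which is how the complementary fusion of inertial, GPS and vision measurements is usually organised.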
Abstract:
The motivation of the study stems from the results reported in the Excellence in Research for Australia (ERA) 2010 report. The report showed that only 12 universities performed research at or above international standards, of which the Group of Eight (Go8) universities filled the top eight spots. While the performance of universities was assessed on the number of research outputs, total research income and other quantitative indicators, efficiency or productivity was not considered. The objectives of this paper are twofold: first, to review the research performance of 37 Australian universities using the data envelopment analysis (DEA) bootstrap approach of Simar and Wilson (2007); second, to identify the drivers of productivity by regressing the efficiency scores against a set of environmental variables.
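For readers unfamiliar with the method, the following Python sketch illustrates the two ingredients mentioned above under stated assumptions: a basic input-oriented, constant-returns DEA efficiency score computed by linear programming, and a naive second-stage regression of the scores on environmental variables. The Simar and Wilson (2007) procedure replaces the naive second stage with a truncated regression inside a double bootstrap; that refinement, and the paper's actual data, are not reproduced here.

```python
# Assumed sketch: input-oriented CCR DEA scores plus a naive second-stage regression.
# Synthetic data; the Simar-Wilson double bootstrap is deliberately omitted.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency scores in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                                          # minimise theta
        A_ub, b_ub = [], []
        for i in range(m):                                  # inputs: sum(lambda * x) <= theta * x_o
            A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
        for r in range(s):                                  # outputs: sum(lambda * y) >= y_o
            A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1))
        scores[o] = res.x[0]
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(1, 10, size=(37, 2))    # e.g. staff numbers, research income
    Y = rng.uniform(1, 10, size=(37, 1))    # e.g. research outputs
    eff = dea_ccr_input(X, Y)
    Z = rng.normal(size=(37, 2))            # environmental variables
    # naive second stage: OLS of scores on Z (Simar-Wilson would bootstrap a truncated regression)
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(37), Z], eff, rcond=None)
    print(eff.round(3), beta.round(3))
```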
Abstract:
The Queensland Building Services Authority (QBSA) regulates the construction industry in Queensland, Australia, with licensing requirements creating differential financial reporting obligations depending on firm size. Economic theories of regulation and behaviour provide a framework for investigating the effects of the financial constraints and financial reporting requirements imposed by QBSA licensing. Data are analysed for all small and medium construction entities operating in Queensland between 2001 and 2006. The findings suggest that construction licensees categorize themselves as smaller to avoid the more onerous and costly financial reporting of higher licensee categories; this is consistent with US findings on the 2002 Sarbanes-Oxley (SOX) regulation, which created incentives for small firms to stay small to avoid the costs of complying with more onerous financial reporting requirements. Such behaviour can have undesirable economic consequences, adversely affecting employment, investment, wealth creation and financial stability. Insights and implications from the analysed QBSA processes are important for future policy reform and design, and useful where similar regulatory approaches are planned.
Abstract:
This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges and foundations of this research vision. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarises related work in this field of interest.
Abstract:
This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple, yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as “a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. The model represents four dimensions: ‘Individual Impact’, ‘Organizational Impact’, ‘Information Quality’ and ‘System Quality’. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the goal of ITPS, “to develop the most widely employed model”, this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing its known limitations. One of these limitations relates to the extent of the model’s external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, with packaged software chosen as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia. The overarching research question (managerial question) of this research is “How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?” With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that measures need to be selected purposefully for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is: “Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?” [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs, each represented by formative measures or indicators. Formative measures are known as composite variables because they make up, or form, the construct (in this case, a dimension of the IS-Impact model). These formative measures define different aspects of a dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure.
In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (i.e., Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006; Petter, Straub and Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables, including identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each with a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also shows that a significant new measure emerged from the context; this new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government involved in the first survey). A total of 254 valid responses were used in the data analysis. Data were analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This is the first study to extend the complete IS-Impact model to a new context that differs in nationality, language and type of information system (IS). The main contribution of this research is a comprehensive, up-to-date IS-Impact model that has been validated in the new context. The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing IS evaluation research by extending it in the Malaysian context. A further contribution is a validated Malaysian-language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible.
The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in ‘identification through measurement relations’; (2) a stronger, multi-item ‘Satisfaction’ construct, employed in ‘identification through structural relations’; (3) an alternative version of the main survey instrument in which items are randomized (rather than blocked), compared with the main survey data to attend to possible common method variance (no significant differences between the two instruments were observed); (4) a demonstrated validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involve unidimensional constructs); (5) tests for suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution of each dimension in a multidimensional model.
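Item (4) above concerns the validation of formative indexes. As a small, assumed illustration of one routine check in formative measurement validation, the Python sketch below computes variance inflation factors (VIFs) for a set of formative indicators, since excessive collinearity undermines the interpretation of formative indicator weights. It uses synthetic data and is not the thesis's PLS analysis.

```python
# Assumed sketch: VIFs for formative indicators (synthetic data, not the thesis analysis).
import numpy as np

def vif(X):
    """Return the variance inflation factor of each column of X (n_obs, n_indicators)."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.c_[np.ones(n), np.delete(X, j, axis=1)]   # regress indicator j on the others
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)                        # VIF = 1 / (1 - R^2)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    indicators = rng.normal(size=(254, 4))               # e.g. four System Quality items
    print(vif(indicators).round(2))                      # values well below 3-5 are reassuring
```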
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis investigates the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely finite mixtures, Dirichlet process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects of uncertainty in clustering. Examples considered include uncertainty in a patient’s true cluster membership and uncertainty in the true number of clusters present. Finally, this thesis aims to address, and propose solutions to, the task of comparing clustering solutions, whether comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first comprises symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials, or "spikes", in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
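As a minimal illustration of the kind of Bayesian mixture described above, the following Python sketch (synthetic data, not the thesis code) fits a Dirichlet-process-style Gaussian mixture by variational inference with scikit-learn, showing how the number of occupied clusters is inferred rather than fixed and how posterior responsibilities express uncertainty in cluster membership.

```python
# Assumed sketch: a truncated Dirichlet-process Gaussian mixture on synthetic data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(5, 1, (120, 2))])        # two latent "patient subgroups"

dpgmm = BayesianGaussianMixture(
    n_components=10,                               # upper bound on the number of clusters
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
print("occupied clusters:", np.unique(labels).size)
# posterior responsibilities quantify uncertainty in each point's cluster membership
print(dpgmm.predict_proba(X[:1]).round(2))
```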
Abstract:
In response to the need to leverage private finance and the lack of competition in some parts of the Australian public sector infrastructure market, especially in the very large economic infrastructure sector procured using Public Private Partnerships, the Australian Federal government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI). This paper aims to report on progress towards an investigation into the determinants of multinational contractors’ willingness to bid for Australian public sector major infrastructure projects. The research deploys Dunning’s eclectic theory for the first time with respect to in-bound FDI by multinational contractors into Australia. Elsewhere, the authors have developed Dunning’s principal hypothesis to suit the context of this research and to address a weakness in that hypothesis: it takes a nominal approach to the factors in Dunning's eclectic framework and fails to speak to their relative explanatory power. In this paper, a first-stage test of the authors' development of Dunning's hypothesis is presented by way of an initial review of secondary data on the selected sector (roads and bridges) in Australia (as the host location) and on four selected home countries (China, Japan, Spain and the US). In doing so, the next stage in the research method, concerning sampling and case studies, is also further developed and described. In conclusion, the extent to which the initial review of secondary data suggests the relative importance of the factors in the eclectic framework is considered. More robust conclusions are expected from the planned future stages of the research, including primary data from the case studies and a global survey of the world’s largest contractors, which is briefly previewed. Finally, beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning’s framework, other expected contributions concerning research method and practical implications are noted.
Abstract:
Australian climate, soils and agricultural management practices are significantly different from those of the northern hemisphere nations. Consequently, experimental data on greenhouse gas production from European and North American agricultural soils, and its interpretation, are unlikely to be directly applicable to Australian systems. A programme of studies of non-CO2 greenhouse gas emissions from agriculture has been established that is designed to reduce the uncertainty in non-CO2 greenhouse gas emissions in the Australian National Greenhouse Gas Inventory and to provide outputs that will enable better on-farm management practices for reducing non-CO2 greenhouse gas emissions, particularly nitrous oxide. The systems being examined and their locations are irrigated pasture (Kyabram, Victoria), irrigated cotton (Narrabri, NSW), irrigated maize (Griffith, NSW), rain-fed wheat (Rutherglen, Victoria) and rain-fed wheat (Cunderdin, WA). The field studies include treatments with and without fertilizer addition, stubble burning versus stubble retention, conventional cultivation versus direct drilling, and crop rotation, to determine emission factors and treatment possibilities for best management options. The data to date suggest that nitrous oxide emissions from nitrogen fertilizer applied to irrigated dairy pastures and rain-fed winter wheat are much lower than the average of northern hemisphere grain and pasture studies. More variable emissions have been found in studies of an irrigated cotton/vetch/wheat rotation, and substantially higher emissions from irrigated maize.
Abstract:
Introduction: Emerging evidence reveals that early feeding practices are associated with child food intake, eating behaviour and weight status. This cross-sectional analysis examined the association between maternal infant feeding practices/beliefs and child weight in Australian infants aged 11-17 months. Methods: Participants were 293 first-time mothers of healthy term infants (144 boys, mean age 14±1 months) enrolled in the NOURISH RCT. Mothers self-reported infant feeding practices and beliefs using the Infant Feeding Questionnaire (Baughcum, 2001). Anthropometric data were also measured at baseline (infants aged 4 months). Multiple regression analysis was used, adjusting for infant age, gender, birth weight, infant feeding mode (breast vs. formula), maternal perceptions of infant weight status, pre-pregnancy weight, weight concern, age and education. Results: The average child weight-for-age z-score (WAZ) was 0.62±0.83 (range: -1.56 to 2.94) and the mean change in WAZ (WAZ change) from 4 to 14 months was 0.62±0.69 (range: -1.50 to 2.76). Feeding practices/beliefs partly explained child WAZ (R2=0.28) and WAZ change (R2=0.13) in the adjusted models. While child weight status at 14 months was inversely associated with responsive feeding (e.g. baby feeds whenever she wants, feeding to stop baby being unsettled) (β=-0.104, p=0.06) and maternal concern about the child becoming underweight (β=-0.224, p<0.001), it was positively associated with mother’s concern about child overweight (β=0.197, p<0.05). Birth weight, infant’s age, maternal weight concern and perceiving her child as overweight were significant covariates. WAZ change was only significantly associated with responsive feeding (β=-0.147, p<0.05). Conclusion: Responsive feeding may be an important strategy to promote healthy child weight.
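The adjusted multiple regression reported above can be sketched, under stated assumptions, as follows: a Python example with hypothetical variable names and synthetic data (not the NOURISH dataset) regressing weight-for-age z-score on a responsive-feeding score while controlling for a few covariates.

```python
# Assumed sketch of an adjusted multiple regression; hypothetical variables, synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 293
df = pd.DataFrame({
    "waz": rng.normal(0.6, 0.8, n),                  # child weight-for-age z-score
    "responsive_feeding": rng.normal(0, 1, n),       # feeding-practice score
    "birth_weight": rng.normal(3.4, 0.5, n),
    "infant_age": rng.normal(14, 1, n),
    "maternal_weight_concern": rng.normal(0, 1, n),
})

model = smf.ols(
    "waz ~ responsive_feeding + birth_weight + infant_age + maternal_weight_concern",
    data=df,
).fit()
print(model.params.round(3))                         # adjusted coefficients
print("adjusted R^2:", round(model.rsquared_adj, 3))
```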
Abstract:
Swelling social need and competing calls on government funds have heightened the philanthropic dollar’s value. Yet Australia is not regarded as having a robust giving culture: while 86% of adults give, a mere 16% plan their giving, and those who do donate four times as much as spontaneous givers (Giving Australia, 2005). Traditionally, the prime planned giving example is a charitable bequest, a revenue stream not prevalent here (Baker, 2007). In fact, Baker’s Victorian probate data show that under 5% of estates provide a charitable bequest and just over 1% of estate assets is bequeathed. The UK, in contrast, sources 30% and the US 10% of charitable income through bequests (NCVO, 2004; Sargeant, Wymer and Hilton, 2006). Australian charities could boost bequest giving. Understanding the donor market, which has remembered or may remember them in their wills, is critical. This paper reports donor perceptions of Australian charities’ bequest communication and marketing. The data form part of a wider study of Australian donors’ bequest attitudes and behaviour. Charities spend heavily on bequest promotion, from advertising to personal selling to public relations. Infrastructure funds are scarce, so guidance on what works for donors is important. Guy and Patton (1988) made their classic call for a nonprofit marketing perspective and identified the need for charities to better understand the motivations and behaviour of their supporters. In a similar vein, this study aims to improve the way nonprofits and givers interact and, ultimately, to enhance the giving experience and thus multiply planned giving participation. Academically, it offers insights into Australian bequest motivations and attitudes not studied empirically before.
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed, for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics in different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate the road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while constraining other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced by LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed to process the feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is further applied on the MS segmented image to extract road objects. Road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is performed using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the datasets of Bundaberg, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and the evaluation results using Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
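Two of the low-level steps mentioned above, orientation-selective Gabor filtering followed by Otsu thresholding, can be sketched as follows in Python on a synthetic image; this is an assumed illustration, not the thesis pipeline or its parameters.

```python
# Assumed illustration: Gabor enhancement + Otsu thresholding on a synthetic "road" image
# containing one bright lane-marking stripe. Not the thesis pipeline.
import numpy as np
from skimage.filters import gabor, threshold_otsu

rng = np.random.default_rng(0)
road = rng.normal(0.3, 0.05, (128, 128))        # dark, slightly noisy pavement
road[:, 60:68] += 0.5                           # a bright lane marking

# Orientation-selective filtering: the Gabor magnitude responds strongly where intensity
# varies at the chosen frequency and orientation (here tuned toward the marking direction).
real, imag = gabor(road, frequency=0.1, theta=0.0)
magnitude = np.hypot(real, imag)

mask = magnitude > threshold_otsu(magnitude)    # Otsu split: markings vs. background
print("candidate marking pixels:", int(mask.sum()))
```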
Abstract:
Typical reference year (TRY) weather data is often used to represent the long-term weather pattern for building simulation and design. Through the analysis of ten years of historical hourly weather data for seven Australian major capital cities, using the frequencies procedure of descriptive statistics analysis (in SPSS software), this paper investigates:
• the closeness of the typical reference year (TRY) weather data in representing the long-term weather pattern;
• the variations and common features that may exist between relatively hot and cold years.
It is found that, for the given set of input data and in comparison with the other weather elements, the discrepancy between the TRY and multiple years is much smaller for dry bulb temperature, relative humidity and global solar irradiance. The overall distribution patterns of key weather elements are also generally similar between the hot and cold years, but with some shift and/or small distortion. There is little common tendency of change between the hot and the cold years for different weather variables at different study locations.
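The frequencies comparison described above can be sketched, with synthetic data standing in for the SPSS analysis, as a binned frequency table of hourly dry-bulb temperature for the TRY versus the pooled ten-year record.

```python
# Assumed sketch: binned frequency comparison of TRY vs. multi-year hourly temperatures.
# Synthetic data only; stands in for the SPSS frequencies procedure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
multi_year = pd.Series(rng.normal(18, 6, 10 * 8760))   # ten years of hourly dry-bulb values
try_year = pd.Series(rng.normal(18, 6, 8760))          # the typical reference year

bins = np.arange(-5, 45, 5)                            # 5-degree temperature bins
freq = pd.DataFrame({
    "TRY %": pd.cut(try_year, bins).value_counts(normalize=True, sort=False) * 100,
    "10-year %": pd.cut(multi_year, bins).value_counts(normalize=True, sort=False) * 100,
}).round(1)
print(freq)   # similar bin frequencies indicate the TRY tracks the long-term pattern
```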
Abstract:
The National Hand Hygiene Initiative, implemented in Australia in 2009, is currently being evaluated for effectiveness and cost-effectiveness by a multidisciplinary team of researchers. Data from a wide range of sources are being harvested to address the research questions. The data are observational and appropriate statistical and economic modelling methods are being used. Decision makers will be provided with new knowledge about how hand hygiene interventions should be organised and what investment decisions are justified. This is novel research and the authors are unaware of any other evaluation of hand hygiene improvement initiatives. This paper describes the evaluation currently underway.
Abstract:
This paper draws on the work of the ‘EU Kids Online’ network funded by the EC (DG Information Society) Safer Internet plus Programme (project code SIP-KEP-321803); see www.eukidsonline.net, and addresses Australian children’s online activities in terms of risk, harm and opportunity. In particular, it draws upon data indicating that Australian children are more likely to encounter online risks — especially around seeing sexual images, bullying, misuse of personal data and exposure to potentially harmful user-generated content — than is the case with their EU counterparts. Rather than only comparing Australian children with their European equivalents, this paper places the risks experienced by Australian children in the context of the mediation and online protection practices adopted by their parents, and asks how we might understand data that seem to indicate that Australian children’s experiences of online risk and harm differ significantly from the experiences of their Europe-based peers. In particular, and as an example, this paper sets out to investigate the apparent conundrum whereby Australian children appear twice as likely as most European children to have seen sexual images in the past 12 months, even though their parents are more likely to filter their access to the internet than is the case for most children in the wider EU Kids Online study. Even so, one in four Australian children (25%) believes that what their parents do helps ‘a lot’ to improve their internet experience, and Australian children and their parents are a little less likely to agree about the mediation practices taking place in the family home than is the case in the EU. The AU Kids Online study was carried out as a result of the ARC Centre of Excellence for Creative Industries and Innovation’s funding of a small-scale randomised sample (N = 400) of Australian families with at least one child, aged 9–16, who goes online. The report on Risks and safety for Australian children on the internet follows the same format, and uses much of the same contextual statement around these issues, as the ‘country-level’ reports produced by the 25 EU nations involved in EU Kids Online, first drafted by Livingstone et al. (2010). The entirely new material is the data itself, along with the analysis of that data.