997 results for Projected models
Abstract:
Over the past decade, most Australian universities have moved increasingly towards online course delivery for both undergraduate and graduate programs. In almost all cases, elements of online teaching are part of routine teaching loads, yet detailed and accurate workload data are not readily available. As a result, institutional policies on academic staff workload are often guided by untested assumptions about reducing cost per student unit rather than by evidence, and the implementation of new technologies for online teaching has produced poorly defined workload expectations. The academics in this study often revealed a limited understanding of their institutional workload formulas, which in Australia are negotiated between management and the national union through its local branches. Yet the costs of various types of teaching delivery have become a critical issue at a time of increasing student numbers, declining funding, pressure to raise quality and introduce minimum standards for teaching and curriculum, and substantial expenditure on technologies to support e-learning. There have been relatively few studies on the costs associated with workload for online teaching, and even fewer on the more ubiquitous ‘blended’, ‘hybrid’ or ‘flexible’ modes, in which face-to-face teaching is supplemented by online resources and activities. With this in mind, the research reported here has attempted to answer the following question: What insights currently inform Australian universities about staff workload when teaching online?
Abstract:
The use of hedonic models to estimate the effects of various factors on house prices is well established. This paper examines a number of international hedonic house price models that seek to quantify the effect of infrastructure charges on new house prices. This work is an important factor in the housing affordability debate, with many governments in high-growth areas operating user-pays infrastructure charging policies in tandem with housing affordability objectives, with no empirical evidence on the impact of one on the other. This research finds there is little consistency between existing models and the data sets utilised; specification appears dependent upon data availability rather than sound theoretical grounding, which may lead to a lack of external validity.
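The basic mechanics of a hedonic price model can be sketched as a regression of price on attributes. The sketch below uses a single regressor (an infrastructure charge) and entirely hypothetical data, purely to illustrate how such a coefficient is estimated; real hedonic models use many attributes and larger samples.

```python
# Minimal hedonic price sketch: ordinary least squares of house price
# on one attribute (a per-lot infrastructure charge). Data are hypothetical.

def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimising squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical sale prices ($'000) against infrastructure charges ($'000)
charges = [10, 15, 20, 25, 30]
prices = [400, 410, 425, 430, 445]

intercept, slope = fit_simple_ols(charges, prices)
print(round(slope, 2))  # → 2.2: estimated price rise per $1k of charge
```

The slope is the quantity of interest in the models surveyed: how much of each dollar of charge is passed through to the new house price.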
Abstract:
The huge amount of CCTV footage available makes it very burdensome for human operators to process these videos manually. This has made automated processing of video footage through computer vision technologies necessary. Over the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task in which the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modelled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
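The novelty detection formulation described above can be illustrated with a deliberately simplified, single-component stand-in for a GMM: fit a Gaussian to feature values from normal footage (here, hypothetical optical-flow magnitudes) and flag observations whose likelihood falls below any seen in training.

```python
import math

# Simplified novelty detection sketch: one Gaussian fitted to "normal"
# optical-flow magnitudes (hypothetical values); a full GMM would fit
# several such components.

def fit_gaussian(samples):
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Flow magnitudes from normal (pedestrian) training footage
normal = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1]
mu, var = fit_gaussian(normal)
# loosest score seen on normal data becomes the decision threshold
threshold = min(log_likelihood(x, mu, var) for x in normal)

def is_abnormal(x):
    return log_likelihood(x, mu, var) < threshold

print(is_abnormal(5.0), is_abnormal(1.05))  # fast object flagged, walker not
```

A GMM generalises this to a weighted sum of Gaussians so that multimodal normal behaviour (e.g. two typical walking speeds) is not flagged.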
Abstract:
Organizational transformations reliant on successful ICT system developments continue to fail to deliver projected benefits even when contemporary governance models are applied rigorously. Modifications to traditional program, project and systems development management methods have produced little material improvement in transformation success, as they are unable to routinely address the complexity and uncertainty of dynamically aligning IS investments and innovation. Complexity theory provides insight into why this phenomenon occurs and is used to develop a conceptualization of complexity in IS-driven organizational transformations. This research-in-progress aims to identify complexity formulations relevant to organizational transformation. Political and power-based influences, interrelated business rules, socio-technical innovation, impacts on stakeholders and emergent behaviors are commonly considered to characterize complexity; the proposed conceptualization accommodates these as connectivity, irreducibility, entropy and/or information gain in hierarchical approximation and scaling, the number of states in a finite automaton and/or the dimension of an attractor, and information and/or variety.
Abstract:
The diverse needs of children have been drawing global attention from both academic and practitioner communities. Based on semi-structured interviews with 23 kin caregivers and five school personnel in Shijiapu Town, Jilin Province, China, this paper presents a needs model for rural school-age children left behind by their migrant parents. This Chinese model is compared to the needs identification mechanism developed by the Australian Research Alliance for Children and Youth. The paper outlines the common needs of children in different contexts, and also highlights needs that are not explicit in the Australian Research Alliance for Children and Youth framework, such as empowerment and agency, or that are perhaps given insufficient weight, such as education. In discussing the relationships among different needs and the aspects that are missing from the framework, it is argued that culture should be more explicitly recognised when defining need.
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users’ acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective video quality assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
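The general shape of an acceptability predictor built by regression can be sketched as follows. The logistic form and the weights below are hypothetical illustrations of how encoding and device factors might enter such a model; they are not the fitted A-QoE coefficients from the paper.

```python
import math

# Illustrative acceptability predictor in the spirit of A-QoE:
# a logistic function of log-bitrate and display height.
# All weights are hypothetical, not fitted values from the study.

W0, W_BITRATE, W_HEIGHT = -6.0, 1.2, 0.002

def acceptability(bitrate_kbps, display_height_px):
    z = W0 + W_BITRATE * math.log(bitrate_kbps) + W_HEIGHT * display_height_px
    return 1.0 / (1.0 + math.exp(-z))  # probability a user accepts the quality

low = acceptability(200, 480)
high = acceptability(2000, 480)
print(low < high)  # higher bitrate -> higher predicted acceptance
```

In practice the weights would be estimated by the statistical regression analysis described above, with content characteristics added as further predictors.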
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal filtering or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems without the need for prior training or system tuning.
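One simple way to turn raw descriptor difference scores into a probability online, in the spirit of the approach above, is to fit a Gaussian to the running distribution of difference scores and score candidate matches by their lower tail. This is an illustration of the general idea, not the paper's exact formulation.

```python
import math

# Sketch: convert a whole-image descriptor difference score into a
# match likelihood by fitting a Gaussian to the scores observed so far.
# Scores below are hypothetical.

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def match_likelihood(score, observed_scores):
    n = len(observed_scores)
    mu = sum(observed_scores) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in observed_scores) / n)
    # a low difference score sits in the lower tail => likely match
    return 1.0 - gaussian_cdf(score, mu, sigma)

scores = [0.8, 0.9, 0.85, 0.95, 0.9, 0.88]  # typical non-match differences
print(match_likelihood(0.2, scores) > match_likelihood(0.9, scores))
```

Because the distribution is re-estimated from incoming scores, the likelihoods adapt to each environment without manual tuning, which is the property the paper's method provides.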
Abstract:
An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
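A miniature Gaussian process regression, assuming an RBF kernel, hypothetical hyperparameters, and made-up training data (current speed in, vehicle speed out), shows the mechanics of the kind of predictive kinematic model examined above.

```python
import math

# Miniature GP regression sketch: predict vehicle speed (m/s) from one
# environmental input (current speed, m/s). Kernel, hyperparameters and
# data are hypothetical.

def rbf(a, b, length=1.0, var=1.0):
    return var * math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(x_star, xs, ys, noise=1e-6):
    # posterior mean: k(x*, X) @ K^{-1} y
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(xs, alpha))

currents = [0.0, 0.5, 1.0]   # input: current speed
speeds = [2.0, 1.6, 1.2]     # observed speed over ground

print(round(gp_predict(0.5, currents, speeds), 2))  # → 1.6 (interpolates)
```

A full treatment would also return the posterior variance, which is what makes GPs attractive for planning: the planner knows where the speed prediction is uncertain.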
Abstract:
Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and has been widely utilized in fields such as machine learning and information retrieval. But its effectiveness in information filtering is rarely known. Patterns are generally thought to be more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents not only using topic distributions at a general level but also using semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
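The "pattern" side of this idea can be illustrated in isolation: within documents assigned to a topic, frequent term co-occurrence patterns are more discriminative than single terms. The sketch below mines frequent term pairs from hypothetical topic documents; it stands in for the pattern layer that PBTM couples with LDA topic distributions.

```python
from collections import Counter
from itertools import combinations

# Sketch of pattern mining for topic representation: frequent term
# pairs within documents assigned to one topic. Documents and the
# support threshold are hypothetical.

topic_docs = [
    {"oil", "price", "market"},
    {"oil", "price", "opec"},
    {"market", "price", "stock"},
    {"oil", "opec", "output"},
]

def frequent_pairs(docs, min_support=2):
    counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(doc), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

patterns = frequent_pairs(topic_docs)
print(("oil", "price") in patterns)  # a frequent, topic-specific pattern
```

In PBTM such patterns give the detailed, specific-level representation, while the LDA topic distribution supplies the general-level one.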
Abstract:
BACKGROUND Mosquito-borne diseases are climate sensitive and there has been increasing concern over the impact of climate change on future disease risk. This paper projected the potential future risk of Barmah Forest virus (BFV) disease under climate change scenarios in Queensland, Australia. METHODS/PRINCIPAL FINDINGS We obtained data on notified BFV cases, climate (maximum and minimum temperature and rainfall), socio-economic and tidal conditions for the period 2000-2008 for coastal regions in Queensland. Gridded data on future climate projections for 2025, 2050 and 2100 were also obtained. Logistic regression models were built to forecast the potential risk of BFV disease distribution under existing climatic, socio-economic and tidal conditions. The model was applied to estimate the potential geographic distribution of BFV outbreaks under climate change scenarios. The predictive model had good accuracy, sensitivity and specificity. Maps of the potential future risk of BFV disease indicated that disease risk would vary significantly across coastal regions in Queensland by 2100 owing to marked differences in future rainfall and temperature projections. CONCLUSIONS/SIGNIFICANCE The results of this study demonstrate that the future risk of BFV disease would vary across coastal regions in Queensland. These results may help public health decision making in developing effective risk management strategies for BFV disease control and prevention programs in Queensland.
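The modelling step described above, a logistic model scored under current and projected climate, can be sketched as follows. The coefficients and climate values are hypothetical illustrations, not the fitted values or projections from the study.

```python
import math

# Illustrative logistic outbreak-risk model: probability of a BFV
# outbreak as a logistic function of climate covariates. Coefficients
# and scenario values are hypothetical, not the study's fitted model.

B0, B_TEMP, B_RAIN = -8.0, 0.25, 0.004

def outbreak_probability(max_temp_c, rainfall_mm):
    z = B0 + B_TEMP * max_temp_c + B_RAIN * rainfall_mm
    return 1.0 / (1.0 + math.exp(-z))

current = outbreak_probability(28.0, 600.0)  # baseline climate for a grid cell
future = outbreak_probability(31.0, 700.0)   # a warmer, wetter 2100 scenario
print(current < future)  # risk rises under this scenario
```

Applying the same fitted model to each grid cell under each projection is what produces the risk maps referred to in the findings.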
Abstract:
Caveolae and their proteins, the caveolins, transport macromolecules; compartmentalize signalling molecules; and are involved in various repair processes. There is little information regarding their role in the pathogenesis of significant renal syndromes such as acute renal failure (ARF). In this study, an in vivo rat model of 30 min bilateral renal ischaemia followed by reperfusion times from 4 h to 1 week was used to map the temporal and spatial association between caveolin-1 and tubular epithelial damage (desquamation, apoptosis, necrosis). An in vitro model of ischaemic ARF was also studied, where cultured renal tubular epithelial cells or arterial endothelial cells were subjected to injury initiators modelled on ischaemia-reperfusion (hypoxia, serum deprivation, free radical damage or hypoxia-hyperoxia). Expression of caveolin proteins was investigated using immunohistochemistry, immunoelectron microscopy, and immunoblots of whole cell, membrane or cytosol protein extracts. In vivo, healthy kidney had abundant caveolin-1 in vascular endothelial cells and also some expression in membrane surfaces of distal tubular epithelium. In the kidneys of ARF animals, punctate cytoplasmic localization of caveolin-1 was identified, with high intensity expression in injured proximal tubules that were losing basement membrane adhesion or were apoptotic, 24 h to 4 days after ischaemia-reperfusion. Western immunoblots indicated a marked increase in caveolin-1 expression in the cortex where some proximal tubular injury was located. In vitro, the main treatment-induced change in both cell types was translocation of caveolin-1 from the original plasma membrane site into membrane-associated sites in the cytoplasm. Overall, expression levels did not alter for whole cell extracts and the protein remained membrane-bound, as indicated by cell fractionation analyses. Caveolin-1 was also found to localize intensely within apoptotic cells. 
The results are indicative of a role for caveolin-1 in ARF-induced renal injury. Whether it functions for cell repair or death remains to be elucidated.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed in some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as it uses explicit data at a fine level of detail, and computer-intensive, as it requires many interactions between agents, which can learn and have goals. With the growing availability of data and the increase in computer power, these concerns are fading. Nonetheless, updating or extending the model as more information becomes available can become problematic because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that it has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities’ physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, in which both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels), or to describe the assets and their relation to one another (e.g. the network assets). Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, sequentially or in parallel for faster runs. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets.
Simulations have been run to understand the potential impact of changes to the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
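The asset/agent separation described above can be sketched in a few lines: the same battery asset can be driven by different agent behaviours. Class and method names here are illustrative, not the MODAM API, and the battery example follows the one given in the text.

```python
# Sketch of separating an asset (physical characteristics) from an
# agent (behaviour), in the spirit of MODAM's design. Names are
# illustrative, not the MODAM API.

class BatteryAsset:
    """Physical characteristics only: capacity and state of charge."""
    def __init__(self, capacity_kwh):
        self.capacity_kwh = capacity_kwh
        self.charge_kwh = 0.0

    def store(self, kwh):
        self.charge_kwh = min(self.capacity_kwh, self.charge_kwh + kwh)

class SolarShiftAgent:
    """One possible behaviour: store midday solar surplus."""
    def __init__(self, asset):
        self.asset = asset

    def step(self, hour, surplus_kwh):
        if 10 <= hour <= 15:
            self.asset.store(surplus_kwh)

battery = BatteryAsset(capacity_kwh=10.0)
agent = SolarShiftAgent(battery)
for hour, surplus in [(9, 2.0), (12, 4.0), (13, 3.0), (18, 2.0)]:
    agent.step(hour, surplus)
print(battery.charge_kwh)  # → 7.0: only the midday surplus is stored
```

Swapping in a different agent class (say, one that charges overnight on cheap tariffs) reuses the same asset unchanged, which is the reusability and composability benefit the text describes.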
Abstract:
Social networking sites (SNSs), with their large numbers of users and large information base, seem to be perfect breeding grounds for exploiting the vulnerabilities of people, the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as “social engineering.” While technology-based security has been addressed by research and may be well understood, social engineering is more challenging to understand and manage, especially in new environments such as SNSs, owing to some factors of SNSs that reduce the ability of users to detect the attack and increase the ability of attackers to launch it. This work will contribute to the knowledge of social engineering by presenting the first two conceptual models of social engineering attacks in SNSs. Phase-based and source-based models are presented, along with an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.
Abstract:
We describe recent biologically-inspired mapping research incorporating brain-based multi-sensor fusion and calibration processes and a new multi-scale, homogeneous mapping framework. We also review the interdisciplinary approach to the development of the RatSLAM robot mapping and navigation system over the past decade and discuss the insights gained from combining pragmatic modelling of biological processes with attempts to close the loop back to biology. Our aim is to encourage the pursuit of truly interdisciplinary approaches to robotics research by providing successful case studies.