987 results for funding models
Abstract:
The use of hedonic models to estimate the effects of various factors on house prices is well established. This paper examines a number of international hedonic house price models that seek to quantify the effect of infrastructure charges on new house prices. This work is an important contribution to the housing affordability debate, since many governments in high-growth areas operate user-pays infrastructure charging policies in tandem with housing affordability objectives, with no empirical evidence on the impact of one on the other. This research finds little consistency between existing models and the data sets utilised. Model specification appears to be driven by data availability rather than sound theoretical grounding, which may limit the external validity of these models.
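For context, hedonic house price models of the kind reviewed typically take a semi-log regression form along these lines (an illustrative sketch only; the variable groupings are not drawn from any specific study above):

    \ln P_i = \beta_0 + \sum_k \beta_k x_{ik} + \gamma C_i + \varepsilon_i

where P_i is the sale price of new dwelling i, x_{ik} are its structural and locational attributes, C_i is the infrastructure charge applying to the dwelling, and gamma captures the extent to which charges are passed through into prices, which is the coefficient the reviewed models attempt to estimate.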
Abstract:
Early career engineering academics are encouraged to join and contribute to established research groups at the leading edge of their discipline. This is often facilitated by various staff development and support programs. Given that academics are often appointed primarily on the basis of their research skills and outputs, such an approach is justified and is likely to result in advancing the individual academic’s career. It also enhances their capacity to attract competitive research funding, while contributing to the overall research performance of their institution, with further potential for an increased share of government funding. In contrast, there is much less clarity of direction or availability of support mechanisms for those academics in their role as teachers. Following a general induction to teaching and learning at their institution, they would commonly think about preparing some lecture materials, whether for delivery in a face-to-face or on-line modality. Typically they would look for new references and textbooks to act as a guide for preparing the content. They would probably find out how the course has been taught before, and what laboratory facilities and experiments have been used. In all of these and other related tasks, the majority of newly appointed academics are guided strongly by their own experiences as students, rather than any firm knowledge of pedagogical principles. At a time of increased demands on academics’ time, and high expectations of performance and productivity in both research and teaching, it is essential to examine possible actions to support academics in enhancing their teaching performance in effective and efficient ways. Many resources have been produced over the years in engineering schools around the world, with very high intellectual and monetary costs. In Australia, the last few years have seen a surge in the number of ALTC/OLT projects and fellowships addressing a range of engineering education issues and providing many resources. There are concerns, however, regarding the extent to which these resources are being effectively utilised. Why are academics still re-inventing the wheel and creating their own versions of teaching resources and pedagogical practice? Why do they spend so much of their precious time in such an inefficient way? A symposium examining the above issues was conducted at the AAEE2012 conference, and some pointers to possible responses to the above questions were obtained. These are explored in this paper and supplemented by the responses to a survey of a group of engineering education leaders on some aspects of these research questions. The outcomes of the symposium and the survey results have been analysed in view of the literature and the ALTC/OLT sponsored learning and teaching projects and resources. Other factors are discussed, including how such resources can be found, how their quality might be evaluated, and how assessment may be appropriately incorporated, again using readily available resources. This study found a strong resonance between resource reuse and work on technology acceptance (Davis, 1989), suggesting that technology adoption models could be used to encourage resource sharing. Efficient reuse of outstanding learning materials is an enabling approach.
The paper provides some insights into the factors affecting the re-use of available resources, and makes recommendations on how resource re-use might be incorporated in the process of applying for and completing engineering education projects.
Abstract:
The huge amount of CCTV footage available makes it very burdensome to process these videos manually by human operators. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian Mixture Model (GMM) and the semi-2D Hidden Markov Model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for the perspective distortion caused by the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
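A minimal sketch of the novelty-detection formulation described here, assuming grid-pooled optical-flow magnitudes as features and a GMM trained on normal footage (illustrative only; the feature pooling, model configuration and threshold are assumptions, not the authors' implementation):

    import numpy as np
    import cv2
    from sklearn.mixture import GaussianMixture

    def flow_features(prev_gray, gray, grid=16):
        # Dense optical flow (Farneback), pooled into a coarse grid of mean magnitudes per cell.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, w = mag.shape
        return np.array([mag[i:i + h // grid, j:j + w // grid].mean()
                         for i in range(0, h, h // grid)
                         for j in range(0, w, w // grid)])

    # normal_features / test_features: hypothetical (frames x features) arrays built with flow_features.
    gmm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(normal_features)
    threshold = np.percentile(gmm.score_samples(normal_features), 1)   # free parameter
    anomalous = gmm.score_samples(test_features) < threshold           # frames that do not fit 'normal'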
Abstract:
The diverse needs of children have been drawing global attention from both academic and practitioner communities. Based on semi-structured interviews with 23 kin caregivers and five school personnel in the Shijiapu Town of Jilin Province, China, this paper presents a needs model for rural school-age children left behind by their migrant parents. This Chinese model is compared to the needs identification mechanism developed by the Australian Research Alliance for Children and Youth. The paper outlines the common needs of children in different contexts, and also highlights needs that are not explicit in the Australian Research Alliance for Children and Youth framework, such as empowerment and agency, or that are perhaps given insufficient weight, such as education. In discussing the relationships among different needs and the aspects that are missing from the framework, it is argued that culture should be more explicitly recognised when defining need.
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted as A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users’ acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective video quality assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
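As a rough illustration of regression-based acceptability prediction of this kind (hypothetical predictors and toy data; the actual A-QoE predictors and functional form are those reported in the paper):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-clip predictors: bitrate (kbps), frame rate, display height (px), content motion index.
    X = np.array([
        [400, 25, 480, 0.2],
        [800, 30, 720, 0.6],
        [1200, 30, 1080, 0.4],
        [250, 15, 480, 0.8],
    ])
    y = np.array([0, 1, 1, 0])                       # 1 = clip rated acceptable by the viewer
    model = LogisticRegression().fit(X, y)
    p_accept = model.predict_proba(X)[:, 1]          # predicted probability of acceptability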
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environment-specific tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems without the need for prior training or system tuning.
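One plausible way to turn whole-image descriptor distances into likelihoods online, in the spirit of the approach described (a sketch under assumed details, not the paper's exact formulation):

    import numpy as np
    from scipy.stats import norm

    def place_likelihoods(current_desc, stored_descs):
        # Distances from the current whole-image descriptor to every stored place.
        d = np.linalg.norm(stored_descs - current_desc, axis=1)
        mu, sigma = d.mean(), d.std() + 1e-9      # statistics of the (mostly non-matching) distances
        z = (d - mu) / sigma
        lik = norm.sf(z)                          # unusually small distances -> likelihood near 1
        return lik / lik.sum()                    # normalised over candidate places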
Abstract:
An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
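A minimal sketch of a Gaussian process speed model of this kind, assuming a handful of environmental inputs (the input variables and data here are invented for illustration):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical inputs: wind speed, wind direction (rad), current speed, current direction (rad).
    X = np.array([[3.0, 0.5, 0.4, 1.0],
                  [6.0, 2.0, 0.8, 2.5],
                  [1.5, 3.0, 0.2, 0.3],
                  [8.0, 1.2, 1.0, 1.8]])
    y = np.array([1.1, 0.7, 1.3, 0.5])            # observed vehicle speed (m/s)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(), normalize_y=True).fit(X, y)
    mean, std = gp.predict(np.array([[4.0, 1.0, 0.5, 1.2]]), return_std=True)   # speed prediction + uncertainty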
Abstract:
Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and has been widely utilized in fields such as machine learning and information retrieval. However, its effectiveness in information filtering is rarely explored. Patterns are generally thought to be more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-Based Topic Model (PBTM), is proposed to represent text documents not only using topic distributions at a general level but also using semantic pattern representations at a more specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
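For reference, a plain LDA baseline of the kind PBTM builds on looks roughly like this (toy documents; this shows only the topic-distribution side, not the pattern-mining component of PBTM):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["interest rates and central bank policy",
            "football final and championship results",
            "bank lending rates rise on policy change",
            "championship football season opener"]
    counts = CountVectorizer(stop_words='english').fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    doc_topics = lda.transform(counts)    # per-document topic distributions used for relevance ranking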
Abstract:
Caveolae and their proteins, the caveolins, transport macromolecules; compartmentalize signalling molecules; and are involved in various repair processes. There is little information regarding their role in the pathogenesis of significant renal syndromes such as acute renal failure (ARF). In this study, an in vivo rat model of 30 min bilateral renal ischaemia followed by reperfusion times from 4 h to 1 week was used to map the temporal and spatial association between caveolin-1 and tubular epithelial damage (desquamation, apoptosis, necrosis). An in vitro model of ischaemic ARF was also studied, where cultured renal tubular epithelial cells or arterial endothelial cells were subjected to injury initiators modelled on ischaemia-reperfusion (hypoxia, serum deprivation, free radical damage or hypoxia-hyperoxia). Expression of caveolin proteins was investigated using immunohistochemistry, immunoelectron microscopy, and immunoblots of whole cell, membrane or cytosol protein extracts. In vivo, healthy kidney had abundant caveolin-1 in vascular endothelial cells and also some expression in membrane surfaces of distal tubular epithelium. In the kidneys of ARF animals, punctate cytoplasmic localization of caveolin-1 was identified, with high intensity expression in injured proximal tubules that were losing basement membrane adhesion or were apoptotic, 24 h to 4 days after ischaemia-reperfusion. Western immunoblots indicated a marked increase in caveolin-1 expression in the cortex where some proximal tubular injury was located. In vitro, the main treatment-induced change in both cell types was translocation of caveolin-1 from the original plasma membrane site into membrane-associated sites in the cytoplasm. Overall, expression levels did not alter for whole cell extracts and the protein remained membrane-bound, as indicated by cell fractionation analyses. Caveolin-1 was also found to localize intensely within apoptotic cells. The results are indicative of a role for caveolin-1 in ARF-induced renal injury. Whether it functions for cell repair or death remains to be elucidated.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to investigate. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are however fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities’ physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics remain the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels, or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
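A minimal sketch of the asset/agent separation described above, using hypothetical Python classes rather than MODAM's actual OSGi/Eclipse plugin structure:

    from dataclasses import dataclass

    @dataclass
    class BatteryAsset:
        # Physical characteristics only - shared by every battery of this type.
        capacity_kwh: float
        depth_of_discharge: float
        charge_kwh: float = 0.0

    class PeakShavingAgent:
        # Behaviour: discharge the battery whenever network load exceeds a threshold.
        def __init__(self, asset: BatteryAsset, threshold_kw: float):
            self.asset, self.threshold_kw = asset, threshold_kw

        def step(self, load_kw: float) -> float:
            if load_kw > self.threshold_kw and self.asset.charge_kwh > 0:
                delivered = min(load_kw - self.threshold_kw, self.asset.charge_kwh)
                self.asset.charge_kwh -= delivered
                return load_kw - delivered
            return load_kw

    # The same asset could instead be driven by, say, a tariff-arbitrage agent without changing
    # its physical description - the kind of reuse the compositional design is aiming for.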
Abstract:
Social networking sites (SNSs), with their large numbers of users and large information base, seem to be perfect breeding grounds for exploiting the vulnerabilities of people, the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as “social engineering.” While technology-based security has been addressed by research and may be well understood, social engineering is more challenging to understand and manage, especially in new environments such as SNSs, owing to some factors of SNSs that reduce the ability of users to detect the attack and increase the ability of attackers to launch it. This work will contribute to the knowledge of social engineering by presenting the first two conceptual models of social engineering attacks in SNSs. Phase-based and source-based models are presented, along with an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.
Abstract:
We describe recent biologically-inspired mapping research incorporating brain-based multi-sensor fusion and calibration processes and a new multi-scale, homogeneous mapping framework. We also review the interdisciplinary approach to the development of the RatSLAM robot mapping and navigation system over the past decade and discuss the insights gained from combining pragmatic modelling of biological processes with attempts to close the loop back to biology. Our aim is to encourage the pursuit of truly interdisciplinary approaches to robotics research by providing successful case studies.
Abstract:
In this paper, a model-predictive control (MPC) method is detailed for the control of nonlinear systems with stability considerations. It is assumed that the plant is described by a local input/output ARX-type model, with the control potentially included in the premise variables, which enables the control of systems that are nonlinear in both the state and the control input. Additionally, for the case of set-point regulation, a suboptimal controller is derived which has the dual purpose of ensuring stability and enabling finite-iteration termination of the iterative procedure used to solve the nonlinear optimization problem that determines the control signal.
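To make the receding-horizon idea concrete, here is a toy MPC loop for a linear ARX model (a simplification: the paper's model is nonlinear, with the control potentially entering the premise variables, and uses a dedicated suboptimal solver rather than a generic optimizer):

    import numpy as np
    from scipy.optimize import minimize

    a1, a2, b1 = 0.6, 0.2, 0.5            # illustrative ARX model: y[k+1] = a1*y[k] + a2*y[k-1] + b1*u[k]

    def predict(y_hist, u_seq):
        y = list(y_hist)
        for u in u_seq:
            y.append(a1 * y[-1] + a2 * y[-2] + b1 * u)
        return np.array(y[2:])

    def cost(u_seq, y_hist, setpoint, lam=0.1):
        return np.sum((predict(y_hist, u_seq) - setpoint) ** 2) + lam * np.sum(np.asarray(u_seq) ** 2)

    horizon, y_hist, setpoint = 10, [0.0, 0.0], 1.0
    u_seq = minimize(cost, np.zeros(horizon), args=(y_hist, setpoint)).x
    u_now = u_seq[0]                      # apply the first move only, then re-solve at the next step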
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From a computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement for a benchmark problem generated by our heuristic algorithm with a conventional mapper/reducer placement which puts a fixed number of mappers/reducers on each machine. The comparison results show that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
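As an illustration of the bin-packing view of the placement problem, a first-fit-decreasing heuristic (a generic stand-in; the paper's own heuristic and cost model are not reproduced here):

    def place_tasks(task_demands, machine_capacity):
        # First-fit decreasing: sort mapper/reducer tasks by resource demand, place each
        # on the first machine with enough remaining capacity, opening a new machine if needed.
        machines = []                                     # each entry: [remaining capacity, task indices]
        for i in sorted(range(len(task_demands)), key=lambda i: task_demands[i], reverse=True):
            for m in machines:
                if m[0] >= task_demands[i]:
                    m[0] -= task_demands[i]
                    m[1].append(i)
                    break
            else:
                machines.append([machine_capacity - task_demands[i], [i]])
        return [m[1] for m in machines]

    placement = place_tasks([0.6, 0.3, 0.5, 0.2, 0.7], machine_capacity=1.0)
    # each sub-list is the set of tasks co-located on one machine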