642 results for Fourier Active Appearance Models
Abstract:
This paper investigates a strategy for guiding school-based active travel interventions. School-based active travel programs address the travel behaviors and perceptions of small target populations (i.e., at individual schools) so that they can encourage people to walk or bike. Planners therefore need to know as much as possible about the behaviors and perceptions of their target populations. However, existing strategies for modeling travel behavior and segmenting audiences typically work with larger populations and may not capture the attitudinal diversity of smaller groups. This case study used Q technique to identify salient travel-related attitude types among parents at an elementary school in Denver, Colorado; 161 parents presented their perspectives about school travel by rank-ordering 36 statements from strongly disagree to strongly agree in a forced quasi-normal distribution centered on no opinion. Thirty-nine respondents' cases were selected for case-wise cluster analysis in SPSS according to criteria that made them most likely to walk: proximity to school, grade, and bus service. Analysis revealed five core perspectives that were then correlated with the larger respondent pool: optimistic walkers, fair-weather walkers, drivers of necessity, determined drivers, and fence sitters. Core perspectives are presented—characterized by parents' opinions, personal characteristics, and reported travel behaviors—and recommendations are made for possible intervention approaches. The study concludes that Q technique provides a fine-grained assessment of travel behavior for small populations, which would benefit small-scale behavioral interventions.
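The clustering step at the heart of a Q-methodology study like this one can be sketched in a few lines. The Q-sort data below is randomly generated, and the parameters (39 respondents, 36 statements, a nine-column -4..+4 forced distribution, five clusters) are taken from the abstract or assumed for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical Q-sort data: each row is one respondent's ranking of the
# 36 statements on a -4..+4 agreement scale in a forced quasi-normal shape.
rng = np.random.default_rng(0)
n_respondents, n_statements = 39, 36
scale = np.repeat(np.arange(-4, 5), 4)  # 9 scale values x 4 slots = 36
sorts = np.array([rng.permutation(scale) for _ in range(n_respondents)])

# Q methodology correlates people (rows), not variables (columns).
person_corr = np.corrcoef(sorts)

# Cluster respondents on correlation distance to surface shared perspectives.
dist = 1.0 - person_corr[np.triu_indices(n_respondents, k=1)]
clusters = fcluster(linkage(dist, method="average"), t=5, criterion="maxclust")
print(sorted(set(clusters)))  # up to five candidate perspectives
```

With real sorts, each cluster's average statement ranking would then be inspected to characterize a perspective such as "optimistic walkers".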
Abstract:
We propose to use Tensor Space Modeling (TSM) to represent and analyze users' web log data, which reflects multiple interests and spans multiple dimensions. Further, we propose to use the decomposition factors of the tensors for clustering users based on similarity of search behaviour. Preliminary results show that the proposed method outperforms traditional Vector Space Model (VSM) based clustering.
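A minimal sketch of the tensor-space idea, assuming a hypothetical users × query-terms × time-slots log tensor and using a mode-1 unfolding plus truncated SVD as a stand-in for a full tensor decomposition:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Synthetic stand-in for web log counts: users x query terms x time slots.
rng = np.random.default_rng(1)
n_users, n_terms, n_slots = 20, 50, 8
logs = rng.poisson(0.3, size=(n_users, n_terms, n_slots)).astype(float)

# Mode-1 unfolding: each user becomes one long row spanning terms x time,
# unlike the VSM, which would collapse the time dimension entirely.
unfolded = logs.reshape(n_users, n_terms * n_slots)

# Truncated SVD of the unfolding approximates the user-mode factor matrix;
# its rows are low-dimensional user profiles.
U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
user_factors = U[:, :4] * s[:4]

# Cluster the factor rows instead of the raw high-dimensional logs.
centroids, labels = kmeans2(user_factors, 3, minit="++")
print(user_factors.shape, len(set(labels)))
```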
Abstract:
Previous research has put forward a number of properties of business process models that have an impact on their understandability. Two such properties are compactness and (block-)structuredness. What has not been sufficiently appreciated at this point is that these desirable properties may be at odds with one another. This paper presents the results of a two-pronged study aimed at exploring the trade-off between compactness and structuredness of process models. The first prong of the study is a comparative analysis of the complexity of a set of unstructured process models from industrial practice and of their corresponding structured versions. The second prong is an experiment wherein a cohort of students was exposed to semantically equivalent unstructured and structured process models. The key finding is that structuredness is not an absolute desideratum for process model understandability. Instead, subtle trade-offs between structuredness and other model properties are at play.
Abstract:
In early 2011, the Australian Learning and Teaching Council Ltd (ALTC) commissioned a series of Good Practice Reports on completed ALTC projects and fellowships. This report will:
• provide a summative evaluation of the good practices and key outcomes for teaching and learning from completed ALTC projects and fellowships relating to blended learning
• include a literature review of the good practices and key outcomes for teaching and learning from national and international research
• identify areas in which further work or development is appropriate.
The literature abounds with definitions; it can be argued that the various definitions incorporate different perspectives, but there is no single, collectively accepted definition. Blended learning courses in higher education can be placed somewhere on a continuum between fully online and fully face-to-face courses. Consideration must therefore be given to the different definitions of blended learning presented in the literature and by users and stakeholders. The application of the term in these various projects and fellowships depends on the particular focus of the team and the conditions and situations under investigation. One of the key challenges for projects wishing to develop good practice in blended learning is this lack of a universally accepted definition. The findings from these projects and fellowships reveal the potential of blended learning programs to improve both student outcomes and levels of satisfaction. It is clear that this environment can help teaching and learning engage students more effectively and allow greater participation than traditional models. Just as there are many definitions, there are many models and frameworks that can be successfully applied to the design and implementation of such courses. Each academic discipline has different learning objectives, and in consequence there cannot be a single correct approach.
This is illustrated by the diversity of definitions and applications in the ALTC-funded projects and fellowships. A review of the literature found no universally accepted guidelines for good practice in higher education. To inform this evaluation and literature review, the Seven Principles for Good Practice in Undergraduate Education, as outlined by Chickering and Gamson (1987), were adopted:
1. encourages contact between students and faculty
2. develops reciprocity and cooperation among students
3. uses active learning techniques
4. gives prompt feedback
5. emphasises time on task
6. communicates high expectations
7. respects diverse talents and ways of learning.
These blended learning projects have produced a wide range of resources that can be used in many and varied settings, including books, DVDs, online repositories, pedagogical frameworks and teaching modules. In addition, there is valuable information contained in the published research data and literature reviews that informs good practice and can assist in the development of courses that enrich and improve teaching and learning.
Abstract:
Background: Considerable attention is currently being directed towards both active ageing and the revising of standards for disability services within Australia and internationally. Yet, to date, no consideration appears to have been given to ways to promote active ageing among older adults with intellectual disabilities. Methods: Semi-structured interviews were conducted with 16 Australian professional direct-care support staff (service providers) about their perceptions of ageing among older adults with lifelong intellectual disabilities and what active ageing might entail for an individual from this population who is currently under their care, in both the present and future. Data were analysed against the six core World Health Organization active ageing outcomes for people with intellectual disabilities. Results: Service providers appeared to be strongly focused on encouraging active ageing among their clients. However, their perceptions of the individual characteristics, circumstances and experiences of older adults with intellectual disabilities for whom they care suggest that active ageing principles need to be applied to this group in a way that considers both their individual and diverse needs, particularly with respect to their transition from day services, employment or voluntary work to reduced activity, and finally to aged care facilities. The appropriateness of this group being placed in nursing homes in old age was also questioned. Conclusion: Direct-care staff of older adults with intellectual disabilities have a vital role to play in encouraging and facilitating active ageing, as well as informing strategies that need to be implemented to ensure appropriate care for this diverse group as they proceed to old age.
Abstract:
Purpose – The purpose of this paper is to jointly assess the impact of regulatory reform for corporate fundraising in Australia (CLERP Act 1999) and the relaxation of ASX admission rules in 1999, on the accuracy of management earnings forecasts in initial public offering (IPO) prospectuses. The relaxation of ASX listing rules permitted a new category of new economy firms (commitments test entities (CTEs)) to list without a prior history of profitability, while the CLERP Act (introduced in 2000) was accompanied by tighter disclosure obligations and stronger enforcement action by the corporate regulator (ASIC). Design/methodology/approach – All IPO earnings forecasts in prospectuses lodged between 1998 and 2003 are examined to assess the pre- and post-CLERP Act impact. Based on active ASIC enforcement action in the post-reform period, IPO firms are hypothesised to provide more accurate forecasts, particularly CTE firms, which are less likely to have a reasonable basis for forecasting. Research models are developed to empirically test the impact of the reforms on CTE and non-CTE IPO firms. Findings – The new regulatory environment has had a positive impact on management forecasting behaviour. In the post-CLERP Act period, the accuracy of prospectus forecasts and their revisions significantly improved and, as expected, the results are primarily driven by CTE firms. However, the majority of prospectus forecasts continue to be materially inaccurate. Originality/value – The results highlight the need to control for both the changing nature of listed firms and the level of enforcement action when examining responses to regulatory changes to corporate fundraising activities.
Abstract:
Evaluating the safety of different traffic facilities is a complex and crucial task. Microscopic simulation models have been widely used for traffic management but have been largely neglected in traffic safety studies. Using micro-simulation to study safety is more ethical and accessible than traditional safety studies, which only assess historical crash data. However, current microscopic models are unable to mimic unsafe driver behavior, as they are based on presumptions of safe driver behavior. This highlights the need for a critical examination of current microscopic models to determine which components and parameters affect the reproduction of safety indicators. The question then arises whether these safety indicators are valid indicators of traffic safety. Safety indicators were therefore selected and tested for straight motorway segments in Brisbane, Australia. This test examined the capability of a micro-simulation model and provides a better understanding of micro-simulation models and of how such models, in particular car-following models, can be enriched to produce more accurate safety indicators.
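The abstract does not name its safety indicators; time-to-collision (TTC) is one common surrogate safety measure in this literature, and a minimal sketch of computing it for car-following pairs (all values and the conflict threshold illustrative) looks like:

```python
import numpy as np

def time_to_collision(gap_m, lead_speed, follow_speed):
    """TTC in seconds; infinite when the follower is not closing the gap."""
    closing = follow_speed - lead_speed
    # np.maximum guards the division; np.where discards the dummy value
    # whenever the closing speed is not positive.
    return np.where(closing > 0, gap_m / np.maximum(closing, 1e-9), np.inf)

gaps = np.array([30.0, 15.0, 12.0])   # gap to lead vehicle, metres
lead = np.array([20.0, 20.0, 25.0])   # lead vehicle speed, m/s
follow = np.array([22.0, 25.0, 24.0]) # following vehicle speed, m/s

ttc = time_to_collision(gaps, lead, follow)
conflicts = ttc < 4.0  # a commonly used illustrative conflict threshold
print(ttc, conflicts)
```

Trajectories exported from a micro-simulation run would replace the hand-entered arrays, and the distribution of low-TTC events then serves as the reproduced safety indicator.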
Abstract:
This paper discusses human factors issues of low-cost railway level crossings in Australia. Several issues are discussed, including safety at passive railway level crossings, human factors considerations associated with unavailability of a warning device, and a conceptual model of how safety could be compromised at railway level crossings following prolonged or frequent unavailability. The research plans to quantify safety risk to motorists at level crossings using a Human Reliability Assessment (HRA) method, supported by data collected using an advanced driving simulator. This method aims to identify human error within tasks and task units identified as part of the task analysis process. It is anticipated that by modelling driver behaviour the current study will be able to quantify meaningful task variability, including temporal parameters, both between and within participants. The performance of complex tasks such as driving through a level crossing is fundamentally context-bound. Therefore this study also aims to quantify the performance-shaping factors that contribute to vehicle-train collisions by highlighting changes in the task units and driver physiology. Finally, we will also consider a number of variables germane to ensuring the external validity of our results. Without this inclusion, such an analysis could seriously underestimate risk.
Abstract:
Non-invasive vibration analysis has been used extensively to monitor the progression of dental implant healing and stabilization. It is now being considered as a method to monitor femoral implants in transfemoral amputees. This paper evaluates two modal analysis excitation methods and investigates their capabilities in detecting changes at the interface between the implant and the bone that occur during osseointegration. Excitation of bone-implant physical models with the electromagnetic shaker provided higher coherence values and a greater number of modes over the same frequency range when compared to the impact hammer. Differences were detected in the natural frequencies and fundamental mode shape of the model when the fit of the implant was altered in the bone. The ability to detect changes in the model dynamic properties demonstrates the potential of modal analysis in this application and warrants further investigation.
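How a stiffer bone-implant interface shifts the natural frequencies a modal test would detect can be illustrated with a toy two-degree-of-freedom model; the masses and stiffnesses below are hypothetical, not measured values from the study:

```python
import numpy as np

def natural_frequencies_hz(k_interface):
    """Natural frequencies of a 2-DOF implant-bone surrogate model."""
    m1, m2 = 0.5, 0.3   # kg, implant and bone-segment masses (illustrative)
    k_bone = 2.0e6      # N/m, bone support stiffness (illustrative)
    M = np.diag([m1, m2])
    K = np.array([[k_interface, -k_interface],
                  [-k_interface, k_interface + k_bone]])
    # Undamped free vibration: solve K v = w^2 M v for the angular
    # frequencies w, then convert to Hz.
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sort(np.sqrt(np.abs(w2))) / (2 * np.pi)

loose = natural_frequencies_hz(1.0e5)  # poorly integrated interface
firm = natural_frequencies_hz(1.0e6)   # stiffer, osseointegrated interface
print(loose, firm)
# A stiffer interface raises the fundamental frequency, which is the kind
# of change the shaker or impact-hammer excitation is trying to resolve.
```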
Abstract:
With the increasing number of XML documents in varied domains, it has become essential to identify ways of finding interesting information from these documents. Data mining techniques can be used to derive this interesting information. Mining of XML documents is affected by the model used to represent them, owing to the semi-structured nature of these documents. Hence, in this chapter we present an overview of the various models of XML documents, how these models have been used for mining, and some of the issues and challenges in these models. In addition, this chapter provides some insights into future models of XML documents for effectively capturing their two important features, namely structure and content, for mining.
Abstract:
Existing recommendation systems often recommend products to users by capturing item-to-item and user-to-user similarity measures. These types of recommendation systems become inefficient in people-to-people networks, where people-to-people recommendation requires a two-way relationship. Also, existing recommendation methods use traditional two-dimensional models to find interrelationships between similar users and items. Two-dimensional models are not sufficient for modelling a people-to-people network, as the latent correlations between the people and their attributes are not utilized. In this paper, we propose a novel tensor decomposition-based recommendation method for people-to-people recommendation based on users' profiles and their interactions. People-to-people network data is multi-dimensional; when modeled using vector-based methods it tends to suffer information loss, as such methods capture either the interactions or the attributes of the users but not both. This paper utilizes tensor models, which have the ability to correlate and find latent relationships between similar users based on both kinds of information, user interactions and user attributes, in order to generate recommendations. Empirical analysis is conducted on a real-life online dating dataset. As the results demonstrate, the use of tensor modeling and decomposition has enabled the identification of latent correlations between people based on their attributes and interactions in the network, and quality recommendations have been derived using the 'alike' users concept.
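A minimal sketch of how CP-style tensor factors could yield two-way recommendations; the factor matrices here are random stand-ins for factors fitted to real interaction data, and the reciprocal min-scoring rule is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

# Hypothetical CP factors of a users x users x interaction-type tensor.
rng = np.random.default_rng(3)
n_users, n_types, rank = 12, 3, 4
A = rng.random((n_users, rank))  # sender-mode factors
B = rng.random((n_users, rank))  # receiver-mode factors
C = rng.random((n_types, rank))  # interaction-type factors

# Reconstruct predicted interactions: T[i, j, t] = sum_r A[i,r]B[j,r]C[t,r].
T = np.einsum("ir,jr,tr->ijt", A, B, C)

# People-to-people recommendation needs a two-way score: user j is a good
# match for user i only if interest is predicted in both directions.
sent = T.sum(axis=2)                   # predicted i -> j interest
reciprocal = np.minimum(sent, sent.T)  # keep the weaker direction
np.fill_diagonal(reciprocal, -np.inf)  # never recommend a user to themselves

top_match = int(np.argmax(reciprocal[0]))  # best two-way match for user 0
print(top_match)
```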
Abstract:
Continuum, partial differential equation models are often used to describe the collective motion of cell populations, with various types of motility represented by the choice of diffusion coefficient, and cell proliferation captured by the source terms. Previously, the choice of diffusion coefficient has been largely arbitrary, with the decision to choose a particular linear or nonlinear form generally based on calibration arguments rather than making any physical connection with the underlying individual-level properties of the cell motility mechanism. In this work we provide a new link between individual-level models, which account for important cell properties such as varying cell shape and volume exclusion, and population-level partial differential equation models. We work in an exclusion process framework, considering aligned, elongated cells that may occupy more than one lattice site, in order to represent populations of agents with different sizes. Three different idealizations of the individual-level mechanism are proposed, and these are connected to three different partial differential equations, each with a different diffusion coefficient: one linear, one nonlinear and degenerate, and one nonlinear and nondegenerate. We test the ability of these three models to predict the population-level response of a cell spreading problem for both proliferative and nonproliferative cases. We also explore the potential of our models to predict long-time travelling wave invasion rates and extend our results to two-dimensional spreading and invasion. Our results show that each model can accurately predict density data for nonproliferative systems, but that only one does so for proliferative systems. Hence great care must be taken when predicting density data for populations with varying cell shape.
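The individual-level starting point, a lattice exclusion process, can be sketched as follows; this simplified version uses single-site agents, whereas the elongated cells described above occupy several sites:

```python
import numpy as np

# 1D exclusion-process random walk: each sweep, every agent attempts a move
# to a random neighbouring site, and the move is aborted if the target site
# is occupied (volume exclusion).
rng = np.random.default_rng(4)
L, sweeps = 201, 500
lattice = np.zeros(L, dtype=bool)
lattice[90:111] = True  # initial confluent block of 21 agents mid-domain

for _ in range(sweeps):
    # Snapshot agent positions, then update them in random order.
    for i in rng.permutation(np.flatnonzero(lattice)):
        target = i + rng.choice([-1, 1])
        if 0 <= target < L and not lattice[target]:
            lattice[i], lattice[target] = False, True

# The block spreads outwards: the discrete analogue of the diffusive
# density profile the corresponding PDE describes.
occupied = np.flatnonzero(lattice)
print(occupied.min(), occupied.max(), int(lattice.sum()))
```

Averaging many such realisations gives the density data against which the candidate diffusion coefficients are tested.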
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals as regards measures for business process models, from a rather correlational perspective. This is helpful for understanding, for example size and complexity as general driving forces of error probability. Yet, design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted by using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
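Threshold derivation from ROC analysis can be sketched on synthetic data; the measure, the distributions and the Youden's J selection rule below are illustrative (the paper adapts the ROC curves method, which need not use exactly this criterion):

```python
import numpy as np

# Synthetic stand-in data: one structural measure (e.g. model size) for
# error-free models and for models known to contain errors.
rng = np.random.default_rng(5)
size = np.concatenate([rng.normal(30, 8, 300),    # error-free models
                       rng.normal(55, 12, 100)])  # models with errors
has_error = np.concatenate([np.zeros(300, bool), np.ones(100, bool)])

# Sweep every observed value as a candidate threshold and keep the one
# maximising Youden's J = sensitivity + specificity - 1.
best_j, best_t = -1.0, None
for t in np.unique(size):
    flagged = size >= t
    sens = np.mean(flagged[has_error])    # error models correctly flagged
    spec = np.mean(~flagged[~has_error])  # clean models passed through
    j = sens + spec - 1.0
    if j > best_j:
        best_j, best_t = j, t

print(round(float(best_t), 1), round(best_j, 2))
```

A threshold found this way only becomes a guideline ("keep models below N elements") after validation on an independent model collection, as done here with the 429 EPC models.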
Abstract:
In this paper an existing method for indoor Simultaneous Localisation and Mapping (SLAM) is extended to operate in large outdoor environments using an omnidirectional camera as its principal external sensor. The method, RatSLAM, is based upon computational models of the area in the rat brain that maintains the rodent’s idea of its position in the world. The system uses the visual appearance of different locations to build hybrid spatial-topological maps of places it has experienced that facilitate relocalisation and path planning. A large dataset was acquired from a dynamic campus environment and used to verify the system’s ability to construct representations of the world and simultaneously use these representations to maintain localisation.
Abstract:
This paper presents a novel technique for performing SLAM along a continuous trajectory of appearance. Derived from components of FastSLAM and FAB-MAP, the new system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), augments appearance-based place recognition with particle-filter-based 'pose filtering' within a probabilistic framework, without calculating global feature geometry or performing 3D map construction. For loop closure detection, CAT-SLAM updates in constant time regardless of map size. We evaluate the effectiveness of CAT-SLAM on a 16 km outdoor road network and determine its loop closure performance relative to FAB-MAP. CAT-SLAM recognizes three times as many loop closures at the point where no false positives occur, demonstrating its potential for robust loop closure detection in large environments.
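The appearance-only loop-closure detection that FAB-MAP and CAT-SLAM build on can be sketched with descriptor similarity; the descriptors below are random stand-ins for per-frame appearance descriptors (e.g. bag-of-words histograms), and the window size and threshold are arbitrary:

```python
import numpy as np

# One appearance descriptor per frame along the trajectory; frame 50 is a
# noisy revisit of the place first seen at frame 10.
rng = np.random.default_rng(6)
n_frames, dim = 60, 32
descriptors = rng.random((n_frames, dim))
descriptors[50] = descriptors[10] + rng.normal(0, 0.01, dim)

# Pairwise cosine similarity between all frames.
unit = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
sim = unit @ unit.T

# Declare a loop closure when the current frame is highly similar to a
# frame outside a recent window (to avoid trivially matching neighbours).
closures = []
for cur in range(n_frames):
    past = sim[cur, :max(cur - 20, 0)]
    if past.size and past.max() > 0.995:
        closures.append((cur, int(past.argmax())))

print(closures)
```

CAT-SLAM goes beyond this single-frame matching by filtering such appearance evidence along the trajectory, which is what improves recall at zero false positives.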