877 results for Individual-based modeling
Abstract:
Prevailing video adaptation solutions change the quality of the video uniformly across the whole frame during bitrate adjustment, whereas region-of-interest (ROI)-based solutions selectively retain quality in the areas of the frame to which viewers are most likely to pay attention. ROI-based coding can improve perceptual quality and viewer satisfaction while trading off some bandwidth. However, there has so far been no comprehensive study measuring the bitrate vs. perceptual quality trade-off. This paper proposes an ROI detection scheme for videos that is characterized by low computational complexity and robustness, and measures the bitrate vs. quality trade-off for ROI-based encoding using a state-of-the-art H.264/AVC encoder to demonstrate the viability of this type of encoding. The results of the subjective quality test reveal that ROI-based encoding achieves a significant perceptual quality improvement over encoding with uniform quality at the cost of slightly more bits. Based on the bitrate measurements and subjective quality assessments, bitrate and perceptual quality estimation models are developed for non-scalable ROI-based video coding (AVC); these are found to be similar to the models for scalable video coding (SVC).
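For illustration, ROI-based encoding is commonly realized by lowering the quantization parameter (QP) inside the ROI and raising it elsewhere. The sketch below is not the paper's implementation; the offsets and function name are assumptions chosen to show the mechanism.

```python
import numpy as np

def roi_qp_map(roi_mask, base_qp=30, roi_qp_offset=-6, bg_qp_offset=+4):
    """Derive a per-macroblock QP map from a binary ROI mask.

    roi_mask: 2D array (frame_height/16 x frame_width/16), 1 inside the ROI.
    Macroblocks inside the ROI get a lower QP (higher quality, more bits),
    background macroblocks get a higher QP (fewer bits).
    """
    qp = np.where(roi_mask > 0, base_qp + roi_qp_offset, base_qp + bg_qp_offset)
    return np.clip(qp, 0, 51)  # valid H.264 QP range

# toy example: a 4x6 macroblock grid with a centred ROI
mask = np.zeros((4, 6), dtype=int)
mask[1:3, 2:4] = 1
print(roi_qp_map(mask))
```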
Abstract:
Virtual prototyping is emerging as a technology to replace existing physical prototypes for product evaluation, which are costly and time consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment that allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, human musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiological design: they provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system without the capability to integrate into CAD environments, while Jack provides CAD-integrated human-in-the-loop capability but does not consider musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions based on a segment-mass-ratio-derived scaling approach, which is insufficient to represent user populations in an anthropometrically correct way in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modeling. This interface provides direct data exchange between the two human models, based on a consistent data structure and a common body model. The study assesses kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
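As an illustration of the data-exchange idea described above (not taken from the study), the sketch below serializes a set of joint angles extracted from Jack into a neutral file and then drives the AMS console application in batch mode on a macro file. The executable name, macro file name and command-line form are assumptions for illustration only.

```python
import json
import subprocess

def export_posture(joint_angles, path="posture.json"):
    """Write joint angles (in degrees) extracted from Jack to a neutral exchange file."""
    with open(path, "w") as f:
        json.dump(joint_angles, f, indent=2)

def run_ams_macro(macro_path, console_exe="anybodycon"):
    """Run an AMS operation macro through the console application.

    The executable name and the bare command-line form used here are
    assumptions; the actual invocation in the interface may differ.
    """
    subprocess.run([console_exe, macro_path], check=True)

# hypothetical posture exported from a Jack script
export_posture({"elbow_flexion_r": 45.0, "shoulder_abduction_r": 30.0})
run_ams_macro("inverse_dynamics.anymcr")
```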
Abstract:
In information retrieval (IR) research, increasing focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach that combines the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
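As a rough sketch of the segmentation and support-counting steps (an illustration, not the authors' implementation), the snippet below splits feedback documents into overlapping sliding-window chunks and counts how often each small query-term subset co-occurs with other chunk terms, in the spirit of association-rule support counting. Window size, step and subset limit are illustrative choices.

```python
from itertools import combinations
from collections import Counter

def sliding_chunks(tokens, window=20, step=10):
    """Segment a feedback document into overlapping chunks of tokens."""
    return [tokens[i:i + window]
            for i in range(0, max(len(tokens) - window + 1, 1), step)]

def subset_term_support(query_terms, docs, window=20, step=10, max_subset=2):
    """Count co-occurrences of query-term subsets with other terms in chunks."""
    support = Counter()
    for doc in docs:
        for chunk in sliding_chunks(doc, window, step):
            chunk_set = set(chunk)
            for r in range(1, max_subset + 1):
                for subset in combinations(sorted(query_terms), r):
                    if set(subset) <= chunk_set:
                        for term in chunk_set - set(subset):
                            support[(subset, term)] += 1
    return support
```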
Abstract:
Background: Access to cardiac services is essential for the appropriate implementation of evidence-based therapies to improve outcomes. The Cardiac Accessibility and Remoteness Index for Australia (Cardiac ARIA) aimed to derive an objective, geographic measure reflecting access to cardiac services. Methods: An expert panel defined an evidence-based clinical pathway. Using Geographic Information Systems (GIS), a numeric/alpha index was developed at two points along the continuum of care. The acute category (numeric) measured the time from the emergency call to arrival at an appropriate medical facility via road ambulance. The aftercare category (alpha) measured access to four basic services (family doctor, pharmacy, cardiac rehabilitation, and pathology services) when a patient returned to their community. Results: The numeric index ranged from 1 (access to a principal referral centre with a cardiac catheterization service ≤ 1 hour) to 8 (no ambulance service, > 3 hours to a medical facility, air transport required). The alphabetic index ranged from A (all four services available within a 1-hour drive time) to E (no services available within 1 hour). 13.9 million Australians (71%) resided within Cardiac ARIA 1A locations (hospital with a cardiac catheterization laboratory and all aftercare services within 1 hour). People aged over 65 years (32%) and Indigenous people (60%) were over-represented outside Cardiac ARIA 1A locations. Conclusion: The Cardiac ARIA index demonstrated substantial inequity in access to cardiac services in Australia. The methodology can be used to inform cardiology health service planning and could be applied to other common disease states in other regions of the world.
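A toy classification of the two index categories might look like the sketch below; only the endpoints (1, 8, A, E) follow the definitions above, and the intermediate cut-offs are invented for illustration.

```python
def acute_index(minutes_to_cath_lab, minutes_to_any_facility, has_ambulance=True):
    """Toy numeric (acute) category from drive/transport times.

    Only categories 1 and 8 follow the published definitions; the cut-offs
    used for categories 2-7 here are simplified assumptions.
    """
    if minutes_to_cath_lab is not None and minutes_to_cath_lab <= 60:
        return 1
    if not has_ambulance or minutes_to_any_facility > 180:
        return 8
    return min(7, 2 + int(minutes_to_any_facility // 45))

def aftercare_index(services_within_hour):
    """Alpha (aftercare) category from the number of the four basic services
    (GP, pharmacy, cardiac rehabilitation, pathology) within a 1-hour drive."""
    return "ABCDE"[4 - services_within_hour]

print(acute_index(45, 45), aftercare_index(4))  # -> 1 A
```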
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). PLDA has been shown to be robust against pose and expression variation in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using PLDA: the closest-pair approach and the holistic-sets approach. To better model face appearances in video, we also propose a heteroscedastic version of PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest-pair approach achieves the best performance.
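The closest-pair idea reduces to taking the best frame-to-frame score across the two sets. The sketch below illustrates this with cosine similarity standing in for the PLDA log-likelihood ratio used in the paper; it is an illustration, not the authors' code.

```python
import numpy as np

def closest_pair_similarity(set_a, set_b, pair_score):
    """Set-to-set similarity as the maximum score over all frame pairs.

    set_a, set_b: lists of per-frame feature vectors. pair_score would be the
    PLDA log-likelihood ratio in the paper; cosine similarity is used here to
    keep the example self-contained.
    """
    return max(pair_score(a, b) for a in set_a for b in set_b)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

gallery = [np.random.randn(64) for _ in range(5)]   # frames of enrolled person
probe = [np.random.randn(64) for _ in range(8)]     # frames of test sequence
print(closest_pair_similarity(gallery, probe, cosine))
```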
Abstract:
Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, the software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast that does not require a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region (MSER) detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that obtained with a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera, multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
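As a rough, hedged sketch of the point-localization step (not the authors' code), the snippet below uses OpenCV's MSER detector to find stable regions in a thermal image of the mask and takes their centroids as candidate calibration points. The de-duplication radius and the assumption of one blob per calibration point are illustrative simplifications.

```python
import cv2
import numpy as np

def detect_calibration_points(gray_image, expected_points):
    """Locate calibration-pattern points as centroids of MSER regions.

    A simplified stand-in for the paper's clustering-based detector: keep the
    detected MSER regions, take their centroids, and merge near-duplicates.
    """
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray_image)
    centroids = [r.mean(axis=0) for r in regions]  # (x, y) per region
    keep = []
    for c in centroids:
        if all(np.linalg.norm(c - k) > 5 for k in keep):  # 5 px merge radius
            keep.append(c)
    return np.array(keep[:expected_points])
```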
Abstract:
Each year QUT’s Centre of Philanthropy and Nonprofit Studies collects and analyses statistics on the extent of tax-deductible donations claimed by Australians in their individual income tax returns to deductible gift recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations claimed by Australian taxpayers for the period 1 July 1999 to 30 June 2000. It has been extracted from the Australian Taxation Office's publication Taxation Statistics 1999-2000, which provides an overview and profile of the income and taxation status of Australian taxpayers using information extracted from their income tax returns for that period. The 1999/2000 report is the latest report that has been made publicly available...
Abstract:
An increase in the likelihood of navigational collisions in port waters has put focus on the collision avoidance process in port traffic safety. The most widely used on-board collision avoidance system is the automatic radar plotting aid, a passive warning system that triggers an alert based on the pilot’s pre-defined indicators of distance and time proximities at the closest points of approach in encounters with nearby vessels. To better help pilots in decision making in close-quarters situations, collision risk should be considered as a continuous monotonic function of the proximities, and risk perception should be treated probabilistically. This paper derives an ordered probit regression model to study perceived collision risks. To illustrate the procedure, the risks perceived by Singapore port pilots were obtained to calibrate the regression model. The results demonstrate that a framework based on the probabilistic risk assessment model can be used to give a better understanding of collision risk and to define a more appropriate level of evasive action.
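For readers wanting a concrete starting point, an ordered probit model of this kind can be fitted with statsmodels. The sketch below uses synthetic data and hypothetical predictor names (DCPA and TCPA), not the Singapore pilot survey.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey data: perceived risk on an ordered 5-level scale as a
# function of proximity indicators at the closest point of approach.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "dcpa_nm": rng.uniform(0.1, 2.0, n),    # distance at closest point of approach
    "tcpa_min": rng.uniform(1.0, 30.0, n),  # time to closest point of approach
})
latent = -1.5 * df["dcpa_nm"] - 0.05 * df["tcpa_min"] + rng.normal(size=n)
df["risk"] = pd.Categorical(pd.cut(latent, bins=5, labels=False), ordered=True)

model = OrderedModel(df["risk"], df[["dcpa_nm", "tcpa_min"]], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```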
Abstract:
Motor vehicle crashes are a leading cause of death among young people. Fourteen percent of adolescents aged 13-14 report passenger-related injuries within a three-month period. Intervention programs typically focus on young drivers and overlook passengers as potential protective influences. Graduated Driver Licensing restricts passenger numbers, and this study focuses on a complementary school-based intervention to increase passengers’ personal- and peer-protective behavior. The aim of this research was to assess the impact of the curriculum-based injury prevention program Skills for Preventing Injury in Youth (SPIY) on passenger-related risk-taking and injuries, and on intentions to intervene in friends’ risky road behavior. SPIY was implemented in Grade 8 Health classes and evaluated using survey and focus group data from 843 students across 10 Australian secondary schools. Intervention students reported less passenger-related risk-taking six months after the program. Their intention to protect friends from underage driving also increased. The results of this study show that a comprehensive, school-based program targeting individual and social changes can increase adolescent passenger safety.
Abstract:
This paper investigates the effects of limited speech data in the context of speaker verification using a probabilistic linear discriminant analysis (PLDA) approach. Being able to reduce the length of required speech data is important to the development of automatic speaker verification systems for real-world applications. When sufficient speech is available, previous research has shown that heavy-tailed PLDA (HTPLDA) modeling of speakers in the i-vector space provides state-of-the-art performance; however, the robustness of HTPLDA to limited speech resources in development, enrolment and verification is an important issue that has not yet been investigated. In this paper, we analyze speaker verification performance with regard to the duration of utterances used for speaker evaluation (enrolment and verification) and for score normalization and PLDA modeling during development. Two different approaches to total-variability representation are analyzed within the PLDA approach to show improved performance in short-utterance mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development. The results presented in this paper, using the NIST 2008 Speaker Recognition Evaluation dataset, suggest that the HTPLDA system continues to achieve better performance than Gaussian PLDA (GPLDA) as evaluation utterance lengths are decreased. We also highlight the importance of matching the durations used for score normalization and PLDA modeling to the expected evaluation conditions. Finally, we found that a pooled total-variability approach to PLDA modeling can achieve better performance than the traditional concatenated total-variability approach for short utterances in mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development.
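One practical point above is that score normalization statistics should be duration-matched to the evaluation data. The minimal z-norm sketch below illustrates the mechanics only; the cohort scores are synthetic, and the choice of z-norm (rather than s-norm or t-norm) is an illustrative assumption, not the paper's exact configuration.

```python
import numpy as np

def znorm(raw_score, cohort_scores):
    """Z-norm: standardise a trial score against a cohort of impostor scores.

    For duration-matched normalization, cohort_scores would be computed from
    utterances cut to the same duration as the expected evaluation utterances.
    """
    mu, sigma = np.mean(cohort_scores), np.std(cohort_scores)
    return (raw_score - mu) / sigma

cohort = np.random.randn(500) * 2.0 + 1.0   # synthetic impostor scores from short cuts
print(znorm(4.2, cohort))
```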
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide an improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based upon the results presented in this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling can provide an improved approach, compared to uncompensated PLDA modeling, for i-vector based speaker verification systems.
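A minimal sketch of the weighting idea is given below, assuming a simple Euclidean-distance-based weighting w(i, j) = d(i, j)^(-alpha) on the between-class scatter; the paper evaluates specific weighting functions that may differ, so this is illustrative only.

```python
import numpy as np
from scipy.linalg import eigh

def wlda_projection(ivectors, labels, n_dims, alpha=1.0):
    """Weighted LDA: down-weight distant class pairs in the between-class scatter."""
    ivectors, labels = np.asarray(ivectors), np.asarray(labels)
    classes = np.unique(labels)
    means = np.array([ivectors[labels == c].mean(axis=0) for c in classes])
    dim = ivectors.shape[1]
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    for c, m in zip(classes, means):
        X = ivectors[labels == c] - m
        Sw += X.T @ X
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            diff = (means[i] - means[j])[:, None]
            w = np.linalg.norm(diff) ** (-alpha)   # illustrative weighting function
            Sb += w * (diff @ diff.T)
    # generalized eigenproblem Sb v = lambda Sw v; keep the top n_dims directions
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(dim))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```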
Abstract:
This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly account for various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997-2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models that accommodate site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were also found to significantly outperform the standard empirical Bayes (EB) approach in correctly identifying hotspots.
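A minimal PyMC sketch of a hierarchical Poisson crash-count model with a site-specific random effect is shown below; the priors, the single covariate and the omission of a serial-correlation term are simplifications, and the data are synthetic rather than the Singapore dataset.

```python
import numpy as np
import pymc as pm

# Synthetic data: yearly crash counts at n_sites intersections over T years,
# with log traffic volume (AADT) as a covariate.
n_sites, T = 50, 10
site_idx = np.repeat(np.arange(n_sites), T)
log_aadt = np.random.normal(9.0, 0.5, size=n_sites * T)
crashes = np.random.poisson(3, size=n_sites * T)

with pm.Model() as hotspot_model:
    beta0 = pm.Normal("beta0", 0.0, 10.0)
    beta_aadt = pm.Normal("beta_aadt", 0.0, 10.0)
    sigma_site = pm.HalfNormal("sigma_site", 1.0)
    site_effect = pm.Normal("site_effect", 0.0, sigma_site, shape=n_sites)

    log_rate = beta0 + beta_aadt * log_aadt + site_effect[site_idx]
    pm.Poisson("crashes", mu=pm.math.exp(log_rate), observed=crashes)

    trace = pm.sample(1000, tune=1000, target_accept=0.9)

# Rank sites by the posterior mean of their site-specific effect
# (one of several possible posterior-based ranking criteria).
ranking = np.argsort(
    trace.posterior["site_effect"].mean(("chain", "draw")).values)[::-1]
```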
Abstract:
Due to the grave potential human, environmental and economic consequences of collisions at sea, collision avoidance has become an important safety concern in navigation. To reduce the risk of collisions at sea, appropriate collision avoidance actions need to be taken in accordance with the regulations, i.e., the International Regulations for Preventing Collisions at Sea. However, the regulations only provide qualitative rules and guidelines, so navigators must decide on collision avoidance actions quantitatively using their own judgment, which often leads to errors in navigation. To better help navigators in collision avoidance, this paper develops a comprehensive collision avoidance decision-making model that indicates whether a collision avoidance action is required, when to take action and what action to take. The model is developed for three types of collision avoidance actions: course change only, speed change only, and a combination of both. The model has the potential to reduce the chance of human error in navigation by assisting navigators in decision making on collision avoidance actions.
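Quantitative collision-avoidance criteria are typically built on the distance and time to the closest point of approach (DCPA/TCPA). The sketch below computes these standard kinematic quantities as background illustration; it is not the paper's decision model.

```python
import numpy as np

def cpa(own_pos, own_vel, target_pos, target_vel):
    """Distance and time to the closest point of approach (DCPA, TCPA).

    Positions in nautical miles, velocities in knots; a negative TCPA means
    the closest point of approach has already passed.
    """
    rel_pos = np.asarray(target_pos, float) - np.asarray(own_pos, float)
    rel_vel = np.asarray(target_vel, float) - np.asarray(own_vel, float)
    v2 = np.dot(rel_vel, rel_vel)
    tcpa = 0.0 if v2 == 0 else -np.dot(rel_pos, rel_vel) / v2
    dcpa = np.linalg.norm(rel_pos + rel_vel * tcpa)
    return dcpa, tcpa

# Own ship heading north at 12 kn; target 4 NM east, 6 NM north, heading west at 8 kn.
print(cpa((0, 0), (0, 12), (4, 6), (-8, 0)))  # -> (0.0, 0.5): collision course in 0.5 h
```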
Abstract:
In an ever-changing world the adults of the future will be faced with many challenges. To cope with these challenges it seems apparent that values education will need to become paramount within a child's education. A considerable number of research studies have indicated that values education is a critical component within education (Lovat & Toomey, 2007b). Building on this research, Lovat (2006) claimed that values education was the missing link in quality teaching. The concept of quality teaching had risen to the fore within educational research literature in the late 20th century with the claim that it is the teacher who makes the difference in schooling (Hattie, 2004). Thus, if teachers make such a difference to student learning, achievement and well-being, then it must hold true that pre-service teacher education programmes are vital in ensuring the development of quality teachers for our schools. The gap that this research programme addressed was to link the fields of values education, quality teaching and pre-service teacher education. This research programme aimed to determine the impact of a values-based pedagogy on the development of quality teaching dimensions within pre-service teacher education. The values-based pedagogy investigated in this research programme was Philosophy in the Classroom. The research programme adopted a nested case study design based on the constructivist-interpretative paradigm in examining a unit within a pre-service teacher education programme at a Queensland university. The methodology utilised was qualitative, with interviews as the main source of data. In total, 43 pre-service teachers participated in three studies in order to determine whether their involvement in a unit that introduced them to an explicit values-based pedagogy impacted on their knowledge, skills and confidence in terms of quality teaching dimensions. The research programme was divided into three separate studies in order to address the two research questions: 1. In what ways do pre-service teachers perceive they are being prepared to become quality teachers? 2. Is there a connection between an explicit values-based pedagogy in pre-service teacher education and the development of pre-service teachers' understanding of quality teaching? Study One provided insight into the understandings of quality teaching held by 21 pre-service teachers who had not engaged in an explicit values-based pedagogy. Study Two involved interviewing 22 pre-service teachers at two separate points in time: prior to exposure to a unit that employed a values-explicit pedagogy, and after the subject's lecture content had been delivered. Study Three reported on and analysed individual case studies of five pre-service teachers who had participated in Study Two at Time 1 and Time 2, as well as at a third time following their field experience, in which they had practice in teaching the values-explicit pedagogy. The results of the research demonstrate that an explicit values-based pedagogy introduced into a teacher education programme has a positive impact on the development of pre-service teachers' understanding of quality teaching skills and knowledge. The teaching and practice of a values-based pedagogy positively impacted on pre-service teachers, with increases in knowledge, skills and confidence demonstrated on the quality teaching dimensions of intellectual quality, a supportive classroom environment, recognition of difference, connectedness and values.
These findings were reinforced through the comparison of pre-service teachers who had participated in the explicit values-based pedagogical approach with a sample of pre-service teachers who had not engaged in the same approach. A solid values-based pedagogy and practice can and does enhance pre-service teachers' understanding of quality teaching. These findings surrounding the use of a values-based pedagogy in pre-service teacher education to enhance quality teaching knowledge and skills have contributed theoretically to the field of educational research, as well as having practical implications for teacher education institutions and teacher educators.
Abstract:
The study of biologically active peptides is critical to the understanding of physiological pathways, especially those involved in the development of disease. Historically, the measurement of biologically active endogenous peptides has been undertaken by radioimmunoassay, a highly sensitive and robust technique that permits the detection of physiological concentrations in different biofluid and tissue extracts. Over recent years, a range of mass spectrometric approaches have been applied to peptide quantification with limited success. Neuropeptide Y (NPY), peptide YY (PYY), and pancreatic polypeptide (PP) belong to the NPY family, which exhibits regulatory effects on appetite and feeding behavior. The physiological significance of these peptides depends on their molecular forms and in vivo concentrations, both systemically and at local sites within tissues. In this report, we describe an approach for the quantification of individual peptides within mixtures using high-performance liquid chromatography electrospray ionization tandem mass spectrometry analysis of the NPY family peptides. Aspects of quantification, including sample preparation, the use of matrix-matched calibration curves, and internal standards, are discussed. This method for the simultaneous determination of NPY, PYY, and PP was accurate and reproducible but lacks the sensitivity required for measurement of their endogenous concentrations in plasma. The advantages of mass spectrometric quantification are discussed alongside the current obstacles and challenges. © 2012 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 98: 357–366, 2012.
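As a generic illustration of the matrix-matched calibration approach mentioned above (not the authors' protocol), the sketch below fits a peak-area-ratio calibration line against concentrations spiked into blank matrix and back-calculates an unknown; the numbers and the unweighted linear fit are illustrative assumptions.

```python
import numpy as np

def calibration_curve(spiked_concentrations, analyte_areas, istd_areas):
    """Fit a matrix-matched calibration curve for LC-MS/MS quantification.

    Response = analyte/internal-standard peak-area ratio measured in spiked
    blank matrix; a simple unweighted linear regression is used here
    (1/x-weighted fits are also common).
    """
    ratios = np.asarray(analyte_areas) / np.asarray(istd_areas)
    slope, intercept = np.polyfit(spiked_concentrations, ratios, 1)
    return slope, intercept

def quantify(analyte_area, istd_area, slope, intercept):
    """Back-calculate the concentration of an unknown from its area ratio."""
    return ((analyte_area / istd_area) - intercept) / slope

conc = [0.5, 1, 2, 5, 10, 20]                           # e.g. nmol/L spiked into plasma
areas = [1.1e4, 2.0e4, 4.1e4, 1.0e5, 2.1e5, 4.0e5]      # illustrative analyte peak areas
istd = [1.0e5] * 6                                      # constant internal-standard areas
s, b = calibration_curve(conc, areas, istd)
print(quantify(6.3e4, 1.0e5, s, b))
```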