957 results for Points distribution in high dimensional space
Abstract:
The purpose of this article is to classify the real hypersurfaces in complex space forms of dimension 2 that are both Levi-flat and minimal. The main results are as follows: When the curvature of the complex space form is nonzero, there is a 1-parameter family of such hypersurfaces. Specifically, for each one-parameter subgroup of the isometry group of the complex space form, there is an essentially unique example that is invariant under this one-parameter subgroup. On the other hand, when the curvature of the space form is zero, i.e., when the space form is complex 2-space with its standard flat metric, there is an additional 'exceptional' example that has no continuous symmetries but is invariant under a lattice of translations. Up to isometry and homothety, this is the unique example with no continuous symmetries.
Abstract:
A set of observables is described for the topological quantum field theory that models quantum gravity in three space-time dimensions with positive signature and positive cosmological constant. The simplest examples measure the distances between points, giving spectra and probabilities that have a geometrical interpretation. The observables are related to the evaluation of relativistic spin networks by a Fourier transform.
Abstract:
The aim of the present study was to propose and evaluate the use of factor analysis (FA) for obtaining latent variables (factors) that represent a set of pig traits simultaneously, for use in genome-wide selection (GWS) studies. We used an outbred F2 population derived from crosses between Brazilian Piau and commercial pigs. Data were obtained on 345 F2 pigs, genotyped for 237 SNPs, with 41 traits. FA allowed us to obtain four biologically interpretable factors: "weight", "fat", "loin", and "performance". These factors were used as dependent variables in multiple regression models of genomic selection (Bayes A, Bayes B, RR-BLUP, and Bayesian LASSO). The use of FA is presented as an interesting alternative for selecting individuals for multiple variables simultaneously in GWS studies; accuracy measurements of the factors were similar to those obtained when the original traits were considered individually. The similarities between the top 10% of individuals selected by each factor and those selected by the individual traits were also satisfactory. Moreover, the estimated marker effects for the traits were similar to those found for the relevant factor.
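A minimal sketch of the workflow this abstract describes, using random placeholder phenotype and genotype arrays; scikit-learn's FactorAnalysis stands in for the FA step, and a ridge regression stands in for RR-BLUP (the Bayesian models would be fitted analogously):

```python
# Sketch: condense correlated traits into latent factors, then fit a
# whole-genome regression per factor. Data arrays are random placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_pigs, n_traits, n_snps = 345, 41, 237            # dimensions from the abstract
traits = rng.normal(size=(n_pigs, n_traits))       # placeholder phenotypes
snps = rng.integers(0, 3, size=(n_pigs, n_snps))   # 0/1/2 genotype coding

# Step 1: reduce the 41 traits to four interpretable factors.
fa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = fa.fit_transform(traits)

# Step 2: estimate marker effects for each factor (ridge ~ RR-BLUP).
for k in range(4):
    model = Ridge(alpha=1.0).fit(snps, factor_scores[:, k])
    gebv = snps @ model.coef_                      # genomic breeding values
    top_10pct = np.argsort(gebv)[::-1][: n_pigs // 10]
```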
Abstract:
Technology is continually changing and evolving throughout the construction industry, particularly in the design process. One of the principal manifestations of this is a move away from team working in a shared work space to team working in a virtual space, using increasingly sophisticated electronic media. Because of the significant operating differences between shared and virtual spaces, team members must adjust their generic skills when moving between the two conditions. This paper reports an aspect of a CRC-CI research project investigating the ‘generic skills’ used by individuals and teams when engaging with high-bandwidth information and communication technologies (ICT). It aligns with the project’s other two aspects of collaboration in virtual environments: ‘processes’ and ‘models’. The entire project focuses on the early stages of a project (i.e. design), in which models for the project are being developed and revised. The paper summarises the first stage of the research project, which reviews literature to identify factors of virtual teaming that may affect team member skills. It concludes that design team participants require appropriate skills to function efficiently and effectively, and that the introduction of high-bandwidth technologies reinforces the need for skills mapping and measurement.
Abstract:
The application of object-based approaches to the problem of extracting vegetation information from images requires accurate delineation of individual tree crowns. This paper presents an automated method for individual tree crown detection and delineation that applies a simplified pulse-coupled neural network (PCNN) model in spectral feature space, followed by post-processing using morphological reconstruction. The algorithm was tested on high-resolution multi-spectral aerial images and the results were compared with two existing image segmentation algorithms. The results demonstrate that our algorithm outperforms the other two solutions, with an average accuracy of 81.8%.
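An illustrative sketch of the two stages named above, run on a synthetic image; the simplified PCNN update rule, parameter values, and post-processing details here are assumptions rather than the paper's exact formulation:

```python
# Sketch of the two stages: a simplified PCNN firing pass on a spectral
# band, then morphological reconstruction to clean the binary crown map.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import erosion, reconstruction

def simplified_pcnn(stimulus, beta=0.2, v_theta=20.0, decay=0.7, n_iter=20):
    """Return the iteration at which each pixel first pulses (0 = never)."""
    w = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    y = np.zeros_like(stimulus)
    theta = np.full_like(stimulus, stimulus.max())  # dynamic threshold
    fire_time = np.zeros_like(stimulus)
    for n in range(1, n_iter + 1):
        link = convolve(y, w, mode="constant")      # linking from neighbours
        u = stimulus * (1.0 + beta * link)          # internal activity
        y = (u > theta).astype(float)               # pulse output
        theta = decay * theta + v_theta * y         # refractory jump after firing
        fire_time[(fire_time == 0) & (y == 1)] = n
    return fire_time

# Synthetic near-infrared band: bright crowns on a dark background.
img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0
crowns = (simplified_pcnn(img) > 0).astype(float)
seed = erosion(crowns)                              # drop isolated pixels
cleaned = reconstruction(seed, crowns) > 0.5        # regrow full crown regions
```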
Abstract:
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs when applying importance sampling to high-dimensional problems. The precision of the computed estimate in the simplified setting is measured by the asymptotic variance of the estimate, under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
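The dimension effect the abstract refers to is easy to reproduce numerically. A toy illustration (not the paper's analysis): with target N(0, I_d) and a slightly over-dispersed Gaussian proposal, the effective sample size of the importance weights collapses as the dimension d grows:

```python
# Toy demonstration: target N(0, I_d), proposal N(0, sigma^2 I_d). The
# effective sample size of the importance weights collapses as d grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 1.5                                        # proposal over-dispersion
for d in (1, 5, 10, 20, 40):
    x = rng.normal(scale=sigma, size=(100_000, d))             # proposal draws
    log_w = (stats.norm.logpdf(x).sum(axis=1)                  # log target
             - stats.norm.logpdf(x, scale=sigma).sum(axis=1))  # - log proposal
    w = np.exp(log_w - log_w.max())                            # stabilised weights
    ess = w.sum() ** 2 / (w ** 2).sum()                        # effective sample size
    print(f"d={d:3d}  relative ESS = {ess / len(w):.4f}")
```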
Abstract:
Research on analogies in science education has focussed on student interpretation of teacher and textbook analogies, psychological aspects of learning with analogies and structured approaches for teaching with analogies. Few studies have investigated how analogies might be pivotal in students’ growing participation in chemical discourse. To study analogies in this way requires a sociocultural perspective on learning that focuses on ways in which language, signs, symbols and practices mediate participation in chemical discourse. This study reports findings from teacher research on two analogy-writing activities in a chemistry class. The study began with a theoretical model, Third Space, which informed analyses and interpretation of data. Third Space was operationalized into two sub-constructs called Dialogical Interactions and Hybrid Discourses. The aims of this study were to investigate sociocultural aspects of learning chemistry with analogies in order to identify classroom activities where students generate Dialogical Interactions and Hybrid Discourses, and to refine the operationalization of Third Space. These aims were addressed through three research questions. The research questions were studied through an instrumental case study design. The study was conducted in my Year 11 chemistry class at City State High School for one semester. Data were generated through a range of data collection methods and analysed through discourse analysis using the Dialogical Interactions and Hybrid Discourses sub-constructs as coding categories. Results indicated that student interactions differed between analogical activities and mathematical problem-solving activities. Specifically, students drew on discourses other than school chemical discourse to construct analogies, and their growing participation in chemical discourse was tracked using the Third Space model as an interpretive lens. Results of this study led to the modification of the initial theoretical model into a new model called Merged Discourse. Merged Discourse represents the mutual relationship that formed during analogical activities between the Analog Discourse and the Target Discourse. This model can be used for interpreting and analysing classroom discourse centred on analogical activities from sociocultural perspectives. That is, it can be used to code classroom discourse to reveal students’ growing participation in chemical (or scientific) discourse, consistent with sociocultural perspectives on learning.
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait that can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection.
Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
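A rough sketch of the hybrid GMM mean supervector SVM mentioned in the first theme, using synthetic stand-ins for the background and utterance features; the relevance factor and model sizes are illustrative assumptions, not values from the thesis:

```python
# Sketch: train a UBM on background frames, MAP-adapt its means to each
# utterance, stack the adapted means into a supervector, and train an SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def mean_supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and concatenate them."""
    resp = ubm.predict_proba(frames)               # frame-level posteriors
    n_k = resp.sum(axis=0)                         # soft counts per component
    x_bar = (resp.T @ frames) / np.maximum(n_k, 1e-8)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]     # data-dependent weight
    return (alpha * x_bar + (1.0 - alpha) * ubm.means_).ravel()

rng = np.random.default_rng(2)
background = rng.normal(size=(5000, 12))           # stand-in MFCC frames
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)

# One supervector per utterance; label 1 = target speaker, 0 = impostor.
utterances = [rng.normal(loc=m, size=(300, 12)) for m in (0.4, 0.4, 0.0, 0.0)]
X = np.stack([mean_supervector(ubm, u) for u in utterances])
svm = SVC(kernel="linear").fit(X, [1, 1, 0, 0])
```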
Abstract:
Whilst a variety of studies have appeared over the last decade addressing the gap between the potential promised by computers and the reality experienced in the classroom by teachers and students, few have specifically addressed the situation as it pertains to the visual arts classroom. The aim of this study was to explore the reality of classroom computer use for three visual arts high school teachers and to determine how computer technology might enrich visual arts teaching and learning. An action research approach was employed to enable the researcher to understand the situation from the teachers' points of view while contributing to their professional practice. The wider social context surrounding this study is characterised by an increase in visual communications brought about by rapid advances in computer technology. The powerful combination of visual imagery and computer technology is illustrated by continuing developments in the print, film and television industries. In particular, the recent growth of interactive multimedia epitomises this combination and is significant to this study as it represents a new form of publishing of great interest to educators and artists alike. In this social context, visual arts education has a significant role to play. By cultivating a critical awareness of the implications of technology use and promoting a creative approach to the application of computer technology within the visual arts, visual arts education is in a position to provide an essential service to students who will leave high school to participate in a visual information age as both consumers and producers.
Abstract:
In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high-dimensional spaces computed from a corpus of text. These high-dimensional spaces are often referred to as semantic spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex-valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in information retrieval and quantum mechanics, and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
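A compact illustration of the general idea, not the paper's implementation: each vocabulary word is assigned a random vector of unit-magnitude complex phasors, a word's semantic vector accumulates the phasors of its window co-occurrences, and similarity is the magnitude of the normalised Hermitian inner product (the corpus, dimension, and window size here are all assumptions):

```python
# Illustration: random unit phasors as index vectors; a word's semantic
# vector accumulates the phasors of its co-occurrence neighbours.
import numpy as np

rng = np.random.default_rng(3)
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
dim, window = 64, 2
index_vec = {w: np.exp(1j * rng.uniform(0, 2 * np.pi, dim)) for w in vocab}

sem = {w: np.zeros(dim, dtype=complex) for w in vocab}
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            sem[w] += index_vec[corpus[j]]         # superpose context phasors

def similarity(a, b):
    """Magnitude of the normalised Hermitian inner product."""
    va, vb = sem[a], sem[b]
    return abs(np.vdot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb))

print(similarity("cat", "dog"), similarity("cat", "rug"))
```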
Abstract:
This paper presents a three-dimensional numerical analysis of the electromagnetic forces within a high-voltage superconducting Fault Current Limiter (FCL) with a saturated core under short-circuit conditions. The effects of electrodynamic forces on power transformer coils under short-circuit conditions have been reported widely; however, the coil arrangement in an FCL with a saturated core differs significantly from existing reactive devices. The boundary element method is employed to perform an electromagnetic force analysis on an FCL. The analysis focuses on the axial and radial forces on the AC coil. The results are compared to those of a power transformer, and important design considerations are highlighted.
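For orientation, the axial/radial decomposition discussed above comes from the Lorentz force law F = I dl × B applied over the winding. A back-of-envelope numeric sketch for a single discretised coil turn follows, with entirely hypothetical field values; the paper's boundary element computation of the field is not reproduced here:

```python
# Back-of-envelope: discretise one AC coil turn, apply dF = I * dl x B,
# and split the per-segment forces into radial and axial components.
import numpy as np

current, radius, n_seg = 1_000.0, 0.5, 360         # A, m, segments (illustrative)
phi = np.linspace(0, 2 * np.pi, n_seg, endpoint=False)
# Tangential length elements of one turn lying in the xy-plane.
dl = (2 * np.pi * radius / n_seg) * np.stack(
    [-np.sin(phi), np.cos(phi), np.zeros(n_seg)], axis=1)
# Hypothetical local flux density: a radial fringe field plus an axial field.
B = np.stack([0.05 * np.cos(phi), 0.05 * np.sin(phi),
              np.full(n_seg, 1.2)], axis=1)        # tesla

dF = current * np.cross(dl, B)                     # newtons per segment
radial_dir = np.stack([np.cos(phi), np.sin(phi), np.zeros(n_seg)], axis=1)
hoop_loading = (dF * radial_dir).sum()             # total outward (radial) load
axial_force = dF[:, 2].sum()                       # net axial force on the turn
```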
Abstract:
Background: Older people have higher rates of hospital admission than the general population and higher rates of readmission due to complications and falls. During hospitalisation, older people experience significant functional decline, which impairs their future independence and quality of life. Acute hospital services comprise the largest section of health expenditure in Australia, and prevention or delay of disease is known to produce more effective use of services. Current models of discharge planning and follow-up care, however, do not address the need to prevent deconditioning or functional decline. This paper describes the protocol of a randomised controlled trial which aims to evaluate innovative transitional care strategies to reduce unplanned readmissions and improve the functional status, independence, and psycho-social well-being of community-based older people at risk of readmission. Methods/Design: The study is a randomised controlled trial. Within 72 hours of hospital admission, a sample of older adults fitting the inclusion/exclusion criteria (aged 65 years and over, admitted with a medical diagnosis, able to walk independently for 3 metres, and with at least one risk factor for readmission) are randomised into one of four groups: 1) the usual care control group, 2) the exercise and in-home/telephone follow-up intervention group, 3) the exercise only intervention group, or 4) the in-home/telephone follow-up only intervention group. The usual care control group receives usual discharge planning provided by the health service. In addition to usual care, the exercise and in-home/telephone follow-up intervention group receives an intervention consisting of a tailored exercise program, an in-home visit, and 24-week telephone follow-up by a gerontic nurse. The exercise only and in-home/telephone follow-up only intervention groups, in addition to usual care, receive only the exercise or gerontic nurse components of the intervention, respectively. Data collection is undertaken at baseline within 72 hours of hospital admission and at 4, 12, and 24 weeks following hospital discharge. Outcome assessors are blinded to group allocation. Primary outcomes are emergency hospital readmissions and health service use, functional status, psychosocial well-being, and cost effectiveness. Discussion: The acute hospital sector comprises the largest component of health care system expenditure in developed countries, and older adults are the most frequent consumers. There are few trials demonstrating effective models of transitional care to prevent emergency readmissions and loss of functional ability and independence in this population following an acute hospital admission. This study aims to address that gap and provide information for future health service planning which meets client needs and lowers the use of acute care services.
Abstract:
Walking as an out-of-home mobility activity is recognised for its contribution to healthy and active ageing. The environment can have a powerful effect on the amount of walking activity undertaken by older people, thereby influencing their capacity to maintain their wellbeing and independence. This paper reports the findings from research examining the experiences of neighbourhood walking for 12 older people from six different inner-city high density suburbs, through analysis of data derived from travel diaries, individual time/space activity maps (created via GPS tracking over a seven-day period and GIS technology), and in-depth interviews. Reliance on motor vehicles, the competing interests of pedestrians and cyclists on shared pathways and problems associated with transit systems, public transport, and pedestrian infrastructure emerged as key barriers to older people venturing out of home on foot. GPS and GIS technology provide new opportunities for furthering understanding of the out-of-home mobility of older populations.
Abstract:
Background: Hyperpolarised helium MRI (He3 MRI) is a new technique that enables imaging of the air distribution within the lungs, allowing accurate determination of the ventilation distribution in vivo. The technique has the disadvantages of requiring an expensive helium isotope, complex apparatus, and moving the patient to a compatible MRI scanner. Electrical impedance tomography (EIT) is a non-invasive bedside technique that allows constant monitoring of lung impedance, which is dependent on changes in air space capacity in the lung. We used He3 MRI measurements of ventilation distribution as the gold standard for assessment of EIT. Methods: Seven rats were ventilated in supine, prone, left lateral, and right lateral positions with 70% helium/30% oxygen for EIT measurements and pure helium for He3 MRI. The same ventilator and settings were used for both measurements. Image dimensions, the geometric centre, and the global inhomogeneity index were calculated. Results: EIT images were smaller, of lower resolution, and contained less anatomical detail than those from He3 MRI. However, both methods could measure position-induced changes in lung ventilation, as assessed by the geometric centre. The global inhomogeneity indices were comparable between the techniques. Conclusion: EIT is a suitable technique for monitoring ventilation distribution and inhomogeneity, as assessed by comparison with He3 MRI.
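For reference, the two EIT measures named in the methods have standard definitions; a short sketch computing both from a hypothetical impedance-change image (the image and lung mask here are placeholders):

```python
# Sketch of the two reported measures on a hypothetical 32x32 EIT image:
# the geometric centre of ventilation and the global inhomogeneity index.
import numpy as np

rng = np.random.default_rng(4)
img = np.clip(rng.normal(loc=1.0, scale=0.3, size=(32, 32)), 0, None)
lung = img > 0.5                                   # placeholder lung mask

# Geometric centre: impedance-weighted mean pixel coordinate.
rows, cols = np.indices(img.shape)
gc = ((rows * img).sum() / img.sum(), (cols * img).sum() / img.sum())

# GI index: summed absolute deviation from the median lung pixel value,
# normalised by the total impedance change within the lung region.
vals = img[lung]
gi = np.abs(vals - np.median(vals)).sum() / vals.sum()
print(gc, gi)
```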