Abstract:
Cognitive load theory was used to generate a series of three experiments investigating the effects of various worked example formats on learning orthographic projection. Experiments 1 and 2 investigated the benefits of presenting problems, conventional worked examples incorporating only the final 2-D and 3-D representations, and modified worked examples with several intermediate stages of rotation between the 2-D and 3-D representations. Modified worked examples proved superior to conventional worked examples without intermediate stages, while conventional worked examples were, in turn, superior to problems. Experiment 3 investigated the consequences of varying the number and location of intermediate stages in the rotation trajectory and found three stages to be superior to one. A single intermediate stage was superior when nearer the 2-D than the 3-D end of the trajectory. It was concluded that (a) orthographic projection is learned best using worked examples with several intermediate stages and (b) a linear relation between angle of rotation and problem difficulty does not hold for orthographic projection material. Cognitive load theory could be used to suggest the ideal location of the intermediate stages.
Abstract:
This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
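For reference, the standard linear-Gaussian baseline that the paper generalises is the Mahalanobis-distance (ellipsoidal) gate with a chi-square threshold. The sketch below is a generic illustration of that baseline only, not the paper's non-linear extension; the function name, the 99% gate probability and the example numbers are illustrative assumptions.

```python
# Minimal sketch of the standard ellipsoidal (Mahalanobis) validation gate for
# linear-Gaussian tracking, i.e. the baseline the paper generalises; the gate
# probability and example values are illustrative, not from the paper.
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, gate_prob=0.99):
    """Return True if measurement z falls inside the validation ellipsoid.

    z        : observed measurement vector
    z_pred   : predicted measurement (e.g. from a Kalman filter)
    S        : innovation covariance matrix
    gate_prob: fraction of true associations the gate should accept
               (holds exactly only for linear-Gaussian models)
    """
    nu = z - z_pred                               # innovation
    d2 = nu @ np.linalg.solve(S, nu)              # squared Mahalanobis distance
    threshold = chi2.ppf(gate_prob, df=len(z))    # gate size from a chi-square quantile
    return d2 <= threshold

# Example: a 2-D measurement close to its prediction passes a 99% gate.
S = np.array([[2.0, 0.3], [0.3, 1.0]])
print(ellipsoidal_gate(np.array([1.1, -0.2]), np.array([1.0, 0.0]), S))
```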
Abstract:
This article examines the problem of patent ambush in standard setting, where patent owners are sometimes able to capture industry standards in order to secure monopoly power and windfall profits. Because standardisation generally introduces high switching costs, patent ambush can impose significant costs on downstream manufacturers and consumers and drastically reduce the efficiency gains of standardisation. This article considers how Australian competition law is likely to apply to patent ambush both in the development of a standard (through misrepresenting the existence of an essential patent) and after a standard is implemented (through refusing to license an essential patented technology either at all or on reasonable and non-discriminatory (RAND) terms). This article suggests that non-disclosure of patent interests is unlikely to be restrained by Part IV of the Trade Practices Act (TPA), and refusals to license are only likely to be restrained if the refusal involves leveraging or exclusive dealing. By contrast, Standard Setting Organisations (SSOs) which seek to limit this behaviour through private ordering may face considerable scrutiny under the new cartel provisions of the TPA. This article concludes that SSOs may be best advised to implement administrative measures to prevent patent hold-up, such as reviewing which patents are essential for the implementation of a standard, asking patent holders to make their licence conditions public to promote transparency, and establishing forums where patent licensees can complain about licence terms that they consider unreasonable or discriminatory. Additionally, the ACCC may play a role in authorising SSO policies that could otherwise breach the new cartel provisions, but which have the practical effect of promoting competition in the standard-setting environment.
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using such partially revealing indicators. However, existing state space models of asset degradation largely depend on assumptions of discrete time, discrete state, linearity and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
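As a rough illustration of the kind of Monte Carlo remaining-useful-life (RUL) estimation described above, the sketch below simulates monotonically increasing degradation with gamma-distributed increments and reads off the first time each simulated path crosses a failure threshold. It is not the thesis' Gamma-based state space model or its EM/POSMDP algorithms; the shape, scale and threshold values are made-up assumptions.

```python
# Illustrative sketch only: a stationary gamma-increment degradation path and a
# Monte Carlo estimate of remaining useful life (RUL). All parameter choices
# and the failure threshold below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(x0, shape_per_step, scale, n_steps, n_paths):
    """Simulate monotonically increasing degradation via i.i.d. gamma increments."""
    increments = rng.gamma(shape_per_step, scale, size=(n_paths, n_steps))
    return x0 + np.cumsum(increments, axis=1)

def rul_monte_carlo(x_now, threshold, shape_per_step, scale, n_steps=200, n_paths=10_000):
    """Estimate mean RUL and a 90% interval as the first step at which paths cross the threshold."""
    paths = simulate_paths(x_now, shape_per_step, scale, n_steps, n_paths)
    crossed = paths >= threshold
    # first step index at which each path crosses; paths that never cross are censored at n_steps
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, n_steps)
    return first.mean(), np.percentile(first, [5, 95])

mean_rul, interval = rul_monte_carlo(x_now=3.0, threshold=10.0, shape_per_step=0.5, scale=0.4)
print(f"mean RUL ~ {mean_rul:.1f} steps, 90% interval {interval}")
```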
Abstract:
In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted. This requires automated tools, and they need to be effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets and databases, including frequent patterns and association rules between the items/attributes of a dataset, with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within them), the number of rules needs to be kept manageable, so a method is needed to reduce the number of association rules without losing knowledge in the process. This is the motivation for non-redundant association rule mining. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these measures have their limitations. Approaches which use information about the dataset’s structure to assess association rules are scarce, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable number, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items/attributes that frequently appear together. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make a recommendation to a person; it should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset’s structure we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be used practically and effectively in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident for the user cold-start problem, a serious problem facing recommender systems that our proposal helps to address.
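For context, the support and confidence measures mentioned above are straightforward to compute; the sketch below shows them on a toy transaction set. The transactions and the candidate rule are invented examples, not data or methods from the thesis.

```python
# Toy illustration of the standard support/confidence interestingness measures.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimate of P(consequent | antecedent) from the transactions."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

rule = ({"bread"}, {"milk"})                                   # bread -> milk
print("support:", support(rule[0] | rule[1], transactions))    # 0.5
print("confidence:", confidence(*rule, transactions))          # 0.666...
```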
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess the tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistic is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
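To make the polar-unwrapping metric described above more concrete, the sketch below resamples a Placido-ring image into (radius, angle) coordinates, so the concentric rings become near-straight lines, and summarises pattern regularity with a simple block statistic. It is only a plausible reconstruction under stated assumptions; the resampling resolution, block size and variance-based statistic are illustrative choices, not the thesis' actual implementation.

```python
# Rough sketch of the polar-unwrapping idea: resample a ring image onto a
# (radius, angle) grid, then summarise pattern regularity with a block
# statistic. Parameter choices are illustrative only.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, center, n_r=128, n_theta=256):
    """Resample an image onto a (radius, angle) grid around `center`."""
    cy, cx = center
    max_r = min(cy, cx, img.shape[0] - cy, img.shape[1] - cx) - 1
    r = np.linspace(0, max_r, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    rows = cy + rr * np.sin(tt)
    cols = cx + rr * np.cos(tt)
    return map_coordinates(img, [rows, cols], order=1)

def block_statistic(polar_img, block=16):
    """Spread of per-block intensity variances; rises as the ring pattern becomes irregular."""
    h, w = polar_img.shape
    h, w = h - h % block, w - w % block
    blocks = polar_img[:h, :w].reshape(h // block, block, w // block, block)
    local_var = blocks.var(axis=(1, 3))
    return local_var.std()

# Synthetic example: perfectly regular concentric rings give a small statistic.
yy, xx = np.mgrid[:256, :256]
rings = np.sin(0.4 * np.hypot(yy - 128, xx - 128))
print(block_statistic(to_polar(rings, center=(128, 128))))
```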
Abstract:
The potential restriction to effective dispersal and gene flow caused by habitat fragmentation can apply to multiple levels of evolutionary scale: from the fragmentation of ancient supercontinents driving diversification and speciation on disjunct landmasses, to the isolation of proximate populations as a result of their inability to cross intervening unsuitable habitat. Investigating the role of habitat fragmentation in driving diversity within and among taxa can thus include inferences of phylogenetic relationships among taxa, assessments of intraspecific phylogeographic structure and analyses of gene flow among neighbouring populations. The proposed Gondwanan clade within the chironomid (non-biting midge) subfamily Orthocladiinae (Diptera: Chironomidae) represents a model system for investigating the role that population fragmentation and isolation have played at different evolutionary scales. A pilot study by Krosch et al. (2009) identified several highly divergent lineages restricted to ancient rainforest refugia and limited gene flow among proximate sites within a refuge for one member of this clade, Echinocladius martini Cranston. This study provided a framework for investigating the evolutionary history of this taxon and its relatives more thoroughly. Populations of E. martini were sampled in the Paluma bioregion of northeast Queensland to investigate patterns of fine-scale within- and among-stream dispersal and gene flow within a refuge more rigorously. Data were incorporated from Krosch et al. (2009) and additional sites were sampled up- and downstream of the original sites. Analyses of genetic structure revealed strong natal site fidelity and strong genetic structuring among geographically proximate streams. Little evidence was found for regular headwater exchange among upstream sites, but there was distinct evidence for rare adult flight among sites on separate stream reaches. Overall, however, the distribution of shared haplotypes implied that both larval and adult dispersal was largely limited to the natal stream channel. Patterns of regional phylogeographic structure were examined in two related austral orthoclad taxa – Naonella forsythi Boothroyd from New Zealand and Ferringtonia patagonica Sæther and Andersen from southern South America – to provide a comparison with patterns revealed in their close relative E. martini. Both taxa inhabit tectonically active areas of the southern hemisphere that have also experienced several glaciation events throughout the Plio-Pleistocene that are thought to have affected population structure dramatically in many taxa. Four highly divergent lineages estimated to have diverged since the late Miocene were revealed in each taxon, mirroring patterns in E. martini; however, there was no evidence for local geographical endemism, implying substantial range expansion post-diversification. The differences in pattern evident among the three related taxa were suggested to have been influenced by variation in the responses of closed forest habitat to climatic fluctuations during interglacial periods across the three landmasses. Phylogeographic structure in E. martini was resolved at a continental scale by expanding upon the sampling design of Krosch et al. (2009) to encompass populations in southeast Queensland, New South Wales and Victoria.
Patterns of phylogeographic structure were consistent with expectations and several previously unrecognised lineages were revealed from central- and southern Australia that were geographically endemic to closed forest refugia. Estimated divergence times were congruent with the timing of Plio-Pleistocene rainforest contractions across the east coast of Australia. This suggested that dispersal and gene flow of E. martini among isolated refugia was highly restricted and that this taxon was susceptible to the impacts of habitat change. Broader phylogenetic relationships among taxa considered to be members of this Gondwanan orthoclad group were resolved in order to test expected patterns of evolutionary affinities across the austral continents. The inferred phylogeny and estimated divergence times did not accord with expected patterns based on the geological sequence of break-up of the Gondwanan supercontinent and implied instead several transoceanic dispersal events post-vicariance. Difficulties in appropriate taxonomic sampling and accurate calibration of molecular phylogenies notwithstanding, the sampling regime implemented in the current study has been the most intensive yet performed for austral members of the Orthocladiinae and unsurprisingly has revealed both novel taxa and phylogenetic relationships within and among described genera. Several novel associations between life stages are made here for both described and previously unknown taxa. Investigating evolutionary relationships within and among members of this clade of proposed Gondwanan orthoclad taxa has demonstrated that a complex interaction between historical population fragmentation and dispersal at several levels of evolutionary scale has been important in driving diversification in this group. While interruptions to migration, colonisation and gene flow driven by population fragmentation have clearly contributed to the development and maintenance of much of the diversity present in this group, long-distance dispersal has also played a role in influencing diversification of continental biotas and facilitating gene flow among disjunct populations.
Abstract:
With the current curriculum focus on correlating classroom problem-solving lessons with real-world contexts, is LEGO robotics an effective problem-solving tool? The present study was designed to investigate this question and to ascertain what problem-solving strategies primary students engaged with when working with LEGO robotics, and whether the students were able to effectively relate their problem-solving strategies to real-world contexts. The qualitative study involved 23 Grade 6 students participating in robotics activities. The study included data collected from researcher observations of student problem-solving discussions, collected software programs, and a student-completed questionnaire. Results from the study indicated that the robotics activities assisted students to reflect on the problem-solving decisions they made. The study also highlighted that the students were able to relate their problem-solving strategies to real-world contexts. The study demonstrated that while LEGO robotics can be considered a useful problem-solving tool in the classroom, careful teacher scaffolding needs to be implemented with regard to correlating LEGO with authentic problem solving. Further research into how teachers can best embed real-world contexts into effective robotics lessons is recommended.
Abstract:
This paper reviews the current status of the application of optical non-destructive methods, particularly infrared (IR) and near infrared (NIR), in the evaluation of the physiological integrity of articular cartilage. It is concluded that a significant amount of work is still required in order to achieve specificity and clinical applicability of these methods in the assessment and treatment of dysfunctional articular joints.
Abstract:
This paper reports on the development of a tool that generates randomised, non-multiple-choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from the upper levels of Biggs’ SOLO taxonomy. However, written assessment items require extensive resources for marking, and are susceptible to copying as well as to marking inconsistencies in large classes. This project developed an assessment tool that is valid, reliable and sustainable and that addresses the issues identified above. The tool provides each student with an assignment assessing the same learning outcomes but containing different questions, with responses in the form of words or numbers. Practice questions are available, enabling students to obtain feedback on their approach before submitting their assignment. Thus, the tool incorporates automatic marking (essential for large classes), tasks randomised per student (reducing copying), the capacity to give credit for working (feedback on the application of theory), and the capacity to target higher-order learning outcomes by requiring students to derive their answers rather than choose them. Results and feedback from students are presented, along with technical implementation details.
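As a generic illustration of per-student randomised numeric questions with automatic marking, the sketch below seeds a question from a student identifier and marks a free-form numeric response with a tolerance. It is not the BlackBoard integration described in the paper; the question template, tolerance and identifiers are hypothetical.

```python
# Generic sketch: seed a per-student random question so everyone addresses the
# same learning outcome with different numbers, then auto-mark a numeric answer.
import random

def make_question(student_id: str):
    """Deterministically generate a per-student variant of a simple circuit question."""
    rng = random.Random(student_id)          # the same student always gets the same numbers
    voltage = rng.choice([6, 9, 12, 24])     # volts
    resistance = rng.choice([2, 3, 4, 6])    # ohms
    text = f"A {voltage} V source drives a {resistance} ohm resistor. What current flows (A)?"
    answer = voltage / resistance            # Ohm's law: I = V / R
    return text, answer

def mark(submitted: str, correct: float, rel_tol: float = 0.01) -> bool:
    """Automatically mark a numeric response, allowing a small rounding tolerance."""
    try:
        value = float(submitted)
    except ValueError:
        return False
    return abs(value - correct) <= rel_tol * abs(correct)

text, answer = make_question("n1234567")     # hypothetical student identifier
print(text)
print("marked correct:", mark(f"{answer:.2f}", answer))
```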