966 results for ORDER ACCURACY APPROXIMATIONS
Abstract:
This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first, second, third, and fourth order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system searching assistance early in the session or after a content change. The results of our evaluations show that the first and second order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.
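A minimal sketch of the kind of n-gram (Markov) transition model described above. The reformulation state names and session data below are partly hypothetical (only Reformulation, Assistance and Content Change are mentioned in the abstract), and the smoothing-free estimation is an illustrative simplification, not the paper's exact method:

```python
from collections import defaultdict

def train_ngram(sessions, order=2):
    """Count transitions from the previous `order` reformulation states to the next one."""
    counts = defaultdict(lambda: defaultdict(int))
    for states in sessions:
        for i in range(order, len(states)):
            context = tuple(states[i - order:i])
            counts[context][states[i]] += 1
    # Normalise counts into conditional probabilities P(next | context).
    return {ctx: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for ctx, nxt in counts.items()}

def predict_next(model, context):
    """Return the most probable next reformulation state for a context, if seen in training."""
    dist = model.get(tuple(context))
    return max(dist, key=dist.get) if dist else None

# Hypothetical sessions labelled with reformulation states.
sessions = [["New", "Specialization", "Reformulation", "Assistance"],
            ["New", "Reformulation", "Reformulation", "Content Change"],
            ["New", "Specialization", "Reformulation", "Reformulation"]]
model = train_ngram(sessions, order=2)
print(predict_next(model, ["Specialization", "Reformulation"]))  # e.g. "Assistance"
```

Higher order models follow by raising `order`; evaluating each model then amounts to comparing predicted against observed next states on held-out sessions.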
Abstract:
The middle years of schooling have emerged as an important focus in Australian education. Student disengagement and alienation, the negative effects of non-completion of the senior years of schooling and underachievement have raised concerns about the quality of education during the middle years. For many schools, reshaping the middle years has involved incorporating Information and Communication Technologies (ICT) to motivate students. However, simultaneously there is a need to ensure that programs are academically rigorous. There is little doubt that there are potential benefits to integrating ICT into programs for middle years’ students. However, little is known about how middle years’ teachers perceive higher order thinking, which is a component of academic rigour. This paper investigates the question: What are teachers’ perceptions of higher order thinking in an ICT environment? The study is underpinned by socio-cultural theory, which is based on the belief that learning occurs through social interaction and that individuals are shaped by the social and cultural tools and instruments they engage with. This investigation used a collective case study design. Two methods were used for data collection: semi-structured interviews with individual teachers and a class, and a focus group discussion with teachers. Findings indicate that teachers hold various perceptions of higher order thinking that lead to productive approaches to integrating ICT in middle years’ classrooms. The paper highlights that there may be a continuum of perceptions of higher order thinking with ICT. This continuum may inform professional developers who are guiding and supporting teachers to integrate ICT into middle years’ classrooms.
Abstract:
Traffic congestion is an increasing problem with high costs in financial, social and personal terms. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) service is a popular service which wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase traffic information accuracy. A simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane is proposed. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality of service requirements is to simulate runs with varying numbers of traffic probes. The simulated traffic represents Brisbane’s typical morning traffic. The road maps used in simulation are Brisbane’s TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required for providing a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% probes, little additional information is provided, while above 5%, additional probes have only a negligible effect on accuracy. Our findings are consistent with on-going research work on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
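A highly simplified sketch of the kind of simulation experiment described above, assuming true link speeds are known, a fraction of vehicles act as probes, and reporting error is measured as a function of that fraction. The network size, speed distributions and error model are hypothetical placeholders, not the Brisbane simulator:

```python
import random

def simulate(num_links=200, vehicles_per_link=50, probe_fraction=0.01, trials=20):
    """Estimate mean absolute speed error when only a fraction of vehicles report as probes."""
    errors = []
    for _ in range(trials):
        for _ in range(num_links):
            true_speed = random.uniform(20, 60)                      # hypothetical link speed (km/h)
            speeds = [random.gauss(true_speed, 5) for _ in range(vehicles_per_link)]
            probes = [s for s in speeds if random.random() < probe_fraction]
            if probes:                                               # link covered by at least one probe
                errors.append(abs(sum(probes) / len(probes) - true_speed))
    return sum(errors) / len(errors) if errors else float("nan")

for p in (0.0025, 0.005, 0.01, 0.025, 0.05):
    print(f"probe fraction {p:.2%}: mean abs error {simulate(probe_fraction=p):.2f} km/h")
```

Sweeping the probe fraction in this way is what allows an "optimal" penetration rate to be read off: below some fraction too few links are covered, while above it extra probes add little accuracy.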
Abstract:
This study investigates the everyday practices of young children acting in their social worlds within the context of the school playground. It employs an ethnographic ethnomethodological approach using conversation analysis. In the context of child participation rights advanced by the United Nations Convention on the Rights of the Child (UNCRC) and childhood studies, the study considers children’s social worlds and their participation agendas. The participants of the study were a group of young children in a preparatory year setting in a Queensland school. These children, aged 4 to 6 years, were video-recorded as they participated in their day-to-day activities in the classroom and in the playground. Data collection took place over a period of three months, with a total of 26 hours of video data. Episodes of the video-recordings were shown to small groups of children and to the teacher to stimulate conversations about what they saw on the video. The conversations were audio-recorded. This method acknowledged the child’s standpoint and positioned children as active participants in accounting for their relationships with others. These accounts are discussed as interactionally built comments on past joint experiences and provide a starting place for analysis of the video-recorded interaction. Four data chapters are presented in this thesis. Each data chapter investigates a different topic of interaction. The topics include how children use “telling” as a tactical tool in the management of interactional trouble, how children use their “ideas” as possessables to gain ownership of a game and the interactional matters that follow, how children account for interactional matters and bid for ownership of “whose idea” for the game and finally, how a small group of girls orientated to a particular code of conduct when accounting for their actions in a pretend game of “school”. Four key themes emerged from the analysis. The first theme addresses two arenas of action operating in the social world of children, pretend and real: the “pretend”, as a player in a pretend game, and the “real”, as a classroom member. These two arenas are intertwined. Through references to explicit and implicit “codes of conduct”, moral obligations are invoked as children attempt to socially exclude one another, build alliances and enforce their own social positions. The second theme is the notion of shared history. This theme addresses the history that the children reconstructed, which acts as a thread that weaves through their interactions, with implications for present and future relationships. The third theme concerns ownership. In a shared context, such as the playground, ownership is a highly contested issue. Children draw on resources such as rules, their ideas as possessables, and codes of behaviour as devices to construct particular social and moral orders around owners of the game. These themes have consequences for children’s participation in a social group. The fourth theme, methodological in nature, shows how the researcher was viewed as an outsider and novice and was used as a resource by the children. This theme is used to inform adult-child relationships. The study was situated within an interest in participation rights for children and perspectives of children as competent beings.
Asking children to account for their participation in playground activities situates children as analysers of their own social worlds and offers adults further information for understanding how children themselves construct their social interactions. While reporting on the experiences of one group of children, this study opens up theoretical questions about children’s social orders and their influence on children’s everyday practices. This thesis uncovers how children both participate in, and shape, their everyday social worlds through talk and interaction. It investigates the consequences that taken-for-granted activities of “playing the game” have for their social participation in the wider culture of the classroom. Consideration of these findings may assist adults to better understand and appreciate the social worlds of young children in the school playground.
Abstract:
XML document clustering is essential for many document handling applications such as information storage, retrieval, integration and transformation. An XML clustering algorithm should process both the structural and the content information of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. This paper introduces a novel approach that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The proposed method reduces the high dimensionality of input data by using only the structure-constrained content. The empirical analysis reveals that the proposed method can effectively cluster even very large XML datasets and outperform other existing methods.
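A rough sketch of the two-step idea, assuming documents are already parsed into sets of structural paths (standing in for mined subtrees) and (path, text) content pairs. Frequent-subtree mining is reduced here to simple support counting and the clustering to k-means over the structure-constrained content, so everything below is illustrative rather than the paper's algorithm:

```python
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def frequent_subtrees(docs, min_support=0.5):
    """Keep structural paths that occur in at least min_support of the documents."""
    counts = Counter(path for d in docs for path in set(d["paths"]))
    return {p for p, c in counts.items() if c / len(docs) >= min_support}

def constrained_content(doc, frequent):
    """Only text found under frequent structural paths contributes to content similarity."""
    return " ".join(text for path, text in doc["content"] if path in frequent)

def cluster(docs, k=2):
    frequent = frequent_subtrees(docs)
    texts = [constrained_content(d, frequent) for d in docs]
    vectors = TfidfVectorizer().fit_transform(texts)   # much lower dimensional than full content
    return KMeans(n_clusters=k, n_init=10).fit_predict(vectors)

# Hypothetical pre-parsed documents: structural paths plus (path, text) content pairs.
docs = [{"paths": ["book/title", "book/author"],
         "content": [("book/title", "xml data mining"), ("book/author", "smith")]},
        {"paths": ["book/title", "book/price"],
         "content": [("book/title", "xml clustering methods"), ("book/price", "20")]},
        {"paths": ["movie/title"], "content": [("movie/title", "action film")]}]
print(cluster(docs, k=2))
```

The dimensionality reduction comes from the constraint itself: only content attached to frequent structures is vectorised, rather than the full text of every document.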
Abstract:
In this paper, we consider a variable-order nonlinear fractional diffusion equation in which xR^α(x,t) is a generalized Riesz fractional derivative of variable order α(x,t) and the nonlinear reaction term f(u,x,t) satisfies the Lipschitz condition |f(u1,x,t) - f(u2,x,t)| ≤ L|u1 - u2|. A new explicit finite-difference approximation is introduced. The convergence and stability of this approximation are proved. Finally, some numerical examples are provided to show that this method is computationally efficient. The proposed method and techniques are applicable to other variable-order nonlinear fractional differential equations.
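As a rough illustration of an explicit scheme of this general kind (not the paper's exact discretisation), the sketch below advances a 1-D variable-order Riesz fractional diffusion problem one time step using shifted Grünwald–Letnikov weights. The grid, the order function α(x), the reaction term and the boundary treatment are all hypothetical choices:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_j = (-1)^j * C(alpha, j), via the standard recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def explicit_step(u, alpha, dx, dt, f):
    """One explicit Euler step for u_t = d^{alpha(x)} u / d|x|^{alpha(x)} + f(u),
    with the Riesz derivative approximated by left- and right-shifted GL sums."""
    n = len(u)
    unew = u.copy()
    for i in range(1, n - 1):
        a = alpha[i]
        c = -1.0 / (2.0 * np.cos(np.pi * a / 2.0) * dx ** a)
        wl = gl_weights(a, i + 2)                        # weights for the left-sided sum
        wr = gl_weights(a, n - i + 1)                    # weights for the right-sided sum
        left = sum(wl[j] * u[i - j + 1] for j in range(i + 2))
        right = sum(wr[j] * u[i + j - 1] for j in range(n - i + 1))
        unew[i] = u[i] + dt * (c * (left + right) + f(u[i]))
    return unew

# Hypothetical setup: alpha(x) varies smoothly between 1.6 and 1.9, zero boundary values.
x = np.linspace(0.0, 1.0, 41)
u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial condition
alpha = 1.6 + 0.3 * x                        # variable order alpha(x)
u = explicit_step(u, alpha, dx=x[1] - x[0], dt=1e-4, f=lambda v: 0.1 * v * (1.0 - v))
```

For an explicit scheme like this, the stability analysis amounts to bounding the time step in terms of dx, the order α(x,t) and the Lipschitz constant L of the reaction term.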
Abstract:
A method of improving the security of biometric templates which satisfies desirable properties such as (a) irreversibility of the template, (b) revocability and assignment of a new template to the same biometric input, and (c) matching in the secure transformed domain is presented. It makes use of an iterative procedure based on the bispectrum that serves as an irreversible transformation for biometric features because signal phase is discarded at each iteration. Unlike the usual hash function, this transformation preserves closeness in the transformed domain for similar biometric inputs. A number of such templates can be generated from the same input. These properties are illustrated using synthetic data and applied to images from the FRGC 3D database with Gabor features. Verification can be successfully performed using these secure templates with an EER of 5.85%.
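A hypothetical toy illustration of the ingredients named above (phase discarding via a bispectrum-style product, a key for revocability, and matching in the transformed domain). It is not the paper's procedure, and the slice, key mixing and normalisation are invented for the sketch:

```python
import numpy as np

def bispectrum_slice(x):
    """Magnitude of the diagonal bispectrum slice B(f, f) = X(f) X(f) conj(X(2f)).
    Taking the magnitude discards the signal phase, so the step is not invertible."""
    X = np.fft.fft(x)
    n = len(X)
    return np.abs(X * X * np.conj(X[(2 * np.arange(n)) % n]))

def secure_template(features, key, iterations=3):
    """Hypothetical template generation: mix features with a key, then apply the
    phase-discarding transform repeatedly. Issuing a new key yields a new template."""
    t = np.asarray(features, dtype=float) + key
    for _ in range(iterations):
        t = bispectrum_slice(t)
        t = t / (np.linalg.norm(t) + 1e-12)     # keep templates on a comparable scale
    return t

rng = np.random.default_rng(0)
key = rng.normal(size=64)
enrol = rng.normal(size=64)
genuine = enrol + 0.01 * rng.normal(size=64)    # same input, small measurement noise
impostor = rng.normal(size=64)                  # different input
t0 = secure_template(enrol, key)
print(np.linalg.norm(t0 - secure_template(genuine, key)),    # small: closeness preserved
      np.linalg.norm(t0 - secure_template(impostor, key)))   # large: inputs differ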
Abstract:
Aims: This study investigated the effect of simulated visual impairment on the speed and accuracy of performance on a series of commonly used cognitive tests. ----- Methods: Cognitive performance was assessed for 30 young, visually normal subjects (M = 22.0 ± 3.1 years) using the Digit Symbol Substitution Test (DSST), Trail Making Test (TMT) A and B and the Stroop Colour Word Test under three visual conditions: normal vision and two levels of visually degrading filters (Vistech™) administered in a random order. Distance visual acuity and contrast sensitivity were also assessed for each filter condition. ----- Results: The visual filters, which degraded contrast sensitivity to a greater extent than visual acuity, significantly increased the time to complete the DSST and the TMT A and B (p<0.05), but not the number of errors made, and affected only some components of the Stroop test. ----- Conclusions: Reduced contrast sensitivity had a marked effect on the speed but not the accuracy of performance on commonly used cognitive tests, even in young individuals; the implications of these findings are discussed.
Abstract:
Artificial neural networks (ANN) have demonstrated good predictive performance in a wide range of applications. They are, however, not considered sufficient for knowledge representation because of their inability to represent the reasoning process succinctly. This paper proposes a novel methodology, Gyan, that represents the knowledge of a trained network in the form of restricted first-order predicate rules. The empirical results demonstrate that an equivalent symbolic interpretation in the form of rules with predicates, terms and variables can be derived, describing the overall behaviour of the trained ANN with improved comprehensibility while maintaining the accuracy and fidelity of the propositional rules.
Abstract:
Aberrations affect image quality of the eye away from the line of sight as well as along it. High amounts of lower order aberrations are found in the peripheral visual field, and higher order aberrations change away from the centre of the visual field. Peripheral resolution is poorer than that in central vision, but peripheral vision is important for movement and detection tasks (for example driving) which are adversely affected by poor peripheral image quality. Any physiological process or intervention that affects axial image quality will affect peripheral image quality as well. The aim of this study was to investigate the effects of accommodation, myopia, age, and the refractive interventions of orthokeratology, laser in situ keratomileusis (LASIK) and intraocular lens (IOL) implantation on the peripheral aberrations of the eye. This is the first systematic investigation of peripheral aberrations in a variety of subject groups. Peripheral aberrations can be measured either by rotating a measuring instrument relative to the eye or rotating the eye relative to the instrument. I used the latter as it is much easier to do. To rule out effects of eye rotation on peripheral aberrations, I investigated the effects of eye rotation on axial and peripheral cycloplegic refraction using an open field autorefractor. For axial refraction, the subjects fixated at a target straight ahead, while their heads were rotated by ±30° with a compensatory eye rotation to view the target. For peripheral refraction, the subjects rotated their eyes to fixate on targets out to ±34° along the horizontal visual field, followed by measurements in which they rotated their heads such that the eyes stayed in the primary position relative to the head while fixating at the peripheral targets. Oblique viewing did not affect axial or peripheral refraction. Therefore it is not critical, within the range of viewing angles studied, if axial and peripheral refractions are measured with rotation of the eye relative to the instrument or rotation of the instrument relative to the eye. Peripheral aberrations were measured using a commercial Hartmann-Shack aberrometer. A number of hardware and software changes were made. The 1.4 mm range limiting aperture was replaced by a larger aperture (2.5 mm) to ensure all the light from peripheral parts of the pupil reached the instrument detector even when aberrations were high, such as those that occur in peripheral vision. The power of the superluminescent diode source was increased to improve detection of spots passing through the peripheral pupil. A beam splitter was placed between the subjects and the aberrometer, through which they viewed an array of targets on a wall or projected on a screen in a 6 row x 7 column matrix of points covering a visual field of 42° x 32°. In peripheral vision, the pupil of the eye appears elliptical rather than circular; data were analysed off-line using custom software to determine peripheral aberrations. All analyses in the study were conducted for 5.0 mm pupils. The influence of accommodation on peripheral aberrations was investigated in young emmetropic subjects by presenting fixation targets at 25 cm and 3 m (4.0 D and 0.3 D accommodative demands, respectively). An increase in accommodation did not affect the patterns of any aberrations across the field, but there was an overall negative shift in spherical aberration across the visual field of 0.10 ± 0.01 µm. Subsequent studies were conducted with the targets at a 1.2 m distance.
Young emmetropes, young myopes and older emmetropes exhibited similar patterns of astigmatism and coma across the visual field. However, the rate of change of coma across the field was higher in young myopes than young emmetropes and was highest in older emmetropes amongst the three groups. Spherical aberration showed an overall decrease in myopes and increase in older emmetropes across the field, as compared to young emmetropes. Orthokeratology, spherical IOL implantation and LASIK altered peripheral higher order aberrations considerably, especially spherical aberration. Spherical IOL implantation resulted in an overall increase in spherical aberration across the field. Orthokeratology and LASIK reversed the direction of change in coma across the field. Orthokeratology corrected peripheral relative hypermetropia through correcting myopia in the central visual field. Theoretical ray tracing demonstrated that changes in aberrations due to orthokeratology and LASIK can be explained by the induced changes in radius of curvature and asphericity of the cornea. This investigation has shown that peripheral aberrations can be measured with reasonable accuracy with eye rotation relative to the instrument. Peripheral aberrations are affected by accommodation, myopia, age, orthokeratology, spherical intraocular lens implantation and laser in situ keratomileusis. These factors affect the magnitudes and patterns of most aberrations considerably (especially coma and spherical aberration) across the studied visual field. The changes in aberrations across the field may influence peripheral detection and motion perception. However, further research is required to investigate how the changes in aberrations influence peripheral detection and motion perception and consequently peripheral vision task performance.
Abstract:
The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in various languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially. It is desirable that a search engine can search for information over collections of documents in other languages. This research investigates the techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document which contains the query term as a sequence of Chinese characters may not be really relevant to the query, because the query term (as a sequence of Chinese characters) may not be a valid Chinese word in that document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model by incorporating word-based techniques with the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on the relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if it incorporates only Chinese segmentation with the traditional character-based approach. In the second approach, we propose a novel query expansion method which applies text mining techniques in order to find the most relevant words to extend the query. Unlike most existing query expansion methods, which generally select the highly frequent indexing terms from the retrieved documents to expand the query, our approach utilizes text mining techniques to find patterns from the retrieved documents that highly correlate with the query term and then uses the relevant words in the patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage is to investigate whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, a text mining based query expansion approach is implemented, and a further experiment has been done to compare the performance of the proposed text mining based query expansion method with the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experiment results show that by incorporating the text mining based query expansion with the hybrid model, significant improvement has been achieved in both precision and recall assessments.
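A toy sketch of the hybrid ranking idea, interpolating a character-based score with a word-based score for each retrieved document. The interpolation weight, the crude scoring functions and the fake segmenter are placeholders, not the thesis's actual ranking methods:

```python
def ngram_overlap(query, doc, n=1):
    """Character n-gram overlap used as a crude character-based relevance score."""
    q = {query[i:i + n] for i in range(len(query) - n + 1)}
    d = {doc[i:i + n] for i in range(len(doc) - n + 1)}
    return len(q & d) / len(q) if q else 0.0

def word_score(query_words, doc_words):
    """Fraction of segmented query words that appear in the segmented document."""
    return sum(w in doc_words for w in query_words) / len(query_words)

def hybrid_score(query, doc, segment, lam=0.5):
    """Interpolate character-based and word-based evidence; lam is a tunable weight."""
    return lam * ngram_overlap(query, doc) + (1 - lam) * word_score(segment(query), set(segment(doc)))

# Hypothetical segmenter and documents (a real system would use a trained Chinese segmenter).
segment = lambda text: text.split("/")            # pretend word boundaries are pre-marked with "/"
docs = ["北京/大学/图书馆", "大/学生/活动"]
query = "大学/图书馆"
print(sorted(docs, key=lambda d: hybrid_score(query, d, segment), reverse=True))
```

The text-mining query expansion step would then add to `query` those words from mined patterns that co-occur strongly with the original query term, rather than simply the most frequent terms in the retrieved documents.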
Abstract:
Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool to observe the heart's ability to respond to normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are nonlinear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of nonlinear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and seven classes of arrhythmia. We present some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. We also extracted features from the HOS and performed an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification with a number of features yielding a p-value < 0.02 in the ANOVA test.
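A small sketch of the analysis pipeline described above, assuming HRV (RR-interval) segments are already available as arrays: estimate the bispectrum on each segment, reduce it to a scalar feature, and compare feature distributions across classes with a one-way ANOVA. The particular feature and the synthetic stand-in data are illustrative only:

```python
import numpy as np
from scipy.stats import f_oneway

def bispectrum(x):
    """Direct single-segment bispectrum estimate B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2))."""
    X = np.fft.fft(x - np.mean(x))
    n = len(X)
    f = np.arange(n // 2)
    return X[f, None] * X[None, f] * np.conj(X[(f[:, None] + f[None, :]) % n])

def mean_bispectral_magnitude(x):
    """One possible HOS feature: average magnitude over the computed bispectrum region."""
    return float(np.mean(np.abs(bispectrum(x))))

# Synthetic stand-ins for two HRV classes (e.g. normal vs. one arrhythmia class).
rng = np.random.default_rng(1)
class_a = [0.8 + 0.05 * rng.standard_normal(256) for _ in range(20)]
class_b = [0.8 + 0.05 * rng.standard_normal(256) ** 3 for _ in range(20)]   # heavier-tailed, more nonlinear
feats_a = [mean_bispectral_magnitude(x) for x in class_a]
feats_b = [mean_bispectral_magnitude(x) for x in class_b]
print(f_oneway(feats_a, feats_b))   # a small p-value suggests the feature separates the classes
```

In practice several such bispectral and bicoherence features would be extracted per class, and the ANOVA p-values used to keep the most discriminative ones for arrhythmia classification.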
Abstract:
In this paper, we define and present a comprehensive classification of user intent for Web searching. The classification consists of three hierarchical levels of informational, navigational, and transactional intent. After deriving attributes of each, we then developed a software application that automatically classified queries using a Web search engine log of over a million and a half queries submitted by several hundred thousand users. Our findings show that more than 80% of Web queries are informational in nature, with about 10% each being navigational and transactional. In order to validate the accuracy of our algorithm, we manually coded 400 queries and compared the results from this manual classification to the results determined by the automated method. This comparison showed that the automatic classification has an accuracy of 74%. Of the remaining 25% of the queries, the user intent is vague or multi-faceted, pointing to the need for probabilistic classification. We discuss how search engines can use knowledge of user intent to provide more targeted and relevant results in Web searching.
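A toy rule-based classifier in the spirit of the attribute-driven approach described above. The attribute lists and patterns are illustrative placeholders, not the paper's actual classification rules:

```python
import re

NAVIGATIONAL_HINTS = ("www", ".com", ".org", "homepage", "login")       # hypothetical attributes
TRANSACTIONAL_HINTS = ("download", "buy", "lyrics", "images", "free")    # hypothetical attributes

def classify(query):
    """Assign one of the three hierarchical intent classes to a query string."""
    q = query.lower()
    if any(h in q for h in NAVIGATIONAL_HINTS) or re.fullmatch(r"\w+(\.\w+)+", q):
        return "navigational"
    if any(h in q for h in TRANSACTIONAL_HINTS):
        return "transactional"
    return "informational"        # default class, consistent with its dominance in the log

for q in ["qut.edu.au", "download firefox", "causes of traffic congestion"]:
    print(q, "->", classify(q))
```

Validating such a classifier against a manually coded sample, as the paper does with 400 queries, is what yields the reported accuracy figure and exposes the vague or multi-faceted queries that motivate probabilistic classification.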
Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap and the more recently proposed hook-and-loop resampling based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails, but could regain its power when a biased estimator is used.
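A small numerical sketch of the kind of comparison described above, posed as a polynomial-order selection problem: the MDL criterion is evaluated once with an unbiased least-squares fit and once with a biased (ridge-shrunk) fit, and the selected orders are compared. The data, the shrinkage level and the particular two-part MDL form are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def fit(y, X, ridge=0.0):
    """Least-squares fit; ridge > 0 gives a biased (shrunken) coefficient estimate."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(k), X.T @ y)

def mdl(y, X, theta):
    """A common two-part MDL form: (N/2) log(RSS/N) + (k/2) log N."""
    n, k = X.shape
    rss = float(np.sum((y - X @ theta) ** 2))
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

def select_order(y, x, max_order, ridge=0.0):
    """Return the polynomial order minimising MDL under the chosen estimator."""
    scores = []
    for p in range(max_order + 1):
        X = np.vander(x, p + 1, increasing=True)
        scores.append(mdl(y, X, fit(y, X, ridge)))
    return int(np.argmin(scores))

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 0.5 * x ** 2 + 0.3 * rng.standard_normal(40)   # true order 2
print("unbiased estimator:", select_order(y, x, max_order=8, ridge=0.0))
print("biased estimator:  ", select_order(y, x, max_order=8, ridge=0.1))
```

Repeating such runs over many noise realisations is how the small gains (or failures) of one estimator over the other in model selection would be quantified.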