122 results for Courant metric


Relevance:

10.00%

Publisher:

Abstract:

The purpose of this review is to integrate and summarize specific measurement topics (instrument and metric choice, validity, reliability, how many and what types of days, reactivity, and data treatment) appropriate to the study of youth physical activity. Research-quality pedometers are necessary to aid interpretation of steps per day collected in a range of young populations under a variety of circumstances. Steps per day is the most appropriate metric choice, but steps per minute can be used to interpret time-in-intensity in specifically delimited time periods (e.g., physical education class). Reported intraclass correlations (ICCs) have ranged from .65 over 2 days (although higher values also have been reported for 2 days) to .87 over 8 days (although higher values have been reported for fewer days). Reported ICCs are lower on weekend days (.59) versus weekdays (.75) and lower over vacation days (.69) versus school days (.74). There is no objective evidence of reactivity at this time. Data treatment includes (a) identifying and addressing missing values, (b) identifying outliers and reducing data appropriately if necessary, and (c) transforming the data as required in preparation for inferential analysis. As more pedometry studies in young populations are published, these preliminary methodological recommendations should be modified and refined.
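The three data-treatment steps above can be sketched as follows; the plausibility range for steps/day and the log transform are illustrative assumptions, not recommendations from the review:

```python
import math

def treat_step_counts(daily_steps, low=1000, high=30000):
    """Illustrative pedometer data treatment: (a) address missing values,
    (b) screen outliers, (c) transform for inferential analysis.
    The plausibility range [low, high] and the log transform are assumptions."""
    # (a) drop missing days (recorded here as None)
    observed = [s for s in daily_steps if s is not None]
    # (b) treat days outside a plausible steps/day range as outliers
    kept = [s for s in observed if low <= s <= high]
    # (c) log-transform to reduce skew before inferential analysis
    transformed = [math.log(s) for s in kept]
    return kept, transformed

days = [11200, None, 9800, 10400, 65000, 12100]  # 65000 is an implausible day
kept, logged = treat_step_counts(days)
```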

Relevance:

10.00%

Publisher:

Abstract:

Dynamic load sharing can be defined as a measure of the ability of a heavy-vehicle multi-axle group to equalise load across its wheels under typical travel conditions, i.e. in the dynamic sense at typical travel speeds and operating conditions of that vehicle. Various attempts have been made to quantify the ability of heavy vehicles to equalise the load across their wheels during travel. One of these was the concept of the load sharing coefficient (LSC). Other metrics, such as the dynamic load coefficient (DLC), peak dynamic wheel force (PDWF) and dynamic impact force (DIF), have been used to compare one heavy-vehicle suspension with another for potential road damage. This paper compares these metrics and determines a relationship between DLC and LSC, with sensitivity analysis of this relationship. The shortcomings of the presently available metrics are discussed, and a new metric is proposed: the dynamic load equalisation (DLE) measure.
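As a sketch of how two of these metrics are computed, using the definitions common in the heavy-vehicle suspension literature (variable names are illustrative, and this is not the paper's code):

```python
import statistics

def dlc(wheel_force_series):
    """Dynamic load coefficient: the coefficient of variation of a wheel's
    dynamic force time series (standard deviation / mean)."""
    return statistics.pstdev(wheel_force_series) / statistics.mean(wheel_force_series)

def lsc(mean_wheel_forces):
    """Load sharing coefficient for each wheel in an axle group: mean wheel
    force divided by the nominal (group-average) wheel force.
    Perfect load sharing gives 1.0 for every wheel."""
    nominal = statistics.mean(mean_wheel_forces)
    return [f / nominal for f in mean_wheel_forces]
```

A constant force trace gives DLC of zero (no dynamic variation), while unequal mean wheel forces push the LSC values away from 1.0.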

Relevance:

10.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study the tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, over which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has helped to provide some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
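The Cartesian-to-polar block-processing idea can be illustrated with a minimal sketch: sample the image on an (r, θ) grid so that concentric rings become quasi-straight rows, then summarize each block with a statistic. The grid sizes, sampling density and the choice of variance as the block statistic are assumptions, not the routines developed in the thesis:

```python
import math
import statistics

def polar_block_metric(image, cx, cy, n_r=8, n_t=16):
    """Sample a ring-pattern image on a polar (r, theta) grid centred at
    (cx, cy), split the grid into n_r x n_t blocks, and return the mean
    per-block variance. A smooth reflected pattern yields low variance."""
    h, w = len(image), len(image[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)  # stay inside the image
    block_stats = []
    for bi in range(n_r):
        for bj in range(n_t):
            samples = []
            for r_step in range(4):              # radial samples per block
                r = r_max * (bi + r_step / 4) / n_r
                for t_step in range(4):          # angular samples per block
                    t = 2 * math.pi * (bj + t_step / 4) / n_t
                    x = int(round(cx + r * math.cos(t)))
                    y = int(round(cy + r * math.sin(t)))
                    samples.append(image[y][x])
            block_stats.append(statistics.pvariance(samples))
    return statistics.mean(block_stats)
```

A perfectly uniform image gives a metric of zero; irregularities in the reflected pattern raise the per-block variances.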

Relevance:

10.00%

Publisher:

Abstract:

In vector space based approaches to natural language processing, similarity is commonly measured by taking the angle between two vectors representing words or documents in a semantic space. This is natural from a mathematical point of view, as the angle between unit vectors is, up to constant scaling, the only unitarily invariant metric on the unit sphere. However, similarity judgement tasks reveal that human subjects fail to produce data which satisfy the symmetry and triangle inequality requirements for a metric space. A possible conclusion, reached in particular by Tversky et al., is that some of the most basic assumptions of geometric models are unwarranted in the case of psychological similarity, a result which would impose strong limits on the validity and applicability of vector space based (and hence also quantum inspired) approaches to the modelling of cognitive processes. This paper proposes a resolution to this fundamental criticism of the applicability of vector space models of cognition. We argue that a pair of words implies a context which in turn induces a point of view, allowing a subject to estimate semantic similarity. Context is here introduced as a point of view vector (POVV), and the expected similarity is derived as a measure over the POVVs. Different pairs of words will invoke different contexts and different POVVs. Hence the triangle inequality ceases to be a valid constraint on the angles. We test the proposal on a few triples of words and outline further research.
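To make the baseline concrete: in a single fixed semantic space the angle between vectors is a true metric, so the triangle inequality always holds; it is only once context-dependent POVVs re-orient the comparison per word pair that the constraint can fail. A minimal sketch of the fixed-space angle metric:

```python
import math

def angle(u, v):
    """Angle between two vectors after unit normalization: the standard
    similarity metric on the unit sphere."""
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos = sum(a * b for a, b in zip(u, v)) / (norm_u * norm_v)
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp against rounding error

def satisfies_triangle(u, v, w):
    """Check d(u, w) <= d(u, v) + d(v, w) for the angle metric; this is
    always true in a fixed space, which is exactly the baseline the
    human-judgement data contradicts."""
    return angle(u, w) <= angle(u, v) + angle(v, w) + 1e-12
```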

Relevance:

10.00%

Publisher:

Abstract:

It is known that the depth of focus (DOF) of the human eye can be affected by higher-order aberrations. We estimated the optimal combinations of primary and secondary Zernike spherical aberration to expand the DOF and evaluated their efficiency in real eyes using an adaptive optics system. The ratio between the increase in DOF and the loss of visual acuity was used as the performance indicator. The results indicate that primary or secondary spherical aberration alone shows similar effectiveness in extending the DOF. However, combinations of primary and secondary spherical aberration with different signs provide better efficiency for expanding the DOF. This finding suggests that the optimal combinations of primary and secondary spherical aberration may be useful in the design of optical presbyopic corrections. © 2011 Elsevier Ltd. All rights reserved.
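For reference, the two Zernike terms involved can be combined as below, using the standard normalized radial polynomials for Z(4,0) and Z(6,0); the coefficient values are placeholders, not the optimal combinations reported in the study:

```python
import math

def primary_sa(rho):
    """Zernike primary spherical aberration Z(4,0), unit coefficient,
    standard normalization; rho is the normalized pupil radius in [0, 1]."""
    return math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1)

def secondary_sa(rho):
    """Zernike secondary spherical aberration Z(6,0), unit coefficient."""
    return math.sqrt(7) * (20 * rho**6 - 30 * rho**4 + 12 * rho**2 - 1)

def wavefront(rho, c4, c6):
    """Wavefront error from a weighted combination of the two terms.
    Choosing c4 and c6 with opposite signs mimics the combinations the
    study found more efficient (coefficients here are illustrative)."""
    return c4 * primary_sa(rho) + c6 * secondary_sa(rho)
```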

Relevance:

10.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
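The bound can be written out as follows (ignoring the log A and log m factors, as noted; the symbol for the training-set error estimate is ours):

```latex
\Pr\left[\text{misclassification}\right] \;\le\; \hat{\epsilon}_m \;+\; O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right)
```

where \(\hat{\epsilon}_m\) denotes the error estimate related to the squared error on the m training patterns, A bounds the sum of weight magnitudes per unit, and n is the input dimension. The key point is that the network size enters only through log n, while the weight-magnitude bound A controls the complexity term.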

Relevance:

10.00%

Publisher:

Abstract:

Online learning algorithms have recently risen to prominence due to their strong theoretical guarantees and an increasing number of practical applications for large-scale data analysis problems. In this paper, we analyze a class of online learning algorithms based on fixed potentials and nonlinearized losses, which yields algorithms with implicit update rules. We show how to efficiently compute these updates, and we prove regret bounds for the algorithms. We apply our formulation to several special cases where our approach has benefits over existing online learning methods. In particular, we provide improved algorithms and bounds for the online metric learning problem, and show improved robustness for online linear prediction problems. Results over a variety of data sets demonstrate the advantages of our framework.
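As an illustration of an implicit update rule, consider squared loss with a linear predictor: the new weight vector minimizes the non-linearized loss at the current example plus a proximity term, which in this special case has a closed form. This is a sketch under those assumptions, not the paper's general framework:

```python
def implicit_update(w, x, y, eta):
    """Implicit online update for squared loss with a linear predictor:
        w' = argmin_v  (eta/2) * (v.x - y)**2 + (1/2) * ||v - w||**2.
    Setting the gradient to zero and solving for v.x gives the closed
    form below."""
    pred = sum(wi * xi for wi, xi in zip(w, x))        # current prediction w.x
    x_sq = sum(xi * xi for xi in x)                    # ||x||^2
    scale = eta * (pred - y) / (1.0 + eta * x_sq)      # implicitly damped step
    return [wi - scale * xi for wi, xi in zip(w, x)]
```

Because the loss is evaluated at the new iterate rather than linearized at the old one, the step is automatically damped by 1/(1 + eta·||x||²), one intuition for the robustness benefits described above.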

Relevance:

10.00%

Publisher:

Abstract:

In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.

Relevance:

10.00%

Publisher:

Abstract:

In the multi-view approach to semisupervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHSs), and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that the co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
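The two ingredients of the procedure, the average squared disagreement penalty on unlabeled data and the pointwise-averaged final predictor, are simple to state in code (a sketch with illustrative names, operating on precomputed per-view predictions):

```python
def coregularization_penalty(preds_view1, preds_view2):
    """Average squared disagreement between the two views' predictions on
    the unlabeled points: the CoRLS co-regularization term."""
    n = len(preds_view1)
    return sum((a - b) ** 2 for a, b in zip(preds_view1, preds_view2)) / n

def combined_prediction(preds_view1, preds_view2):
    """Final CoRLS predictor: the pointwise average of the two views."""
    return [(a + b) / 2 for a, b in zip(preds_view1, preds_view2)]
```

When the two views agree everywhere the penalty vanishes; the larger the disagreement, the more the penalty shrinks the effective (co-regularized) hypothesis class.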

Relevance:

10.00%

Publisher:

Abstract:

Sutchi catfish (Pangasianodon hypophthalmus), known more universally by the Vietnamese name ‘Tra’, is an economically important freshwater fish in the Mekong Delta in Vietnam that constitutes an important food resource. Artificial propagation technology for Tra catfish has only recently been developed along the main branches of the Mekong River, where more than 60% of the local human population participate in fishing or aquaculture. Extensive support for catfish culture in general, and that of Tra (P. hypophthalmus) in particular, has been provided by the Vietnamese government both to increase the scale of production and to develop international export markets. In 2006, total Vietnamese catfish exports reached approximately 286,602 metric tons (MT) and were valued at US$736.87 million, with a number of large new export destinations being developed. The total value of production from catfish culture has been predicted to increase to approximately US$1 billion by 2020. While freshwater catfish culture in Vietnam has a promising future, concerns have been raised about the long-term quality of fry and the effectiveness of current brood stock management practices, issues that have been largely neglected to date. In this study, four DNA markers (microsatellite loci: CB4, CB7, CB12 and CB13) that were developed specifically for Tra (P. hypophthalmus) in an earlier study were applied to examine the genetic quality of artificially propagated Tra fry in the Mekong Delta in Vietnam. The goals of the study were to assess: (i) how well the available levels of genetic variation in the Tra brood stock used for artificial propagation in the Mekong Delta of Vietnam (breeders from three private hatcheries and Research Institute of Aquaculture No. 2 (RIA2) founders) have been conserved; and (ii) whether or not genetic diversity had declined significantly over time in a stock improvement program for Tra catfish at RIA2.
A secondary issue addressed was how genetic markers could best be used to assist industry development. DNA was extracted from fins of catfish collected from the two main branches of the Mekong River in Vietnam, from three private hatcheries, and from samples from the Tra improvement program at RIA2. Study outcomes: (i) genetic diversity estimates for Tra brood stock samples were similar to, and slightly higher than, those for wild reference samples; in addition, the relative contributions by breeders to fry in commercial private hatcheries strongly suggest that the true Ne is likely to be significantly less than the breeder numbers used; (ii) in the stock improvement program for Tra catfish at RIA2, no significant differences were detected in gene frequencies among generations (FST = 0.021, P = 0.036 > 0.002 after Bonferroni correction), and only small differences were observed in allele frequencies among sample populations. To date, genetic markers have not been applied in the Tra catfish industry, but in the current project they were used to evaluate the levels of genetic variation in the Tra catfish selective breeding program at RIA2 and to examine correlations between genetic marker and trait variation. While no associations were detected using only four loci, the analysis provided training in the practical applications of molecular markers in aquaculture in general, and in Tra culture in particular.

Relevance:

10.00%

Publisher:

Abstract:

In the third year of the Link the Wiki track, the focus shifted to anchor-to-bep link discovery. The participants were encouraged to utilize different technologies to resolve the issue of focused link discovery. Apart from the 2009 Wikipedia collection, the Te Ara collection was introduced for the first time in INEX. For the Link the Wiki tasks, 5,000 file-to-file topics were randomly selected and 33 anchor-to-bep topics were nominated by the participants. The Te Ara collection does not contain hyperlinks, and the task was to cross-link the entire collection. A GUI tool for self-verification of the linking results was distributed; this helps participants verify the location of the anchor and bep. The assessment tool and the evaluation tool were revised to improve efficiency. Submission runs were evaluated against the Wikipedia ground truth and a manual result set, respectively. Focus-based evaluation was undertaken using a new metric. Evaluation results are presented and link discovery approaches are described.

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric δ against an oblivious adversary. Restricting our attention to the class of “work-based” algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when δ is a uniform metric, we exhibit two algorithms that arise from this framework, and we prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which is the current state of the art for the uniform MTS problem.

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes an innovative instance-similarity-based evaluation metric that reduces the search space for clustering. An aggregate global score is calculated for each instance using the novel idea of the Fibonacci series. The use of Fibonacci numbers is able to separate the instances effectively; hence, the intra-cluster similarity is increased and the inter-cluster similarity is decreased during clustering. The proposed FIBCLUS algorithm is able to handle datasets with numerical, categorical and a mix of both types of attributes. Results obtained with FIBCLUS are compared with the results of existing algorithms such as k-means, x-means, expectation maximization and hierarchical algorithms that are widely used to cluster numeric, categorical and mixed data types. Empirical analysis shows that FIBCLUS is able to produce better clustering solutions in terms of entropy, purity and F-score in comparison to the above-described existing algorithms.

Relevance:

10.00%

Publisher:

Abstract:

Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative (i.e., the use of human judges to manually analyze query patterns on usually small samples) and nondeterministic algorithms, typically using large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 min. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
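Method (b) above, Internet Protocol address plus cookie with a temporal limit on intrasession interactions, can be sketched as follows; the 30-minute cutoff echoes the reported mean session duration, and the field layout of each interaction record is an assumption:

```python
from datetime import datetime, timedelta

def split_sessions(interactions, gap=timedelta(minutes=30)):
    """Group interactions sharing (IP address, cookie) into sessions,
    starting a new session whenever the time since that user's previous
    interaction exceeds the temporal limit. Assumes the interaction log
    is ordered by timestamp within each user."""
    state = {}      # (ip, cookie) -> current session record for that user
    sessions = []   # each session is a list of queries, in order
    for ip, cookie, ts, query in interactions:
        key = (ip, cookie)
        current = state.get(key)
        if current is None or ts - current["last"] > gap:
            current = {"last": ts, "queries": []}   # open a new session
            state[key] = current
            sessions.append(current["queries"])
        current["queries"].append(query)
        current["last"] = ts
    return sessions
```

Method (c) would additionally start a new session when the query bears no reformulation relationship (shared or modified terms) to the previous one, which is what produced the 82% increase in session counts reported above.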