806 results for Inverse-distance weighting


Relevance: 20.00%

Abstract:

This paper introduces PartSS, a new partition-based filtering for tasks performing string comparisons under edit distance constraints. PartSS offers improvements over the state-of-the-art method NGPP through a new partitioning scheme, and improves filtering ability by exploiting theoretical results on shifting and scaling ranges, thus accelerating the computation of edit distance between strings. PartSS filtering has been implemented within two major data integration tasks: similarity join and approximate membership extraction under edit distance constraints. Evaluation on an extensive range of real-world datasets demonstrates a major gain in efficiency over the NGPP and QGrams approaches.
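PartSS itself is not reproduced here. As background, the sketch below shows the verification step such filters aim to avoid (the full dynamic-programming edit-distance computation) together with the simplest possible filter, the length-difference lower bound; function names are illustrative, not the paper's.

```python
def edit_distance(s: str, t: str) -> int:
    """Classic Wagner-Fischer dynamic program, O(len(s) * len(t))."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

def within_threshold(s: str, t: str, tau: int) -> bool:
    """Apply the cheapest filter first: |len(s) - len(t)| is a lower
    bound on the edit distance, so pairs failing it are pruned without
    ever running the dynamic program."""
    if abs(len(s) - len(t)) > tau:
        return False
    return edit_distance(s, t) <= tau
```

Partition-based filters such as PartSS prune far more aggressively than the length filter, but every surviving candidate pair still ends in a verification step like the one above.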

Relevance: 20.00%

Abstract:

The advent of eLearning has seen online discussion forums widely used in both undergraduate and postgraduate nursing education. This paper reports an Australian university's experience of the design, delivery and redevelopment of a distance education module developed for Vietnamese nurse academics. The teaching experience of Vietnamese nurse academics is mixed and frequently limited. It was decided that the distance module should attempt to utilise the experience of senior Vietnamese nurse academics; asynchronous online discussion groups were used to facilitate this. Online discussion occurred in both Vietnamese and English and was moderated by an Australian academic working alongside a Vietnamese translator. This paper discusses the design of an online learning environment for foreign correspondents, the resources and translation required to maximise the success of asynchronous online discussion groups, and the rationale for delivering complex content in a foreign language. While specifically addressing the first iteration of the first distance module designed, it also addresses the changes made for the second iteration of the module and comments on their success. While a translator is clearly a key component of success, the elements of simplicity and clarity, combined with supportive online moderation, must not be overlooked.

Relevance: 20.00%

Abstract:

In 2008, a move away from medical staff providing nursing education in Vietnam saw the employment of many new nurse academics. To assist in the instruction of these novice academics and provide them with sound teaching and learning practice as well as curriculum design and implementation skills, Queensland University of Technology (QUT) successfully tendered for an international grant. One of QUT's initiatives in educating the Vietnamese academics was a distance learning programme. Developed specifically for Vietnamese nurse academics, the programme was designed for Australian-based delivery to academics in Vietnam. This paper presents an overview of why four separate modules were used to deliver the content (modules were delivered at a rate of one per semester). It addresses the bilingual online discussion boards used in each of the modules and the process of moderating these, given that comments were posted in both Vietnamese and English. It describes how content was scaffolded across the four modules and how the modules themselves modelled new teaching delivery strategies. Lastly, it discusses the considerations of programme delivery given the logistics of an Australian-based delivery. Feedback from the Vietnamese nurse academics across their involvement in the programme (and at the conclusion of their fourth and final module) has been overwhelmingly positive. Feedback suggests the programme has altered the teaching and assessment approaches used by some Vietnamese nurse academics. Additionally, Vietnamese nurse academics are reporting that they are engaging more with the application of their content, indicating a cultural shift in the approach taken in Vietnamese nurse education.

Relevance: 20.00%

Abstract:

This paper presents a graph-based method to weight medical concepts in documents for the purposes of information retrieval. Medical concepts are extracted from free-text documents using a state-of-the-art technique that maps n-grams to concepts from the SNOMED CT medical ontology. In our graph-based concept representation, concepts are vertices in a graph built from a document, and edges represent associations between concepts. This representation naturally captures dependencies between concepts, an important requirement for interpreting medical text and a feature lacking in bag-of-words representations. We apply existing graph-based term weighting methods to weight medical concepts. Using concepts rather than terms addresses vocabulary mismatch and encapsulates terms belonging to a single medical entity within a single concept. In addition, we extend previous graph-based approaches by injecting domain knowledge that estimates the importance of a concept within the global medical domain. Retrieval experiments on the TREC Medical Records collection show our method outperforms both term and concept baselines. More generally, this work provides a means of integrating background knowledge contained in medical ontologies into data-driven information retrieval approaches.
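A minimal sketch of the graph-based weighting idea, not the paper's exact method: concepts co-occurring within a sliding window become adjacent vertices, and a concept's weight is its degree centrality, so a concept associated with many distinct concepts scores higher than a raw frequency count would suggest. The window size and the centrality measure are assumptions for illustration.

```python
from collections import defaultdict

def concept_weights(concepts: list[str], window: int = 2) -> dict[str, float]:
    """Build a co-occurrence graph over a document's concept sequence and
    weight each concept by its degree centrality (fraction of the other
    distinct concepts it is linked to)."""
    neighbours = defaultdict(set)
    for i in range(len(concepts)):
        for j in range(i + 1, min(i + window + 1, len(concepts))):
            a, b = concepts[i], concepts[j]
            if a != b:              # no self-loops
                neighbours[a].add(b)
                neighbours[b].add(a)
    n = len(set(concepts))
    if n < 2:
        return {}
    return {c: len(adj) / (n - 1) for c, adj in neighbours.items()}
```

A graph-based centrality such as this (or PageRank, as in prior graph-based term weighting work) is what replaces term frequency once the document is viewed as a concept graph.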

Relevance: 20.00%

Abstract:

In this paper we propose a framework for gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, in which we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.), as: (i) it can handle substantial illumination variations; (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix; (iii) unlike traditional LK, the computational cost is invariant to the number of filters and is as a result far more efficient; and (iv) the approach can be extended to the inverse compositional form of the LK algorithm, where nearly all steps (including the Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
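The key identity that lets LK be posed in the Fourier domain is Parseval's relation: the sum of squared differences (SSD) between two signals equals 1/N times the SSD of their Fourier transforms. This toy 1-D check (naive O(N^2) DFT, standard library only; the example signals are arbitrary) verifies that identity; the paper's 2-D formulation and filter-bank weighting build on it.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                for k in range(n))
            for m in range(n)]

def ssd(a, b):
    """Sum of squared differences between two (possibly complex) sequences."""
    return sum(abs(u - v) ** 2 for u, v in zip(a, b))

source = [1.0, 3.0, 2.0, 5.0]
template = [2.0, 1.0, 4.0, 4.0]

spatial = ssd(source, template)                       # spatial-domain SSD
fourier = ssd(dft(source), dft(template)) / len(source)  # 1/N * Fourier SSD
assert abs(spatial - fourier) < 1e-9                  # Parseval's relation
```

Because the DFT is linear, the identity applies to the difference signal directly, and pre-filtering both signals becomes an elementwise (diagonal) weighting of their spectra, which is point (ii) above.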

Relevance: 20.00%

Abstract:

Pumice is an extremely effective rafting agent that can dramatically increase the dispersal range of a variety of marine organisms and connect isolated shallow marine and coastal ecosystems. Here we report on a significant recent pumice rafting and long-distance dispersal event that occurred across the southwest Pacific following the 2006 explosive eruption of Home Reef Volcano in Tonga. We have constrained the trajectory, rate, biomass and biodiversity of transfer, discovering that more than 80 species and a substantial biomass underwent a >5000 km journey in 7-8 months. Differing microenvironmental conditions on the pumice, caused by the relative stability of clasts at the sea surface, promoted diversity in biotic recruitment. Our findings emphasise pumice rafting as an important process facilitating the distribution of marine life, which has implications for colonisation processes and success, the management of sensitive marine environments, and invasive pest species.

Relevance: 20.00%

Abstract:

An efficient numerical method to compute nonlinear solutions for two-dimensional steady free-surface flow over an arbitrary channel bottom topography is presented. The approach is based on a boundary integral equation technique similar to that of Vanden-Broeck (1996, J. Fluid Mech., 330, 339-347). The typical approach for this problem is to prescribe the shape of the channel bottom topography, with the free surface provided as part of the solution. Here we take an inverse approach and prescribe the shape of the free surface a priori while solving for the corresponding bottom topography. We show how this inverse approach is particularly useful when studying topographies that give rise to wave-free solutions, allowing us to easily classify eleven basic flow types. Finally, the inverse approach is also adapted to calculate a distribution of pressure on the free surface, given the free-surface shape itself.

Relevance: 20.00%

Abstract:

The mechanical conditions in the repair tissues are known to influence the outcome of fracture healing. These mechanical conditions are determined by the stiffness of fixation and by limb loading. Experimental studies have shown that there is a range of beneficial fixation stiffness for timely healing, and that fixation that is either too flexible or too stiff impairs callus healing. However, much less is known about how mechanical conditions influence the biological processes that make up the sequence of bone repair, and whether mechanical stimulation is required at all stages of repair. Secondary bone healing occurs through a sequence of events broadly characterised by inflammation, proliferation, consolidation and remodelling. It is our hypothesis that a change in fixation stiffness from very flexible to stiff, termed inverse dynamization, can shorten the time to healing relative to constant fixation stiffness. Flexible fixation has the benefit of promoting greater callus formation and needs to be applied during the proliferative stage of repair; the greater callus size helps to stabilize the fragments earlier, allowing mineralization to occur faster. Stable/rigid fixation applied during the latter stage of repair then ensures mineralization of the callus. The predicted benefits of inverse dynamization are shortened healing in comparison to very flexible fixation, and healing comparable to or faster than stable fixation, with greater callus stiffness.

Relevance: 20.00%

Abstract:

Normally, vehicles queued at an intersection reach the maximum flow rate only after the fourth vehicle, resulting in start-up lost time. This research demonstrated that the Enlarged Stopping Distance (ESD) concept can assist in reducing start-up time and therefore increase traffic flow capacity at signalised intersections. In essence, ESD gives a queuing vehicle sufficient space to accelerate simultaneously with the vehicle in front, without having to wait for it to depart, hence reducing start-up lost time. In practice, the ESD concept would be most effective when the stopping distance between the first and second vehicles is enlarged, allowing faster clearance of the intersection.
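A toy queue-discharge model (assumed parameters, not taken from the study) illustrates the mechanism: under conventional stopping, each following vehicle begins to move only after a reaction delay to the vehicle ahead, so start times accumulate; enlarging the gap between the first and second vehicles lets the second start simultaneously with the first, removing one reaction delay from every following vehicle's start time.

```python
def last_start_time(n_vehicles: int, reaction_delay: float = 1.0,
                    esd: bool = False) -> float:
    """Time at which the last queued vehicle begins to move, in a
    simplified model with one fixed reaction delay per following vehicle."""
    delays = n_vehicles - 1      # one reaction delay per following vehicle
    if esd:
        delays -= 1              # first and second vehicles start together
    return max(delays, 0) * reaction_delay

print(last_start_time(5))             # conventional queue: 4.0
print(last_start_time(5, esd=True))   # with enlarged stopping distance: 3.0
```

The saving is one full reaction delay per signal cycle, regardless of queue length, which is consistent with the abstract's claim that the gap between the first and second vehicles matters most.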

Relevance: 20.00%

Abstract:

This practice-led research examines the generative function of loss in fiction that explores themes of grief and longing. It considers how loss may be understood as a structuring mechanism through which characters evaluate time, resolve loss and effect future change. The creative work is a work of literary fiction titled A Distance Too Far Away. Aubrey, the story's protagonist, is a woman in her twenties living in Brisbane in the early 1980s, carving out an independent life for herself away from her family. Through a flashback narrative sequence, told from the perspective of the twelve-year-old narrator, Aubrey retraces a significant point of rupture in her life following a series of family tragedies. A Distance Too Far Away explores the tension between belonging and freedom, and considers how the past provides a malleable space for illuminating desire in order to traverse the gap between the world as it is and the world as we want it to be. The exegetical component of this research offers an alternative critical frame for interpreting the work of American author Anne Tyler, a writer who has had a significant influence on my own practice. Tyler is frequently criticised for creating sentimental and inert characters, and many critics observe that nothing happens in her circular plots. This research challenges these assertions and, through a contextual analysis of Tyler's Ladder of Years (1995), investigates how Tyler engages with memory and nostalgia in order to move across time and resolve loss.

Relevance: 20.00%

Abstract:

A procedure for the evaluation of multiple scattering contributions is described, for deep inelastic neutron scattering (DINS) studies using an inverse geometry time-of-flight spectrometer. The accuracy of a Monte Carlo code DINSMS, used to calculate the multiple scattering, is tested by comparison with analytic expressions and with experimental data collected from polythene, polycrystalline graphite and tin samples. It is shown that the Monte Carlo code gives an accurate representation of the measured data and can therefore be used to reliably correct DINS data.

Relevance: 20.00%

Abstract:

The count-min sketch is a useful data structure for recording and estimating the frequency of string occurrences, such as passwords, in sub-linear space with high accuracy. However, it cannot be used to draw conclusions on groups of strings that are similar, for example close in Hamming distance. This paper introduces a variant of the count-min sketch which allows for estimating counts within a specified Hamming distance of the queried string. This variant can be used to prevent users from choosing popular passwords, like the original sketch, but it also allows for a more efficient method of analysing password statistics.
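A minimal count-min sketch plus a naive Hamming-ball query sketches the problem setting. The paper's variant answers such queries directly from a single sketch; here, purely for illustration, the neighbourhood is enumerated and each neighbour is queried against an ordinary sketch, which grows combinatorially with the distance bound. All names and parameters below are assumptions, not the paper's construction.

```python
import hashlib
from itertools import combinations, product

class CountMinSketch:
    def __init__(self, depth: int = 4, width: int = 512):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row: int, s: str) -> int:
        # One independent hash per row, derived from SHA-256.
        digest = hashlib.sha256(f"{row}:{s}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, s: str) -> None:
        for row in range(self.depth):
            self.table[row][self._index(row, s)] += 1

    def estimate(self, s: str) -> int:
        # Never underestimates: collisions can only inflate a count.
        return min(self.table[row][self._index(row, s)]
                   for row in range(self.depth))

def hamming_ball_estimate(cms: CountMinSketch, s: str, radius: int,
                          alphabet: str) -> int:
    """Total estimated count of all strings within Hamming distance
    `radius` of `s` (same length, substitutions only), by brute-force
    enumeration of the ball."""
    total = 0
    for r in range(radius + 1):
        for positions in combinations(range(len(s)), r):
            for repl in product(alphabet, repeat=r):
                # Skip replacements equal to the original character, so
                # each string in the ball is counted exactly once.
                if any(s[p] == c for p, c in zip(positions, repl)):
                    continue
                candidate = list(s)
                for p, c in zip(positions, repl):
                    candidate[p] = c
                total += cms.estimate("".join(candidate))
    return total
```

For password checking, a query like `hamming_ball_estimate(cms, candidate, 1, alphabet)` estimates how popular the candidate and its near-variants are, which is the kind of question the paper's variant answers without enumerating the ball.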

Relevance: 20.00%

Abstract:

This paper discusses computer mediated distance learning on a Master's level course in the UK and student perceptions of this as a quality learning environment.

Relevance: 20.00%

Abstract:

This research has successfully applied super-resolution and multiple modality fusion techniques to address the major challenges of human identification at a distance using face and iris. The outcome of the research is useful for security applications.

Relevance: 20.00%

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One calculates document similarity using a linear weighted combination of structure and content similarities; the content similarity in this method is based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system, which translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
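The linear weighting combination of structure and content similarities can be sketched as follows; the function name and the `alpha` parameter are assumptions for illustration, not the thesis's notation, and both inputs are taken to be normalised to [0, 1].

```python
def combined_similarity(struct_sim: float, content_sim: float,
                        alpha: float = 0.5) -> float:
    """Linear weighted combination of structural and content similarity.
    alpha = 1 recovers structure-only clustering; alpha = 0, content-only."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * struct_sim + (1.0 - alpha) * content_sim

print(combined_similarity(0.8, 0.4, alpha=0.5))
```

The non-linear variant described above replaces this weighted sum with a kernel over the combined structure and content representations.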