297 results for matching


Relevance:

10.00%

Publisher:

Abstract:

Grouping users in social networks is an important process that improves matching and recommendation activities in social networks. Data mining clustering methods can be used to group users in social networks. However, existing general-purpose clustering algorithms perform poorly on social network data due to the special nature of users' data. One main reason is the constraints that need to be considered when grouping users in social networks. Another is the need to capture a large amount of information about users, which imposes computational complexity on an algorithm. In this paper, we propose a scalable and effective constraint-based clustering algorithm based on a global similarity measure that takes into consideration the users' constraints and their importance in social networks. Each constraint's importance is calculated from the occurrence of that constraint in the dataset. Performance of the algorithm is demonstrated on a dataset obtained from an online dating website using internal and external evaluation measures. Results show that the proposed algorithm increases the accuracy of matching users in social networks by 10% in comparison to other algorithms.
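As a rough sketch of the occurrence-based weighting this abstract describes, one can weight each constraint by how often users actually specify it, then score a pair of users by their weighted agreement. The function and field names below are hypothetical; the paper's exact similarity measure is not reproduced here.

```python
def constraint_weights(profiles, constraints):
    """Weight each constraint by how often it is specified in the dataset.

    `profiles` is a list of dicts mapping constraint name -> value; a
    constraint carries more weight the more users actually specify it.
    (Illustrative weighting only, not the paper's exact formula.)
    """
    n = len(profiles)
    return {c: sum(1 for p in profiles if c in p) / n for c in constraints}


def global_similarity(a, b, weights):
    """Weighted fraction of constraints on which two user profiles agree."""
    total = sum(weights.values())
    if total == 0:
        return 0.0
    agree = sum(w for c, w in weights.items()
                if a.get(c) is not None and a.get(c) == b.get(c))
    return agree / total
```

A standard clustering routine (e.g. k-medoids) could then consume `1 - global_similarity` as its distance, which is where the paper's constraint awareness would enter an otherwise general-purpose algorithm.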

Relevance:

10.00%

Publisher:

Abstract:

Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems, where illumination and weather changes can cause feature-based techniques to fail. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, where both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail, use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.

Relevance:

10.00%

Publisher:

Abstract:

The dynamic capabilities view (DCV) focuses on renewal of firms’ strategic knowledge resources so as to sustain competitive advantage within turbulent markets. Within the context of the DCV, the focus of knowledge management (KM) is to develop the knowledge management capability (KMC) by deploying knowledge governance mechanisms that are conducive to facilitating knowledge processes, so as to produce superior business performance over time. The essence of KM performance evaluation is to assess how well the KMC is configured with knowledge governance mechanisms and processes that enable a firm to achieve superior performance by matching its knowledge base with market needs. However, little research has been undertaken to evaluate KM performance from the DCV perspective. This study employed a survey design and adopted hypothesis-testing approaches to develop a capability-based KM evaluation framework (CKMEF) that upholds the basic assertions of the DCV. Under the governance of the framework, a KM index (KMI) and a KM maturity model (KMMM) were derived not only to indicate the extent to which a firm’s KM implementations fulfill its strategic objectives and to identify the evolutionary phase of its KMC, but also to benchmark the KMC in the research population. The research design ensured that the evaluation framework and instruments have statistical significance and good generalizability to be applied in the research population, namely construction firms operating in the dynamic Hong Kong construction market. The study demonstrated the feasibility of quantitatively evaluating the development of the KMC and revealing the performance heterogeneity associated with the development.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
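The computational claim in points (ii) and (iii) can be illustrated with a minimal sketch: under circular convolution, pre-filtering both images with a bank of filters collapses into a single diagonal Fourier weighting S = Σᵢ |F{gᵢ}|², so the per-comparison cost no longer depends on the number of filters. This is illustrative NumPy code under that circular-convolution assumption, not the authors' implementation.

```python
import numpy as np


def filter_bank_weights(filters, shape):
    """Precompute the diagonal Fourier weighting S = sum_i |F{g_i}|^2.

    This is the collapse the FLK abstract describes: the whole filter bank
    becomes one diagonal weighting matrix, computed once.
    """
    S = np.zeros(shape)
    for g in filters:
        G = np.fft.fft2(g, s=shape)
        S += np.abs(G) ** 2
    return S


def weighted_ssd(a, b, S):
    """SSD between the filtered images, computed entirely in the Fourier domain."""
    D = np.fft.fft2(a) - np.fft.fft2(b)
    # Parseval: spatial-domain SSD = (1/N) * sum of squared spectrum magnitudes
    return float(np.sum(S * np.abs(D) ** 2) / a.size)


def ssd_spatial(a, b, filters):
    """Reference: filter each image explicitly (circularly), then sum the SSDs."""
    total = 0.0
    for g in filters:
        G = np.fft.fft2(g, s=a.shape)
        fa = np.real(np.fft.ifft2(np.fft.fft2(a) * G))
        fb = np.real(np.fft.ifft2(np.fft.fft2(b) * G))
        total += float(np.sum((fa - fb) ** 2))
    return total
```

The two routes agree to floating-point precision, but `weighted_ssd` does one FFT per image regardless of how many filters are in the bank, which is the efficiency argument the abstract makes.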

Relevance:

10.00%

Publisher:

Abstract:

Relevance features and ontologies are two core components for learning personalized ontologies for concept-based retrieval. However, how to associate a user's local information with common knowledge remains an open issue. This paper proposes a sound solution by matching relevance features mined from local instances against concepts in a global knowledge base. The matched concepts and their relations are used to learn personalized ontologies. The proposed method is evaluated thoroughly by comparing it against three benchmark models. The evaluation demonstrates that the matching is successful, achieving remarkable improvements on information filtering measures.

Relevance:

10.00%

Publisher:

Abstract:

Knowledge has been widely recognised as a determinant of business performance. Business capabilities require effective sharing of resources and knowledge. Specifically, knowledge sharing (KS) between different companies and departments can improve manufacturing processes, since intangible knowledge plays an essential role in achieving competitive advantage. This paper presents a mixed-method research study into the impact of KS on the effectiveness of new product development (NPD) in achieving desired business performance (BP). Firstly, an empirical study utilising moderated regression analysis was conducted to test whether, and to what extent, KS has leveraging power on the relationship between NPD and BP constructs and variables. Secondly, this empirically verified hypothesis was validated through explanatory case studies involving two Taiwanese manufacturing companies, using a qualitative interaction-term pattern matching technique. The study provides evidence that knowledge sharing and management activities are essential for deriving competitive advantage in the manufacturing industry.
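The moderated regression this abstract mentions amounts to fitting BP on NPD, KS, and their product: a non-zero interaction coefficient is what "leveraging power" means, since the NPD-to-BP slope then shifts with the level of KS. A minimal sketch, assuming the variables are survey scores held in NumPy arrays (variable names follow the abstract; nothing here reproduces the study's actual model):

```python
import numpy as np


def moderated_regression(npd, ks, bp):
    """OLS fit of BP on NPD, KS, and their interaction (mean-centred).

    Mean-centring the predictors before forming the product term is the usual
    device to reduce collinearity between the main effects and the interaction.
    Returns [intercept, b_NPD, b_KS, b_interaction].
    """
    npd_c, ks_c = npd - npd.mean(), ks - ks.mean()
    X = np.column_stack([np.ones_like(npd), npd_c, ks_c, npd_c * ks_c])
    beta, *_ = np.linalg.lstsq(X, bp, rcond=None)
    return beta
```

In practice one would also test the interaction coefficient's significance (e.g. with a t-test on `b_interaction`), which is the step the study's hypothesis testing would hinge on.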

Relevance:

10.00%

Publisher:

Abstract:

Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked, except for direct equivalent pages on the same subject in different languages. This can pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or the other. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task. It involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation.
With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments, carried out as part of the study, on better automatic generation of cross-lingual links. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important in CLLD evaluation because it helps benchmark the performance of various CLLD systems and identify good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of this kind.
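Contribution 1) can be sketched concretely: pointwise mutual information between adjacent characters is low where the pair rarely co-occurs, suggesting a word boundary. The sketch below uses bigrams only and a simple threshold; it is an illustrative reading of the idea, and the thesis's exact statistic and thresholding may differ.

```python
import math
from collections import Counter


def boundary_scores(text):
    """Pointwise mutual information for each adjacent character pair.

    PMI(x, y) = log( p(xy) / (p(x) * p(y)) ); low values mark pairs that
    co-occur less than chance, i.e. likely word boundaries.
    """
    uni = Counter(text)
    bi = Counter(text[i:i + 2] for i in range(len(text) - 1))
    n_uni, n_bi = len(text), max(len(text) - 1, 1)
    scores = []
    for i in range(len(text) - 1):
        p_xy = bi[text[i:i + 2]] / n_bi
        p_x, p_y = uni[text[i]] / n_uni, uni[text[i + 1]] / n_uni
        scores.append(math.log(p_xy / (p_x * p_y)))
    return scores


def segment(text, threshold=0.0):
    """Cut the text wherever the PMI of an adjacent pair falls below threshold."""
    words, start = [], 0
    for i, s in enumerate(boundary_scores(text)):
        if s < threshold:
            words.append(text[start:i + 1])
            start = i + 1
    words.append(text[start:])
    return words
```

In a realistic setting the counts would come from a large corpus rather than the input string itself; the per-string counting here just keeps the sketch self-contained.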

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Undergraduate students studying the Bachelor of Radiation Therapy at Queensland University of Technology (QUT) attend clinical placements at a number of department sites across Queensland. To ensure that the curriculum prepares students for the most common treatments and current techniques in use in these departments, a curriculum matching exercise was performed. Methods: A cross-sectional census was performed on a pre-determined “Snapshot” date in 2012. This was undertaken by the clinical education staff in each department, who used a standardized proforma to count the number of patients as well as prescription, equipment, and technique data for a list of tumour site categories. This information was combined into aggregate anonymized data. Results: All 12 Queensland radiation therapy clinical sites participated in the Snapshot data collection exercise to produce a comprehensive overview of clinical practice on the chosen day. A total of 59 different tumour sites were treated on the chosen day, and as expected the most common treatment sites were prostate and breast, comprising 46% of patients treated. Data analysis also indicated that intensity-modulated radiotherapy (IMRT) use is relatively high, with 19.6% of patients receiving IMRT treatment on the chosen day. Both IMRT and image-guided radiotherapy (IGRT) indications matched recommendations from the evidence. Conclusion: The Snapshot method proved to be a feasible and efficient method of gathering useful data.

Relevance:

10.00%

Publisher:

Abstract:

Small-angle and ultra-small-angle neutron scattering (SANS and USANS), low-pressure adsorption (N2 and CO2), and high-pressure mercury intrusion measurements were performed on a suite of North American shale reservoir samples providing the first ever comparison of all these techniques for characterizing the complex pore structure of shales. The techniques were used to gain insight into the nature of the pore structure including pore geometry, pore size distribution and accessible versus inaccessible porosity. Reservoir samples for analysis were taken from currently-active shale gas plays including the Barnett, Marcellus, Haynesville, Eagle Ford, Woodford, Muskwa, and Duvernay shales. Low-pressure adsorption revealed strong differences in BET surface area and pore volumes for the sample suite, consistent with variability in composition of the samples. The combination of CO2 and N2 adsorption data allowed pore size distributions to be created for micro–meso–macroporosity up to a limit of ~1000 Å. Pore size distributions are either uni- or multi-modal. The adsorption-derived pore size distributions for some samples are inconsistent with mercury intrusion data, likely owing to a combination of grain compression during high-pressure intrusion, and the fact that mercury intrusion yields information about pore throat rather than pore body distributions. SANS/USANS scattering data indicate a fractal geometry (power-law scattering) for a wide range of pore sizes and provide evidence that nanometer-scale spatial ordering occurs in lower mesopore–micropore range for some samples, which may be associated with inter-layer spacing in clay minerals. SANS/USANS pore radius distributions were converted to pore volume distributions for direct comparison with adsorption data. For the overlap region between the two methods, the agreement is quite good. 
Accessible porosity in the pore size (radius) range 5 nm–10 μm was determined for a Barnett shale sample using the contrast matching method with pressurized deuterated methane fluid. The results demonstrate that accessible porosity is pore-size dependent.

Relevance:

10.00%

Publisher:

Abstract:

As the use of fiducial markers (FMs) for the localisation of the prostate during external beam radiation therapy (EBRT) has become part of routine practice, radiation therapists (RTs) have become increasingly responsible for online image interpretation. The aim of this investigation was to quantify the limits of agreement (LoA) between RTs when localising to FMs with orthogonal kilovoltage (kV) imaging. Methods: Six patients receiving prostate EBRT utilising FMs were included in this study. Treatment localisation was performed using kV imaging prior to each fraction. Online stereoscopic assessment of FMs, performed by the treating RTs, was compared with the offline assessment by three RTs. Observer agreement was determined by pairwise Bland-Altman analysis. Results: Stereoscopic analysis of 225 image pairs was performed online at the time of treatment, and offline by three RT observers. Eighteen pairwise Bland-Altman analyses were completed to assess the level of agreement between observers. Localisation by RTs was found to be within clinically acceptable 95% LoAs. Conclusions: Small differences between RTs, in both the online and offline setting, were found to be within clinically acceptable limits. RTs were able to make consistent and reliable judgements when matching FMs on planar kV imaging.
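The Bland-Altman limits of agreement used in this study follow a standard formula: the mean of the paired differences (the bias) plus or minus 1.96 standard deviations of those differences. A minimal sketch, taking two observers' measurements as plain lists (the clinical acceptability threshold is a separate judgement applied to these limits, not computed here):

```python
import math


def bland_altman_limits(x, y):
    """95% limits of agreement between two observers' paired measurements.

    Returns (lower limit, bias, upper limit), where bias is the mean
    difference and the limits are bias +/- 1.96 * SD of the differences.
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    bias = sum(d) / n
    sd = math.sqrt(sum((v - bias) ** 2 for v in d) / (n - 1))  # sample SD
    return bias - 1.96 * sd, bias, bias + 1.96 * sd
```

The study's eighteen pairwise analyses would correspond to calling such a function once per observer pair and measurement axis.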

Relevance:

10.00%

Publisher:

Abstract:

Australian agriculture is faced with the dilemma of increasing food production for a growing domestic and world population while decreasing environmental impacts and supporting the social and economic future of regional communities. The challenge for farmers is compounded by declining rates of productivity growth, which have been linked to changes in climate and decreasing investment in agricultural research. The answer must lie in understanding the ecological functionality of landscapes and matching the management of agricultural systems and use of natural resources to landscape capacity in a changing climate. A simplified mixed grain and livestock farm case study is used to illustrate the challenges of assessing the potential for shifts in land allocation between commodities to achieve sustainable intensification of nutrition production. This study highlights the risks associated with overly simplistic solutions and the need for increased investment in research to inform the development of practical strategies for increasing food production in Australian agro-ecosystems while managing the impacts of climate change and addressing climate change mitigation policies.

Relevance:

10.00%

Publisher:

Abstract:

Objective: To investigate the density of the primary epidermal lamellae (PEL) around the solar circumference of the forefeet of near-term fetal feral and nonferal (i.e., domesticated) horses. Sample: Left forefeet from near-term Australian feral (n = 14) and domesticated (4) horse fetuses. Procedures: Near-term feral horse fetuses were obtained from culled mares within 10 minutes of death; fetuses that had died in utero 2 weeks prior to the anticipated birth date and were delivered from live Thoroughbred mares were also obtained. Following disarticulation at the carpus, the left forefoot of each fetus was kept frozen during dissection and data collection. In a standard section of each hoof, the stratum internum PEL density was calculated at the midline center (12 o'clock) and the medial and lateral break-over points (11 and 1 o'clock), toe quarters (10 and 2 o'clock), and quarters (4 and 6 o'clock). Values for matching lateral and medial zones were averaged and expressed as 1 density. Density differences at the 4 locations between the feral and domesticated horse feet were assessed by use of imaging software analysis. Results: In fetal domesticated horse feet, PEL density did not differ among the 4 locations. In fetal feral horse feet, PEL density differed significantly among locations, with a pattern of gradual reduction from the dorsal to the palmar aspect of the foot. The PEL density distribution differed significantly between fetal domesticated and feral horse feet. Conclusions and Clinical Relevance: Results indicated that PEL density distribution differs between fetal feral and domesticated horse feet, suggestive of an adaptation of feral horses to environmental challenges.

Relevance:

10.00%

Publisher:

Abstract:

The term Design Led Innovation is emerging as a fundamental business process, which is rapidly being adopted by large firms as well as small to medium-sized firms. The value that design brings to an organisation is a different way of thinking, of framing situations and possibilities, doing things and tackling problems: essentially a cultural transformation of the way the firm undertakes its business. Being Design Led is increasingly seen by business as a driver of company growth, allowing firms to provide a strong point of difference to their stakeholders. Achieving this Design Led process requires strong leadership to enable the organisation to develop a clear vision for top-line growth, based on deep customer insights and expanded through customer and stakeholder engagements, the outcomes of which are then adopted by all aspects of the business. To achieve this goal, several tools and processes are available, which need to be linked to new organisational capabilities within a business transformation context. The Design Led Innovation Team focuses on embedding tools and processes within an organisation, and matching this with design leadership qualities, to enable companies to create breakthrough innovation and achieve sustained growth through ultimately transforming their business model. As all information for these case studies was derived from publicly accessed data, this resource is not intended to be used as reference material, but rather as a learning tool for designers to begin to consider and explore businesses at a strategic level. It is not the results that are key, but rather the process and philosophies that were used to create these case studies and disseminate this way of thinking amongst the design community. It is this process of unpacking a business, guided by the framework of Osterwalder’s Business Model Canvas, which provides an important tool for designers to gain a greater perspective of a company’s true innovation potential.

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of automatically estimating the relative pose between a push-broom LIDAR and a camera without the need for artificial calibration targets or other human intervention. Further, we do not require the sensors to have an overlapping field of view; it is enough that they observe the same scene, at different times, from a moving platform. Matching between sensor modalities is achieved without feature extraction. We present results from field trials which suggest that this new approach achieves an extrinsic calibration accuracy of millimeters in translation and deci-degrees in rotation.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a new simulation methodology for obtaining exact or approximate Bayesian inference for models of low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with the observed data one at a time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, as is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm creates an unbiased estimate of the likelihood, resulting in exact posterior inferences when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
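The "simulate until $N+1$ exact matches" device relates to a classical fact about negative-binomial sampling: if you stop after $r$ successes in $t$ trials, the unbiased estimator of the success probability is $(r-1)/(t-1)$, so with $r = N+1$ matches the estimate is $N/(t-1)$. The sketch below illustrates that estimator for a single observation; `simulate` is a hypothetical interface standing in for one draw from the model, and this is not the paper's full particle filter.

```python
import random


def match_probability_estimate(simulate, observed, n_particles, rng):
    """Simulate until n_particles + 1 exact matches; return the unbiased estimate.

    With r = N + 1 matches obtained after `trials` draws, the classical
    unbiased negative-binomial estimator (r - 1) / (trials - 1) becomes
    N / (trials - 1). Unbiasedness of such per-observation estimates is
    what makes a pseudo-marginal MCMC scheme target the exact posterior.
    """
    matches, trials = 0, 0
    while matches < n_particles + 1:
        trials += 1
        if simulate(rng) == observed:
            matches += 1
    return n_particles / (trials - 1)
```

Averaging many independent runs of this estimator recovers the true match probability, which a plug-in estimate such as `(N + 1) / trials` would not do exactly.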