422 results for "web app, matching domanda offerta"


Relevance: 20.00%

Abstract:

In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, thereby improving the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
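The greedy scheduling idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not WebPut's actual implementation: `score` and `run_query` are invented stand-ins for the paper's confidence-based query selection and its web querying step.

```python
# Hypothetical sketch of greedy imputation scheduling: repeatedly impute
# the missing cell whose best imputation query currently has the highest
# confidence. score() and run_query() are stand-ins for the paper's
# confidence scheme and web querying, which are not specified here.
def greedy_impute(missing_cells, score, run_query):
    filled = {}
    remaining = set(missing_cells)
    while remaining:
        # Pick the cell we can fill most confidently right now.
        cell = max(remaining, key=lambda c: score(c, filled)[0])
        conf, query = score(cell, filled)
        filled[cell] = run_query(query)
        remaining.remove(cell)
    return filled

# Toy demonstration with fixed confidences and canned answers.
confidences = {"city": (0.9, "q_city"), "zip": (0.6, "q_zip")}
answers = {"q_city": "Brisbane", "q_zip": "4000"}
result = greedy_impute(
    ["zip", "city"],
    score=lambda cell, filled: confidences[cell],
    run_query=lambda q: answers[q],
)
print(list(result))  # → ['city', 'zip'] (highest confidence imputed first)
```

In the real system the confidences would be re-estimated as values are filled in, which is why `score` receives the cells imputed so far.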

Relevance: 20.00%

Abstract:

Many existing information retrieval models do not explicitly take into account information about word associations. Our approach makes use of first- and second-order relationships found in natural language, known as syntagmatic and paradigmatic associations, respectively. This is achieved by using a formal model of word meaning within the query expansion process. On ad hoc retrieval, our approach achieves statistically significant improvements in MAP (0.158) and P@20 (0.396) over our baseline model. The ERR@20 and nDCG@20 of our system were 0.249 and 0.192 respectively. Our results and discussion suggest that information about both syntagmatic and paradigmatic associations can assist with improving retrieval effectiveness on ad hoc retrieval.
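As a toy illustration of first-order (syntagmatic) associations, the sketch below expands a query with terms that frequently co-occur with its terms in documents. This is only a minimal co-occurrence heuristic, not the paper's formal model of word meaning; the corpus and function names are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy corpus; in practice this would be the retrieval collection.
docs = [
    "machine learning improves search ranking",
    "search engines use ranking models",
    "learning models improve search",
]

# Syntagmatic (first-order) associations: words that co-occur in the
# same document are considered related, weighted by co-occurrence count.
cooc = Counter()
for doc in docs:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def expand(query_terms, k=2):
    """Expand a query with its top-k syntagmatic associates."""
    scores = Counter()
    for q in query_terms:
        for (a, b), c in cooc.items():
            if a == q:
                scores[b] += c
    extra = [w for w, _ in scores.most_common() if w not in query_terms][:k]
    return list(query_terms) + extra

print(expand(["search"]))
```

Paradigmatic (second-order) associations would instead relate words that share similar co-occurrence profiles, e.g. by comparing the rows of this co-occurrence table.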

Relevance: 20.00%

Abstract:

The LiteSteel Beam (LSB) is a new hollow flange section developed in Australia with a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. The LSB is subject to a relatively new lateral distortional buckling (LDB) mode when used as a flexural member. Unlike the commonly observed lateral torsional buckling, lateral distortional buckling of LSBs is characterised by cross-sectional change due to web distortion. Lateral distortional buckling causes significant moment capacity reduction for LSBs with intermediate spans. Therefore a detailed investigation was undertaken to determine methods of reducing the effects of lateral distortional buckling in LSB flexural members. For this purpose the use of web stiffeners was investigated using finite element analyses of LSBs with different web stiffener spacings and sizes. It was found that the use of 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges at third span points considerably reduced the lateral distortional buckling effects in LSBs. Suitable design rules were then developed to calculate the enhanced elastic lateral distortional buckling moments and the higher ultimate moment capacities of LSBs with the chosen web stiffener arrangement. This paper presents the details of this investigation and its results.

Relevance: 20.00%

Abstract:

Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k nearest neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of significance of the query dimensions and terminating when the partial distance exceeds the search threshold. This method does not require preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called d-D sort. The order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. The d-D sort structure is very simple to implement, requires no parameter tuning, takes significantly less time to construct than the best-performing tree structure, and data can be added to it relatively efficiently.
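The first method (partial-distance evaluation with early termination) can be sketched roughly as follows. The dimension-ordering heuristic used here (largest deviation of the query from the data mean first) is an assumption for illustration, not the paper's exact significance measure.

```python
import numpy as np

def exact_nn_partial(query, data, dim_order=None):
    """Exact nearest neighbour by linear search with early termination:
    accumulate the squared distance dimension by dimension and abandon a
    candidate as soon as the partial sum exceeds the best distance found
    so far. Dimensions are visited in a 'significance' order (here a
    simple heuristic: where the query deviates most from the data mean)."""
    if dim_order is None:
        dim_order = np.argsort(-np.abs(query - data.mean(axis=0)))
    best_idx, best_dist = -1, float("inf")
    for i, row in enumerate(data):
        partial = 0.0
        for d in dim_order:
            partial += (query[d] - row[d]) ** 2
            if partial >= best_dist:  # cannot beat current best: abandon
                break
        else:  # loop completed: this candidate is the new best
            best_idx, best_dist = i, partial
    return best_idx, best_dist

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 64))
query = rng.standard_normal(64)
idx, dist = exact_nn_partial(query, data)
```

The result is identical to brute-force linear search; only the number of per-dimension operations changes, since most candidates are abandoned after a few dimensions.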

Relevance: 20.00%

Abstract:

Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.

Relevance: 20.00%

Abstract:

An algorithm for computing dense correspondences between images of a stereo pair or image sequence is presented. The algorithm can make use of both standard matching metrics and the rank and census filters, two filters based on order statistics which have been applied to the image matching problem. Their advantages include robustness to radiometric distortion and amenability to hardware implementation. Results obtained using both real stereo pairs and a synthetic stereo pair with ground truth were compared. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
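The rank and census transforms themselves are standard order-statistic filters and can be sketched as below. The wrap-around handling of image borders (via `np.roll`) is a simplification for illustration; a real implementation would pad or skip border pixels.

```python
import numpy as np

def rank_transform(img, win=3):
    """Rank transform: each pixel becomes the count of neighbours in a
    win x win window whose intensity is below the centre pixel."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (shifted < img).astype(np.int32)
    return out

def census_transform(img, win=3):
    """Census transform: each pixel becomes a bit string encoding whether
    each neighbour is darker than the centre. Matching then compares bit
    strings by Hamming distance."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Hamming distance between two census codes."""
    return bin(int(a) ^ int(b)).count("1")
```

Because both transforms depend only on intensity *orderings*, any monotonically increasing radiometric distortion (gain or offset between cameras) leaves the transformed images unchanged, which is the source of the robustness the abstract describes.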

Relevance: 20.00%

Abstract:

The rank and census are two filters based on order statistics which have been applied to the image matching problem for stereo pairs. Advantages of these filters include their robustness to radiometric distortion and small amounts of random noise, and their amenability to hardware implementation. In this paper, a new matching algorithm is presented, which provides an overall framework for matching and is used to compare the rank and census techniques with standard matching metrics. The algorithm was tested using both real stereo pairs and a synthetic pair with ground truth. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.

Relevance: 20.00%

Abstract:

We sought to determine the impact of electrospinning parameters on a trustworthy criterion that could improve the applicability of fibrous scaffolds for tissue regeneration. We used an image analysis technique to elucidate the web permeability index (WPI) by modeling the formation of electrospun scaffolds. Poly(3-hydroxybutyrate) (P3HB) scaffolds were fabricated according to predetermined levels in a Taguchi orthogonal design. The material parameters were the polymer concentration, conductivity, and volatility of the solution; the processing parameters were the applied voltage and the nozzle-to-collector distance. Monitoring the WPI values showed that the pore interconnectivity decreased when the polymer concentration or the applied voltage was increased. The quality of the jet instability altered the pore numbers, areas, and other structural characteristics, all of which determined the scaffold porosity and pore interconnectivity. An initial drastic increase was observed in the WPI values because of the chain entanglement phenomenon above a 6 wt % P3HB content. Although the solution mixture significantly (p < 0.05) changed the scaffold architectural characteristics as a function of the solution viscosity and surface tension, it had a minor impact on the WPI values. The solution mixture ranked third in significance, and the nozzle-to-collector distance was found to be the least important factor.

Relevance: 20.00%

Abstract:

Production of nanofibrous polyacrylonitrile/calcium carbonate (PAN/CaCO3) nanocomposite web was carried out through a solution electrospinning process. Pore-generating nanoparticles were leached from the PAN matrices in a hydrochloric acid bath with the purpose of producing an ultimate nanoporous structure. The possible interaction between CaCO3 nanoparticles and PAN functional groups was investigated. The atomic absorption method was used to measure the amount of extracted CaCO3 nanoparticles. Morphological observation showed nanofibers of 270–720 nm in diameter containing nanopores of 50–130 nm. Monitoring the governing parameters statistically, it was found that the amount of extraction (ε) of CaCO3 increased as the web surface area (a) was broadened, according to a simple scaling law (ε = 3.18·a^0.4). The leaching process was maximized in the presence of 5% v/v of acid in the extraction bath and 5 wt % of CaCO3 in the polymer solution. Collateral effects of the extraction time and temperature showed exponential growth, with a favorable extremum at 50°C for 72 h. The concentration of dimethylformamide as the solvent had no significant impact on the extraction level.

Relevance: 20.00%

Abstract:

This paper reports research into teacher-librarians’ perceptions of using social media and Web 2.0 in teaching and learning. A pilot study was conducted with teacher-librarians in five government schools and five private schools in southeast Queensland. The findings revealed that there was a strong digital divide between government schools and private schools, with government schools suffering severe restrictions on the use of social media and Web 2.0, leading to an unsophisticated use of these technologies. It is argued that internet ‘over-blocking’ may lead to government school students not being empowered to manage risks in an open internet environment. Furthermore, their use of information for academic and recreational learning may be compromised. This has implications particularly for low socioeconomic students, leading to further inequity in the process and outcomes of Australian education.

Relevance: 20.00%

Abstract:

This paper presents the details of numerical studies on the shear behaviour and strength of lipped channel beams (LCBs) with stiffened web openings. Over the last couple of decades, cold-formed steel beams have been used extensively in residential, industrial and commercial buildings as primary load-bearing structural components. Their shear strengths are considerably reduced when web openings are included for the purpose of locating building services. Our research has shown that the shear strengths of LCBs were reduced by up to 70% due to the inclusion of web openings. Hence there is a need to improve the shear strengths of LCBs with web openings. A cost-effective way to mitigate the detrimental effects of a large web opening is to attach appropriate stiffeners around the opening in order to restore the original shear strength and stiffness of the LCB. Hence numerical studies were undertaken to investigate the shear strengths of LCBs with stiffened web openings. In this research, finite element models of LCBs with stiffened web openings in shear were developed to simulate their shear behaviour and strength. Various stiffening methods using plate and LCB stud stiffeners attached to LCBs by screw-fastening were attempted. The developed models were then validated by comparing their results with experimental results, and were used in parametric studies. Both finite element analysis and experimental results showed that the stiffening arrangements recommended by past research for cold-formed steel channel beams are not adequate to restore the shear strengths of LCBs with web openings. Therefore new stiffener arrangements were proposed for LCBs with web openings based on experimental and finite element analysis results. This paper presents the details of the finite element models and analyses used in this research and the results, including the recommended stiffener arrangements.

Relevance: 20.00%

Abstract:

Purpose - Researchers debate whether tacit knowledge sharing through Information Technology (IT) is actually possible. However, with the advent of social web tools, it has been argued that most shortcomings of tacit knowledge sharing are likely to disappear. This paper has two purposes: firstly, to demonstrate the existing debates in the literature regarding tacit knowledge sharing using IT, and secondly, to identify key research gaps that lay the foundations for future research into tacit knowledge sharing using the social web. Design/methodology/approach - This paper reviews the current literature on IT-mediated tacit knowledge sharing and opens a discussion on tacit knowledge sharing through the use of the social web. Findings - First, the existing schools of thought regarding IT's ability to support tacit knowledge sharing are introduced. Next, difficulties of sharing tacit knowledge through the use of IT are discussed. Then, potentials and pitfalls of social web tools are presented. Finally, the paper concludes that whilst there are significant theoretical arguments supporting the view that the social web facilitates tacit knowledge sharing, there is a lack of empirical evidence to support these arguments and further work is required. Research limitations/implications - The limitations of the review include: covering only papers that were published in English, issues of access to the full texts of some resources, and the possibility of missing some resources due to the search strings used or the limited coverage of the databases searched. Originality/value - The paper contributes to the fast-growing literature on the intersection of KM and IT, particularly by focusing on tacit knowledge sharing in the social media space. The paper highlights the need for further studies in this area by discussing the current situation in the literature and disclosing the emerging questions and gaps for future studies.

Relevance: 20.00%

Abstract:

Currently, recommender systems (RS) have been widely applied in many commercial e-commerce sites to help users deal with the information overload problem. Recommender systems provide personalized recommendations to users and thus help them in making good decisions about which product to buy from the vast number of product choices available to them. Many of the current recommender systems are developed for simple and frequently purchased products like books and videos, by using collaborative-filtering and content-based recommender system approaches. These approaches are not suitable for recommending luxurious and infrequently purchased products as they rely on a large amount of ratings data that is not usually available for such products. This research aims to explore novel approaches for recommending infrequently purchased products by exploiting user generated content such as user reviews and product click streams data. From reviews on products given by the previous users, association rules between product attributes are extracted using an association rule mining technique. Furthermore, from product click streams data, user profiles are generated using the proposed user profiling approach. Two recommendation approaches are proposed based on the knowledge extracted from these resources. The first approach is developed by formulating a new query from the initial query given by the target user, by expanding the query with the suitable association rules. In the second approach, a collaborative-filtering recommender system and search-based approaches are integrated within a hybrid system. In this hybrid system, user profiles are used to find the target user’s neighbour and the subsequent products viewed by them are then used to search for other relevant products. Experiments have been conducted on a real world dataset collected from one of the online car sale companies in Australia to evaluate the effectiveness of the proposed recommendation approaches. 
The experimental results show that user profiles generated from user click stream data and association rules generated from user reviews can improve recommendation accuracy. In addition, the experimental results also show that the proposed query expansion and the hybrid collaborative-filtering and search-based approaches perform better than the baseline approaches. Integrating the collaborative-filtering and search-based approaches has been challenging, as this strategy has not been widely explored so far, especially for recommending infrequently purchased products. Therefore, this research provides a theoretical contribution to the recommender system field through a new technique for combining collaborative-filtering and search-based approaches. This research also contributes a new query expansion technique for recommending infrequently purchased products, as well as a practical contribution in the form of a prototype system for recommending cars.
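The first approach, expanding the query with association rules mined from reviews, can be illustrated with a minimal sketch. The attribute transactions, thresholds, and function names below are invented for illustration, not taken from the thesis.

```python
from collections import Counter
from itertools import combinations

# Toy transactions: sets of product attributes mentioned together in
# user reviews. In the thesis these are extracted from real review text.
reviews = [
    {"suv", "awd", "diesel"},
    {"suv", "awd", "towbar"},
    {"sedan", "hybrid"},
    {"suv", "awd"},
]

def mine_rules(transactions, min_support=2, min_conf=0.6):
    """Mine single-antecedent association rules a -> b from transactions,
    keeping rules that meet minimum support and confidence."""
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        for a in t:
            item_count[a] += 1
        for a, b in combinations(sorted(t), 2):
            pair_count[(a, b)] += 1
            pair_count[(b, a)] += 1
    rules = {}
    for (a, b), c in pair_count.items():
        if c >= min_support and c / item_count[a] >= min_conf:
            rules.setdefault(a, []).append(b)
    return rules

def expand_query(query, rules):
    """Expand the initial query with the consequents of matching rules."""
    expanded = set(query)
    for term in query:
        expanded.update(rules.get(term, []))
    return expanded

rules = mine_rules(reviews)
print(expand_query({"suv"}, rules))  # → {'suv', 'awd'}
```

Here a query for "suv" is expanded with "awd" because the two attributes co-occur in enough reviews with high confidence, so the expanded query retrieves cars described with either attribute.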

Relevance: 20.00%

Abstract:

Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill-set from the annotator, as well as the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented these mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist.
The main output of this research is a web service discovery engine that implements these methods and a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
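As a baseline for the classic term-based retrieval approaches the thesis compares against, a minimal TF-IDF cosine-similarity matcher over service descriptions might look like the sketch below. The service entries and names are invented for illustration.

```python
import math
from collections import Counter

# Toy service descriptions; a real engine would index entries from a
# registry such as Programmable Web.
services = {
    "weather-api": "returns current weather forecast for a city",
    "geo-api": "maps a city name to latitude and longitude",
    "stock-api": "streams stock market price quotes",
}

def tfidf_vectors(texts):
    """Build simple TF-IDF vectors for a term-based baseline."""
    docs = {k: Counter(v.split()) for k, v in texts.items()}
    n = len(docs)
    df = Counter()
    for tf in docs.values():
        df.update(tf.keys())  # document frequency of each term
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return {k: {t: f * idf[t] for t, f in tf.items()} for k, tf in docs.items()}

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def discover(query, vectors):
    """Rank services by similarity to a keyword query."""
    qv = {t: 1.0 for t in query.split()}
    return sorted(vectors, key=lambda k: -cosine(qv, vectors[k]))

vecs = tfidf_vectors(services)
print(discover("city weather", vecs)[0])  # → weather-api
```

A semantics-based variant would replace the raw term vectors with, for example, vectors of Wikipedia concepts associated with the description, while keeping the same ranking machinery.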

Relevance: 20.00%

Abstract:

Purpose – The article aims to review a university course, offered to students in both Australia and Germany, to encourage them to learn about designing, implementing, marketing and evaluating information programs and services in order to build active and engaged communities. The concepts and processes of Web 2.0 technologies come together in the learning activities, with students establishing their own personal learning networks (PLNs). Design/methodology/approach – The case study examines the principles of learning and teaching that underpin the course and presents the students' own experiences of the challenges they faced as they explored the interactive, participative and collaborative dimensions of the web. Findings – The online format of the course and the philosophy of learning through play provided students with a safe and supportive environment for them to move outside of their comfort zones, to be creative, to experiment and to develop their professional personas. Reflection on learning was a key component that stressed the value of reflective practice in assisting library and information science (LIS) professionals to adapt confidently to the rapidly changing work environment. Originality/value – This study provides insights into the opportunities for LIS courses to work across geographical boundaries, to allow students to critically appraise library practice in different contexts and to become active participants in wider professional networks.