317 results for web-warping
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
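As a rough illustration of the kind of confidence-driven, greedy scheduling described above (the names and structure below are hypothetical, not WebPut's actual implementation), each iteration imputes the missing value whose best query is currently most confident, so that newly filled values can strengthen the queries issued later:

def impute_missing_values(missing_cells, queries_for):
    # Hypothetical sketch, not WebPut's actual code: greedily impute the cell
    # whose best imputation query currently has the highest confidence.
    pending = set(missing_cells)
    while pending:
        cell, query = max(
            ((c, max(queries_for(c), key=lambda q: q.confidence)) for c in pending),
            key=lambda pair: pair[1].confidence,
        )
        value = query.execute()      # issue the selected web search query
        if value is not None:
            cell.fill(value)         # a filled value can inform later queries
        pending.remove(cell)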
Abstract:
Many existing information retrieval models do not explicitly take into account information about word associations. Our approach makes use of first and second order relationships found in natural language, known as syntagmatic and paradigmatic associations, respectively. This is achieved by using a formal model of word meaning within the query expansion process. On ad hoc retrieval, our approach achieves statistically significant improvements in MAP (0.158) and P@20 (0.396) over our baseline model. The ERR@20 and nDCG@20 of our system were 0.249 and 0.192 respectively. Our results and discussion suggest that information about both syntagmatic and paradigmatic associations can assist with improving retrieval effectiveness on ad hoc retrieval.
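For readers less familiar with the retrieval metrics quoted above, the short Python sketch below computes nDCG@k under one common formulation (graded relevance discounted by log rank); it is illustrative only and is not the evaluation code used in the paper:

import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k graded relevance judgements.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalise by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: graded judgements (0-3) for the top five results of one query.
print(ndcg_at_k([3, 2, 0, 1, 2], k=5))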
Abstract:
The LiteSteel Beam (LSB) is a new hollow flange section developed in Australia with a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. When used as a flexural member, the LSB is subject to a relatively new Lateral Distortional Buckling (LDB) mode. Unlike the commonly observed lateral torsional buckling, lateral distortional buckling of LSBs is characterised by cross-sectional change due to web distortion. Lateral distortional buckling causes significant moment capacity reduction for LSBs with intermediate spans. Therefore a detailed investigation was undertaken to determine methods of reducing the effects of lateral distortional buckling in LSB flexural members. For this purpose the use of web stiffeners was investigated using finite element analyses of LSBs with different web stiffener spacings and sizes. It was found that the use of 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges at third-span points considerably reduced the lateral distortional buckling effects in LSBs. Suitable design rules were then developed to calculate the enhanced elastic lateral distortional buckling moments and the higher ultimate moment capacities of LSBs with the chosen web stiffener arrangement. This paper presents the details of this investigation and the results.
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of the cross-lingual link discovery task; it involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple, but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments, carried out as part of the study, on the improved automatic generation of cross-lingual links. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important for CLLD evaluation because it helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of this kind.
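The n-gram mutual information idea mentioned in contribution 1) can be sketched roughly as follows; the counting details, smoothing and threshold here are illustrative assumptions rather than the thesis's exact method:

import math
from collections import Counter

def segment_by_mutual_information(text, corpus, threshold=0.0):
    # Illustrative sketch: place a word boundary between adjacent characters
    # whose pointwise mutual information in the corpus falls below a threshold.
    if not text:
        return []
    unigrams = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    total_uni, total_bi = sum(unigrams.values()), sum(bigrams.values())

    def pmi(a, b):
        # Add-one smoothing so unseen characters or pairs do not give log(0).
        p_ab = (bigrams[a + b] + 1) / (total_bi + 1)
        p_a = (unigrams[a] + 1) / (total_uni + 1)
        p_b = (unigrams[b] + 1) / (total_uni + 1)
        return math.log(p_ab / (p_a * p_b))

    segments, current = [], text[0]
    for prev, nxt in zip(text, text[1:]):
        if pmi(prev, nxt) >= threshold:
            current += nxt            # strong association: keep in the same word
        else:
            segments.append(current)  # weak association: start a new word
            current = nxt
    segments.append(current)
    return segments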
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have proven the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer.
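As background, LDLA approaches of this kind typically rest on a mixed linear model of the general form below (generic notation, not necessarily the exact parameterisation used by the authors):

y = X\beta + Zu + e, \qquad u \sim N(0, G\sigma_u^2), \qquad e \sim N(0, I\sigma_e^2)

Here y holds the phenotypes, \beta the fixed effects, and u the random QTL or polygenic effects, whose covariance structure G is assembled from pedigree, marker and population-history (linkage disequilibrium) information; fitting such a model at many genome positions is what makes distributing the analyses over a computing grid worthwhile.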
Abstract:
This paper reports research into teacher-librarians’ perceptions of using social media and Web 2.0 in teaching and learning. A pilot study was conducted with teacher-librarians in five government schools and five private schools in southeast Queensland. The findings revealed that there was a strong digital divide between government schools and private schools, with government schools suffering severe restrictions on the use of social media and Web 2.0, leading to an unsophisticated use of these technologies. It is argued that internet ‘over-blocking’ may lead to government school students not being empowered to manage risks in an open internet environment. Furthermore, their use of information for academic and recreational learning may be compromised. This has implications particularly for students from low socioeconomic backgrounds, leading to further inequity in the process and outcomes of Australian education.
Abstract:
This paper presents the details of numerical studies on the shear behaviour and strength of lipped channel beams (LCBs) with stiffened web openings. Over the last couple of decades, cold-formed steel beams have been used extensively in residential, industrial and commercial buildings as primary load bearing structural components. Their shear strengths are considerably reduced when web openings are included for the purpose of locating building services. Our research has shown that the shear strengths of LCBs were reduced by up to 70% due to the inclusion of web openings. Hence there is a need to improve the shear strengths of LCBs with web openings. A cost-effective way to mitigate the detrimental effects of a large web opening is to attach appropriate stiffeners around the web opening in order to restore the original shear strength and stiffness of the LCB. Hence numerical studies were undertaken to investigate the shear strengths of LCBs with stiffened web openings. In this research, finite element models of LCBs with stiffened web openings in shear were developed to simulate the shear behaviour and strength of LCBs. Various stiffening methods using plate and LCB stud stiffeners attached to LCBs using screw fastening were attempted. The developed models were then validated by comparing their results with experimental results and used in parametric studies. Both finite element analysis and experimental results showed that the stiffening arrangements recommended by past research for cold-formed steel channel beams are not adequate to restore the shear strengths of LCBs with web openings. Therefore new stiffener arrangements were proposed for LCBs with web openings based on experimental and finite element analysis results. This paper presents the details of the finite element models and analyses used in this research and the results, including the recommended stiffener arrangements.
Abstract:
Purpose - Researchers debate whether tacit knowledge sharing through Information Technology (IT) is actually possible. However, with the advent of social web tools, it has been argued that most shortcomings of tacit knowledge sharing are likely to disappear. This paper has two purposes: firstly, to demonstrate the existing debates in the literature regarding tacit knowledge sharing using IT, and secondly, to identify key research gaps that lay the foundations for future research into tacit knowledge sharing using the social web. Design/methodology/approach - This paper reviews current literature on IT-mediated tacit knowledge sharing and opens a discussion on tacit knowledge sharing through the use of the social web. Findings - First, the existing schools of thought regarding the ability of IT to support tacit knowledge sharing are introduced. Next, difficulties of sharing tacit knowledge through the use of IT are discussed. Then, the potentials and pitfalls of social web tools are presented. Finally, the paper concludes that whilst there are significant theoretical arguments supporting the view that the social web facilitates tacit knowledge sharing, there is a lack of empirical evidence to support these arguments and further work is required. Research limitations/implications - The limitations of the review include: covering only papers that were published in English, issues of access to the full texts of some resources, and the possibility of missing some resources due to the search strings used or the limited coverage of the databases searched. Originality/value - The paper contributes to the fast-growing literature on the intersection of KM and IT, particularly by focusing on tacit knowledge sharing in the social media space. The paper highlights the need for further studies in this area by discussing the current situation in the literature and disclosing the emerging questions and gaps for future studies.
Abstract:
Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill-set from the annotator and also the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented these mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist. The main output of this research is a web service discovery engine that implements these methods and a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
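As context for the term-based baselines the thesis compares against, a minimal TF-IDF cosine matcher over service descriptions might look like the sketch below (illustrative only; it is not the thesis's discovery engine, and the tokenisation is deliberately naive):

import math
from collections import Counter

def tfidf_vectors(descriptions):
    # Build a sparse TF-IDF vector (dict of term -> weight) per description.
    tokenised = [d.lower().split() for d in descriptions]
    df = Counter(term for doc in tokenised for term in set(doc))
    n = len(tokenised)
    return [{t: tf[t] * math.log(n / df[t]) for t in tf}
            for tf in (Counter(doc) for doc in tokenised)]

def cosine(a, b):
    # Cosine similarity between two sparse vectors; 0.0 if either is empty.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

# Example: rank candidate service descriptions against a keyword query.
services = ["weather forecast by city", "currency exchange rates", "city traffic and weather alerts"]
vectors = tfidf_vectors(services + ["weather for a city"])
query_vec, service_vecs = vectors[-1], vectors[:-1]
print(sorted(range(len(services)), key=lambda i: cosine(query_vec, service_vecs[i]), reverse=True))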
Abstract:
Purpose – The article aims to review a university course, offered to students in both Australia and Germany, to encourage them to learn about designing, implementing, marketing and evaluating information programs and services in order to build active and engaged communities. The concepts and processes of Web 2.0 technologies come together in the learning activities, with students establishing their own personal learning networks (PLNs). Design/methodology/approach – The case study examines the principles of learning and teaching that underpin the course and presents the students' own experiences of the challenges they faced as they explored the interactive, participative and collaborative dimensions of the web. Findings – The online format of the course and the philosophy of learning through play provided students with a safe and supportive environment for them to move outside of their comfort zones, to be creative, to experiment and to develop their professional personas. Reflection on learning was a key component that stressed the value of reflective practice in assisting library and information science (LIS) professionals to adapt confidently to the rapidly changing work environment. Originality/value – This study provides insights into the opportunities for LIS courses to work across geographical boundaries, to allow students to critically appraise library practice in different contexts and to become active participants in wider professional networks.
Abstract:
Introduction: Participants may respond to phases of a workplace walking program at different rates. This study evaluated the factors that contribute to the number of steps taken through the phases of the program. The intervention was automated through a web-based program designed to increase workday walking. Methods: The study reviewed independent variable influences throughout phases I–III. A convenience sample of university workers (n=56; 43.6±1.7 years; BMI 27.44±2.15 kg/m2; 48 female) was recruited at worksites in Australia. These workers were given a pedometer (Yamax SW 200) and access to the website program. For analyses, step counts entered by workers into the website were downloaded and mean workday steps were compared using a seemingly unrelated regression. This model was employed to capture the contemporaneous correlation within individuals in the study across the observed time periods. Results: The model predicts that the 36 subjects with complete information took an average of 7460 steps in the baseline two-week period. After phase I, statistically significant increases in steps (from baseline) were explained by age, working status (full or part time), occupation (academic or professional), and self-reported public transport (PT) use (marginally significant). Full-time workers walked about 440 steps more than part-time workers, professionals walked about 300 steps more than academics, and PT users walked about 400 steps more than non-PT users. The ability to differentiate steps among participants after two weeks suggests a differential effect of the program after only two weeks. On average participants increased steps from week two to four by about 525 steps, but regular auto users took nearly 750 fewer steps than non-auto users at week four. The effect of age was diminished in the 4th week of observation and accounted for 34 steps per year of age. In phase III, discriminating between participants became more difficult, with only age effects differentiating their increase over baseline. The marginal effect of age by phase III, compared to phase I, increased from 36 to 50, suggesting a 14-step-per-year increase from the 2nd to the 6th week. Discussion: The findings suggest that participants responded to the program at different rates, with uniformity of effect achieved by the 6th week. Participants increased steps; however, a tapering off occurred over time. Age played the most consistent role in predicting steps over the program. PT use was associated with increased step counts, while auto use was associated with decreased step counts.
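For reference, the seemingly unrelated regression set-up mentioned above has the general form below (generic notation, not the authors'): one equation per observed period m, estimated jointly so that residuals for the same participant may correlate across periods:

y_m = X_m \beta_m + \varepsilon_m, \qquad m = 1, \dots, M, \qquad \operatorname{Cov}(\varepsilon_{im}, \varepsilon_{in}) = \sigma_{mn}

Joint estimation of the M equations by feasible generalized least squares exploits this cross-equation correlation, which is what allows step counts in different phases to be compared within the same individuals.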
Abstract:
Stigmergy is a biological term originally used when discussing insect or swarm behaviour; it describes a model of environment-based communication that separates artefacts from agents. This phenomenon is demonstrated in the behaviour of ants and their food foraging supported by pheromone trails, and similarly in termites and their nest-building process. What is interesting about this mechanism is that highly organized societies are formed without an apparent central management function. We see design features in Web sites that mimic stigmergic mechanisms as part of the user interface, and we have created generalizations of these patterns. Software development and Web site development techniques have evolved significantly over the past 20 years. Recent progress in this area proposes languages to model web applications in order to facilitate the nuances specific to these developments. These modeling languages provide a suitable framework for building reusable components that encapsulate our design patterns of stigmergy. We hypothesize that incorporating stigmergy as a separate feature of a site’s primary function will ultimately lead to enhanced user coordination.
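One way to read the pheromone-trail analogy in user-interface terms is sketched below; the names and constants are hypothetical and only illustrate the general mechanism (deposit on interaction, gradual evaporation, surfacing the strongest trails), not the specific patterns catalogued in the paper:

EVAPORATION = 0.95   # fraction of each trail that survives a decay step
DEPOSIT = 1.0        # trail strength added per user interaction

def record_click(trails, link_id):
    # A user interaction deposits 'pheromone' on the artefact (the link).
    trails[link_id] = trails.get(link_id, 0.0) + DEPOSIT

def decay(trails):
    # Periodic evaporation lets stale artefacts fade from the interface.
    for link_id in trails:
        trails[link_id] *= EVAPORATION

def top_links(trails, n=5):
    # The interface surfaces the links with the strongest trails,
    # coordinating users without any central management function.
    return sorted(trails, key=trails.get, reverse=True)[:n]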
Abstract:
Cold-formed steel members are increasingly used as primary structural elements in building industries around the world due to the availability of thin, high strength steels and advanced cold-forming technologies. Cold-formed lipped channel beams (LCBs) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. Current practice in flooring systems is to include openings in the web element of floor joists or bearers so that building services can be located within them. The shear behaviour of LCBs with web openings is more complicated, and their shear strengths are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed experimental study involving 40 shear tests was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Simply supported test specimens of LCBs with aspect ratios of 1.0 and 1.5 were loaded at mid-span until failure. This paper presents the details of this experimental study and the results of their shear capacities and behavioural characteristics. Experimental results showed that the current design rules in cold-formed steel structures design codes are very conservative for the shear design of LCBs with web openings. Improved design equations have been proposed for the shear strength of LCBs with web openings based on the experimental results from this study.
Abstract:
Cold-formed steel Lipped Channel Beams (LCB) with web openings are commonly used as floor joists and bearers in building structures. The shear behaviour of these beams is more complicated and their shear capacities are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed numerical study was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Finite element models of simply supported LCBs under a mid-span load with aspect ratios of 1.0 and 1.5 were developed and validated by comparing their results with test results. They were then used in a detailed parametric study to investigate the effects of various influential parameters. Experimental and numerical results showed that the current design rules in cold-formed steel structures design codes are very conservative. Improved design equations were therefore proposed for the shear strength of LCBs with web openings based on both experimental and numerical results. This paper presents the details of finite element modelling of LCBs with web openings, validation of finite element models, and the development of improved shear design rules. The proposed shear design rules in this paper can be considered for inclusion in the future versions of cold-formed steel design codes.
Abstract:
The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advancement in MIR systems has achieved some success in specific domains, particularly with content-based approaches, general Web users still struggle to find the images they want. Despite the success in content-based object recognition or concept extraction, the major problem in current Web image searching remains in the querying process. Since most online users only express their needs in semantic terms or objects, systems that utilize visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search because users may not always have a representative image at hand or in mind when starting a search (i.e. the page zero problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries to search.

The problem with query-based retrieval systems is that they only capture users’ information needs in terms of formal queries; the implicit and abstract parts of users’ information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and Web image searching is the most difficult compared to video or audio searches. Hence, online users need to put in more effort when searching multimedia content, especially for image searches. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views on how the majority of users search, their search needs or motivations are ultimately neglected. User studies on image searching have attempted to understand users’ search contexts in terms of users’ background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies typically focused on particular domains with a select group of professional users. General users’ Web image searching contexts and behaviors are little understood although they represent the majority of online image searching activities nowadays. We argue that only by understanding Web image users’ contexts can current Web search engines further improve their usefulness and provide more efficient searches.

In order to understand users’ search contexts, a user study was conducted based on university students’ Web image searching in News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users’ interests in people, time, events, locations, and objects. We investigated participants’ Web image searching behavior, with a focus on query reformulation and search strategies. Participants’ search contexts, such as their search background, motivation for search, and search outcomes, were gathered by questionnaires. The searching activity was recorded together with participants’ think-aloud data for analyzing significant search patterns. The relationships between participants’ search contexts and corresponding search strategies were discovered using a Grounded Theory approach.
Our key findings include the following aspects:
- Effects of users’ interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result expansion strategies
A contextual image searching model was constructed based on these findings. The model helped us understand Web image searching from the user's perspective, and it introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users’ query reformulation contexts can potentially contribute to more efficient searching.