980 results for Accessible Web
Abstract:
LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel section produced using a simultaneous cold-forming and dual electric resistance welding process. LSBs are commonly used as floor joists and bearers with web openings in residential, industrial and commercial buildings. Their shear strengths are considerably reduced when web openings are included for the purpose of locating building services. A cost effective method of eliminating the detrimental effects of a large web opening is to attach suitable stiffeners around the web openings of LSBs. Experimental and numerical studies were undertaken to investigate the shear behaviour and strength of LSBs with circular web openings reinforced using plate, stud, transverse and sleeve stiffeners with varying sizes and thicknesses. Both welding and varying screw-fastening arrangements were used to attach these stiffeners to the web of LSBs. Finite element models of LSBs with stiffened web openings in shear were developed to simulate the shear behaviour and strength of LSBs. They were then validated by comparison with experimental test results and used in a detailed parametric study. These studies have shown that plate stiffeners were the most suitable; however, their use based on the current American standards was found to be inadequate. Suitable screw-fastened plate stiffener arrangements with optimum thicknesses have been proposed for LSBs with web openings to restore their original shear capacity. This paper presents the details of the numerical study and the results.
Abstract:
Ghrelin is a multifunctional hormone, with roles in stimulating appetite and regulating energy balance, insulin secretion and glucose homeostasis. The ghrelin gene locus (GHRL) is highly complex and gives rise to a range of novel transcripts derived from alternative first exons and internally spliced exons. The wild-type transcript encodes a 117 amino acid preprohormone that is processed to yield the 28 amino acid peptide ghrelin. Here, we identified insulin-responsive transcription corresponding to cryptic exons in intron 2 of the human ghrelin gene. A transcript, termed in2c-ghrelin (intron 2-cryptic), was cloned from the testis and the LNCaP prostate cancer cell line. This transcript may encode an 83 amino acid preproghrelin isoform that codes for ghrelin, but not obestatin. It is expressed in a limited number of normal tissues and in tumours of the prostate, testis, breast and ovary. Finally, we confirmed that expression of the in2c-ghrelin transcript, as well as of the recently described in1-ghrelin transcript, is significantly upregulated by insulin in cultured prostate cancer cells. Metabolic syndrome and hyperinsulinaemia have been associated with prostate cancer risk and progression. This may be particularly significant after androgen deprivation therapy for prostate cancer, which induces hyperinsulinaemia, and this could contribute to castrate resistant prostate cancer growth. We have previously demonstrated that ghrelin stimulates prostate cancer cell line proliferation in vitro. This study is the first description of insulin regulation of a ghrelin transcript in cancer, and should provide further impetus for studies into the expression, regulation and function of ghrelin gene products.
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
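The confidence-based greedy scheduling described in the abstract can be illustrated with a minimal sketch. This is not WebPut's actual implementation; the `candidate_queries` and `run_query` callables and the `"confidence"` score are hypothetical stand-ins for the system's imputation-query machinery.

```python
def greedy_impute(missing_cells, candidate_queries, run_query):
    """Repeatedly impute the missing value whose best candidate query has
    the highest confidence, so earlier fills can inform later queries.

    candidate_queries(cell, filled) -> list of query dicts with a
    "confidence" key (hypothetical interface); run_query(query) -> value
    or None.
    """
    filled = {}
    remaining = set(missing_cells)
    while remaining:
        # Pick the (cell, query) pair with the highest confidence score.
        best = max(
            ((cell, q)
             for cell in remaining
             for q in candidate_queries(cell, filled)),
            key=lambda pair: pair[1]["confidence"],
            default=None,
        )
        if best is None:
            break  # no candidate queries left for any cell
        cell, query = best
        value = run_query(query)
        if value is not None:
            filled[cell] = value
        remaining.discard(cell)
    return filled
```

The point of the greedy ordering is that high-confidence values are committed first and become part of the context (`filled`) used to formulate queries for the harder cells.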
Abstract:
Many existing information retrieval models do not explicitly take into account information about word associations. Our approach makes use of first and second order relationships found in natural language, known as syntagmatic and paradigmatic associations, respectively. This is achieved by using a formal model of word meaning within the query expansion process. On ad hoc retrieval, our approach achieves statistically significant improvements in MAP (0.158) and P@20 (0.396) over our baseline model. The ERR@20 and nDCG@20 of our system were 0.249 and 0.192 respectively. Our results and discussion suggest that information about both syntagmatic and paradigmatic associations can assist with improving retrieval effectiveness on ad hoc retrieval.
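The two association types the abstract refers to can be sketched from plain co-occurrence counts: a first-order (syntagmatic) score is direct co-occurrence, while a second-order (paradigmatic) score compares two words' co-occurrence profiles. This toy illustration is not the paper's formal model of word meaning, only the underlying intuition.

```python
import math
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often each unordered word pair appears in the same sentence."""
    counts = Counter()
    for sent in sentences:
        for a, b in combinations(sorted(set(sent)), 2):
            counts[(a, b)] += 1
    return counts

def syntagmatic(w1, w2, counts):
    # First-order association: the words appear together.
    return counts[tuple(sorted((w1, w2)))]

def paradigmatic(w1, w2, counts, vocab):
    # Second-order association: cosine similarity of the two words'
    # co-occurrence profiles over the rest of the vocabulary.
    others = [w for w in vocab if w not in (w1, w2)]
    v1 = [syntagmatic(w1, w, counts) for w in others]
    v2 = [syntagmatic(w2, w, counts) for w in others]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0
```

Here "doctor" and "physician" never co-occur, but score highly paradigmatically because they share the same neighbours; that distinction is what lets an expansion process add substitutable terms rather than merely co-occurring ones.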
Abstract:
The LiteSteel Beam (LSB) is a new hollow flange section developed in Australia with a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. The LSB is subject to a relatively new Lateral Distortional Buckling (LDB) mode when used as a flexural member. Unlike the commonly observed lateral torsional buckling, lateral distortional buckling of LSBs is characterised by cross sectional change due to web distortion. Lateral distortional buckling causes significant moment capacity reduction for LSBs with intermediate spans. Therefore a detailed investigation was undertaken to determine the methods of reducing the effects of lateral distortional buckling in LSB flexural members. For this purpose the use of web stiffeners was investigated using finite element analyses of LSBs with different web stiffener spacing and sizes. It was found that the use of 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges at third span points considerably reduced the lateral distortional buckling effects in LSBs. Suitable design rules were then developed to calculate the enhanced elastic lateral distortional buckling moments and the higher ultimate moment capacities of LSBs with the chosen web stiffener arrangement. This paper presents the details of this investigation and the results.
Abstract:
The SimCalc Vision and Contributions, Advances in Mathematics Education, 2013, pp. 419-436. "Modeling as a Means for Making Powerful Ideas Accessible to Children at an Early Age" by Richard Lesh, Lyn English, Serife Sevis and Chanda Riggs. Abstract: In modern societies in the 21st century, significant changes have been occurring in the kinds of "mathematical thinking" that are needed outside of school. Even primary school children (grades K-2) encounter situations where numbers refer not only to sets of discrete objects that can be counted. Numbers also are used to describe situations that involve continuous quantities (inches, feet, pounds, etc.), signed quantities, quantities that have both magnitude and direction, locations (coordinates, or ordinal quantities), transformations (actions), accumulating quantities, continually changing quantities, and other kinds of mathematical objects. Furthermore, if we ask what kinds of situations children can use numbers to describe, rather than restricting attention to situations where children should be able to calculate correctly, then this study shows that average ability children in grades K-2 are (and need to be) able to productively mathematize situations that involve far more than simple counts. Similarly, whereas nearly the entire K-16 mathematics curriculum is restricted to situations that can be mathematized using a single input-output rule going in one direction, even the lives of primary school children are filled with situations that involve several interacting actions, and which involve feedback loops, second-order effects, and issues such as maximization, minimization, or stabilization (which, many years ago, needed to be postponed until students had been introduced to calculus).
This brief paper demonstrates that, if children's stories are used to introduce simulations of "real life" problem solving situations, then average ability primary school children are quite capable of dealing productively with 60-minute problems that involve (a) many kinds of quantities in addition to counts, (b) integrated collections of concepts associated with a variety of textbook topic areas, (c) interactions among several different actors, and (d) issues such as maximization, minimization, and stabilization.
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, the patterns extracted using text or data mining algorithms can be noisy and inconsistent. Thus, different challenges arise, such as the question of how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to give a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method, which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
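The post-processing idea can be sketched as follows: build a matrix counting how often extracted patterns co-occur in the same document, then drop patterns that rarely co-occur with any other pattern. This is an illustrative sketch only; the pattern representation (item sets rather than full sequential patterns) and the `min_links` threshold are assumptions, not the paper's method.

```python
from collections import defaultdict

def pattern_cooccurrence(docs, patterns):
    """docs: list of token sets; patterns: list of token sets (closed
    sequential patterns reduced to item sets for this sketch).
    Returns counts of how often pattern pairs appear in the same doc."""
    matrix = defaultdict(int)
    for doc in docs:
        present = [i for i, p in enumerate(patterns) if p <= doc]
        for i in present:
            for j in present:
                if i != j:
                    matrix[(i, j)] += 1
    return matrix

def prune_noisy(patterns, matrix, min_links=1):
    """Keep patterns that co-occur with at least min_links other patterns;
    isolated patterns are treated as likely noise."""
    keep = []
    for i, p in enumerate(patterns):
        links = sum(1 for (a, b) in matrix if a == i and matrix[(a, b)] > 0)
        if links >= min_links:
            keep.append(p)
    return keep
```

A pattern that never co-occurs with any other extracted pattern has no corroborating context, which is the intuition behind using the matrix to filter noise.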
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of the cross-lingual link discovery task. It involves tasks including natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation.
With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple, but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated for achieving a high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments for better, automatic generation of cross-lingual links that were carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. It is important in CLLD evaluation to have this framework, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify the system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of its kind.
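The n-gram mutual information idea behind contribution 1) can be sketched with the simplest case: estimate the pointwise mutual information (PMI) of adjacent character pairs from a corpus and place a word boundary wherever the PMI drops below a threshold. This is an illustrative sketch under assumed parameters (character bigrams only, a zero threshold), not the thesis's actual segmenter.

```python
import math
from collections import Counter

def segment(text, corpus, threshold=0.0):
    """Split text wherever the PMI of an adjacent character pair,
    estimated from corpus, falls below threshold."""
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n = max(len(corpus), 1)

    def pmi(a, b):
        p_ab = bigrams[a + b] / max(n - 1, 1)
        p_a, p_b = chars[a] / n, chars[b] / n
        if p_ab == 0 or p_a == 0 or p_b == 0:
            return float("-inf")  # unseen pair: strong boundary evidence
        return math.log(p_ab / (p_a * p_b))

    words, current = [], text[:1]
    for a, b in zip(text, text[1:]):
        if pmi(a, b) < threshold:
            words.append(current)  # low association: start a new word
            current = b
        else:
            current += b           # high association: extend current word
    if current:
        words.append(current)
    return words
```

Characters that habitually appear together (high PMI) are kept in one word; pairs that co-occur no more than chance predicts (low PMI) mark boundaries, which is why the method needs no dictionary.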
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have proven the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer.
Abstract:
This paper reports research into teacher-librarians' perceptions of using social media and Web 2.0 in teaching and learning. A pilot study was conducted with teacher-librarians in five government schools and five private schools in southeast Queensland. The findings revealed that there was a strong digital divide between government schools and private schools, with government schools suffering severe restrictions on the use of social media and Web 2.0, leading to an unsophisticated use of these technologies. It is argued that internet 'over-blocking' may lead to government school students not being empowered to manage risks in an open internet environment. Furthermore, their use of information for academic and recreational learning may be compromised. This has implications particularly for low socioeconomic students, leading to further inequity in the process and outcomes of Australian education.
Abstract:
This paper presents the details of numerical studies on the shear behaviour and strength of lipped channel beams (LCBs) with stiffened web openings. Over the last couple of decades, cold-formed steel beams have been used extensively in residential, industrial and commercial buildings as primary load bearing structural components. Their shear strengths are considerably reduced when web openings are included for the purpose of locating building services. Our research has shown that the shear strengths of LCBs were reduced by up to 70% due to the inclusion of web openings. Hence there is a need to improve the shear strengths of LCBs with web openings. A cost effective way to mitigate the detrimental effects of a large web opening is to attach appropriate stiffeners around the web openings in order to restore the original shear strength and stiffness of LCBs. Hence numerical studies were undertaken to investigate the shear strengths of LCBs with stiffened web openings. In this research, finite element models of LCBs with stiffened web openings in shear were developed to simulate the shear behaviour and strength of LCBs. Various stiffening methods using plate and LCB stud stiffeners attached to LCBs using screw-fastening were attempted. The developed models were then validated by comparing their results with experimental results and used in parametric studies. Both finite element analysis and experimental results showed that the stiffening arrangements recommended by past research for cold-formed steel channel beams are not adequate to restore the shear strengths of LCBs with web openings. Therefore new stiffener arrangements were proposed for LCBs with web openings based on experimental and finite element analysis results. This paper presents the details of the finite element models and analyses used in this research and the results, including the recommended stiffener arrangements.
Abstract:
Purpose - Researchers debate whether tacit knowledge sharing through Information Technology (IT) is actually possible. However, with the advent of social web tools, it has been argued that most shortcomings of tacit knowledge sharing are likely to disappear. This paper has two purposes: firstly, to demonstrate the existing debates in the literature regarding tacit knowledge sharing using IT, and secondly, to identify key research gaps that lay the foundations for future research into tacit knowledge sharing using the social web. Design/methodology/approach - This paper reviews current literature on IT-mediated tacit knowledge sharing and opens a discussion on tacit knowledge sharing through the use of the social web. Findings - First, the existing schools of thought regarding the ability of IT to support tacit knowledge sharing are introduced. Next, difficulties of sharing tacit knowledge through the use of IT are discussed. Then, potentials and pitfalls of social web tools are presented. Finally, the paper concludes that whilst there are significant theoretical arguments supporting the view that the social web facilitates tacit knowledge sharing, there is a lack of empirical evidence to support these arguments and further work is required. Research limitations/implications - The limitations of the review include: covering only papers that were published in English, issues of access to full texts of some resources, and the possibility of missing some resources due to the search strings used or the limited coverage of the databases searched. Originality/value - The paper contributes to the fast growing literature on the intersection of KM and IT, particularly by focusing on tacit knowledge sharing in the social media space. The paper highlights the need for further studies in this area by discussing the current situation in the literature and disclosing the emerging questions and gaps for future studies.
Abstract:
Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill-set from the annotator and also the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented these mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist.
The main output of this research is a web service discovery engine that implements these methods and a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
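The classic term-based baseline that the semantics-based methods are compared against can be sketched as TF-IDF vectors over service descriptions, ranked by cosine similarity against the query. This is a generic illustration of that baseline; the service names and descriptions below are invented examples, not entries from any real repository.

```python
import math
from collections import Counter

def tfidf_rank(query, services):
    """services: dict mapping service name -> textual description.
    Returns service names ranked by cosine similarity of TF-IDF vectors
    against the query (whitespace tokenisation for simplicity)."""
    docs = {name: Counter(desc.lower().split()) for name, desc in services.items()}
    q = Counter(query.lower().split())
    ndocs = len(docs) + 1  # treat the query as one extra pseudo-document
    df = Counter()
    for tf in list(docs.values()) + [q]:
        df.update(tf.keys())

    def vec(tf):
        # Raw term frequency weighted by inverse document frequency.
        return {t: c * math.log(ndocs / df[t]) for t, c in tf.items()}

    def cos(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    qv = vec(q)
    return sorted(docs, key=lambda name: cos(qv, vec(docs[name])), reverse=True)
```

A purely term-based ranker like this fails when the query and the description use different vocabulary for the same concept, which is exactly the gap that the Wikipedia-based semantic methods in the thesis aim to close.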
Abstract:
Purpose – The article aims to review a university course, offered to students in both Australia and Germany, to encourage them to learn about designing, implementing, marketing and evaluating information programs and services in order to build active and engaged communities. The concepts and processes of Web 2.0 technologies come together in the learning activities, with students establishing their own personal learning networks (PLNs). Design/methodology/approach – The case study examines the principles of learning and teaching that underpin the course and presents the students' own experiences of the challenges they faced as they explored the interactive, participative and collaborative dimensions of the web. Findings – The online format of the course and the philosophy of learning through play provided students with a safe and supportive environment for them to move outside of their comfort zones, to be creative, to experiment and to develop their professional personas. Reflection on learning was a key component that stressed the value of reflective practice in assisting library and information science (LIS) professionals to adapt confidently to the rapidly changing work environment. Originality/value – This study provides insights into the opportunities for LIS courses to work across geographical boundaries, to allow students to critically appraise library practice in different contexts and to become active participants in wider professional networks.
Abstract:
Introduction: Participants may respond to phases of a workplace walking program at different rates. This study evaluated the factors that contribute to the number of steps through phases of the program. The intervention was automated through a web-based program designed to increase workday walking. Methods: The study reviewed independent variable influences throughout phases I-III. A convenience sample of university workers (n=56; 43.6±1.7 years; BMI 27.44±2.15 kg/m2; 48 female) were recruited at worksites in Australia. These workers were given a pedometer (Yamax SW 200) and access to the website program. For analyses, step counts entered by workers into the website were downloaded and mean workday steps were compared using a seemingly unrelated regression. This model was employed to capture the contemporaneous correlation within individuals in the study across observed time periods. Results: The model predicts that the 36 subjects with complete information took an average 7460 steps in the baseline two week period. After phase I, statistically significant increases in steps (from baseline) were explained by age, working status (full or part time), occupation (academic or professional), and self reported public transport (PT) use (marginally significant). Full time workers walked more than part time workers by about 440 steps, professionals walked about 300 steps more than academics, and PT users walked about 400 steps more than non-PT users. The ability to differentiate steps after two weeks among participants suggests a differential effect of the program after only two weeks. On average participants increased steps from week two to four by about 525 steps, but regular auto users had nearly 750 steps fewer than non-auto users at week four. The effect of age was diminished in the 4th week of observation and accounted for 34 steps per year of age.
In phase III, discriminating between participants became more difficult, with only age effects differentiating their increase over baseline. The marginal effect of age in phase III compared to phase I increased from 36 to 50, suggesting a 14 step per year increase from the 2nd to the 6th week. Discussion: The findings suggest that participants responded to the program at different rates, with uniformity of effect achieved by the 6th week. Participants increased steps; however, a tapering off occurred over time. Age played the most consistent role in predicting steps over the program. PT use was associated with increased step counts, while auto use was associated with decreased step counts.