Abstract:
Children’s Literature Digital Resources incorporates primary texts published from white settlement to 1945, including children’s and young adult fiction, poetry, short stories, and picture books. The collection is supported by selected secondary material. The objective is to provide a centralised access point for information about Australian children's literature and writers, together with a growing body of full-text primary resources. The four key aims are:
* To establish an important digital facility for research, teaching, and information provision around Australian children’s literature;
* To provide access to a wide range of high-quality full-text data, both primary and secondary resources;
* To provide access to essential library and research information infrastructure and facilities for established and emerging researchers in the fields of Humanities and Education;
* To enable research while preserving important heritage material.
The collection contains texts digitised for AustLit through cooperation with various Australian libraries. It includes children’s and young adult fiction, poetry, picture books, short stories, and critical articles relating to relevant primary texts. Authors of primary sources include Irene Cheyne, E. W. Cole, Richard Rowe, Lillian M. Pyke, and Dorothy Wall. Secondary sources include critical works by Clare Bradford, Heather Scutter, Kerry White, Sharyn Pearce, and Marcie Muir. These full-text materials are keyword searchable (both within individual texts and across the CLDR corpus) and can be downloaded for research purposes. As well as digitising primary and secondary material, the project locates and provides pathways to existing online resources and internet publications to enhance AustLit's Children's Literature subset. These resources include both primary and secondary texts.
Abstract:
This article considers recent cases on guarantees of business loans to identify the lending practices that led the court to set aside the guarantee as against the creditor on the basis that the creditor had engaged in unconscionable conduct. It also explores the role of industry codes of practice in preventing unconscionable conduct, including whether there is a correlation between commitment to an industry code and higher standards of lending practices; whether compliance with an industry code would have produced different outcomes in the cases considered; and whether lenders need to do more than comply with an industry code to ensure their practices are fair and reasonable.
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have shown that more than 80% of users prefer personalized search results. As a result, many studies have invested considerable effort (in work referred to as collaborative filtering) in investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the use of ontology-based techniques to improve current mining approaches. These techniques can not only refine search intentions within specific generic domains, but can also access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles from discovered user background knowledge. This knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag-of-concepts" rather than a "bag-of-words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a solid concept matching approach is developed to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation on the standard Reuters Corpus Volume 1 test set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying variant of the Pattern Taxonomy Model, and an ontology-based model. The results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The findings have the potential to inform the design of advanced preference mining models that affect people's daily lives.
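As a toy illustration of the "bag-of-concepts" idea described above, the sketch below maps document terms to subject concepts and compares a document to a user profile at the concept level rather than the word level. The term-to-concept table is invented for the example and merely stands in for the Library of Congress Subject Headings; this is a sketch of the representation, not the thesis's actual model.

```python
# Minimal sketch of a "bag-of-concepts" representation: documents and user
# profiles are compared via subject concepts rather than raw words.
# TERM_TO_CONCEPT is a toy stand-in for the Library of Congress Subject
# Headings; all names here are illustrative.

TERM_TO_CONCEPT = {
    "stock": "Finance",
    "share": "Finance",
    "bank": "Banking",
    "loan": "Banking",
    "tennis": "Sports",
}

def bag_of_concepts(text: str) -> set[str]:
    """Map each known term in the text to its subject concept."""
    return {TERM_TO_CONCEPT[w] for w in text.lower().split() if w in TERM_TO_CONCEPT}

def concept_overlap_score(profile: set[str], document: str) -> float:
    """Score a document by the fraction of profile concepts it covers."""
    doc_concepts = bag_of_concepts(document)
    return len(profile & doc_concepts) / len(profile) if profile else 0.0

user_profile = {"Finance", "Banking"}
print(concept_overlap_score(user_profile, "Bank approves new loan and stock offer"))
```

Because two different surface words ("bank", "loan") collapse to one concept, vocabulary mismatch between a query and a document matters less than in purely keyword-based matching.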
Abstract:
Equity and Trusts: In Principle, 3rd edition, is updated and revised throughout. It addresses the principles of equity and trusts and provides a clear analysis of this area.
Abstract:
BACKGROUND: Ankle joint equinus, or restricted dorsiflexion range of motion (ROM), has been linked to a range of pathologies of relevance to clinical practitioners. This systematic review and meta-analysis investigated the effects of conservative interventions on ankle joint ROM in healthy individuals and athletic populations. METHODS: Keyword searches of the Embase, Medline, Cochrane, and CINAHL databases were performed, with the final search being run in August 2013. Studies were eligible for inclusion if they assessed the effect of a non-surgical intervention on ankle joint dorsiflexion in healthy populations. Studies were quality rated using a standard quality assessment scale. Standardised mean differences (SMDs) and 95% confidence intervals (CIs) were calculated, and results were pooled where study methods were homogeneous. RESULTS: Twenty-three studies met the eligibility criteria, with a total of 734 study participants. The results suggest that there is some evidence to support the efficacy of static stretching alone (SMDs: range 0.70 to 1.69) and static stretching in combination with ultrasound (SMDs: range 0.91 to 0.95), diathermy (SMD 1.12), diathermy and ice (SMD 1.16), heel raise exercises (SMDs: range 0.70 to 0.77), superficial moist heat (SMDs: range 0.65 to 0.84), and warm-up (SMD 0.87) in improving ankle joint dorsiflexion ROM. CONCLUSIONS: Some evidence exists to support the efficacy of stretching alone and stretching in combination with other therapies in increasing ankle joint ROM in healthy individuals. There is a paucity of quality evidence to support the efficacy of other non-surgical interventions, and thus further research in this area is warranted.
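For context, the standardised mean difference pooled in reviews of this kind is conventionally the between-condition mean difference scaled by the pooled standard deviation; a standard textbook formulation (not quoted from the review itself) is:

```latex
\mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

This scaling is what allows ROM effects measured on different scales or instruments to be pooled across studies.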
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or an advantage for those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions, content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and on how such approaches could be customized depending on the project stakeholders.
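As a rough sketch of the filtering idea in the first paper, the snippet below scores incoming tweets by weighted keywords (content analysis) plus a boost for known authoritative accounts (user profiling), and forwards only those above a capacity-driven threshold. All keywords, weights, account names, and the threshold are invented for illustration; none are drawn from the papers themselves.

```python
# Illustrative tweet triage: combine content analysis (keyword weights)
# with user profiling (authoritative accounts) so only urgent tweets
# reach human responders.

CRISIS_KEYWORDS = {"flood": 3, "evacuate": 5, "trapped": 5, "help": 2}
AUTHORITATIVE_USERS = {"qldpolice", "bomalert"}

def score_tweet(text: str, author: str) -> int:
    """Sum matched keyword weights and boost known authoritative sources."""
    words = text.lower().split()
    score = sum(w for kw, w in CRISIS_KEYWORDS.items() if kw in words)
    if author.lower() in AUTHORITATIVE_USERS:
        score += 10  # amplify trusted accounts
    return score

stream = [
    ("qldpolice", "Evacuate low-lying areas now, flood peak at 6pm"),
    ("randomuser", "nice weather today"),
]
# Forward only tweets above a threshold chosen to match responder capacity.
urgent = [(a, t) for a, t in stream if score_tweet(t, a) >= 5]
print(urgent)
```

In practice the keyword table itself would be refined over time as new hashtags emerge, mirroring the iterative collection process the panel describes.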
Abstract:
Interdisciplinary research is often funded by national government initiatives or large corporate sponsorship, and as such demands periodic reporting on the use of those funds. For reasons of accountability, governance, and communication to the taxpayer, the outcomes of the research need to be measured and understood. The interdisciplinary approach to research raises many challenges for impact reporting. This presentation will consider the best-practice workflow models and methodologies. Novel methodologies that can be added to the usual metrics of academic publications include analysis of the percentage share of total publications in a subject or keyword field, calculation of the most cited publication in a key phrase category, analysis of who has cited or reviewed the work, and benchmarking of this data against others in the same category. At QUT, interest in how collaborative networking is trending in a research theme has led to the creation of some useful co-authorship graphs that demonstrate the network positions of authors and the strength of their scientific collaborations within a group. The scale of international collaborations is also worth including in the assessment. However, despite all of the tools and techniques available, the most useful way researchers can help themselves and the process is to set up and maintain their researcher identifier and profile.
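As an illustration of the co-authorship graphs mentioned above, the sketch below builds a weighted graph from invented author lists using the networkx library, with edge weights counting co-authored papers as a proxy for collaboration strength; it is a toy under those assumptions, not QUT's actual tooling.

```python
# Build a co-authorship graph: nodes are authors, edge weights count
# papers written together; degree centrality gives a simple measure of
# an author's network position within the group.
import itertools
import networkx as nx

papers = [
    ["Smith", "Chen", "Kumar"],
    ["Smith", "Chen"],
    ["Kumar", "Lopez"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

print(nx.degree_centrality(G))
```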
Abstract:
The support for typically out-of-vocabulary query terms such as names, acronyms, and foreign words is an important requirement of many speech indexing applications. However, to date many unrestricted-vocabulary indexing systems have struggled to provide a balance between a good detection rate and fast query speeds. This paper presents a fast and accurate unrestricted-vocabulary speech indexing technique named Dynamic Match Lattice Spotting (DMLS). The proposed method augments the conventional lattice spotting technique with dynamic sequence matching, together with a number of other novel algorithmic enhancements, to obtain a system that is capable of searching hours of speech in seconds while maintaining excellent detection performance.
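The DMLS algorithm itself is not reproduced here; as a generic illustration of dynamic sequence matching for keyword spotting, the sketch below accepts candidate phone sequences from a lattice that lie within a small edit distance of the query's phone sequence, tolerating recogniser errors that exact matching would miss. All phone sequences and the tolerance are invented.

```python
# Generic approximate phone-sequence matching (not the DMLS algorithm):
# accept lattice sequences within a small edit distance of the query.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Standard dynamic-programming Levenshtein distance over phone symbols."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                           # deletion
                           dp[i][j - 1] + 1,                           # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[len(a)][len(b)]

query = ["k", "ae", "t"]  # phones for "cat" (toy example)
lattice_sequences = [["k", "ae", "d"], ["b", "ae", "t"], ["s", "ih", "t"]]
# Accept sequences within one phone error instead of requiring exact matches.
hits = [s for s in lattice_sequences if edit_distance(query, s) <= 1]
print(hits)
```

Allowing a small error tolerance is what lets a lattice-based index recover query terms the recogniser transcribed imperfectly, at the cost of checking more candidates.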
Abstract:
This paper describes an interactive installation work set in a large dome space. The installation is an audio and physical re-rendition of an interactive writing work. In the original work, the user interacted via keyboard and screen while online. This rendition of the work retains the online interaction, but also places the interaction within a physical space, where the main 'conversation' takes place by the participant-audience speaking through microphones and listening through headphones. The work now also includes voice and SMS input, using speech-to-text and text-to-speech conversion technologies, and audio and displayed text for output. These additions allow the participant-audience to co-author the work while they participate in audible conversation with keyword-triggering characters (bots). Communication in the space can be person-to-computer via microphone, keyboard, and phone; person-to-person via machine and within the physical space; computer-to-computer; and computer-to-person via audio and projected text.
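As a minimal, hypothetical illustration of a keyword-triggering character, the sketch below scans transcribed speech or SMS text for trigger words and returns a canned response destined for text-to-speech output; the triggers and replies are invented, not taken from the installation.

```python
# Toy keyword-triggering bot: match trigger words in transcribed input
# and return a response for the text-to-speech channel.
from typing import Optional

TRIGGERS = {
    "dream": "Tell me more about that dream.",
    "city": "The city never sleeps here either.",
}

def bot_reply(utterance: str) -> Optional[str]:
    """Return the first triggered response, or None if no keyword matches."""
    words = utterance.lower().split()
    for keyword, response in TRIGGERS.items():
        if keyword in words:
            return response
    return None

print(bot_reply("I had a strange dream last night"))
```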
Abstract:
Background: Cancer monitoring and prevention relies on the timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language processing techniques and the Medtex toolkit, and encompassed stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half the variance of the other classifiers. Conclusion: The selection of features had the greatest influence on the performance of the classifiers, although the type of classifier employed also affects performance. In contrast, the feature weighting scheme had a negligible effect on performance. Specifically, stemmed tokens, with or without SNOMED CT concepts, form the most effective feature set when combined with an SVM classifier.
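As a rough sketch of this kind of pipeline, the snippet below trains a linear SVM over word n-gram features with scikit-learn. The toy certificates and labels are invented, and the generic n-gram vectoriser only approximates, rather than reproduces, the Medtex stem and SNOMED CT concept features described above.

```python
# Minimal text-classification pipeline: n-gram features feeding a linear
# SVM to flag death certificates mentioning notifiable cancer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "metastatic carcinoma of the lung",
    "acute myocardial infarction",
    "malignant neoplasm of the colon",
    "chronic obstructive pulmonary disease",
]
labels = [1, 0, 1, 0]  # 1 = notifiable cancer as cause of death

# Unigrams and bi-grams stand in for the stem + bi-gram features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["carcinoma of the breast"]))
```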
Abstract:
This paper gives an overview of the INEX 2008 Ad Hoc Track. The main goals of the Ad Hoc Track were two-fold. The first goal was to investigate the value of the internal document structure (as provided by the XML mark-up) for retrieving relevant information. This is a continuation of INEX 2007 and, for this reason, the retrieval results are liberalized to arbitrary passages, and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The second goal was to compare focused retrieval to article retrieval more directly than in earlier years. For this reason, standard document retrieval rankings have been derived from all runs and evaluated with standard measures. In addition, a set of queries targeting Wikipedia has been derived from a proxy log, and the runs are also evaluated against the clicked Wikipedia pages. The INEX 2008 Ad Hoc Track featured three tasks. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was required. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was required. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval. This is examined in the context of content-only (CO, or keyword) search as well as content-and-structure (CAS, or structured) search. Finally, we look at the ability of focused retrieval techniques to rank articles, using standard document retrieval techniques, both against the judged topics and against queries and clicks from a proxy log.
Abstract:
Immigrant entrepreneurs tend to start businesses within their ethnic enclave (EE), as it is an integral part of their social and cultural context and the location where ethnic resources reside (Logan, Alba, & Stults, 2003). Ethnic enclaves can be seen as a form of geographic cluster; Chinatowns are exemplar EEs, easily identified by the clustering of Chinese restaurants and other ethnic businesses in one central location. Studies on EEs thus far have neglected the life cycle stages of EEs and their impact on the business experiences of the entrepreneurs. In this paper, we track the formation, growth, and decline of an EE. We argue that an EE is a special industrial cluster and as such it follows the growth conditions proposed by cluster life cycle theory (Menzel & Fornahl, 2009). We report a mixed-method study of Chinese restaurants in South East Queensland. Based on multiple sources of data, we conclude that changes in government policies leading to a sharp increase in immigrant numbers from a distinctive cultural group can lead to the initiation and growth of an EE. A continuous influx of new immigrants and increased competition within the cluster mark the mature stage of the EE, making the growth conditions more favourable “inside” the cluster. A decline in new immigrants from the same ethnic group and the increased competition within the EE may eventually lead to the decline of such an industrial cluster, thus providing more favourable conditions for the growth of businesses outside the cluster.
Abstract:
This study reports the construction and reconstruction of identities of new and existing employees during a significant transition phase of a nuclear engineering organization. We followed a group of new and existing employees over a period of three years, during which the organization constructed a greenfield nuclear facility with new generational technologies whilst, in parallel, decommissioning the older reactor. This change led to the transfer and integration of existing trade-based employees with the newly recruited, primarily university-educated graduates at the new site. Three waves of interview data were collected; in conjunction with cognitive mapping of social groupings and photo elicitation, these portrayed the stories of the different groups of employees who either succeeded or failed at embracing their new professional identity. In contrast with the new recruits, who constructed new identities as they joined the organization, we identify and report on a number of enabling and disabling factors that influence the process of professional identity construction and reconstruction during gamma change.
Abstract:
A firm’s business model (BM) is an important driver of its relative performance. Constructive adaptation of elements of the BM can therefore sustain a firm's position in light of changing conditions. This study takes a configurational approach to understanding the drivers of business model adaptation (BMA) in new ventures. We investigate the effect of human capital, social capital, and the technological environment on BMA. We find that a universal, direct-effects analysis can provide useful information, but also risks painting a distorted picture. Contingent, two-way interactions add further explanatory power, but configurational models combining elements of all three (internal resources, external activities, environment) are superior.
Abstract:
Due to the availability of a huge number of Web services, finding an appropriate Web service according to the requirements of a service consumer is still a challenge. Moreover, sometimes a single Web service is unable to fully satisfy the requirements of the service consumer. In such cases, combinations of multiple inter-related Web services can be utilised. This paper proposes a method that first utilises a semantic kernel model to find related services and then models these related Web services as nodes of a graph. An all-pairs shortest-path algorithm is applied to find the best compositions of Web services that are semantically related to the service consumer's requirement. Finally, recommendations of both individual and composite Web services are made for a service request. Empirical evaluation confirms that the proposed method significantly improves the accuracy of service discovery in comparison with traditional keyword-based discovery methods.
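As a toy illustration of the composition step, the sketch below models services as graph nodes and runs Floyd-Warshall, one standard all-pairs shortest-path algorithm, to find the cheapest chain between two services. The service names and edge weights are invented; the paper's actual semantic weighting is not reproduced here.

```python
# All-pairs shortest paths (Floyd-Warshall) over a toy service graph:
# weight[i][j] is the cost of invoking service j after service i.
INF = float("inf")
services = ["Geocode", "Weather", "Forecast", "Alert"]
weight = [
    [0,   1,   INF, INF],
    [INF, 0,   1,   4],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]

n = len(services)
dist = [row[:] for row in weight]
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

# Cheapest composition from Geocode to Alert (here via Weather and Forecast).
print(dist[services.index("Geocode")][services.index("Alert")])
```

Running the algorithm once yields the best composition cost between every pair of services, so individual and composite recommendations can be read from the same distance table.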