921 results for Opportunity discovery and exploitation


Relevance: 100.00%

Abstract:

It has been recognised in current literature that, in general, Australia's population is ageing and that older people are increasingly choosing to continue to live in the community in their own homes for as long as possible. Such factors of social change are expected to lead to larger numbers of older people requiring community care services for longer periods. Despite this, there is little information available in the literature on the perceptions and experiences of older people regarding community-based care and support. This study explored the lived experience of a small group of older people living in South East Queensland who were receiving a level of care consistent with the Community Aged Care Package (CACP). It also sought to examine the impact and meaning of that care on the older person's overall lifestyle, autonomy, and personal satisfaction. In-depth interviews were undertaken with these older people and were analysed using Heidegger's interpretive hermeneutical phenomenological approach. Shared narratives were then explored using Ricoeur's narrative analysis framework. In order to sensitise the researcher to the unconscious or symbolic aspects of the care experience, Wolfensberger's social role valorization theory (SRV) was also utilised during a third phase of analysis. Methodological rigour was strengthened within this study through the use of reflexivity and an in-depth member check discussion conducted with each participant. The interviews revealed significant differences in expectations, understanding, and perceptions between older people and their carers or service providers. The older person perceived care primarily in relational terms, and clearly preferred active participation in their care and a consistent relationship with a primary carer. Older people also sought to maintain their sense of autonomy, lifestyle, home environment, routines, and relationships as closely as possible to those that existed prior to their requiring assistance. However, these expectations were not always supported by the care model. On the whole, service providers did not always understand what older people perceived as important within the care context. Carers seldom looked beyond the provision of assistance with specific daily tasks to consider the real impact of care on the older person. The study identified that older people reported a range of experiences when receiving care in their own homes. While some developed healthy and supportive connections with their carers, others experienced ageism, abuse, and exploitation. Unsatisfactory interactions at times resulted in a loss, to varying degrees, of their independence, their possessions, and their connectedness with others. There is therefore a need for service providers to pay more attention to the perceptions and self-perceived needs of older people, to avoid unintended or unnecessary negative impacts within care provision. The study provides valuable information regarding the older person's experience that will assist in supporting the further development and improvement of this model of care. It is proposed that these insights will enable CACPs to cater more closely to the actual needs and preferences of older people, and to avoid causing preventable harm to care recipients.

Relevance: 100.00%

Abstract:

The Link the Wiki track at INEX 2008 offered two tasks: file-to-file link discovery and anchor-to-BEP link discovery. The former used 6,600 topics and the latter used 50. Manual assessment of the anchor-to-BEP runs was performed using a tool developed for the purpose. Runs were evaluated using standard precision and recall measures, such as mean average precision (MAP) and precision/recall graphs. Ten groups participated, and the approaches they took are discussed. Final evaluation results for all runs are presented.
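
As a rough illustration of the measures mentioned above, here is a minimal sketch of average precision and MAP in Python; the run and judgment structures are hypothetical stand-ins, not the format used by the INEX assessment tool.

```python
# Sketch: average precision (AP) for one ranked run, and MAP as the
# mean of AP over all assessed topics. Data structures are invented.

def average_precision(ranked, relevant):
    """AP: mean of precision@k over the ranks k that hold a relevant item."""
    hits, precision_sum = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs, judgments):
    """MAP: average AP across all judged topics."""
    aps = [average_precision(runs[t], judgments[t]) for t in judgments]
    return sum(aps) / len(aps)

# Hypothetical example: two topics, each with a ranked list of targets.
runs = {"t1": ["d3", "d1", "d7"], "t2": ["d2", "d9"]}
judgments = {"t1": {"d1", "d7"}, "t2": {"d2"}}
print(mean_average_precision(runs, judgments))  # -> 0.7916...
```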

Relevance: 100.00%

Abstract:

Design Science Research (DSR) has emerged as an important approach in Information Systems (IS) research. However, DSR is still in its infancy and has yet to achieve consensus on even the fundamentals, such as which methodology or approach to use for DSR. While there has been much effort to establish DSR methodologies, a complete, holistic and validated approach for the conduct of DSR to guide IS researchers (especially novice researchers) is yet to be established. Alturki et al. (2011) present a DSR 'Roadmap', claiming that it is a complete and comprehensive guide for conducting DSR. This paper further assesses this Roadmap by positioning it against the 'Idealized Model for Theory Development' (IM4TD) (Fischer & Gregor 2011). The IM4TD highlights the roles of discovery, justification, and forms of reasoning in progressing theory development. Fischer and Gregor (2011) applied the IM4TD's hypothetico-deductive method to analyze DSR methodologies; this study adopts the same method to deductively validate the Alturki et al. (2011) Roadmap. The results suggest that the Roadmap adheres to the IM4TD, is reasonably complete, overcomes most shortcomings identified in other DSR methodologies, and highlights valuable refinements that should be considered within the IM4TD.

Relevance: 100.00%

Abstract:

Drawing on the work of Ian Hunter, the authors argue that literary education continues a tradition of circularity of argument derived from the humanities. They propose that the school subject English, in all of its apparently different historical manifestations, focuses on the ideals of self-discovery and freedom of expression through literary study. The idea that literary interpretation, or the production of specific readings, is a skill taught in English classrooms challenges traditional understandings of literary study as a means of uncovering or revealing that which is hidden – be it the secrets of the text (or society or culture) or the secrets of the self – in order to come to a fuller realisation of culture and the self. Using examples from their previous work in developing activities for use with students in English classrooms, the authors explore what it means to produce one's 'own reading' of a text.

Relevance: 100.00%

Abstract:

Topic modeling has been widely utilized in information retrieval, text mining, text classification and related fields. Most existing statistical topic modeling methods, such as LDA and pLSA, generate a term-based representation of a topic by selecting single words from the multinomial word distribution over that topic. This has two main shortcomings: first, popular or common words occur frequently across different topics, making topics ambiguous and hard to interpret; second, single words lack the coherent semantic meaning needed to accurately represent topics. To overcome these problems, this paper proposes a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
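
For context, a minimal sketch of the term-based baseline the paper critiques: fitting LDA (here via scikit-learn rather than any implementation from the paper, with a toy corpus and illustrative parameters) and representing each topic by its bare top words.

```python
# Sketch: the single-word topic representation that the two-stage model
# aims to improve on. Corpus and hyperparameters are toy values.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "data mining discovers patterns in large data sets",
    "text mining extracts features from documents",
    "topic models represent documents as word distributions",
    "pattern mining finds frequent itemsets in transactions",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {top}")  # each topic reduced to bare top words
```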

Relevance: 100.00%

Abstract:

Road surface skid resistance has been shown to have a strong relationship to road crash risk; however, the current method of using investigatory levels to identify crash-prone roads is problematic, as it may fail to identify risky roads outside the norm. The proposed method uses data mining to analyse a complex and formerly impenetrable volume of road and crash data. This method rapidly identifies roads with elevated crash rates, potentially due to a skid resistance deficit, for investigation. A hypothetical skid resistance/crash risk curve is developed for each road segment, driven by the model deployed in a novel regression tree extrapolation method. The method potentially solves the problem of missing skid resistance values that occurs during network-wide crash analysis, and allows risk assessment of the major proportion of roads without skid resistance values.
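
The paper's novel extrapolation method is not reproduced here, but a generic regression tree over synthetic data sketches the underlying idea: model crash rate as a function of skid resistance, then predict risk across the skid-resistance range, including for segments with no measurement.

```python
# Sketch: a regression tree relating skid resistance to crash rate.
# The data are synthetic and the single predictor is illustrative;
# the actual method uses richer road and crash attributes.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
skid = rng.uniform(0.3, 0.8, 200).reshape(-1, 1)              # skid resistance
crash_rate = 2.0 - 2.2 * skid.ravel() + rng.normal(0, 0.1, 200)  # toy risk curve

tree = DecisionTreeRegressor(max_depth=3).fit(skid, crash_rate)

# Read predicted crash risk off the fitted curve, including at
# skid-resistance values unmeasured on the real network.
grid = np.linspace(0.3, 0.8, 6).reshape(-1, 1)
for s, r in zip(grid.ravel(), tree.predict(grid)):
    print(f"skid={s:.2f} -> predicted crash rate={r:.2f}")
```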

Relevance: 100.00%

Abstract:

Prior to the completion of the Human Genome Project, the human genome was thought to contain a greater number of genes, as it seemed structurally and functionally more complex than simpler organisms. This assumption, along with the belief in "one gene, one protein", was demonstrated to be incorrect. The inequality in the ratio of genes to proteins gave rise to the theory of alternative splicing (AS). AS is a mechanism by which one gene gives rise to multiple protein products. Numerous databases and online bioinformatic tools are available for the detection and analysis of AS. Bioinformatics provides an important approach to studying mRNA and protein diversity through tools such as expressed sequence tag (EST) sequences obtained from completely processed mRNA. Microarray and deep sequencing approaches also aid in the detection of splicing events. Initially it was postulated that AS occurred in only about 5% of all genes, but it was later found to be far more abundant. Using bioinformatic approaches, the level of AS in human genes was found to be fairly high, with 35–59% of genes having at least one AS form. Our ability to determine and predict AS is important, as disorders in splicing patterns may lead to abnormal splice variants resulting in genetic diseases. In addition, the diversity of proteins produced by AS poses a challenge for successful drug discovery, and therefore a greater understanding of AS would be beneficial.
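
As a simplified illustration of one detectable class of AS event (exon skipping), the following sketch compares exon annotations across transcripts of the same gene; all isoform names and coordinates are invented.

```python
# Sketch: flag exons present in some but not all isoforms of a gene,
# the signature of exon skipping. Transcripts are given as lists of
# (start, end) exon coordinates; everything here is hypothetical.

transcripts = {
    "isoform_1": [(100, 200), (300, 400), (500, 600)],
    "isoform_2": [(100, 200), (500, 600)],            # middle exon skipped
}

all_exons = set().union(*transcripts.values())
for exon in sorted(all_exons):
    present_in = [t for t, exons in transcripts.items() if exon in exons]
    if len(present_in) < len(transcripts):
        print(f"exon {exon} is alternatively spliced: only in {present_in}")
# -> exon (300, 400) is alternatively spliced: only in ['isoform_1']
```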

Relevance: 100.00%

Abstract:

Migraine is a common, heterogeneous and heritable neurological disorder. Its pathophysiology is incompletely understood, and its genetic influences at the population level are unknown. In a population-based genome-wide analysis including 5,122 migraineurs and 18,108 non-migraineurs, rs2651899 (1p36.32, PRDM16), rs10166942 (2q37.1, TRPM8) and rs11172113 (12q13.3, LRP1) were among the top seven associations (P < 5 × 10⁻⁶) with migraine. These SNPs were significant in a meta-analysis among three replication cohorts and met genome-wide significance in a meta-analysis combining the discovery and replication cohorts (rs2651899, odds ratio (OR) = 1.11, P = 3.8 × 10⁻⁹; rs10166942, OR = 0.85, P = 5.5 × 10⁻¹²; and rs11172113, OR = 0.90, P = 4.3 × 10⁻⁹). The associations at rs2651899 and rs10166942 were specific for migraine compared with non-migraine headache. None of the three SNP associations was preferential for migraine with aura or without aura, nor were any associations specific for migraine features. TRPM8 has been the focus of neuropathic pain models, whereas LRP1 modulates neuronal glutamate signaling, plausibly linking both genes to migraine pathophysiology.
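
The study's meta-analysis pipeline is not described here, but a standard fixed-effect (inverse-variance) combination of per-cohort odds ratios, sketched below with hypothetical inputs, illustrates how discovery and replication effects are typically pooled into a single OR and P value.

```python
# Sketch: fixed-effect inverse-variance meta-analysis of log odds
# ratios. The per-cohort ORs and standard errors of log(OR) below are
# hypothetical, not values from the study.
import math

def meta_analyze(cohorts):
    """cohorts: list of (odds_ratio, se_of_log_or) pairs."""
    weights = [1 / se**2 for _, se in cohorts]
    pooled_log_or = sum(
        w * math.log(or_) for (or_, _), w in zip(cohorts, weights)
    ) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    z = pooled_log_or / pooled_se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal P value
    return math.exp(pooled_log_or), z, p

# Hypothetical discovery + two replication effects for one SNP:
or_, z, p = meta_analyze([(1.12, 0.03), (1.09, 0.04), (1.11, 0.05)])
print(f"pooled OR = {or_:.3f}, Z = {z:.2f}, P = {p:.1e}")
```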

Relevance: 100.00%

Abstract:

Collaborative contracting has emerged over the past 15 years as an innovative project delivery framework that is particularly suited to infrastructure projects. Australia leads the world in the development of project and program alliance approaches to collaborative delivery. These approaches are considered to promise superior project results. However, very little is known about the learning routines that are most widely used in support of collaborative projects in general and alliance projects in particular. The literature on absorptive capacity and dynamic capabilities indicates that such learning enhances project performance. The learning routines employed at corporate level during the operation of collaborative infrastructure projects in Australia were examined through a large survey conducted in 2013. This paper presents a descriptive summary of the preliminary findings. The survey captured the experiences of 320 practitioners of collaborative construction projects, including public and private sector clients, contractors, consultants and suppliers (three per cent of projects were located in New Zealand, but for brevity's sake the sample is referred to as Australian). The majority of projects identified used alliances (78.6%), whilst 9% used Early Contractor Involvement (ECI) contracts and 2.7% used Early Tender Involvement contracts, which are 'slimmer' types of collaborative contract. The remaining 9.7% of respondents used traditional contracts with some collaborative elements. The majority of projects were delivered for public sector clients (86.3%), and/or clients experienced with asset procurement (89.6%). All of the projects delivered infrastructure assets: one third in the road sector, one third in the water sector, one fifth in the rail sector, and the rest spread across energy, building and mining. Learning routines were explored within three interconnected phases: knowledge exploration, transformation and exploitation. The results show that explorative and exploitative learning routines were applied to a similar extent, while transformative routines were applied to a relatively low extent. It was also found that the most highly applied routine was 'regularly applying new knowledge to collaborative projects', and the least applied was 'staff incentives to encourage information sharing about collaborative projects'. Future research planned by the authors will examine the impact of these routines on project performance.

Relevance: 100.00%

Abstract:

Term-based approaches can extract many features from text documents, but most of these features are noisy. Many popular text-mining strategies have been adapted to reduce noisy information among extracted features; however, such techniques also suffer from the low-frequency problem, where useful features appear too rarely to be weighted reliably. The key issue is how to discover relevance features in text documents that fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach includes two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or high-level patterns. We test the proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that the proposed approach significantly outperforms state-of-the-art models.
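
One plausible reading of the second stage, sketched below with hypothetical patterns and supports, is to deploy each pattern's support onto its component terms so that specific, low-frequency terms acquire weight; this is an illustration of the deployment idea, not the paper's exact weighting scheme.

```python
# Sketch: deploying pattern-level support to lower-level terms.
# Patterns (term tuples) and their supports are invented values.

patterns = {
    ("information", "retrieval"): 0.6,
    ("relevance", "feedback"): 0.5,
    ("retrieval", "feedback", "model"): 0.3,
}

term_weight = {}
for pattern, support in patterns.items():
    for term in pattern:
        # each term inherits a share of every pattern that contains it
        term_weight[term] = term_weight.get(term, 0.0) + support / len(pattern)

for term, w in sorted(term_weight.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {w:.2f}")  # e.g. retrieval: 0.40, feedback: 0.35, ...
```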

Relevance: 100.00%

Abstract:

This is the fourth edition of New Media: An Introduction, the previous editions having been published by Oxford University Press in 2002, 2005 and 2008. As the first edition of the book published in the 2010s, every chapter has been comprehensively revised, and there are new chapters on:

• Online News and the Future of Journalism (Chapter 7)
• New Media and the Transformation of Higher Education (Chapter 10)
• Online Activism and Networked Politics (Chapter 12)

It retains popular features of the third edition, including the twenty key concepts in new media (Chapter 2) and illustrative case studies to assist with teaching new media. The case studies in the book cover: the global internet; Wikipedia; transmedia storytelling; Media Studies 2.0; the games industry and exploitation; video games and violence; WikiLeaks; the innovator's dilemma; massive open online courses (MOOCs); Creative Commons; the Barack Obama Presidential campaigns; and the Arab Spring.

Several major changes in the media environment since the publication of the third edition stand out. Of particular importance has been the rise of social media platforms such as Facebook, Twitter and YouTube, which draw out even more strongly the features of the internet as networked and participatory media, with a range of implications across the economy, society and culture. In addition, the political implications of new media have become more apparent through a range of social media-based political campaigns, from Barack Obama's successful Presidential election campaigns to the Occupy movements and the Arab Spring. At the same time, subsequent political developments in these and other cases have drawn attention to the limitations of thinking about politics or the public sphere in technologically determinist ways.

When the first edition of New Media was published in 2002, the concept of new media was seen as being largely about the internet as accessed from personal computers. The subsequent decade has seen a proliferation of platforms and devices: we now access media in all forms from our phones and other mobile platforms, we see television and the internet increasingly converging, and we see a growing uncoupling of digital media content from delivery platforms. While this has a range of implications for media law and policy, from convergent media policy to copyright reform, governments and policy-makers are struggling to adapt to such seismic shifts from mass communications media to convergent social media.

The internet is no longer primarily a Western-based medium. Two-thirds of the world's internet users are now outside Europe and North America; three-quarters of internet users use languages other than English; and three-quarters of the world's mobile cellular phone subscriptions are in developing nations. It is also apparent that discussions about how to develop new media technologies can no longer be separated from discussions about their cultural and creative content. Discussions of broadband strategies and the knowledge economy need to be increasingly joined with those concerning the creative industries and the creative economy.

Relevance: 100.00%

Abstract:

A new era of cyber warfare has appeared on the horizon with the discovery and detection of Stuxnet. Allegedly planned, designed, and created by the United States and Israel, Stuxnet is considered the first known cyber weapon to attack an adversary state. Stuxnet's discovery drew considerable attention to the outdated and obsolete security of critical infrastructure. It became very apparent that the electronic devices used to control and operate critical infrastructure, such as programmable logic controllers (PLCs) and supervisory control and data acquisition (SCADA) systems, lack even basic security and protection measures. This is partly because, when these devices were designed, exposing them to the Internet was never envisaged. Now that they are exposed, however, these devices and systems are easy prey for adversaries.

Relevance: 100.00%

Abstract:

New advancements in genomics, proteomics, and metabonomics have created significant excitement about the use of these relatively new technologies in drug design, discovery, development, and molecular-targeted therapeutics, by identifying new drug targets and providing better tools for safety and efficacy studies in the preclinical and clinical stages of drug development, as well as in diagnostics. In this chapter, we briefly discuss the application of genomics, proteomics, and metabonomics in drug discovery and development.

Relevance: 100.00%

Abstract:

Mass spectrometric analysis of the low-molecular weight (LMW) range of the serum/plasma proteome is revealing the existence of large numbers of previously unknown peptides and protein fragments predicted to be derived from low-abundance proteins. This raises the question of why such low abundance molecules would be retained at detectable levels in the circulation, instead of being rapidly cleared and excreted. Theoretical models of biomarker production and association with serum carrier proteins have been developed to elucidate the mechanisms governing biomarker half-life in the bloodstream. These models predict that the vast majority of LMW biomarkers exist in association with circulating high molecular mass carrier proteins. Moreover, the total serum/plasma concentration of the biomarker is largely determined by the clearance rate of the carrier protein, not the free-phase biomarker clearance itself. These predictions have been verified experimentally using molecular mass fractionation of human serum before mass spectrometry sequence analysis. These principles have profound implications for biomarker discovery and measurement.
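
A toy kinetic sketch of the carrier-association model, with entirely hypothetical rate constants, illustrates the central prediction: when binding to the carrier is fast and carrier clearance is slow, nearly all of the measurable biomarker resides in the bound pool, so total concentration tracks the carrier's clearance rate rather than the free-phase clearance.

```python
# Sketch: two-pool biomarker kinetics. A biomarker is produced at a
# constant rate, binds a high-abundance carrier protein, and the free
# vs carrier-bound pools clear at very different rates. All constants
# below are hypothetical, chosen only to show the qualitative effect.

dt, t_end = 0.01, 200.0
production = 1.0      # biomarker production rate
k_bind = 5.0          # binding to the carrier protein
k_clear_free = 10.0   # rapid clearance of the free biomarker
k_clear_bound = 0.05  # slow clearance at the carrier's rate

free, bound, t = 0.0, 0.0, 0.0
while t < t_end:  # simple forward-Euler integration
    d_free = production - (k_bind + k_clear_free) * free
    d_bound = k_bind * free - k_clear_bound * bound
    free += d_free * dt
    bound += d_bound * dt
    t += dt

print(f"free={free:.3f}, bound={bound:.3f}, total={free + bound:.3f}")
# Nearly all of the measurable biomarker ends up in the bound pool.
```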