424 results for Blog datasets


Relevance: 10.00%

Abstract:

BACKGROUND: Indigenous patients with acute coronary syndromes represent a high-risk group. There are, however, few contemporary datasets addressing differences in the presentation and management of Indigenous and non-Indigenous patients with chest pain. METHODS: The Heart Protection Project is a multicentre retrospective audit of consecutive medical records from patients presenting with chest pain. Patients were identified as Indigenous or non-Indigenous, and time to presentation and cardiac investigations, as well as rates of cardiac investigations and procedures, were compared between the two groups. RESULTS: Of the 2380 patients included, 199 (8.4%) identified as Indigenous and 2174 (91.6%) as non-Indigenous. Indigenous patients were younger; had higher rates of hyperlipidaemia, diabetes, smoking and known coronary artery disease; had a lower rate of prior PCI; and were significantly less likely to have private health insurance, to be admitted to an interventional facility or to have a cardiologist as primary physician. Following adjustment for differences in baseline characteristics, Indigenous patients had comparable rates of cardiac investigations and comparable delay times to presentation and investigation. CONCLUSIONS: Although the Indigenous population was identified as a high-risk group, in this analysis of selected Australian hospitals there were no significant differences in the treatment or management of Indigenous patients compared with non-Indigenous patients.

Relevance: 10.00%

Abstract:

An elective internship unit that forms part of a work integrated learning program in a business faculty is presented as a case study. In the unit, students complete a minimum of 120 hours of work placement over the course of a 13-week semester. The students are majoring in advertising, marketing, or public relations and are placed in corporations, government agencies, and not-for-profit organisations. To support and scaffold the students' learning in the work environment, a range of classroom and online learning activities form part of the unit. Classroom activities include an introductory workshop to prepare students for placement, an industry panel, and an interview workshop, delivered as three workshops across the semester. Prior to commencing their placement, students complete a suite of online learning modules. The Work Placement Preparation Program assists students in securing a placement and making a successful transition to the work environment. It provides an opportunity for students to source possible work placement sites, prepare competitive applications, develop and rehearse interview skills, deal with workplace issues, and use a student ePortfolio to reflect on their skills and achievements. Students contribute to a reflective blog throughout their placement, with ongoing feedback from academic supervisors. The completion of the online learning modules and the contribution to a reflective blog are assessed as part of the unit. Other assessment tools include an internship plan and learning contract between the student, industry supervisor, and academic supervisor; a job application including responses to selection criteria; and a presentation to peers, academics and industry representatives at a poster session. The paper discusses the development of the internship unit over three years, particularly its learning activities and assessment. The reflection on and refinement of the unit are informed by a pedagogical framework and by the development of processes to best manage placements for all stakeholders. A model of best practice is proposed that can be adapted to a variety of discipline areas.

Relevance: 10.00%

Abstract:

Reflective practice is widely considered in discussions of educational psychology, professional identity, employability of graduates, and generic or graduate capabilities. Critical reflection is essential for providing a bridge between the university and the workplace, and ultimately for preparing work-ready graduates (Patrick et al., 2008). Work integrated learning, particularly through internships and work placements, is viewed as a valuable approach for students to develop skills in reflective practice. Reflective journals are one of the tools often used to encourage and develop student reflection. Shifting the reflective journal to an online interface as a reflective blog presents opportunities for more meaningful, frequent and richer interaction between the key players in a work integrated learning experience. This paper examines the adoption, implementation and refinement of reflective blogs in a work integrated learning unit for business students majoring in the advertising, marketing and public relations disciplines. The reflective blog is discussed as a learning and assessment tool, including the approaches taken to integrate and scaffold the blog as part of the work integrated learning experience. Graduate capabilities were used as cornerstones for students to frame their thinking, experiences and reflection. These capabilities emphasise the value of coherent theoretical and practical knowledge, coupled with critical, creative and analytical thinking, problem-solving skills, self-reliance and resilience. Underlying these graduate capabilities is a focus on assessment for learning matched with assessment of learning. Using specific triggers and prompts as part of the reflective process, and incorporating ongoing feedback from academic supervisors, students moved from descriptive levels of reflection to more meaningful and critical reflection. Students' blogs are analysed to identify key themes, challenges and achievements in the work integrated learning experience. Suggestions for further development and improvement, together with a model of best practice, are proposed.

Relevance: 10.00%

Abstract:

This article reports on a research program that has developed new methodologies for mapping the Australian blogosphere and tracking how information is disseminated across it. The authors improve on conventional web crawling methodologies in a number of significant ways. First, they track blogging activity as it occurs, by scraping new blog posts when such posts are announced through Really Simple Syndication (RSS) feeds. Second, they use custom-made tools that distinguish between different types of content and thus allow only the salient discursive content provided by bloggers to be analyzed. Finally, they are able to examine these better quality data using both link network mapping and textual analysis tools, to produce both cumulative longer term maps of interlinkages and themes, and specific shorter term snapshots of current activity that indicate clusters of heavy interlinkage and highlight their key themes. In this article, the authors discuss findings from a yearlong observation of the Australian political blogosphere, suggesting that Australian political bloggers consistently address current affairs, but interpret them differently from mainstream news outlets. The article also discusses the next stage of the project, which extends this approach to an examination of other social networks used by Australians, including Twitter, YouTube, and Flickr. This adaptation of the methodology moves away from narrow models of political communication, and toward an investigation of everyday and popular communication, providing a more inclusive and detailed picture of the Australian networked public sphere.
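
A minimal sketch of the RSS-based tracking step described above, using Python's feedparser library and placeholder feed URLs; the project's actual scraping toolchain and blog corpus are not reproduced here.

```python
import time

import feedparser  # third-party: pip install feedparser

# Hypothetical feed URLs; the real corpus of Australian political blogs is not listed here.
FEEDS = [
    "https://example-political-blog.example/rss",
    "https://another-blog.example/feed",
]

seen_ids = set()  # identifiers of posts already scraped in earlier polls


def poll_feeds():
    """Fetch each RSS feed and yield posts that have not been seen before."""
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            uid = entry.get("id") or entry.get("link")
            if not uid or uid in seen_ids:
                continue
            seen_ids.add(uid)
            yield {
                "blog": feed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
                # the announced post body or excerpt, i.e. the discursive content
                "content": entry.get("summary", ""),
            }


if __name__ == "__main__":
    while True:
        for post in poll_feeds():
            print(post["published"], post["blog"], "-", post["title"])
        time.sleep(15 * 60)  # poll again in 15 minutes
```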

Relevance: 10.00%

Abstract:

The Northern Hemisphere slumbers, dreaming that – one day – it is going to split up its empire, before the seas boil and the towers collapse. During this same dark night, Australia is wide awake, chirpy as a Canadian, strapping as a Bondi blonde, having an election...

Relevance: 10.00%

Abstract:

In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming "librarian 2.0." In the last few years, discussion and debate about the "core competencies" needed by librarian 2.0 have appeared in the "biblioblogosphere" (blogs written by LIS professionals). However, beyond these informal blog discussions, few systematic and empirically based studies have taken place. This article will discuss a research project that fills this gap. Funded by the Australian Learning and Teaching Council, the project identifies the key skills, knowledge, and attributes required by "librarian 2.0." Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as critical to "librarian 2.0": technology, communication, teamwork, user focus, business savvy, evidence-based practice, learning and education, and personal traits. This article will provide a detailed discussion of each of these themes. The study's findings also suggest that "librarian 2.0" is a state of mind, and that the Australian LIS profession is undergoing a significant shift in "attitude."

Relevance: 10.00%

Abstract:

There are at least four key challenges in the online news environment that computational journalism may address. First, news providers operate in a rapidly evolving environment, and larger businesses are typically slower to adapt to market innovations. Second, news consumption patterns have changed and news providers need to find new ways to capture and retain digital users. Third, declining financial performance has led to cost cuts in mass market newspapers. Finally, investigative reporting is typically slow, high cost and often tedious, yet it is valuable to the reputation of a news provider. Computational journalism involves the application of software and technologies to the activities of journalism, and it draws on the fields of computer science, social science and communications. New technologies may enhance the traditional aims of journalism, or may require "a new breed of people who are midway between technologists and journalists" (Irfan Essa in Mecklin 2009: 3). Historically referred to as 'computer-assisted reporting', the use of software in online reportage is increasingly valuable due to three factors: larger datasets are becoming publicly available; software is becoming more sophisticated and ubiquitous; and the Australian digital economy is developing. This paper introduces the key elements of computational journalism: why it is needed, what it involves, its benefits and challenges, and a case study with examples. When correctly used, computational techniques can quickly provide a solid factual basis for original investigative journalism and may increase interaction with readers. It is a major opportunity to enhance the delivery of original investigative journalism, which ultimately may attract and retain readers online.

Relevance: 10.00%

Abstract:

Being in paid employment is socially valued, and is linked to health, financial security and time use. Issues arising from a lack of occupational choice and control, and from diminished role partnerships, are particularly problematic in the lives of people with an intellectual disability. Informal support networks are shown to influence work opportunities for people without disabilities, but their impact on the work experiences of people with disability has not been thoroughly explored. The experience of 'work' and preparation for work were explored with a group of four people with an intellectual disability (the participants) and the key members of their informal support networks (network members) in New South Wales, Australia. Network members and participants were interviewed, and participant observations of work and other activities were undertaken. Data analysis included open, conceptual and thematic coding. Data analysis software assisted in managing the large datasets across multiple team members. The insight and actions of network members created and sustained employment and support opportunities that effectively matched the needs and interests of the participants. Recommendations for future research are outlined.

Relevance: 10.00%

Abstract:

At QUT, research data refers to information that is generated or collected to be used as a primary source in the production of original research results, and which would be required to validate or replicate research findings (Callan, De Vine, & Baker, 2010). Making publicly funded research data discoverable by the broader research community and the public is a key aim of the Australian National Data Service (ANDS). Queensland University of Technology (QUT) has been innovating in this space by undertaking mutually dependent technical and content (metadata) focused projects funded by ANDS. Research Data Librarians identified and described datasets generated from Category 1 funded research at QUT by interviewing researchers, collecting metadata and fashioning metadata records for upload to the Australian Research Data Commons (ARDC) and exposure through the Research Data Australia (RDA) interface. In parallel to this project, a Research Data Management Service and a Metadata Hub project were undertaken by QUT High Performance Computing & Research Support specialists. These projects will collectively store and aggregate QUT's metadata and research data from multiple repositories and administration systems, and will contribute metadata directly to RDA via an OAI-PMH-compliant feed. The pioneering nature of the work resulted in a collaborative project dynamic where good data management practices and the discoverability and sharing of research data were the shared drivers for all activity. Each project's development and progress were dependent on feedback from the other: the metadata structure evolved in tandem with the development of the repository, and the development of the repository interface responded to the needs of the data interview process. The project environment was one of bottom-up collaborative approaches to process and system development, matched by top-down strategic alliances crossing organisational boundaries in order to provide the deliverables required by ANDS. This paper showcases the work undertaken at QUT, focusing on the Seeding the Commons project as a case study, and illustrates how the data management projects are interconnected. It describes the processes and systems being established to make QUT research data more visible, and the nature of the collaborations between organisational areas required to achieve this. The paper concludes with the Seeding the Commons project outcomes and the contribution this project made to getting more research data 'out there'.
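
The OAI-PMH contribution mentioned above follows a standard harvesting protocol. The sketch below shows a generic ListRecords harvest in Python (requests plus ElementTree) against a placeholder endpoint; it illustrates the mechanism only, not QUT's actual feed, endpoint URL or metadata schema.

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder endpoint; the real OAI-PMH base URL is not reproduced here.
BASE_URL = "https://repository.example.edu/oai"

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"


def harvest(metadata_prefix="oai_dc"):
    """Yield (identifier, title) pairs from an OAI-PMH ListRecords harvest,
    following resumption tokens until the repository reports no more pages."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        response = requests.get(BASE_URL, params=params, timeout=30)
        root = ET.fromstring(response.content)
        for record in root.iter(OAI + "record"):
            header = record.find(OAI + "header")
            identifier = header.findtext(OAI + "identifier")
            title = record.findtext(".//" + DC + "title")
            yield identifier, title
        # A non-empty resumptionToken means more records remain to be fetched.
        token = root.findtext(".//" + OAI + "resumptionToken")
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}


if __name__ == "__main__":
    for identifier, title in harvest():
        print(identifier, title)
```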

Relevance: 10.00%

Abstract:

In automatic facial expression detection, very accurate registration is desired; this can be achieved with a deformable model approach in which a dense mesh of 60-70 points on the face is used, such as an active appearance model (AAM). However, for applications where manually labeling frames is prohibitive, AAMs do not work well, as they do not generalize well to unseen subjects. As such, a coarser approach is taken for person-independent facial expression detection, where just a few key features (such as the face and eyes) are tracked using a Viola-Jones type approach. The tracked image is normally post-processed to encode shift and illumination invariance using a linear bank of filters. Recently, it was shown that this preprocessing step is of no benefit once close to ideal registration has been obtained. In this paper, we present a system based on the Constrained Local Model (CLM), a generic, person-independent face alignment algorithm that achieves high accuracy. We compare these results against LBP feature extraction on the CK+ and GEMEP datasets.
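
As a concrete illustration of the coarse, person-independent pipeline and the LBP features mentioned above, the sketch below pairs OpenCV's Viola-Jones face detector with a uniform LBP histogram from scikit-image. It is a simplified stand-in, not the CLM system evaluated in the paper, and the image path is hypothetical.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# Haar cascade shipped with OpenCV; the path is resolved from the installed package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform LBP histogram of a face crop, a common shift-robust appearance feature."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist


def extract_features(image_path):
    """Detect the largest face (Viola-Jones) and return its LBP feature vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    face = cv2.resize(gray[y:y + h, x:x + w], (96, 96))  # crude registration by resizing
    return lbp_histogram(face)


# Example (hypothetical frame from an expression dataset):
# features = extract_features("frame_0001.png")
```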

Relevance: 10.00%

Abstract:

Ross River virus (RRV) is a mosquito-borne member of the genus Alphavirus that causes epidemic polyarthritis in humans, costing the Australian health system at least US$10 million annually. Recent progress in RRV vaccine development requires accurate assessment of RRV genetic diversity and evolution, particularly as they may affect the utility of future vaccination. In this study, we provide novel RRV genome sequences and investigate the evolutionary dynamics of RRV from time-structured E2 gene datasets. Our analysis indicates that, although RRV evolves at a similar rate to other alphaviruses (mean evolutionary rate of approximately 8 × 10⁻⁴ nucleotide substitutions per site per year), the relative genetic diversity of RRV has been continuously low through time, possibly as a result of purifying selection imposed by replication in a wide range of natural host and vector species. Together, these findings suggest that vaccination against RRV is unlikely to result in the rapid antigenic evolution that could compromise the future efficacy of current RRV vaccines.

Relevance: 10.00%

Abstract:

Modern statistical models and computational methods can now incorporate uncertainty in the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo (MCMC) Gibbs sampling via the freely available software WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply 'plugged in', as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
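
The difference between plugging in fixed constants and propagating parameter uncertainty can be illustrated with a toy exponential dose-response model. The sketch below uses NumPy, with an assumed Gamma distribution standing in for a fitted posterior; it is not the WinBUGS model or the datasets used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000
dose = 50.0  # assumed ingested dose of organisms (illustrative only)


def p_infection(r, dose):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)


# (a) Conventional QMRA: plug in a fixed literature estimate of r.
r_fixed = 0.005
p_fixed = p_infection(r_fixed, dose)

# (b) Bayesian-style QMRA: propagate uncertainty in r by drawing from a posterior.
# Here the posterior is mimicked with a Gamma distribution centred near r_fixed;
# in the study this would come from MCMC (WinBUGS) fitted to dose-response data.
r_draws = rng.gamma(shape=4.0, scale=r_fixed / 4.0, size=n_sims)
p_uncertain = p_infection(r_draws, dose)

print(f"fixed-parameter estimate:   {p_fixed:.3f}")
print(f"with parameter uncertainty: median {np.median(p_uncertain):.3f}, "
      f"95% interval ({np.quantile(p_uncertain, 0.025):.3f}, "
      f"{np.quantile(p_uncertain, 0.975):.3f})")
```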

Relevance: 10.00%

Abstract:

Anthropometric assessment is a simple, safe, and cost-efficient method to examine the health status of individuals. The Japanese obesity classification based on the sum of two skinfolds (Σ2SF) was proposed nearly 40 years ago; its applicability to Japanese people living today is therefore unknown. The current study aimed to determine Σ2SF cut-off values that correspond to percent body fat (%BF) and BMI values using two datasets from young Japanese adults (233 males and 139 females). Using regression analysis, Σ2SF and height-corrected Σ2SF (HtΣ2SF) values corresponding to %BF of 20, 25, and 30% for males and 30, 35, and 40% for females were determined. In addition, cut-off values of both Σ2SF and HtΣ2SF corresponding to BMI values of 23 kg/m², 25 kg/m² and 30 kg/m² were determined. In comparison with the original Σ2SF values, the proposed values are smaller by up to about 10 mm. The proposed values improve sensitivity from about 25% to above 90% for identifying individuals with ≥20% body fat in males and ≥30% body fat in females, while maintaining high specificity of about 95% in both genders. The results indicate that the original Σ2SF cut-off values for screening obese individuals cannot be applied to young Japanese adults living today and require modification. Application of the proposed values may assist screening in the clinical setting.
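
The cut-off derivation described above amounts to regressing %BF on Σ2SF, inverting the fit at a chosen %BF threshold, and then checking sensitivity and specificity. The sketch below illustrates that workflow on synthetic stand-in data with an assumed linear relation, since the original young-adult datasets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: sum of two skinfolds (mm) and percent body fat (%BF).
s2sf = rng.uniform(10, 60, size=200)
pbf = 5 + 0.45 * s2sf + rng.normal(0, 2.5, size=200)  # assumed linear relation

# Fit %BF = a * Σ2SF + b and invert it to find the Σ2SF value at a %BF threshold.
a, b = np.polyfit(s2sf, pbf, deg=1)
pbf_threshold = 20.0                      # e.g. 20% body fat for males
s2sf_cutoff = (pbf_threshold - b) / a
print(f"Σ2SF cut-off for {pbf_threshold}% BF: {s2sf_cutoff:.1f} mm")

# Sensitivity and specificity of the derived cut-off for flagging %BF >= threshold.
truly_high = pbf >= pbf_threshold
flagged = s2sf >= s2sf_cutoff
sensitivity = np.mean(flagged[truly_high])
specificity = np.mean(~flagged[~truly_high])
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```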

Relevance: 10.00%

Abstract:

Process models in organizational collections are typically modeled by the same team and using the same conventions. As such, these models share many characteristic features, such as size range and the type and frequency of errors. In most cases only small samples of these collections are available, due, for example, to the sensitive information they contain. Because of their size, these samples may not provide an accurate representation of the characteristics of the originating collection. This paper deals with the problem of constructing collections of process models, in the form of Petri nets, from small samples of a collection, for accurate estimation of the characteristics of that collection. Given a small sample of process models drawn from a real-life collection, we mine a set of generation parameters that we use to generate arbitrarily large collections that feature the same characteristics as the original collection. In this way we can estimate the characteristics of the original collection from the generated collections. We extensively evaluate the quality of our technique on various sample datasets drawn from both research and industry.
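
A much-simplified illustration of the idea: estimate generation parameters from a small sample and use them to generate an arbitrarily large synthetic collection whose characteristics can then be measured. The sketch below works only with model-size counts under a multivariate-normal assumption, not with actual Petri nets or the paper's generation technique.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small sample: (places, transitions) counts per process model, as stand-ins for a
# real industrial collection that cannot be shared.
sample = np.array([[12, 10], [20, 18], [15, 14], [9, 8], [25, 22]])

# "Mine" simple generation parameters: mean and covariance of the size features.
mu = sample.mean(axis=0)
cov = np.cov(sample, rowvar=False)


def generate_collection(n_models):
    """Generate size characteristics for an arbitrarily large synthetic collection
    that mimics the sample's distribution (multivariate-normal assumption)."""
    sizes = rng.multivariate_normal(mu, cov, size=n_models)
    return np.maximum(np.rint(sizes), 1).astype(int)  # sizes must be positive integers


synthetic = generate_collection(10_000)
print("estimated mean places/transitions:", synthetic.mean(axis=0))
```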

Relevance: 10.00%

Abstract:

Detection of a Region of Interest (ROI) in a video leads to more efficient utilization of bandwidth, because any ROIs in a given frame can be encoded at higher quality than the rest of that frame with little or no degradation of quality as perceived by viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground truth saliency maps to measure the effectiveness of ROI detection by considering the role of user experience during the labelling process of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors confirm the effectiveness of this approach to validating visual saliency in video. The relevant datasets associated with the experiments are also provided.
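
Once user-derived ground-truth maps are available, a saliency detector's output is typically scored against them pixel-wise. The sketch below shows one common scoring step (ROC-AUC) with NumPy and scikit-learn on placeholder arrays; it does not use the paper's datasets or detectors.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def saliency_auc(ground_truth, predicted):
    """Score a predicted saliency map against a binary ground-truth ROI mask.

    ground_truth: 2-D array of 0/1 labels from the user-labelling process.
    predicted:    2-D array of detector scores in [0, 1] for the same frame.
    """
    return roc_auc_score(ground_truth.ravel(), predicted.ravel())


# Placeholder frame-sized maps, purely for illustration.
rng = np.random.default_rng(7)
gt = np.zeros((90, 160), dtype=int)
gt[30:60, 60:110] = 1                           # hypothetical user-labelled ROI
pred = 0.4 * gt + 0.6 * rng.random((90, 160))   # a noisy detector that roughly finds it
print(f"ROC-AUC: {saliency_auc(gt, pred):.3f}")
```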