360 results for Popular literature
Abstract:
National Housing Relics and Scenic Sites (NHRSSs) in China are the equivalent of National Parks in the West, but have contrasting features and broader roles than their Western counterparts. By reviewing and analysing more than 370 academic sources, this paper identifies six major issue clusters and future challenges that will influence the management of NHRSSs over time. It also provides a number of cases to illustrate the particular features of NHRSSs. Identifying the hot issues and important challenges in Chinese NHRSSs will provide valuable insights into priorities now being discussed in highly populated areas of the world.
Abstract:
Aim: To identify the reasons why nurses continue migrating across international borders. Background: International nurse recruitment and migration have been increasing in the last decade, and recent trends show an increase in the movement of nurses between developing and developed countries, resulting in a worldwide shortage of nurses. Methods: A manual and electronic database literature search was conducted from January 2004 to May 2010. Qualitative content analysis was completed for the final 17 articles that satisfied the inclusion criteria. Results: Motivators of nurse migration were linked to financial, professional, political, social and personal factors. Although economic factors were the most commonly reported, they were not the only reason for migration; this was especially evident among nurses migrating between developed countries. Conclusion: Nurses migrate for a wide variety of reasons as they respond to push and pull factors. Implications for nursing management: It is important for nurse managers in source countries to advocate for incentives to retain nurses. In recipient countries, the number of international nurses continues to increase, implying the need for more innovative ways to mentor and orientate these nurses.
Abstract:
With the rise in attacks and attempted attacks on marine-based critical infrastructure, maritime security is an issue of increasing importance worldwide. However, there are three significant shortfalls in efforts to overcome potential threats to maritime security: the need for greater understanding of whether current standards of best practice are truly successful in combating and reducing the risks of terrorism and other security issues; the absence of a collective maritime security best practice framework; and the need for improved access to maritime security-specific graduate and postgraduate (long) courses. This paper presents an overview of existing international, regional and national standards of best practice and shows that literature concerning the measurement and/or success of such standards is virtually non-existent. In addition, despite the importance of maritime workers to ensuring the safety of marine-based critical infrastructure, a similar review of available Australian education courses shows a considerable lack of maritime security-specific courses other than short courses that cover only basic security matters. We argue that the absence of an Australian best practice framework informed by evaluation of current policy responses – particularly in the post-9/11 environment – leaves Australia vulnerable to maritime security threats. As this paper shows, despite the security measures put in place post-9/11, there is still considerable work to be done to ensure Australia is equipped to overcome the threats posed to maritime security.
Abstract:
This publication is the first in a series of scholarly reports on research-based practice related to the First Year Experience in Higher Education. This report synthesises evidence about practice-based initiatives and pragmatic approaches in Aotearoa (New Zealand) and Australia that aim to enhance the experience of commencing students in the higher education sector. Trends in policies, programs and practices ... examines the first year experience literature from 2000-2010. It acknowledges the uniqueness of the Australasian socio-political context and its influence on the interests and output of researchers. The review surveyed almost 400 empirical reports and conceptual discussions produced over the decade that dealt with the stakeholders, institutions and the higher education sector in Australasia. The literature is examined through two theoretical constructs or “lenses”: first, a set of first year curriculum design principles and second, the generational approach to describing the maturation of initiatives. These outcomes and suggested directions for further research provide the challenges and the opportunities for FYE adherents, both scholars and practitioners, to grapple with in the next decade.
Abstract:
Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools for dealing with this type of information and has therefore played an important role in analyses of biological data. In this thesis, we aim to develop new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine FCM with the empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, using silhouette values to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have tighter associations with their clusters. Furthermore, we found that estimation of the fuzzy parameter m, which is a difficult step, can to some extent be avoided by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., patients and normal people.
We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach is applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked diabetes genes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature. An additional gene is further identified. Among the ten best-ranked genes identified in lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines fuzzy clustering and interaction probability to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
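The core FCM step underlying this abstract can be sketched in a few lines. Below is a minimal NumPy implementation for illustration only: the fuzzifier m (set to the common default of 2), the tolerance and the synthetic data are assumptions, and the EMD denoising stage of FCM-EMD is omitted.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy C-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        # weighted centre update
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances from every sample to every centre (n x c)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Two well-separated synthetic "expression profiles"
X = np.vstack([np.zeros((20, 2)), 5 * np.ones((20, 2))])
X += 0.1 * np.random.default_rng(1).standard_normal(X.shape)
centres, U = fuzzy_c_means(X, c=2)
hard_labels = U.argmax(axis=1)               # hard assignment if needed
```

The soft membership matrix U is what distinguishes FCM from k-means: a gene near a cluster boundary receives intermediate memberships rather than a forced hard label.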
Abstract:
Organizations seeking improvements in their performance are increasingly exploring alternative models and approaches for providing support services, one such approach being Shared Services. Because of the possible consequential impact of Shared Services on organizations, and given that information systems (IS) is both an enabler of Shared Services (for other functional areas) and a promising area for Shared Services application, Shared Services is an important area for research in the IS field. Though Shared Services has been extensively adopted on the promise of economies of scale and scope, the factors behind Shared Services success (or failure) have received little research attention. This paper reports the distillation of success and failure factors of Shared Services from an IS perspective. Employing NVivo and content analysis of 158 selected articles, nine key success factors and five failure factors are identified, suggesting important implications for practice and further research.
Abstract:
The relationship between weather and mortality has been observed for centuries. Recently, studies of temperature-related mortality have become a popular topic as climate change continues. Most previous studies found that exposure to hot or cold temperatures affects mortality. This study aims to address three research questions: 1. What is the overall effect of daily mean temperature variation on elderly mortality in the published literature, using a meta-analysis approach? 2. Does the association between temperature and mortality differ with age, sex, or socio-economic status in Brisbane? 3. How does the magnitude of the lag effects of daily mean temperature on mortality vary by age and cause-of-death groups in Brisbane? In the meta-analysis, there was a 1-2% increase in all-cause mortality for a 1°C decrease during cold temperature intervals and a 2-5% increase for a 1°C increment during hot temperature intervals among the elderly. Lags of up to 9 days in exposure to cold temperature intervals were statistically significantly associated with all-cause mortality, but no significant lag effects were observed for hot temperature intervals. In Brisbane, the harmful effect of high temperature (over 24°C) on mortality appeared to be greater among the elderly than among other age groups. The effect estimate among women was greater than among men. However, no evidence was found that socio-economic status modified the temperature-mortality relationship. The results of this research also show longer lag effects on cold days and shorter lag effects on hot days. For 3-day hot effects associated with a 1°C increase above the threshold, the highest percent increase in mortality occurred among people aged 85 years or over (5.4% (95% CI: 1.4%, 9.5%)) compared with all age groups (3.2% (95% CI: 0.9%, 5.6%)). The effect estimate for cardiovascular deaths was slightly higher than that for all-cause mortality.
For overall 21-day cold effects associated with a 1°C decrease below the threshold, the percent increases in mortality for people aged 85 years or over, and for cardiovascular diseases, were 3.9% (95% CI: 1.9%, 6.0%) and 3.4% (95% CI: 0.9%, 6.0%) respectively, compared with all age groups (2.0% (95% CI: 0.7%, 3.3%)). Little research of this kind has been conducted in the Southern Hemisphere. This PhD research may contribute to the quantitative assessment of the overall impact, effect modification and lag effects of temperature variation on mortality in Australia, and the findings may provide useful information for the development and implementation of public health policies to reduce and prevent temperature-related health problems.
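Pooled percent increases of the kind reported in this meta-analysis are typically obtained by inverse-variance weighting of study-level estimates. The sketch below shows the fixed-effect version under the assumption of symmetric 95% confidence intervals; the study numbers in the usage example are hypothetical, not drawn from the review.

```python
import numpy as np

def pool_fixed_effect(estimates, ci_low, ci_high):
    """Inverse-variance (fixed-effect) pooling of study estimates whose
    95% CIs are assumed symmetric on the reported scale."""
    est = np.asarray(estimates, dtype=float)
    # recover standard errors from the CI half-width: (high - low) / (2 * 1.96)
    se = (np.asarray(ci_high, dtype=float) - np.asarray(ci_low, dtype=float)) / (2 * 1.96)
    w = 1.0 / se ** 2                      # weight = inverse variance
    pooled = (w * est).sum() / w.sum()
    pooled_se = (1.0 / w.sum()) ** 0.5
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical study-level % increases in mortality per 1 degree C rise
pooled, ci = pool_fixed_effect([2.0, 4.0, 3.0], [1.0, 2.5, 1.0], [3.0, 5.5, 5.0])
```

A random-effects model (e.g. DerSimonian-Laird) would additionally inflate the variances by a between-study heterogeneity term, which matters when study populations differ as much as they do in temperature-mortality research.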
Abstract:
A broad range of positions is articulated in the academic literature around the relationship between recordings and live performance. Auslander (2008) argues that “live performance ceased long ago to be the primary experience of popular music, with the result that most live performances of popular music now seek to replicate the music on the recording”. Elliott (1995) suggests that “hit songs are often conceived and produced as unambiguous and meticulously recorded performances that their originators often duplicate exactly in live performances”. Wurtzler (1992) argues that “as socially and historically produced, the categories of the live and the recorded are defined in a mutually exclusive relationship, in that the notion of the live is premised on the absence of recording and the defining fact of the recorded is the absence of the live”. Yet many artists perform in ways that fundamentally challenge such positions. Whilst it is common practice for musicians across many musical genres to compose and construct their musical works in the studio such that the recording is, in Auslander’s words, the ‘original performance’, the live version is not simply an attempt to replicate the recorded version. Indeed, in some cases such replication is impossible. There are well-known historical examples. Queen, for example, never performed the a cappella sections of Bohemian Rhapsody because they were too complex to perform live. A 1966 recording of the Beach Boys studio creation Good Vibrations shows them struggling through the song prior to its release. This paper argues that as technology develops, the lines between the recording studio and live performance shift and become more blurred, and new models for performance emerge. In a 2010 live performance given by Grammy Award-winning artist Imogen Heap in New York, the artist undertakes a live, improvised construction of a piece as a performative act.
She invites the audience to choose the key for the track and proceeds to layer up the various parts in front of the audience as a live performance act. Her recording process is thus revealed on stage in real time, and she performs a process that would once have been confined to the recording studio. So how do artists bring studio production processes into the live context? What aspects of studio production are now performable, and what consistent models can be identified among the various approaches now seen? This paper will present an overview of approaches to performative realisations of studio-produced tracks and will illuminate some emerging relationships between recorded music and performance across a range of contexts.
Abstract:
In this paper, we describe the main processes and operations in mining industries and present a comprehensive survey of the operations research methodologies that have been applied over the last several decades. The literature review is classified into four main categories: mine design, mine production, mine transportation and mine evaluation. Mine design models are further separated according to the two main mining methods: open-pit and underground. Mine production models are subcategorised into two groups: ore mining and coal mining. Mine transportation models are further partitioned according to fleet management, truck haulage and train scheduling. Mine evaluation models are further subdivided into four clusters in terms of mining method selection, quality control, financial risks and environmental protection. The main characteristics of four Australian commercial mining software packages are addressed and compared. This paper bridges gaps in the literature and motivates researchers to develop more applicable, realistic and comprehensive operations research models and solution techniques that are directly linked with mining industries.
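To give a flavour of the operations research models surveyed here, the sketch below solves a deliberately simplified mine production planning problem as a linear program: choosing daily tonnages of two ore types to maximise profit subject to haulage and processing capacity. All numbers are hypothetical, and SciPy is assumed to be available; real mine planning models add block precedence, grade blending and multi-period constraints.

```python
from scipy.optimize import linprog

# Hypothetical data: two ore types worth $30/t and $42/t after costs.
profit = [-30.0, -42.0]            # linprog minimises, so profits are negated
A_ub = [[1.0, 1.0],                # combined tonnage limited by the truck fleet
        [0.02, 0.05]]              # plant processing hours required per tonne
b_ub = [1000.0,                    # haulage capacity: 1000 t/day
        40.0]                      # processing capacity: 40 plant-hours/day
res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
tonnages = res.x                   # optimal tonnes/day of each ore type
daily_profit = -res.fun            # undo the sign flip
```

In this toy instance the optimum mixes both ore types (roughly 333 t/day and 667 t/day), because the richer ore consumes more of the scarce plant hours per tonne.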
Abstract:
Sound Thinking provides techniques and approaches to critically listen, think, talk and write about music you hear or make. It provides tips on making music and it encourages regular and deep thinking about music activities, which helps build a musical dialog that leads to deeper understanding.
Abstract:
Finite Element Modeling (FEM) has become a vital tool in automotive design and development processes. FEM of the human body is a technique capable of estimating parameters that are difficult to measure in experimental studies, with the human body segments modeled as complex and dynamic entities. Several studies have been dedicated to attaining close-to-real FEMs of the human body (Pankoke and Siefert 2007; Amann, Huschenbeth et al. 2009; ESI 2010). The aim of this paper is to identify and appraise state-of-the-art models of the human body which incorporate detailed pelvis and/or lower extremity models. Six databases and search engines were used to obtain literature, and the search was limited to studies published in English since 2000. The initial search identified 636 pelvis-related papers, 834 buttocks-related papers, 505 thigh-related papers, 927 femur-related papers, 2039 knee-related papers, 655 shank-related papers, 292 tibia-related papers, 110 fibula-related papers, 644 ankle-related papers and 5660 foot-related papers. A refined search returned 100 pelvis-related papers, 45 buttocks-related papers, 65 thigh-related papers, 162 femur-related papers, 195 knee-related papers, 37 shank-related papers, 80 tibia-related papers, 30 fibula-related papers, 102 ankle-related papers and 246 foot-related papers. The refined literature list was further restricted by appraisal against modified LOW appraisal criteria. Studies with unclear methodologies, with a focus on populations with pathology, or with sport-related dynamic motion modeling were excluded.
The final literature list included fifteen models, and each was assessed against the percentile the model represents, the gender the model was based on, the human body segment or segments included in the model, the sample size used to develop the model, the source of the geometric/anthropometric values used to develop the model, the posture the model represents and the finite element solver used for the model. The results of this literature review provide an indication of bias in the available models towards 50th-percentile male modeling, with a notable concentration on the pelvis, femur and buttocks segments.
Abstract:
This dissertation examines the compliance and performance of a large sample of faith-based (religious) ethical funds – Shari'ah-compliant equity funds (SEFs), which may be viewed as a form of ethical investing. SEFs screen their investments for compliance with Islamic law, under which riba (conventional interest), maysir (gambling), gharar (excessive uncertainty) and non-halal (non-ethical) products are prohibited. Using a set of stringent Shari'ah screens similar to those of MSCI Islamic, we first examine the extent to which SEFs comply with Shari'ah law. Results show that only about 27% of the equities held by SEFs are Shari'ah-compliant. While most of the fund holdings pass the business screens, only about 42% pass the total-debt-to-total-assets ratio screen. This finding suggests that, in order to avoid a significant reduction in the investment opportunity set, Shari'ah principles are compromised, with SEFs adopting lax screening rules so as to maintain financial performance. While younger funds, funds that charge higher fees, and funds domiciled in predominantly Muslim countries are more Shari'ah-compliant, we find little evidence of a positive relationship between fund disclosure of the Shari'ah compliance framework and Shari'ah compliance. Clearly, Shari'ah compliance remains a major challenge for fund managers, and SEF investors should be aware of Shari'ah-compliance risk, since fund managers do not always fulfill their fiduciary obligations as promised in their prospectuses. Employing a matched-firm approach for a survivorship-free sample of 387 SEFs, we then examine an issue that has been heavily debated in the literature: does ethical screening reduce investment performance? Results show that it does, but only by an average of 0.04% per month when benchmarked against matched conventional funds – a relatively small price to pay for religious faith.
Cross-sectional regressions show an inverse relationship between Shari'ah compliance and fund performance: every one-percentage-point increase in total compliance decreases fund performance by 0.01% per month. However, compliance fails to explain differences in performance between SEFs and matched funds. Although SEFs do not generally perform better during crisis periods, further analysis shows evidence of better performance relative to conventional funds only during the recent Global Financial Crisis; the latter is consistent with popular media claims.
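The screening logic behind figures like the 42% debt-screen pass rate can be illustrated with a small sketch. The thresholds and the list of prohibited activities below are simplified assumptions loosely modelled on MSCI Islamic-style screens (which cap total debt to total assets at roughly one third); they are not the dissertation's exact rules.

```python
def passes_shariah_screens(business_activity, total_debt, total_assets,
                           max_debt_ratio=1.0 / 3.0):
    """Return True if a holding passes simplified Shari'ah screens:
    a business-activity screen plus a total-debt/total-assets cap."""
    prohibited = {"conventional banking", "gambling", "alcohol",
                  "pork products", "tobacco"}
    if business_activity.lower() in prohibited:
        return False                       # fails the business screen
    return total_debt / total_assets <= max_debt_ratio

# Compliance rate of a hypothetical portfolio of (activity, debt, assets) holdings
holdings = [("software", 20.0, 100.0),     # passes both screens
            ("gambling", 0.0, 100.0),      # fails the business screen
            ("retail", 45.0, 100.0)]       # fails the debt-ratio screen
rate = sum(passes_shariah_screens(*h) for h in holdings) / len(holdings)
```

Screens of this kind are applied holding by holding, so a fund-level compliance figure such as the 27% reported above is simply this pass rate computed over the fund's portfolio.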
Abstract:
Diaspora philanthropy is a popular buzzword; however, what the term encompasses, and how institutionalised the phenomenon is, remains an open question. There are as many views and definitions of diaspora philanthropy as there are diaspora communities involved. It is often seen as a potential source of funding for geographic regions, religions or ethnic communities globally, but identifying a framework for diaspora philanthropy is difficult. Unlike the literature on international philanthropy (including ethnic philanthropy and cross-border philanthropy), which has been a predominant topic of interest in recent years, the literature on diaspora philanthropy is scarce. There is a variety of opinion on what should and should not be considered under this rubric, which makes it impossible to provide a definitive description of diaspora philanthropy that suits everyone. The term “diaspora” has different meanings for different individuals and groups of people. Some see it as relating only to exiled and ejected communities of people; others use the term to refer to individuals or groups who are living in a new homeland, whether by choice or circumstance. This paper defines “diaspora” in terms of an individual or group which identifies with an original homeland (either their own or that of a family member, such as a grandparent) and is in the diaspora whether through choice or a circumstance beyond their control. This identification with a homeland differentiates this study of diaspora philanthropy from those that define it as affiliation with a religious community rather than a specific homeland.
Abstract:
Applying ice or other forms of topical cooling is a popular method of treating sports injuries, and it is commonplace for athletes to return to competitive activity shortly or immediately after the application of a cold treatment. In this article, we examine the effect of local tissue cooling on outcomes relating to functional performance and discuss their relevance to the sporting environment. A computerised literature search, citation tracking and hand search were performed up to April 2011. Eligible studies were trials involving healthy human participants that described the effects of cooling on outcomes relating to functional performance. Two reviewers independently assessed the validity of included trials and calculated effect sizes. Thirty-five trials met the inclusion criteria; all had a high risk of bias. The mean sample size was 19. Meta-analyses were not undertaken due to clinical heterogeneity. The majority of studies used cooling durations >20 minutes. Strength (peak torque/force) was reported by 25 studies, with approximately 75% recording a decrease in strength immediately following cooling. There was evidence from six studies that cooling adversely affected speed, power and agility-based running tasks; two studies found this was negated by a short rewarming period. There was conflicting evidence on the effect of cooling on isolated muscular endurance. A small number of studies found that cooling decreased upper limb dexterity and accuracy. The current evidence base suggests that athletes will probably be at a performance disadvantage if they return to activity immediately after cooling. This is based on cooling for longer than 20 minutes, which may exceed the durations employed in some sporting environments. In addition, some of the reported changes were clinically small and may only be relevant in elite sport.
Until better evidence is available, practitioners should use short cooling applications and/or undertake a progressive warm-up prior to returning to play.
Abstract:
Some years ago I opened the 1998 edition of the Johns Hopkins Guide to Literary Theory and Criticism and turned to the entry ‘Australian Theory and Criticism.’ This read: ‘Australia has produced no single critic or theorist of international stature, nor has it developed a distinct school of criticism or theory.’ Postcolonial content was listed under a section called Postcolonial Cultural Studies and there one found key names including Tiffin, Ashcroft, Stephen Slemon, and During...