916 results for TIME TRENDS
Abstract:
Purpose – The paper aims to describe a workforce-planning model developed in-house in an Australian university library that is based on rigorous environmental scanning of an institution, the profession and the sector. Design/methodology/approach – The paper uses a case study that describes the stages of the planning process undertaken to develop the Library’s Workforce Plan and the documentation produced. Findings – While it has been found that the process has had successful and productive outcomes, workforce planning is an ongoing process. To remain effective, the workforce plan needs to be reviewed annually in the context of the library’s overall planning program. This is imperative if the plan is to remain current and to be regarded as a living document that will continue to guide library practice. Research limitations/implications – Although a single case study, the work has been contextualized within the wider research into workforce planning. Practical implications – The paper provides a model that can easily be deployed within a library without external or specialist consultant skills, and due to its scalability can be applied at department or wider level. Originality/value – The paper identifies the trends impacting on, and the emerging opportunities for, university libraries and provides a model for workforce planning that recognizes the context and culture of the organization as key drivers in determining workforce planning. Keywords - Australia, University libraries, Academic libraries, Change management, Manpower planning Paper type - Case study
Abstract:
Overweight and obesity are two of the most important emerging public health issues of our time and are regarded by the World Health Organisation [WHO] (1998) as a worldwide epidemic. The prevalence of obesity in the USA is the highest in the world, and Australian obesity rates fall into second place. Currently, about 60% of Australian adults are overweight (BMI ≥ 25 kg/m²). The socio-demographic factors associated with overweight and/or obesity have been well demonstrated, but many of the existing studies only examined these relationships at one point in time, and did not examine whether significant relationships changed over time. Furthermore, only limited previous research has examined the relationship between perception of weight status and actual weight status, as well as factors that may impact on people's perception of their body weight status. Aims: The aims of the proposed research are to analyse the discrepancy between perceptions of weight status and actual weight status in Australian adults; to examine if there are trends in perceptions of weight status in adults between 1995 and 2004/5; and to propose a range of health promotion strategies and further research that may be useful in managing physical activity, healthy diet, and weight reduction. Hypotheses: Four alternate hypotheses are examined by the research: (1) there are associations between independent variables (e.g. socio-demographic factors, physical activity and dietary habits) and overweight and/or obesity; (2) there are associations between the same independent variables and the perception of overweight; (3) there are associations between the same independent variables and the discrepancy between weight status and perception of weight status; and (4) there are trends in overweight and/or obesity, perception of overweight, and the discrepancy in Australian adults from 1995 to 2004/5.
Conceptual Framework and Methods: A conceptual framework is developed that shows the associations identified among socio-demographic factors, physical activity and dietary habits with actual weight status, as well as examining perception of weight status. The three latest National Health Survey databases (1995, 2001 and 2004/5) were used as the primary data sources. A total of 74,114 Australian adults aged 20 years and over were recruited from these databases. Descriptive statistics, bivariate analyses (One-Way ANOVA tests, unpaired t-tests and Pearson chi-square tests), and multinomial logistic regression modelling were used to analyse the data. Findings: This research reveals that gender, main language spoken at home, occupation status, household structure, private health insurance status, and exercise are related to the discrepancy between actual weight status and perception of weight status, but only gender and exercise are related to the discrepancy across the three time points. The current research provides more knowledge about perception of weight status independently. Factors which affect perception of overweight are gender, age, language spoken at home, private health insurance status, and dietary habits. The study also finds that many factors that impact overweight and/or obesity also have an effect on perception of overweight, such as age, language spoken at home, household structure, and exercise. However, some factors (i.e. private health insurance status and milk consumption) only impact on perception of overweight. Furthermore, factors that are related to people's overweight are not totally related to people's underestimation of their body weight status in the study results. Thus, there are unknown factors which can affect people's underestimation of their body weight status.
Conclusions: Health promotion and education activities should address population-level health education and promotion as well as education for particular at-risk sub-groups. Further research should take the form of a longitudinal study designed to examine the causal relationship between overweight and/or obesity and underestimation of body weight status; it should also place more attention on the relationships between overweight and/or obesity and dietary habits, with a more comprehensive representation of SES. Moreover, further research that deals with identification of the characteristics of perception of weight status, in particular the underestimation of body weight status, should be undertaken.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debates. These difficulties present challenges with the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA) which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short memory case. Parameter estimation of this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which have been established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We will pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models will be employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA will be provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based only on the second-order moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
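The scaling analysis at the heart of this thesis can be illustrated with a short sketch. Below is a minimal numpy implementation of standard DFA, the q = 2 special case that MF-DFA generalises (MF-DFA raises the segment variances to q/2 before averaging); the function name, the synthetic white-noise input and the scale choices are illustrative, not taken from the thesis.

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Detrended fluctuation function F(s) for each window size s.

    Standard DFA: integrate the series into a profile, detrend each
    segment with a local polynomial, and average the squared residuals.
    """
    y = np.cumsum(x - np.mean(x))          # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        seg_var = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)  # local trend
            seg_var.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(seg_var)))                   # q = 2 average
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)              # white noise: expect H near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuation(x, scales)
H, _ = np.polyfit(np.log(scales), np.log(F), 1)  # slope of log-log fit
```

The slope H of the log-log fit is the scaling exponent: values above 0.5 indicate long memory, values near 0.5 no memory, consistent with how the thesis uses the method to separate short- and long-memory series.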
Abstract:
Over recent decades there has been growing interest in the role of non-motorized modes in the overall transport system (especially walking and cycling for private purposes), and many government initiatives have been taken to encourage these active modes. However, there has been relatively little research attention given to the paid form of non-motorized travel, which can be called non-motorized public transport (NMPT). This involves cycle-powered vehicles which can carry several passengers (plus the driver) and a small amount of goods, and which provide flexible hail-and-ride services. Effectively they are non-motorized taxis. Common forms include the cycle-rickshaw (Bangladesh, India), becak (Indonesia), cyclo (Vietnam, Cambodia), bicitaxi (Colombia, Cuba), velo-taxi (Germany, the Netherlands), and pedicab (UK, Japan, USA).

The popularity of NMPT is widespread in developing countries, where it caters for a wide range of mobility needs. For instance, in Dhaka, Bangladesh, rickshaws are the preferred mode for non-walk trips and have a higher mode share than cars or buses. Factors that underlie the continued existence and popularity of NMPT in many developing countries include its positive contribution to social equity, micro- and macro-economic significance, employment creation, and suitability for narrow and crowded streets. Although top speeds are lower than motorized modes, NMPT is competitive and cost-effective for the short-distance door-to-door trips that make up the bulk of travel in many developing cities. In addition, NMPT is often the preferred mode for vulnerable groups such as females, children and elderly people. NMPT is more prominent in developing countries, but its popularity and significance are also gradually increasing in several developed countries of Asia, Europe and parts of North America, where there is a trend for the NMPT usage pattern to broaden from tourism to public transport.
This shift is due to a number of factors, including the eco-sustainable nature of NMPT; its operating flexibility (such as in areas where motorized vehicle access is restricted or discouraged through pricing); and the dynamics that it adds to the urban fabric. Whereas NMPT may once have been seen as a “dying” mode, in many cities it is maintaining or increasing its significance, with potential for further growth.

This paper will examine and analyze global trends in NMPT, incorporating both developing and developed country contexts and issues such as usage patterns; NMPT policy and management practices; technological development; and operational integration of NMPT into the overall transport system. It will look at how NMPT policies, practices and usage have changed over time and at the differing trends in developing and developed countries. In particular, it will use Dhaka, Bangladesh as a case study in recognition of its standing as the major NMPT city in the world. The aim is to highlight NMPT issues and trends and their significance for shaping future policy towards NMPT in developing and developed countries. The paper will be of interest to transport planners, traffic engineers, urban and regional planners, environmentalists, economists and policy makers.
Abstract:
Purpose: The aim was to document contact lens prescribing trends in Australia between 2000 and 2009.

Methods: A survey of contact lens prescribing trends was conducted each year between 2000 and 2009. Australian optometrists were asked to provide information relating to 10 consecutive contact lens fittings between January and March each year.

Results: Over the 10-year survey period, 1,462 practitioners returned survey forms representing a total of 13,721 contact lens fittings. The mean age (± SD) of lens wearers was 33.2 ± 13.6 years and 65 per cent were female. Between 2006 and 2009, rigid lens new fittings decreased from 18 to one per cent. Low water content lenses reduced from 11.5 to 3.2 per cent of soft lens fittings between 2000 and 2008. Between 2005 and 2009, toric lenses and multifocal lenses represented 26 and eight per cent, respectively, of all soft lenses fitted. Daily disposable, one- to two-week replacement and monthly replacement lenses accounted for 11.6, 30.0 and 46.5 per cent of all soft lens fittings over the survey period, respectively. The proportion of new soft fittings and refittings prescribed as extended wear has generally declined throughout the past decade. Multi-purpose lens care solutions dominate the market. Rigid lenses and monthly replacement soft lenses are predominantly worn on a full-time basis, whereas daily disposable soft lenses are mainly worn part-time.

Conclusions: This survey indicates that technological advances, such as the development of new lens materials, manufacturing methods and lens designs, and the availability of various lens replacement options, have had a significant impact on the contact lens market during the first decade of the 21st century.
Abstract:
Extended wear has long been the ‘holy grail’ of contact lenses by virtue of the increased convenience and freedom of lifestyle which this modality accords; however, it enjoyed only limited market success during the last quarter of the 20th century. The introduction of silicone hydrogel materials into the market at the beginning of this century heralded the promise of successful extended wear due to the superior oxygen performance of this lens type. To assess patterns of contact lens fitting, including extended wear, over the past decade, up to 1000 survey forms were sent to contact lens fitters in Australia, Canada, Japan, the Netherlands, Norway, the UK and the USA each year between 2000 and 2009. Practitioners were asked to record data relating to the first 10 contact lens fits or refits performed after receiving the survey form. Analysis of returned forms revealed that, averaged over this period, 9% of all soft lenses prescribed were for extended wear, with national figures ranging from 2% in Japan to 17% in Norway. The trend over the past decade has been for an increase from about 5% of all soft lens fits in 2000 to a peak of between 9 and 12% between 2002 and 2007, followed by a decline to around 7% in 2009. A person receiving extended wear lenses is likely to be an older female who is being refitted with silicone hydrogel lenses for full-time wear. Although extended wear has yet again failed to fulfil the promise of being the dominant contact lens wearing modality, it is still a viable option for many people.
Abstract:
All levels of government continue to advocate increasing the number of people cycling for recreation and transport. However, governments and the general public still have concerns about the implications for the safety of cyclists and other road users. While there is concern about injury from bicycle-pedestrian collisions, for 2008-09 in Australia only 40 pedestrians were hospitalised as a result of a collision with a cyclist (and 33 cyclists from collisions with pedestrians). There is little research that observes changes over time in actual cyclist behaviours and interactions with other road users. This paper presents the results of an observational study of cycling in the Brisbane Central Business District based on data collected using the same methodology in October 2010 and 2012.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We will compare these patterns with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the K-Means clustering method with the Euclidean distance function allowed the crashes to be clustered. Then, normal situation data were extracted based on the time distribution of crashes and clustered for comparison with the “high risk” clusters. Five major trends were found in the clustering results for both high risk and normal conditions. The study discovered that traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned based on the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
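The clustering step described above can be sketched in a few lines. This is a hedged illustration rather than the paper's code: a plain Lloyd's K-Means with the Euclidean distance function, applied to synthetic 30-minute speed profiles (six 5-minute readings each) standing in for the pre-crash speed time series; all values and the two-regime setup are invented for the example.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's K-Means with Euclidean distance (numpy only)."""
    # Deterministic, well-spread initial centres for the sketch.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Distance of every profile to every centre, then nearest-centre labels.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic pre-crash speed profiles: a congested and a free-flow regime.
rng = np.random.default_rng(1)
slow = 30 + 5 * rng.standard_normal((20, 6))   # congested, around 30 km/h
fast = 90 + 5 * rng.standard_normal((20, 6))   # free-flow, around 90 km/h
X = np.vstack([slow, fast])
labels, centers = kmeans(X, k=2)               # recovers the two regimes
```

On real data the same procedure would be run on speed windows upstream of each crash, with the resulting cluster centres read as the dominant pre-crash traffic patterns.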
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential basics in crash likelihood estimation studies and still require more attention and investigation. In this paper we will show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and show that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. A K-Means clustering algorithm was applied to determine the dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of crashes. The second phase then investigated different combinations of speed risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase of this paper for both high risk and normal conditions. The study discovered that traffic regimes differed in their speed trends. Moreover, the second phase shows that the spatiotemporal difference of speed is a better risk indicator than other combinations of speed-related risk indicators. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
Abstract:
BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis, or body site and morphology, in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population) and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40-59 years. However, in the 60 and over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
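The direct method of age standardization used above can be shown with a toy calculation: each age-specific rate is weighted by the standard population's share of that age band. The numbers below are illustrative only (not Queensland Cancer Registry data), and the three broad age bands and their weights are assumptions standing in for the full 2000 world standard population.

```python
# Direct age standardisation with made-up figures for three age bands.
cases       = [120, 480, 900]          # melanoma cases per age band
person_yrs  = [2.0e6, 1.5e6, 0.5e6]    # person-years at risk per band
std_weights = [0.55, 0.33, 0.12]       # standard population proportions (sum to 1)

# Age-specific rates per 100,000 person-years.
age_rates = [c / p * 1e5 for c, p in zip(cases, person_yrs)]

# Age-standardised rate: weighted sum of the age-specific rates.
asr = sum(r * w for r, w in zip(age_rates, std_weights))
```

Because the weighting is fixed across calendar years, rates standardised this way can be compared over 1982-2008 without being distorted by the population ageing, which is what makes the joinpoint trend fits meaningful.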
Abstract:
Background: Transmission of Plasmodium vivax malaria is dependent on vector availability, biting rates and parasite development. In turn, each of these is influenced by climatic conditions. Correlations have previously been detected between seasonal rainfall, temperature and malaria incidence patterns in various settings. An understanding of seasonal patterns of malaria, and their weather drivers, can provide vital information for control and elimination activities. This research aimed to describe temporal patterns in malaria, rainfall and temperature, and to examine the relationships between these variables within four counties of Yunnan Province, China.

Methods: Plasmodium vivax malaria surveillance data (1991–2006), and average monthly temperature and rainfall, were acquired. Seasonal trend decomposition was used to examine secular trends and seasonal patterns in malaria. Distributed lag non-linear models were used to estimate the weather drivers of malaria seasonality, including the lag periods between weather conditions and malaria incidence.

Results: There was a declining trend in malaria incidence in all four counties. Increasing temperature resulted in increased malaria risk in all four areas, and increasing rainfall resulted in increased malaria risk in one area and decreased malaria risk in another. The lag times for these associations varied between areas.

Conclusions: The differences detected between the four counties highlight the need for local understanding of seasonal patterns of malaria and its climatic drivers.
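The seasonal trend decomposition step can be sketched as follows. This is a simplified additive decomposition (centred moving-average trend plus a mean monthly seasonal effect), a minimal stand-in for the STL-style method such studies typically use; the synthetic monthly series with a declining trend and a wet-season peak is illustrative, not the Yunnan surveillance data.

```python
import numpy as np

def decompose_additive(x, period=12):
    """Minimal additive decomposition: centred moving-average trend,
    mean seasonal effect per calendar month, and a remainder."""
    n = len(x)
    # Centred moving average for an even period: half-weights at the ends.
    kernel = np.r_[0.5, np.ones(period - 1), 0.5] / period
    trend = np.convolve(x, kernel, mode="same")
    trend[: period // 2] = np.nan          # edges have no full window
    trend[-(period // 2):] = np.nan
    detrended = x - trend
    # Average the detrended values month by month, centred on zero.
    seasonal = np.array([np.nanmean(detrended[m::period]) for m in range(period)])
    seasonal -= seasonal.mean()
    seasonal_full = np.tile(seasonal, n // period + 1)[:n]
    remainder = x - trend - seasonal_full
    return trend, seasonal_full, remainder

# Synthetic monthly malaria counts: declining secular trend + seasonal cycle.
t = np.arange(96)                                   # 8 years of months
x = 100 - 0.5 * t + 20 * np.sin(2 * np.pi * t / 12)
trend, seasonal, remainder = decompose_additive(x)
```

On a series like this the trend component exposes the secular decline while the seasonal component isolates the within-year cycle, which is the separation the study needed before relating seasonality to weather drivers.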
Abstract:
BACKGROUND: Dengue fever (DF) is one of the most important emerging arboviral human diseases. Globally, DF incidence has increased 30-fold over the last fifty years, and the geographic range of the virus and its vectors has expanded. The disease is now endemic in more than 120 countries in tropical and subtropical parts of the world. This study examines the spatiotemporal trends of DF transmission in the Asia-Pacific region over a 50-year period and identifies the disease's cluster areas. METHODOLOGY AND FINDINGS: The World Health Organization's DengueNet provided the annual number of DF cases in 16 countries in the Asia-Pacific region for the period 1955 to 2004. This fifty-year dataset was divided into five ten-year periods as the basis for the investigation of DF transmission trends. Space-time cluster analyses were conducted using scan statistics to detect the disease clusters. This study shows an increasing trend in the spatiotemporal distribution of DF in the Asia-Pacific region over the study period. Thailand, Vietnam, Laos, Singapore and Malaysia are identified as the most likely clusters (relative risk = 13.02) of DF transmission in this region in the final period studied (1995 to 2004). The study also indicates that, for the most part, DF transmission has expanded southwards in the region. CONCLUSIONS: This information will lead to the improvement of DF prevention and control strategies in the Asia-Pacific region by prioritizing control efforts and directing them where they are most needed.
Abstract:
What is the state of geographical education in the second decade of the 21st century? This volume presents a selection of peer reviewed papers presented at the 2012 Cologne Congress of the International Geographical Union (IGU) sessions on Geographical Education as representative of current thinking in the area. It then presents (perhaps for the first time) a cross-case analysis of the common factors of all these papers as a current summary of the “state of the art” of geographical education today. The primary aim of the individual authors as well as the editors is not only to record the current state of the art of geographical education but also to promote ongoing discussion of the longer term health and future prospects of international geographical education. We wish to encourage ongoing debate and discussion amongst local, national, regional and international education journals, conferences and discussion groups as part of the international mission of the Commission on Geographical Education. While the currency of these chapters, in terms of their foci, their breadth, the recency of the theoretical literature on which they are based and the new research findings they present, justifies considerable confidence in the current health of geographical education as an educational and research endeavour, each new publication should only be the start of new scholarly inquiry. Where should we, as a scholarly community, place our energies for the future? If readers are left with a new sense of direction, then the aims of the authors and editors will have been amply met.
Abstract:
Objectives: Given that increasing trends in obesity are being noted from early in life and that active lifestyles track across time, it is important that children be active from a very young age to prevent a foundation of unhealthy behaviours from forming. This study investigated, within a theory of planned behaviour (TPB) framework, factors which influence mothers’ decisions about their child’s 1) adequate physical activity (PA) and 2) limited screen time behaviours. Methods: Mothers (N = 162) completed a main questionnaire, via on-line or paper-based administration, which comprised standard TPB items in addition to measures of planning and background demographic variables. One week later, consenting mothers completed a follow-up telephone questionnaire which assessed the decisions they had made regarding their child’s PA and screen time behaviours during the previous week. Results: Hierarchical multiple regression analyses revealed support for the predictive model, explaining an overall 73% and 78% of the variance in mothers’ intentions and 38% and 53% of the variance in mothers’ decisions to ensure their child engages in adequate PA and limited screen time, respectively. Attitude and subjective norms predicted intention for both target behaviours, and intentions predicted behaviour. Contrary to predictions, perceived behavioural control (PBC) in PA behaviour and planning in screen time behaviour were not significant predictors of intention, nor was PBC a predictor of either behaviour. Conclusions: The findings illustrate the various roles that psycho-social factors play in mothers’ decisions to ensure their child engages in active lifestyle behaviours, which can help to inform future intervention programs aimed at combating very young children’s inactivity.
Abstract:
Thirteen sites in Deception Bay, Queensland, Australia were sampled three times over a period of 7 months and assessed for contamination by a range of heavy metals, primarily As, Cd, Cr, Cu, Pb and Hg. Fraction analysis, enrichment factors and Principal Components Analysis-Absolute Principal Component Scores (PCA-APCS) analysis were conducted in order to identify the potential bioavailability of these elements of concern and their sources. Hg and Te were identified as the elements of highest enrichment in Deception Bay, while marine sediments, shipping and antifouling agents were identified as the sources of the weak-acid-extractable metals (WE-M), with antifouling agents showing a long residence time for mercury contamination. This has significant implications for the future monitoring and regulation of heavy metal contamination within Deception Bay.
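As a worked illustration of the enrichment factor used above: it is the sample's metal-to-reference-element concentration ratio divided by the same ratio in background crust. The concentrations below and the choice of aluminium as the conservative reference element are assumptions for illustration, not values from the Deception Bay study (iron is another common normaliser).

```python
# Enrichment factor for mercury, with illustrative concentrations in mg/kg.
# Al is assumed here as the conservative reference element.
sample = {"Hg": 0.8,  "Al": 40_000}   # hypothetical sediment sample
crust  = {"Hg": 0.05, "Al": 80_000}   # hypothetical crustal background

# EF = (metal/reference in sample) / (metal/reference in crust).
ef_hg = (sample["Hg"] / sample["Al"]) / (crust["Hg"] / crust["Al"])
# EF values well above 1 suggest anthropogenic enrichment rather than
# a crustal origin for the metal.
```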