995 results for level lifetime
Abstract:
Background: A range of health outcomes at a population level are related to differences in levels of social disadvantage. Understanding the impact of any such differences in palliative care is important. The aim of this study was to assess, by level of socio-economic disadvantage, referral patterns to specialist palliative care and proximity to inpatient services. Methods: All inpatient and community palliative care services nationally were geocoded (using postcode) to one nationally standardised measure of socio-economic deprivation – the Socio-Economic Index for Areas (SEIFA; 2006 census data). Referral to palliative care services and the characteristics of referrals were described through data collected routinely at clinical encounters. The distance from each person’s home postcode to inpatient care was measured and stratified by socio-economic disadvantage. Results: This study covered July – December 2009 with data from 10,064 patients. People from the highest SEIFA group (least disadvantaged) were significantly less likely to be referred to a specialist palliative care service, more likely to be referred closer to death, and more likely to have more episodes of inpatient care for longer periods. The physical proximity of a person’s home to inpatient care showed a gradient, with distance increasing as the level of socio-economic advantage decreased. Conclusion: These data suggest that a simple relationship between low socioeconomic status and poor access to a referral-based specialty such as palliative care does not exist. Different patterns of referral, and hence different patterns of care, emerge.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available sources of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be null in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data for assets are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
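As a rough illustration of the structure described above, a covariate-based hazard of this kind is often written with the baseline term depending on both time and the condition indicators, and with the operating environment indicators entering a multiplicative covariate function. The exact functional form of EHM is not given in this abstract, so the following is only a hedged sketch under those assumptions:

```latex
% Hedged sketch only: the abstract states that EHM's baseline hazard depends on time t
% and the condition indicators Z(t), while operating environment indicators E(t) may
% raise or lower the hazard; the forms of h0 and psi below are assumptions, not EHM's
% published specification.
\[
  h\bigl(t \mid Z(t), E(t)\bigr) \;=\; h_{0}\bigl(t, Z(t)\bigr)\,
  \exp\!\bigl(\gamma^{\top} E(t)\bigr),
  \qquad
  R(t) \;=\; \exp\!\Bigl(-\!\int_{0}^{t} h\bigl(u \mid Z(u), E(u)\bigr)\,\mathrm{d}u\Bigr),
\]
\[
  \text{semi-parametric case (assumed Weibull-type baseline):}\quad
  h_{0}\bigl(t, Z(t)\bigr) \;=\; \frac{\beta}{\eta}\Bigl(\frac{t}{\eta}\Bigr)^{\beta-1}
  \psi\bigl(Z(t)\bigr).
\]
```

Here γ weights the operating environment covariates and ψ(·) stands for an unspecified link between the condition indicators and the baseline hazard; both are placeholders for illustration only.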
Abstract:
NeSSi (network security simulator) is a novel network simulation tool which incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Its capabilities, such as profile-based automated attack generation, traffic analysis and support for detection algorithm plug-ins, allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture as well as its distinguishing features and briefly demonstrate its application to current security research projects.
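NeSSi's actual plug-in API is not described in this abstract, so the sketch below is purely hypothetical (and in Python rather than the Java/JIAC stack NeSSi is built on); it only illustrates the general idea of a detection-algorithm plug-in that inspects simulated traffic, and every name in it is invented for illustration:

```python
# Hypothetical illustration only -- NOT NeSSi's real API.
# Sketches the general idea of a pluggable detection algorithm for simulated traffic.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Packet:
    """Minimal stand-in for a simulated network packet."""
    src: str
    dst: str
    size: int
    flags: str = ""


class DetectionPlugin(ABC):
    """Assumed shape of a detection-algorithm plug-in."""

    @abstractmethod
    def inspect(self, packet: Packet) -> bool:
        """Return True if this packet should raise an alert."""


class SynFloodDetector(DetectionPlugin):
    """Toy example: flag any source that sends too many SYN packets."""

    def __init__(self, threshold: int = 100):
        self.threshold = threshold
        self.syn_counts: dict[str, int] = {}

    def inspect(self, packet: Packet) -> bool:
        if "SYN" in packet.flags:
            self.syn_counts[packet.src] = self.syn_counts.get(packet.src, 0) + 1
            return self.syn_counts[packet.src] > self.threshold
        return False
```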
Abstract:
Background: Ultraviolet radiation exposure during an individual's lifetime is a known risk factor for the development of skin cancer. However, less evidence is available on the relationship between lifetime sun exposure and skin damage and skin aging. Objectives: This study aims to assess the relationship between lifetime sun exposure and skin damage and skin aging using a non-invasive measure of exposure. Methods: We recruited 180 participants (73 males, 107 females) aged 18-83 years. Skin hyper-pigmentation (skin damage) and skin wrinkling (skin aging) on the facial region were measured by digital imaging. Lifetime sun exposure (expressed in hours) was calculated from the participants' age multiplied by the estimated annual time outdoors for each year of life. We analyzed the effects of lifetime sun exposure on skin damage and skin aging, adjusting for the influence of age, sex, occupation, history of skin cancer, eye color, hair color, and skin color. Results: There were non-linear relationships between lifetime sun exposure and skin damage and skin aging. Younger participants' skin was much more sensitive to sun exposure than that of participants over 50 years of age; as such, there were negative interactions between lifetime sun exposure and age. Age had linear effects on skin damage and skin aging. Conclusion: The data presented show that self-reported lifetime sun exposure was positively associated with skin damage and skin aging, in particular among younger people. Future health promotion around sun exposure needs to pay attention to this group for skin cancer prevention messaging. (C) 2012 Elsevier B.V. All rights reserved.
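The exposure measure described above is essentially a year-by-year sum of estimated time outdoors; a minimal sketch of that calculation, with illustrative names only, might look like this:

```python
def lifetime_sun_exposure_hours(annual_hours_outdoors: list[float]) -> float:
    """Sum the estimated annual time outdoors (hours) over each year of life.

    This mirrors the abstract's description (age multiplied by the estimated annual
    time outdoors for each year of life); the study's actual coding may differ.
    """
    return sum(annual_hours_outdoors)


# Illustrative use: a 40-year-old estimated to spend ~500 hours outdoors each year.
exposure = lifetime_sun_exposure_hours([500.0] * 40)  # 20,000 hours
```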
Abstract:
The issues involved in agricultural biodiversity are important and interesting areas for the application of economic theory. However, very little theoretical and empirical work has been undertaken to understand the benefits of conserving agricultural biodiversity. Accordingly, the main objectives of this PhD thesis are to: (1) Investigate farmers’ valuation of agricultural biodiversity; (2) Identify factors influencing farmers’ demand for agricultural biodiversity; (3) Examine farmers’ demand for biodiversity-rich farming systems; (4) Investigate the relationship between agricultural biodiversity and farm-level technical efficiency. This PhD thesis investigates these issues using primary data from small-scale farms, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarized as follows. Firstly, owing to the educational and poverty issues of those being interviewed, some policy makers in developed countries question whether non-market valuation techniques such as the Choice Experiment (CE) can be applied to developing countries such as Sri Lanka. The CE study in this thesis indicates that carefully designed and pre-tested non-market valuation techniques can be applied in developing countries with a high level of reliability. The CE findings support the a priori assumption that small-scale farms and their multiple attributes contribute positively and significantly to the utility of farm families in Sri Lanka. Farmers have strong positive attitudes towards increasing agricultural biodiversity in rural areas, suggesting that these attitudes can be the basis on which appropriate policies can be introduced to improve agricultural biodiversity. Secondly, the thesis identifies the factors which influence farmers’ demand for agricultural biodiversity and for biodiversity-rich farming systems. As such, the findings provide important tools for the implementation of policies designed to avoid the loss of agricultural biodiversity, which is shown to be a major impediment to agricultural growth and sustainable development in a number of developing countries. The results illustrate that certain key household, market and other characteristics (such as agricultural subsidies, the percentage of investment of owned money and farm size) are the major determinants of demand for agricultural biodiversity on small-scale farms. The significant household characteristics that determine crop and livestock diversity include household member participation on the farm, off-farm income, shared labour, market price fluctuations and household wealth. Furthermore, it is shown that all the included market characteristics, as well as agricultural subsidies, are also important determinants of agricultural biodiversity. Thirdly, it is found that when the efficiency of agricultural production is measured in practice, the role of agricultural biodiversity has rarely been investigated in the literature. The results in the final section of the thesis show that crop diversity, livestock diversity and a mixed farming system are positively related to farm-level technical efficiency. In addition to these variables, education level, number of separate plots, access to agricultural extension services, credit access, membership of a farm organization and land ownership are significant and directly policy-relevant variables in the inefficiency model. The results of the study therefore have important policy implications for conserving agricultural biodiversity in Sri Lanka.
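The choice experiment results reported above would conventionally be estimated within a random utility framework; the thesis's exact specification is not given here, so the standard conditional logit form below is offered only as a hedged sketch of how farm attributes enter farmers' utility:

```latex
% Standard random utility / conditional logit sketch (assumed, not taken from the thesis).
\[
  U_{nj} \;=\; \beta^{\top} X_{nj} \;+\; \varepsilon_{nj},
  \qquad
  P_{nj} \;=\; \frac{\exp\!\bigl(\beta^{\top} X_{nj}\bigr)}
                    {\sum_{k \in C_{n}} \exp\!\bigl(\beta^{\top} X_{nk}\bigr)},
\]
% X_nj collects the biodiversity-related attributes of alternative j in farmer n's
% choice set C_n, beta are the estimated taste weights, and epsilon_nj is an i.i.d.
% extreme-value error term.
```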
Abstract:
Objective: To examine the association between individual- and neighborhood-level disadvantage and self-reported arthritis. Methods: We used data from a population-based cross-sectional study conducted in 2007 among 10,757 men and women ages 40–65 years, selected from 200 neighborhoods in Brisbane, Queensland, Australia, using a stratified 2-stage cluster design. Data were collected using a mail survey (68.5% response). Neighborhood disadvantage was measured using a census-based composite index, and individual disadvantage was measured using self-reported education, household income, and occupation. Arthritis was indicated by self-report. Data were analyzed using multilevel modeling. Results: The overall rate of self-reported arthritis was 23% (95% confidence interval [95% CI] 22–24). After adjustment for sociodemographic factors, arthritis prevalence was greatest for women (odds ratio [OR] 1.5, 95% CI 1.4–1.7) and in those ages 60–65 years (OR 4.4, 95% CI 3.7–5.2), those with a diploma/associate diploma (OR 1.3, 95% CI 1.1–1.6), those who were permanently unable to work (OR 4.0, 95% CI 3.1–5.3), and those with a household income <$25,999 (OR 2.1, 95% CI 1.7–2.6). Independent of individual-level factors, residents of the most disadvantaged neighborhoods were 42% (OR 1.4, 95% CI 1.2–1.7) more likely than those in the least disadvantaged neighborhoods to self-report arthritis. Cross-level interactions between neighborhood disadvantage and education, occupation, and household income were not significant. Conclusion: Arthritis prevalence is greater in more socially disadvantaged neighborhoods. These are the first multilevel data to examine the relationship between individual- and neighborhood-level disadvantage and arthritis, and they have important implications for policy, health promotion, and other intervention strategies designed to reduce the rates of arthritis, indicating that intervention efforts may need to focus on both people and places.
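The multilevel analysis described above can be pictured as a two-level random-intercept logistic model; the study's exact specification is not reported in this abstract, so the following is a hedged sketch only:

```latex
% Hedged sketch of a two-level (individuals within neighborhoods) random-intercept
% logistic model; the study's actual specification may differ.
\[
  \operatorname{logit}\bigl(\Pr(y_{ij} = 1)\bigr)
  \;=\; \beta_{0} \;+\; \beta^{\top} x_{ij} \;+\; \gamma\, D_{j} \;+\; u_{j},
  \qquad
  u_{j} \sim \mathcal{N}(0, \sigma_{u}^{2}),
\]
% y_ij indicates self-reported arthritis for individual i in neighborhood j, x_ij holds
% the individual-level factors (education, household income, occupation), D_j is the
% census-based neighborhood disadvantage index, and u_j is the neighborhood random effect.
```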
Abstract:
A major priority for cancer control agencies is to reduce geographical inequalities in cancer outcomes. While the poorer breast cancer survival among socioeconomically disadvantaged women is well established, few studies have looked at the independent contribution that area- and individual-level factors make to breast cancer survival. Here we examine relationships between geographic remoteness, area-level socioeconomic disadvantage and breast cancer survival after adjustment for patients’ socio-demographic characteristics and stage at diagnosis. Multilevel logistic regression and Markov chain Monte Carlo simulation were used to analyze 18,568 breast cancer cases extracted from the Queensland Cancer Registry for women aged 30 to 70 years diagnosed between 1997 and 2006 from 478 Statistical Local Areas in Queensland, Australia. Independent of individual-level factors, area-level disadvantage was associated with breast cancer survival (p=0.032). Compared to women in the least disadvantaged quintile (Quintile 5), women diagnosed while resident in one of the remaining four quintiles had significantly worse survival (OR 1.23, 1.27, 1.30, 1.37 for Quintiles 4, 3, 2 and 1, respectively). Geographic remoteness was not related to lower survival after multivariable adjustment. There was no evidence that the impact of area-level disadvantage varied by geographic remoteness. At the individual level, Indigenous status, blue collar occupations and advanced disease were important predictors of poorer survival. A woman’s survival after a diagnosis of breast cancer depends on the socio-economic characteristics of the area where she lives, independently of her individual-level characteristics. It is crucial that the underlying reasons for these inequalities be identified to appropriately target policies, resources and effective intervention strategies.
Abstract:
The influence of different electrolyte cations (Li+, Na+, Mg2+, and tetrabutylammonium (TBA+)) on the TiO2 conduction band energy (Ec), the effective electron lifetime (τn), and the effective electron diffusion coefficient (Dn) in dye-sensitized solar cells (DSCs) was studied quantitatively. The separation between Ec and the redox Fermi level, EF,redox, was found to decrease as the charge/radius ratio of the cations increased. Ec in the Mg2+ electrolyte was found to be 170 meV lower than that in the Na+ electrolyte and 400 meV lower than that in the TBA+ electrolyte. Comparison of Dn and τn in the different electrolytes was carried out by using the trapped electron concentration as a measure of the energy difference between Ec and the quasi-Fermi level, nEF, under different illumination levels. Plots of Dn as a function of the trapped electron density, nt, were found to be relatively insensitive to the electrolyte cation, indicating that the density and energetic distribution of electron traps in TiO2 are similar in all of the electrolytes studied. By contrast, plots of τn versus nt for the different cations showed that the rate of the electron back reaction is more than an order of magnitude faster in the TBA+ electrolyte than in the Na+ and Li+ electrolytes. The electron diffusion lengths in the different electrolytes followed the sequence Na+ > Li+ > Mg2+ > TBA+. The trends observed in the AM 1.5 current–voltage characteristics of the DSCs are rationalized on the basis of the conduction band shifts and the changes in electron lifetime.
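The diffusion-length ordering reported above is conventionally derived from the effective diffusion coefficient and lifetime; assuming the usual relation applies here, it is:

```latex
% Standard relation between effective diffusion coefficient, effective lifetime, and
% electron diffusion length (assumed to underlie the reported ordering):
\[
  L_{n} \;=\; \sqrt{D_{n}\,\tau_{n}} .
\]
% With D_n roughly cation-insensitive at a given trapped electron density, an order-of-
% magnitude drop in tau_n (as reported for TBA+) shortens L_n by about a factor of three.
```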
Abstract:
Low-cost level crossings are often criticized as being unsafe. Does a SIL (safety integrity level) rating make the railway crossing any safer? This paper discusses how a supporting argument might be made for low-cost level crossing warning devices with lower levels of safety integrity, and considers issues such as risk tolerability and the derivation of tolerable hazard rates for system-level hazards. As part of the design of such systems according to fail-safe principles, the paper considers the assumptions around the pre-defined safe states of existing warning devices and how human factors issues around such states can give rise to additional hazards.
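To make the link between SIL ratings and tolerable hazard rates concrete, the sketch below uses the commonly cited IEC 61508 dangerous-failure-rate bands for continuous/high-demand operation; these figures are general reference values, not the paper's own derivation of tolerable hazard rates for level crossing hazards:

```python
# Commonly cited IEC 61508 bands (dangerous failures per hour) for continuous /
# high-demand mode; illustrative reference values only, not the paper's derivation.
SIL_BANDS = {
    1: (1e-6, 1e-5),
    2: (1e-7, 1e-6),
    3: (1e-8, 1e-7),
    4: (1e-9, 1e-8),
}


def sil_for_hazard_rate(rate_per_hour: float) -> int | None:
    """Return the SIL whose band contains the given dangerous failure rate, if any."""
    for sil, (low, high) in SIL_BANDS.items():
        if low <= rate_per_hour < high:
            return sil
    return None


# Example: a warning device with a dangerous failure rate of 5e-7 per hour sits in SIL 2.
print(sil_for_hazard_rate(5e-7))  # -> 2
```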
Abstract:
This paper describes an innovative platform that facilitates the collection of objective safety data around occurrences at railway level crossings, using data sources including forward-facing video, telemetry from trains, and geo-referenced asset and survey data. This platform is being developed with support from the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper provides a description of the underlying accident causation model, the development methodology and refinement process, as well as a description of the data collection platform. The paper concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.
Multi-level knowledge transfer in software development outsourcing projects: the agency theory view
Abstract:
In recent years, software development outsourcing has become even more complex. Outsourcing partners have begun ‘re-outsourcing’ components of their projects to other outsourcing companies to minimize cost and gain efficiencies, creating a multi-level hierarchy of outsourcing. This research-in-progress paper presents preliminary findings of a study designed to understand knowledge transfer effectiveness in multi-level software development outsourcing projects. We conceptualize the SD-outsourcing entities using Agency Theory. This study conceptualizes, operationalises and validates the concept of knowledge transfer as a three-phase multidimensional formative index of 1) domain knowledge, 2) communication behaviors, and 3) clarity of requirements. Data analysis identified substantial, significant differences between the Principal and the Agent on two of the three constructs. Using Agency Theory, supported by the preliminary findings, the paper also provides prescriptive guidelines for reducing the friction between the Principal and the Agent in multi-level software outsourcing.
Abstract:
The purpose of this study was to determine the factors (internal and external) that influenced Canadian provincial (state) politicians when making funding decisions about public libraries. Using the case study methodology, Canadian provincial/state-level funding for public libraries in the 2009-10 fiscal year was examined. After reviewing funding levels across the country, three jurisdictions were chosen for the case: British Columbia's budget revealed dramatically decreased funding, Alberta's budget showed dramatically increased funding, and Ontario's budget was unchanged from the previous year. The primary source of data for the case was a series of semi-structured interviews with elected officials and senior bureaucrats from the three jurisdictions. An examination of primary and secondary documents was also undertaken to help set the political and economic context as well as to provide triangulation for the case interviews. The data were analysed to determine whether Cialdini's theory of influence (2001), and specifically any of the six tactics of influence (i.e., commitment and consistency, authority, liking, social proof, scarcity and reciprocity), were instrumental in these budget processes. Findings show that the principles of "authority", "consistency and commitment" and "liking" were relevant, and that "liking" was especially important to these decisions. When these decision makers were considering funding for public libraries, they most often used three distinct lenses: the consistency lens (what are my values? what would my party do?), the authority lens (is someone with hierarchical power telling me to do this? are the requests legitimate?), and most importantly, the liking lens (how much do I like and know about the requester?). These findings are consistent with Cialdini's theory, which suggests the quality of some relationships is one of six factors that can most influence a decision maker. The small number of prior research studies exploring the reasons for increases or decreases in public library funding allocation decisions has given little insight into the factors that motivate the politicians involved in the process and the variables that contribute to these decisions. No prior studies have examined the construct of influence in decision making about funding for Canadian public libraries at any level of government. Additionally, no prior studies have examined the construct of influence in decision making within the context of Canadian provincial politics. While many public libraries are facing difficult decisions in the face of uncertain funding futures, the ability of the sector to obtain favourable responses to requests for increases may require a less simplistic approach than previously thought. The ability to create meaningful connections with individuals in many communities and across all levels of government should be emphasised as a key factor in influencing funding decisions.
Abstract:
Health information systems are being implemented in countries by governments and regional health authorities in an effort to modernize healthcare. With these changes, there has emerged a demand by healthcare organizations for nurses graduating from college and university programs to have acquired nursing informatics competencies that would allow them to work in clinical practice settings (e.g. hospitals, clinics, home care, etc.). In this paper, we examine the methods employed by two different countries (i.e. Australia and Canada) in developing national-level nursing informatics competencies expected of undergraduate nurses prior to graduation. This work contributes to the literature by describing the science and methods of nursing informatics competency development at a national level.
Abstract:
It has been known since Rhodes Fairbridge’s first attempt to establish a global pattern of Holocene sea-level change by combining evidence from Western Australia and from sites in the northern hemisphere that the details of sea-level history since the Last Glacial Maximum vary considerably across the globe. The Australian region is relatively stable tectonically and is situated in the ‘far-field’ of former ice sheets. It therefore preserves important records of post-glacial sea levels that are less complicated by neotectonics or glacio-isostatic adjustments. Accordingly, the relative sea-level record of this region is dominantly one of glacio-eustatic (ice equivalent) sea-level changes. The broader Australasian region has provided critical information on the nature of post-glacial sea level, including the termination of the Last Glacial Maximum, when sea level was approximately 125 m lower than present around 21,000–19,000 years BP, and insights into meltwater pulse 1A between 14,600 and 14,300 cal. yr BP. Although most parts of the Australian continent reveal a high degree of tectonic stability, research conducted since the 1970s has shown that the timing and elevation of a Holocene highstand vary systematically around its margin. This is attributed primarily to variations in the timing of the response of the ocean basins and shallow continental shelves to the increased ocean volumes following ice-melt, including a process known as ocean siphoning (i.e. glacio-hydro-isostatic adjustment processes). Several seminal studies in the early 1980s produced important data sets from the Australasian region that have provided a solid foundation for more recent palaeo-sea-level research. This review revisits these key studies, emphasising their continuing influence on Quaternary research, and incorporates relatively recent investigations to interpret the nature of post-glacial sea-level change around Australia. These include a synthesis of research from the Northern Territory, Queensland, New South Wales, South Australia and Western Australia. A focus of these more recent studies has been the re-examination of: (1) the accuracy and reliability of different proxy sea-level indicators; (2) the rate and nature of post-glacial sea-level rise; (3) the evidence for the timing, elevation, and duration of mid-Holocene highstands; and (4) the notion of mid- to late Holocene sea-level oscillations, and their basis. Based on this synthesis of previous research, it is clear that estimates of past sea-surface elevation are a function of eustatic factors as well as the morphodynamics of individual sites, the wide variety of proxy sea-level indicators used, their wide geographical range, and their indicative meaning. Some progress has been made in understanding the variability in the accuracy of proxy indicators in relation to their contemporary sea level, the inter-comparison of the variety of dating techniques used, and the nuances of calibrating radiocarbon ages to sidereal years. These issues need to be thoroughly understood before proxy sea-level indicators can be incorporated into credible reconstructions of relative sea-level change at individual locations. Many of the issues that challenged sea-level researchers in the latter part of the twentieth century remain contentious today.
Divergent opinions remain about: (1) exactly when sea level attained present levels following the most recent post-glacial marine transgression (PMT); (2) the elevation that sea-level reached during the Holocene sea-level highstand; (3) whether sea-level fell smoothly from a metre or more above its present level following the PMT; (4) whether sea level remained at these highstand levels for a considerable period before falling to its present position; or (5) whether it underwent a series of moderate oscillations during the Holocene highstand.