934 results for Rent dependency
Abstract:
The firm is faced with a decision concerning the nature of intra-organizational exchange relationships with internal human resources and the nature of inter-organizational exchange relationships with market firms. In both situations, the firm can develop an exchange that ranges from a discrete exchange to a relational exchange. Transaction Cost Economics (TCE) and the Resource Dependency View (RDV) represent alternative efficiency-based explanations of the nature of the exchange relationship. The aim of the paper is to test these two theories in respect of air conditioning maintenance in retail centres. Multiple sources of information are generated from case studies of Australian retail centres to test these theories in respect of internalized operations management (concerning strategic aspects of air conditioning maintenance) and externalized planned routine air conditioning maintenance. The analysis of the data centres on pattern matching. It is concluded that the data support TCE, on the basis of a development in TCE's contractual schema. Further research is suggested towards taking a pluralistic stance and developing a combined efficiency and power hypothesis, upon which Williamson has speculated. For practice, the conclusions also offer a timely cautionary note concerning the adoption of one approach in all exchange relationships.
Abstract:
Much of the literature on clusters has focused on the economic advantages of clusters and how these can be achieved in terms of competition, regional development and local spillovers. Some studies have focused on the level of the individual firm; however, human resource management (HRM) in individual clustered firms has received scant attention. This paper innovatively utilises the extended Resource Based View (RBV) of the firm as a framework to conceptualise the human resource processes of individual firms within a cluster. RBV is argued to be a useful tool because it explains external rents that lie outside a firm’s boundaries. The paper concludes that HRM can assist in generating rents for firms, and for clusters more broadly, when the function supports valuable interfirm relationships important for realising inter-firm advantages.
Abstract:
Alcohol and drug dependency is a widespread health and social issue encountered by registered nurses in contemporary practice. A study was undertaken to describe the experiences of registered nurses working in an alcohol and drug unit in South East Queensland. Data were analysed via Giorgi’s phenomenological method, and an unexpected but significant finding highlighted the frustration felt by registered nurses regarding the experiences of stigma they identified in their daily work encounters. Secondary analysis confirmed the phenomenon of stigma with three themes: (1) inappropriate judgement; (2) advocacy; and (3) education. Consequently, the findings concluded that registered nurses working in this field need to become advocates for their clients, ensuring professional conduct is upheld at all times. This paper recommends that stigma could be addressed by incorporating alcohol and other drug dependency subjects and clinical placements into the curriculum of Bachelor of Nursing degrees, and through in-services for all practising registered nurses.
Abstract:
Aim: Increased car dependency amongst Australia's ageing population may result in increased social isolation and other health impacts associated with the cessation of driving. While public transport represents an alternative to car usage, patronage remains low amongst senior cohorts. This study investigates the facilitators of and barriers to public transport patronage and the nature of car dependence among older Australians. Method: Data were gathered from a sample of 24 adults (mean age = 70.33 years) through a combination of quantitative (remote behavioural observation) and qualitative (interviews) investigation. Results: Findings suggest that factors of relative convenience, affordability and health/mobility dictate choices of transport mode. The car is considered more convenient for the majority of suburban trips irrespective of the availability of public transport. Conclusion: Policy attention should focus on providing better education and information regarding driving cessation and on addressing age-specific social aspects of public transport, including the accommodation of various health and mobility issues.
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
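The "vertically translated exponential decay curve" described above can be sketched numerically. The following Python snippet is an illustration only: the functional form follows the abstract, but every parameter value and the example genetic distance are hypothetical placeholders, not estimates from the study.

```python
# Illustrative sketch of a vertically translated exponential decay rate curve
# and its use to correct a naive divergence date. All numbers are made up.
import numpy as np
from scipy.optimize import brentq

def rate_curve(t_myr, long_term_rate, amplitude, decay):
    """Apparent rate (subs/site/Myr) as a function of calibration depth t."""
    return long_term_rate + amplitude * np.exp(-decay * t_myr)

def corrected_age(distance, long_term_rate, amplitude, decay):
    """Solve distance = rate_curve(t) * t for t (in Myr)."""
    f = lambda t: rate_curve(t, long_term_rate, amplitude, decay) * t - distance
    return brentq(f, 1e-9, 100.0)

# Hypothetical values: 1.5%/Myr long-term rate plus an elevated short-term component.
naive_age = 0.006 / 0.015                          # distance divided by the phylogenetic rate
fixed_age = corrected_age(0.006, 0.015, 0.10, 3.0)
print(naive_age, fixed_age)                        # the corrected age is younger
```

Because the short-term rate is higher, the corrected age comes out younger than the naive estimate, which is the direction of bias the title refers to.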
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates, in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
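As a reminder of the neutral-theory identity invoked here (Kimura, 1968), the standard textbook derivation can be written out; this is background algebra, not an equation quoted from the paper.

```latex
% Under neutrality, 2N\mu new mutations enter a diploid population per
% generation and each fixes with probability 1/(2N), so the substitution
% rate k equals the mutation rate \mu:
k \;=\; 2N\mu \cdot \frac{1}{2N} \;=\; \mu .
% For deleterious (non-lethal) mutations the fixation probability u falls
% below 1/(2N), giving k = 2N\mu\,u < \mu, i.e. the mutation rate exceeds
% the substitution rate, as observed on short timescales.
```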
Abstract:
Analysis of Wikipedia's inter-language links provides insight into a new mechanism of knowledge sharing and linking worldwide.
Abstract:
Within the cardiac high dependency unit it is currently a member of the surgical team who makes the decision for a patient's chest drain to be removed after cardiac surgery. This has often resulted in delays in discharging one patient and therefore in admitting the next. A pilot study was carried out using a working standard that had been developed, incorporating an algorithmic model. The results have enabled nursing staff in a cardiac high dependency unit to undertake this responsibility independently.
Abstract:
This paper examines the use of crowdfunding platforms to fund academic research. Looking specifically at the use of a Pozible campaign to raise funds for a small pilot research study into home education in Australia, the paper reports on the successes and problems of using the platform. It also examines the crowdsourcing of literature searching as part of the package. The paper looks at the realities of using this type of platform to gain start-up funding for a project and argues that family and friends are likely to be the biggest supporters. This finding echoes similar work in the arts communities that are traditionally served by crowdfunding platforms. The paper argues that, with exceptions, these platforms can be a source of income at a time when academics are finding it increasingly difficult to source government funding for projects.
Abstract:
In the current market, extensive software development is taking place and the software industry is thriving. Major software giants have cited source code theft as a major threat to revenues. By inserting an identity-establishing watermark in the source code, a company can prove its ownership of the source code. In this paper, we propose a watermarking scheme for C/C++ source code that exploits the language's restrictions: if a function calls another function, the latter needs to be defined in the code before the former, unless one uses function pre-declarations. We embed the watermark in the code by imposing an ordering on the mutually independent functions through the introduction of bogus dependencies. Removing these dependencies to erase the watermark requires extensive manual intervention, thereby making the attack infeasible. The scheme is also secure against subtractive and additive attacks. Using our watermarking scheme, an n-bit watermark can be embedded in a program having n independent functions. The scheme is implemented on several sample programs and the performance changes are analyzed.
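To make the ordering idea concrete, here is a toy Python sketch of order-based embedding. It is one possible reading of the abstract, not the authors' actual scheme: each bit decides whether a hypothetical independent function is defined before a carrier function or after it, behind a pre-declaration.

```python
# Toy sketch of order-based watermark embedding, inspired by the abstract but
# not the authors' exact scheme. One bit per independent function decides
# whether it is defined before the carrier (bit 0) or after it, behind a
# pre-declaration (bit 1). All function names here are hypothetical.

def embed(bits, func_defs, carrier="void carrier(void) { /* bogus calls */ }"):
    """Return C-like source text whose definition order encodes `bits`."""
    assert len(bits) == len(func_defs)
    before, decls, after = [], [], []
    for bit, (name, body) in zip(bits, func_defs):
        if bit == 0:
            before.append(body)                  # defined ahead of the carrier
        else:
            decls.append(f"void {name}(void);")  # forward declaration
            after.append(body)                   # defined after the carrier
    return "\n".join(decls + before + [carrier] + after)

def extract(source, func_names):
    """Recover the bits by checking which definitions follow the carrier."""
    carrier_pos = source.index("carrier(")
    return [1 if source.index(f"void {n}(void) {{") > carrier_pos else 0
            for n in func_names]

funcs = [("f0", "void f0(void) { }"), ("f1", "void f1(void) { }")]
src = embed([1, 0], funcs)
print(extract(src, ["f0", "f1"]))    # -> [1, 0]
```

In the paper's setting the ordering would additionally be locked in place by bogus calls between the functions, which is what makes manual removal costly; the sketch above only shows how bits map to definition order.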
Abstract:
Purpose – Context-awareness has emerged as an important principle in the design of flexible business processes. The goal of the research is to develop an approach to extend context-aware business process modeling toward location-awareness. The purpose of this paper is to identify and conceptualize location-dependencies in process modeling. Design/methodology/approach – This paper uses a pattern-based approach to identify location-dependencies in process models. The authors design specifications for these patterns, present illustrative examples and evaluate the identified patterns through a literature review of published process cases. Findings – This paper introduces location-awareness as a new perspective that extends context-awareness in BPM research, introducing relevant concepts such as location-awareness and location-dependencies. The authors identify five basic location-dependent control-flow patterns that can be captured in process models, and they identify location-dependencies in several existing case studies of business processes. Research limitations/implications – The authors focus exclusively on the control-flow perspective of process models. Further work needs to extend the research to address location-dependencies in process data or resources, and further empirical work is needed to explore the determinants and consequences of modeling location-dependencies. Originality/value – As the existing literature mostly focuses on the broad context of business processes, location is still treated as a “second-class citizen” in process modeling, in both theory and practice. This paper discusses the vital role of location-dependencies within business processes. The proposed five basic location-dependent control-flow patterns are novel and useful for explaining location-dependency in business process models, and they provide a conceptual basis for further exploration of location-awareness in the management of business processes.
Abstract:
Plant-parasitic nematodes are important pests of horticultural crops grown in tropical and subtropical regions of Australia. Burrowing nematode (Radopholus similis) is a major impediment to banana production and root-knot nematodes (predominantly Meloidogyne javanica and M. incognita) cause problems on pineapple and a range of annual vegetables, including tomato, capsicum, zucchini, watermelon, rockmelon, potato and sweet potato. In the early 1990s, nematode control in these industries was largely achieved with chemicals, with methyl bromide widely used on some subtropical vegetable crops, ethylene dibromide applied routinely to pineapples and non-volatile nematicides such as fenamiphos applied up to four times a year in banana plantations. This paper discusses the research and extension work done over the last 15 years to introduce an integrated pest management approach to nematode control in tropical and subtropical horticulture. It then discusses various components of current integrated pest management programs, including crop rotation, nematode monitoring, clean planting material, organic amendments, farming systems to enhance biological suppression of nematodes and judicious use of nematicides. Finally, options for improving current management practices are considered.
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm is well-scalable, especially with Fisher's exact test, and it can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
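As a point of reference for the significance measures mentioned, the following Python snippet scores a single candidate rule X -> A with Fisher's exact test on a small synthetic binary data set. The data and column indices are made up, and the snippet does not reproduce the thesis's actual contribution, namely the pruning strategy that makes the global search feasible.

```python
# Scoring one candidate rule X -> A with Fisher's exact test on binary data.
# The data set and the planted dependency are synthetic; the search and
# pruning machinery described in the abstract is not reproduced here.
import numpy as np
from scipy.stats import fisher_exact

def rule_p_value(data, x_cols, a_col, positive=True):
    """One-sided Fisher p-value for X -> A (positive=True) or X -> not A."""
    x = data[:, x_cols].all(axis=1)           # rows where the whole antecedent X holds
    a = data[:, a_col].astype(bool)
    table = [[int(np.sum(x & a)),  int(np.sum(x & ~a))],
             [int(np.sum(~x & a)), int(np.sum(~x & ~a))]]
    _, p = fisher_exact(table, alternative="greater" if positive else "less")
    return p

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 5))      # 200 rows, 5 binary attributes
data[:, 4] |= data[:, 0] & data[:, 1]         # plant a dependency {0, 1} -> 4
print(rule_p_value(data, [0, 1], 4))          # small p-value expected
```

Enumerating and testing every rule this way is exactly the exponential blow-up the thesis addresses; the snippet only illustrates how one rule is scored.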