Abstract:
We analyse the puzzling behavior of the volatility of individual stock returns around the turn of the Millennium. There has been much academic interest in this topic, but no convincing explanation has emerged. Our goal is to pull together the many competing explanations currently proposed in the literature to determine which, if any, are capable of explaining the volatility trend. We find that many of the different explanations capture the same unusual trend around the Millennium. We also find that many of the variables are very highly correlated, and it is thus difficult to disentangle their relative ability to explain the time-series behavior of volatility. It seems that all of the variables that track average volatility well do so mainly by capturing changes in the post-1994 period. These variables have no time-series explanatory power in the pre-1995 years, calling into question the underlying idea that any of the explanations currently presented in the literature can track the trend in volatility over long periods.
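One way to see the sub-period point is to compare how much of the time-series variation in average volatility the candidate variables explain before and after 1995. The sketch below is purely illustrative: the data file and the column names (avg_vol and the candidate regressors) are hypothetical placeholders, not the paper's data or code.

```python
# Illustrative sketch (hypothetical data file and column names): compare how
# well candidate explanatory variables track average stock volatility in the
# pre-1995 and post-1994 sub-periods.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("volatility_panel.csv", parse_dates=["date"])  # hypothetical file

def subsample_r2(data, y_col, x_cols):
    """OLS R^2 of average volatility on the candidate variables."""
    X = sm.add_constant(data[x_cols])
    return sm.OLS(data[y_col], X).fit().rsquared

candidates = ["institutional_ownership", "firm_age", "growth_options"]  # assumed names
pre = df[df["date"].dt.year < 1995]
post = df[df["date"].dt.year >= 1995]

print("pre-1995 R^2 :", subsample_r2(pre, "avg_vol", candidates))
print("post-1994 R^2:", subsample_r2(post, "avg_vol", candidates))
```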
Abstract:
The paper investigates the relationship between pro-social norms and improved environmental outcomes, an area that has been neglected in the environmental economics literature. We provide empirical evidence of a small but significant positive effect of perceived environmental cooperation (reduced public littering) on voluntary environmental morale. For this purpose we use European Values Survey (EVS) data for 30 European countries. We also demonstrate that the public in Western European countries is more sensitive to perceived environmental cooperation than the public in Eastern Europe. Interestingly, the results also demonstrate that environmental morale is strongly correlated with several socio-economic and environmental variables. Several robustness tests are conducted to check the validity of the results.
Abstract:
This paper studies the evolution of tax morale in Spain in the post-Franco era. In contrast to the previous tax compliance literature, the current paper treats tax morale as the dependent variable and attempts to answer what actually shapes tax morale. The analysis uses survey data from two sources, the World Values Survey and the European Values Survey, allowing us to observe tax morale in Spain in 1981, 1990, 1995 and 1999/2000. The study of the evolution of tax morale in Spain over nearly a 20-year span is particularly interesting because the political and fiscal system evolved very rapidly during this period.
Abstract:
The aim of this paper is to contribute to the understanding of the various models used in research on the adoption and diffusion of information technology in small and medium-sized enterprises (SMEs). Starting with Rogers' diffusion theory and behavioural models, technology adoption models used in IS research are discussed. Empirical research has shown that the reasons why firms choose to adopt or not adopt technology depend on a number of factors. These factors can be categorised as owner/manager characteristics, firm characteristics and other characteristics. The existing models explaining IS diffusion and adoption by SMEs overlap and complement each other. This paper reviews the existing literature and proposes a comprehensive model which includes the whole array of variables from earlier models.
Abstract:
Background: Despite being the leading cause of death and disability in the paediatric population, traumatic brain injury (TBI) in this group is largely understudied. Clinical practice within the paediatric intensive care unit (PICU) has been based upon adult guidelines; however, children differ significantly in terms of mechanism, pathophysiology and consequences of injury. Aim: To review TBI management in the PICU and gain insight into potential management strategies. Method: To conduct this review, a literature search was conducted using MEDLINE, PubMed and The Cochrane Library with the following key words: traumatic brain injury; paediatric; hypothermia. No date restrictions were applied, to ensure that past studies whose principles remain current were not excluded. Results: Three areas were identified from the literature search and will be discussed against currently acknowledged treatment strategies: prophylactic hypothermia, brain tissue oxygen tension monitoring and decompressive craniectomy. Conclusion: Previous literature has failed to fully address paediatric-specific management protocols, and we therefore have little evidence-based guidance. This review has shown that there is an emerging and ongoing trend towards paediatric-specific TBI research, in particular in the area of moderate prophylactic hypothermia (MPH).
Abstract:
A plethora of literature exists on irrigation development. However, only a few studies analyse the distributional issues associated with irrigation-induced technological changes (IITC) in the context of commodity markets. Furthermore, these studies deal only with theoretical arguments, and to date no proper investigation has been conducted to examine the long-term benefits of adopting modern irrigation technology. This study investigates the long-term benefit changes of irrigation-induced technological change using data from Sri Lanka, with reference to rice farming. The results show that (1) adopting modern irrigation technology increases overall social welfare through consumption of a larger quantity at a lower cost; (2) the magnitude, sensitivity and distribution of the gains depend on the price elasticities of demand and supply as well as the size of the marketable surplus; (3) non-farm sector gains are larger than farm sector gains; (4) the distribution of the benefits among different types of producers depends on the magnitude of the expansion of the irrigated areas as well as the competition faced by traditional farmers; (5) selective technological adoption and subsidies have a detrimental effect on the welfare of other producers who do not enjoy the same benefits; and (6) the short-term distributional effects are more severe than the long-term effects among different groups of farmers.
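The dependence of the welfare gains on elasticities can be illustrated with a stylised linear partial-equilibrium calculation; the sketch below uses assumed demand, supply and cost-shift parameters and is not drawn from the study's data.

```python
# Illustrative sketch only: a linear partial-equilibrium example of how a
# supply shift from irrigation technology changes consumer and producer
# surplus. All parameter values are assumed for illustration.

def equilibrium(a, b, c, d):
    """Demand: P = a - b*Q, Supply: P = c + d*Q. Returns (Q*, P*)."""
    q = (a - c) / (b + d)
    return q, a - b * q

def surpluses(a, b, c, d):
    q, p = equilibrium(a, b, c, d)
    consumer = 0.5 * b * q ** 2      # triangle under demand, above price
    producer = 0.5 * d * q ** 2      # triangle above supply, below price
    return consumer, producer

# Before adoption (higher marginal cost) vs after (supply shifts out, c falls).
cs0, ps0 = surpluses(a=100, b=0.5, c=40, d=0.5)
cs1, ps1 = surpluses(a=100, b=0.5, c=25, d=0.5)
print("Change in consumer surplus:", cs1 - cs0)
print("Change in producer surplus:", ps1 - ps0)
# With a flatter (more elastic) demand curve, smaller b, less of the gain goes
# to consumers through a lower price and more is retained by producers.
```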
Abstract:
Existing literature has failed to find robust relationships between individual differences and the ability to fake psychological tests, possibly due to limitations in how successful faking is operationalised. In order to fake, individuals must alter their original profile to create a particular impression. Currently, successful faking is operationalised through statistical definitions, informant ratings, known-groups comparisons, the use of archival and baseline data, and breaches of validity indexes. However, there are many methodological limitations to these approaches. This research proposed a three-component model of successful faking to address this, where an original response is manipulated into a strategic response, which must match a criterion target. Further, by operationalising successful faking in this manner, this research takes into account the fact that individuals may have been successful in reaching their implicitly created profile, but that this may not have matched the criterion they were instructed to fake. Participants (N = 48; 22 students and 26 non-students) completed the BDI-II honestly. Participants then faked the BDI-II as if they had no, mild, moderate or severe depression, and also completed a checklist indicating which symptoms they thought reflected each level of depression. Findings were consistent with a three-component model of successful faking, in which individuals effectively changed their profile to what they believed was required; however, this profile differed from the criterion defined by the psychometric norms of the test. One of the foremost issues for research in this area is the inconsistent manner in which successful faking is operationalised. This research allowed successful faking to be operationalised in an objective, quantifiable manner. Using this model as a template may give researchers a better understanding of the processes involved in faking, including the role of strategies and abilities in determining the outcome of test dissimulation.
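The three-component operationalisation can be made concrete with a simple scoring rule: a faking attempt counts as successful only when the strategically altered score lands in the severity band the participant was instructed to portray. The sketch below is illustrative only, using the published BDI-II severity cut-offs; it is not the study's scoring code.

```python
# Illustrative sketch (not the study's code): scoring "successful faking" as
# whether a faked BDI-II total lands in the severity band the participant was
# instructed to portray. Ranges are the published BDI-II cut-offs.
BDI_II_BANDS = {
    "none":     range(0, 14),    # 0-13 minimal
    "mild":     range(14, 20),   # 14-19
    "moderate": range(20, 29),   # 20-28
    "severe":   range(29, 64),   # 29-63
}

def faking_successful(instructed_level: str, faked_total: int) -> bool:
    """True if the strategically faked score matches the criterion target band."""
    return faked_total in BDI_II_BANDS[instructed_level]

# Example: a participant asked to fake moderate depression who scores 31 has
# shifted their profile but missed the criterion band.
print(faking_successful("moderate", 31))   # False
print(faking_successful("severe", 31))     # True
```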
Abstract:
AIM: To draw on empirical evidence to illustrate the core role of nurse practitioners in Australia and New Zealand. BACKGROUND: Enacted legislation provides for mutual recognition of qualifications, including nursing, between New Zealand and Australia. As the nurse practitioner role is relatively new in both countries, there is no consistency in role expectation, and hence mutual recognition has not yet been applied to nurse practitioners. A study jointly commissioned by both countries' Regulatory Boards developed information on the core role of the nurse practitioner in order to establish shared competency and educational standards. Reporting on this study's process and outcomes provides insights that are relevant both locally and internationally. METHOD: This interpretive study used multiple data sources, including published and grey literature, policy documents, nurse practitioner program curricula and interviews with 15 nurse practitioners from the two countries. Data were analysed according to the appropriate standard for each data type and included both deductive and inductive methods. The data were aggregated thematically according to patterns within and across the interview and material data. FINDINGS: The core role of the nurse practitioner was identified as having three components: dynamic practice, professional efficacy and clinical leadership. Nurse practitioner practice is dynamic and involves the application of high-level clinical knowledge and skills in a wide range of contexts. The nurse practitioner demonstrates professional efficacy, enhanced by an extended range of autonomy that includes legislated privileges. The nurse practitioner is a clinical leader with a readiness and an obligation to advocate for their client base and their profession at the systems level of health care. CONCLUSION: A clearly articulated and research-informed description of the core role of the nurse practitioner provides the basis for the development of educational and practice competency standards. These research findings provide new perspectives to inform the international debate about this extended level of nursing practice. RELEVANCE TO CLINICAL PRACTICE: The findings from this research have the potential to achieve a standardised approach and internationally consistent nomenclature for the nurse practitioner role.
Abstract:
Post-concussion syndrome (PCS) is a controversial constellation of cognitive, emotional, and physical symptoms that some patients experience following a mild traumatic brain injury or concussion. PCS-like symptoms are commonly found in individuals with depression, pain, and stress, as well as in healthy individuals. This study investigated the base rate of PCS symptoms in a healthy sample of 96 participants and examined the relationship between these symptoms, depression, and sample demographics. PCS symptoms were assessed using the British Columbia Post-Concussion Symptom Inventory. Depression was measured using the Beck Depression Inventory-II. Results demonstrated that the base rate of PCS symptoms was very high, that there was a strong positive relationship between depression and PCS, and that demographic characteristics were not related to PCS in this sample. These findings are broadly consistent with literature suggesting a significant role for non-neurological factors in the expression of PCS symptomatology. This study adds to the growing body of literature that calls for caution in the clinical interpretation of results from PCS symptom inventories.
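For illustration, a base-rate and correlation analysis of this kind can be computed in a few lines; the data file, column names and symptom-count cut-off below are hypothetical placeholders, and this is not the study's analysis code.

```python
# Illustrative sketch (assumed file, column names and cut-off): estimating the
# base rate of PCS-level symptom reporting in a healthy sample and its
# correlation with depression scores.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("healthy_sample.csv")                 # hypothetical file
base_rate = (df["bcpsi_symptom_count"] >= 3).mean()    # illustrative cut-off
r, p = pearsonr(df["bcpsi_total"], df["bdi_ii_total"])
print(f"base rate of PCS-level reporting: {base_rate:.1%}")
print(f"PCS-depression correlation: r = {r:.2f}, p = {p:.3f}")
```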
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences which exist in energy demand and which are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments is conducted and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of the additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
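The decomposition scheme described above lends itself to a compact illustration. The sketch below is not the thesis's model; it is a minimal Benders-style loop on an assumed three-plant toy system, showing how a master problem (here solved by brute-force enumeration rather than a MIP solver) proposes build configurations, a merit-order dispatch subproblem returns the operating cost and dual values, and dual-based cuts tighten the lower bound until it meets the upper bound.

```python
# Illustrative Benders-style decomposition on an assumed toy expansion problem.
from itertools import product

CAP = [60.0, 50.0, 40.0]      # plant capacities (MW), assumed
FIX = [300.0, 220.0, 120.0]   # annualised construction costs, assumed
VAR = [2.0, 5.0, 9.0]         # operating costs per MW, assumed
DEMAND = 100.0

def dispatch(y):
    """Merit-order dispatch for committed plants y; returns (cost, price, capacity duals)."""
    order = sorted(range(len(CAP)), key=lambda i: VAR[i])
    remaining, cost, price = DEMAND, 0.0, 0.0
    for i in order:
        q = min(CAP[i] * y[i], remaining)
        if q > 0:
            cost, price, remaining = cost + VAR[i] * q, VAR[i], remaining - q
    mu = [max(0.0, price - c) for c in VAR]        # duals on the capacity limits
    return cost, price, mu

def theta_bound(y, cuts):
    """Operating-cost lower bound implied by the accumulated cuts."""
    return max((rhs - sum(m * c * yi for m, c, yi in zip(mu, CAP, y))
                for rhs, mu in cuts), default=0.0)

cuts, upper, lower = [], float("inf"), -float("inf")
while upper - lower > 1e-6:
    # Master: pick the cheapest configuration (enumeration stands in for a MIP solver).
    lower, y_trial = float("inf"), None
    for y in product((0, 1), repeat=len(CAP)):
        if sum(c * yi for c, yi in zip(CAP, y)) < DEMAND:
            continue                                # feasibility: enough capacity
        obj = sum(f * yi for f, yi in zip(FIX, y)) + theta_bound(y, cuts)
        if obj < lower:
            lower, y_trial = obj, y
    # Subproblem: evaluate the trial configuration and generate a dual-based cut.
    cost, price, mu = dispatch(y_trial)
    upper = min(upper, sum(f * yi for f, yi in zip(FIX, y_trial)) + cost)
    cuts.append((DEMAND * price, mu))               # theta >= D*price - sum_i mu_i*cap_i*y_i

print("build plan:", y_trial, "total cost:", upper)
```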
Abstract:
A mathematical model is developed to simulate the discharge of a LiFePO4 cathode. This model contains three size scales, which match experimental observations in the literature on the multi-scale nature of LiFePO4 material. A shrinking-core mechanism is used on the smallest scale to represent the phase transition of LiFePO4 during discharge. The model is validated against existing experimental data, and the validated model is then used to investigate parameters that influence active-material utilisation. Specifically, the size and composition of agglomerates of LiFePO4 crystals are discussed, and we investigate and quantify the relative effects that the ionic and electronic conductivities within the oxide have on oxide utilisation. We find that agglomerates of crystals can be tolerated at low discharge rates. The role of the electrolyte in limiting (cathodic) discharge is also discussed, and we show that electrolyte transport does limit performance at high discharge rates, confirming the conclusions of recent literature.
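The shrinking-core idea at the smallest scale can be illustrated with the classical diffusion-controlled shrinking-core relation from the reaction-engineering literature; the sketch below uses assumed parameter values and is not the paper's three-scale model.

```python
# Minimal generic shrinking-core sketch (diffusion-limited case from standard
# reaction-engineering texts); parameter values are assumed for illustration.
R   = 50e-9      # crystal radius (m), assumed
D   = 1e-18      # effective Li diffusivity in the lithiated shell (m^2/s), assumed
rho = 2.28e4     # molar density of LiFePO4 (mol/m^3), assumed
dC  = 1.0e3      # concentration driving force across the shell (mol/m^3), assumed

tau = rho * R**2 / (6.0 * D * dC)   # time for complete conversion of the particle

def time_for_conversion(X):
    """Classical relation for a diffusion-controlled shrinking core: t(X)."""
    return tau * (1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X))

for X in (0.25, 0.5, 0.75, 1.0):
    print(f"conversion {X:.0%} after ~{time_for_conversion(X):.0f} s")
```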
Abstract:
Background: The quality of stormwater runoff from ports is significant, as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, which is located in an area of high environmental value. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area.
The Project: The research project is an outcome of the collaborative partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this partnership is that it seeks to undertake research to assist the Port in strengthening its environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. The project was separated into two stages. The first stage developed a quantitative understanding of the pollutant load generation potential of the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads associated with stormwater flows from this area, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios.
Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness results in distinctive stormwater quality characteristics, different to other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies, where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 created fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in the computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration process was involved, because measured build-up and wash-off parameters were used.
Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of the input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from the initial losses, most of the other parameters used in the SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the investigation period does not fit a normal distribution. This is possibly because only one specific location, the Port of Brisbane, was considered, unlike the MUSIC model, which draws on a range of areas with different geographic and climatic conditions. Consequently, the assumptions used in MUSIC are not entirely applicable to the analysis of water quality in Port land uses. Therefore, when using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6.
Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken:
* It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios.
* In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in suspended solids for pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
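The build-up/wash-off replication described in the study approach can be illustrated with the generic exponential build-up and wash-off functional forms used in SWMM-type models; the coefficients in the sketch below are assumed for illustration and are not the measured Port of Brisbane parameters.

```python
# Generic sketch of exponential pollutant build-up and wash-off of the kind
# used in SWMM-type models; all coefficient values are assumed.
import math

def buildup(days, b_max=12.0, k_b=0.4):
    """Exponential build-up: mass per unit area after `days` dry days."""
    return b_max * (1.0 - math.exp(-k_b * days))

def washoff(buildup_mass, runoff_mm_per_hr, k_w=0.05, n_w=1.2, dt_hr=1.0):
    """Exponential wash-off over one time step; returns (mass removed, mass left)."""
    rate = k_w * runoff_mm_per_hr ** n_w * buildup_mass   # mass per hour
    removed = min(buildup_mass, rate * dt_hr)
    return removed, buildup_mass - removed

# Example: 7 dry days of build-up, then a 3-hour storm at 10 mm/h of runoff.
mass = buildup(7)
for hour in range(3):
    removed, mass = washoff(mass, runoff_mm_per_hr=10.0)
    print(f"hour {hour + 1}: washed off {removed:.2f}, remaining {mass:.2f}")
```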
Abstract:
In this research I have examined how ePortfolios can be designed for Music postgraduate study through a practice-led research enquiry. This process involved designing two Web 2.0 ePortfolio systems for a group of five postgraduate music research students. The design process revolved around the application of an iterative methodology called Software Development as Research (SoDaR), which seeks to simultaneously develop design and pedagogy. The approach to designing these ePortfolio systems applied four theoretical protocols to examine the use of digitised artefacts in ePortfolio systems to enable a dynamic and inclusive dialogue around representations of the students' work. The research and design process involved an analysis of existing software and literature, with a focus upon identifying the affordances of available Web 2.0 software and the applications of these ideas within 21st-century life. The five postgraduate music students each posed different needs in relation to the management of digitised artefacts and the communication of their work to peers and supervisors and for public display. An ePortfolio was developed for each of them that was flexible enough to address their needs within the university setting. However, in this first SoDaR iteration data-gathering phase I identified aspects of the university context that presented a negative case, which impacted upon the design and usage of the ePortfolios and prevented uptake. Whilst the portfolio itself functioned effectively, university policies and technical requirements prevented serious use. The negative case analysis revealed that the Access and Control and the Implementation, Technical and Policy Constraints protocols were limiting user uptake. Participant feedback from the semi-structured interviews carried out as part of this study revealed that, whilst the participants did not use the ePortfolio system I designed, each student was employing Web 2.0 social networking and storage processes in their lives and research. In the subsequent iterations I then designed a more ‘ideal’ system that could be applied outside of the university context and that draws upon these resources. In conclusion, I suggest recommendations about ePortfolio design that consider what the application of the theoretical protocols reveals about creative arts settings. The transferability of these recommendations is of course dependent upon the reapplication of the theoretical protocols in a new context. To address the mobility of ePortfolio design between institutions and wider settings, I have also designed a prototype for a business-card-sized USB portal for the artist's ePortfolio. This research project is not a static one; it stands as an evolving design for a Web 2.0 ePortfolio that seeks to respond to users' needs, institutional and professional contexts, and the development of software that can be incorporated within the design. What it potentially provides to creative artists is an opportunity to have a dialogue about art, with artefacts of the artist's products and processes in that discussion.