320 results for Non-traditional Work
Abstract:
This chapter analyses the copyright law framework needed to ensure open access to outputs of the Australian academic and research sector, such as journal articles and theses. It provides an overview of the new knowledge landscape, the principles of copyright law, the concept of open access to knowledge, the recently developed open content models of copyright licensing, and the challenges faced in providing greater access to knowledge and research outputs.
Abstract:
Non-use values (i.e. economic values assigned by individuals to ecosystem goods and services unrelated to current or future uses) provide one of the most compelling incentives for the preservation of ecosystems and biodiversity. Assessing the non-use values of non-users is relatively straightforward using stated preference methods, but the standard approaches for estimating non-use values of users (stated decomposition) have substantial shortcomings which undermine the robustness of their results. In this paper, we propose a pragmatic interpretation of non-use values to derive estimates that capture their main dimensions, based on the identification of a willingness to pay for ecosystem protection beyond one's expected life. We empirically test our approach using a choice experiment conducted on coral reef ecosystem protection in two coastal areas in New Caledonia with different institutional, cultural, environmental and socio-economic contexts. We compute individual willingness to pay estimates, and derive individual non-use value estimates using our interpretation. We find that, at a minimum, estimates of non-use values may comprise between 25% and 40% of the mean willingness to pay for ecosystem preservation, less than has been found in most studies.
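A minimal sketch of how the reported share can be read, assuming (this is our labelling, not the paper's notation) that each respondent's non-use value estimate is the part of their stated willingness to pay attributed to protection beyond their expected lifetime:

    \[
      \widehat{\mathrm{NUV}}_i = \mathrm{WTP}_i^{\text{beyond expected life}},
      \qquad
      \text{NUV share} = \frac{\frac{1}{n}\sum_{i} \widehat{\mathrm{NUV}}_i}{\frac{1}{n}\sum_{i} \mathrm{WTP}_i} \approx 0.25\text{--}0.40 .
    \]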
Abstract:
The novel pyrazolo[3,4-d]pyrimidine compound GU285 (4-amino-6-alpha-carbamoylethylthio-1-phenylpyrazolo[3,4-d]pyrimidine, CAS 134896-40-5) was examined for its ability (1) to inhibit binding of adenosine (ADO) receptor ligands in rat brain membranes, (2) to antagonise functional responses to ADO agonists in rat right and left atria and coronary resistance vessels, and (3) to reduce the fall in heart rate and arterial blood pressure produced by the ADO A1 agonist N6-cyclopentyladenosine (CPA) in the intact, anaesthetized rat. GU285 competitively inhibited binding of the ADO A1 agonist [3H]-R-N6-phenylisopropyladenosine (R-PIA), yielding a Ki value of 11 (7-18) nmol·l⁻¹ (geometric mean and 95% CI). When assayed against the ADO A2A selective agonist [3H]-2-[p-(2-carboxyethyl)-phenethylamino]-5'-N-ethylcarboxamidoadenosine (CGS 21680), a Ki of 15 (10-24) nmol·l⁻¹ was obtained. In spontaneously beating right atria, GU285 competitively antagonized negative chronotropic effects of R-PIA with a pA2 of 8.7 ± 0.3, and in electrically paced left atria, GU285 competitively antagonized negative inotropic effects of R-PIA with a pA2 of 9.0 ± 0.1. In the potassium-arrested, perfused rat heart, GU285 (1 µmol·l⁻¹) antagonized only the high-sensitivity, ADO A2B-mediated component of the biphasic relaxation of the coronary vasculature produced by NECA. The low-sensitivity component was unchanged. GU285 (1 µmol·kg⁻¹) antagonized the negative chronotropic and hypotensive effects of the adenosine A1 agonist CPA in anaesthetized rats, producing a 10-fold rightward shift in the dose-response relationship. These data demonstrate that in the rat, GU285 is a potent, non-selective adenosine receptor antagonist that maintains its activity in vivo.
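The abstract does not state the analysis equations; a hedged assumption is that the Ki values come from the standard Cheng-Prusoff correction of competition-binding IC50 values and that the pA2 values reflect Schild analysis:

    \[
      K_i = \frac{IC_{50}}{1 + [L]/K_D},
      \qquad
      \log(\mathrm{DR} - 1) = \log[B] - \log K_B,
      \qquad
      pA_2 = -\log_{10} K_B \quad (\text{Schild slope} = 1),
    \]

where [L] and K_D are the radioligand concentration and its dissociation constant, DR is the agonist dose ratio produced by antagonist concentration [B], and K_B is the antagonist equilibrium dissociation constant.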
Not just what they want, but why they want it: Traditional market research to deep customer insights
Abstract:
Purpose This paper explores advantages and disadvantages of both traditional market research and deep customer insight methods in order to lay the platform for revealing how a relationship between these two domains could be optimised during firm-based innovation. Design/methodology/approach The paper reports on an empirical research study conducted with thirteen Australian-based firms engaged in a design-led approach to innovation. Firms were facilitated through a design-led approach in which the process of gathering deep customer insights was isolated and investigated in comparison to traditional market research methods. Findings Results show that deep customer insight methods are able to provide fresh, non-obvious ways of understanding customer needs, problems and behaviours that can become the foundation of new business opportunities. The findings conclude that deep customer insight methods provide the critical layer needed to understand why customers do and don’t engage with businesses. Revealing this ‘why’ was not possible with traditional market research methods. Research limitations/implications The theoretical outcome of this study is a complementary methods matrix, providing guidance on the appropriate implementation of research methods in accordance with a project’s timeline, so that traditional market research methods are complemented by design-led customer engagement methods. Practical implications Deep customer insight methods provide fresh, non-obvious ways of understanding customer needs, problems and behaviours that can become the foundation of new business opportunities. It is hoped that those in a position of data collection are encouraged to experiment with deep customer insight methods to connect with their customers on a meaningful level and translate these insights into value. Originality/value This paper provides original value through a new understanding of how design techniques can be applied to complement and strengthen existing market research strategies. This is crucial in an era where business competition hinges on a subtle and often intimate understanding of customer needs and behaviours.
Abstract:
Non-thermal plasma (NTP) has been introduced over the past several years as a promising method for nitrogen oxide (NOx) removal. The intent, when using NTP, is to selectively transfer input electrical energy to the electrons rather than expending it in heating the entire gas stream; the energetic electrons generate free radicals through collisions and thereby promote the desired chemical changes in the exhaust gases. The generated active species react with the pollutant molecules and decompose them. This paper reviews and summarizes relevant literature regarding various aspects of the application of NTP technology to NOx removal from exhaust gases. A comprehensive description of available scientific literature on NOx removal using NTP technology is presented, including various types of NTP, e.g. dielectric barrier discharge, corona discharge and electron beam. Furthermore, the combination of NTP with catalysts and adsorbents for better NOx removal efficiency is presented in detail. The removal of NOx from both simulated gases and real diesel engines is also considered in this review paper. As NTP is a new technique and is not yet commercialized, there is a need for more studies to be performed in this field.
Abstract:
Lecturing is a traditional method of teaching in discipline-based teaching environments, and its success in the legal discipline depends upon its alignment with learner backgrounds, learning objectives and the lecturing approaches utilised in classes. Where students have no prior knowledge of a discipline that requires a particular lecturing approach, a mismatch in this alignment places learner knowledge acquisition in a challenging position. From this perspective, this study tests the suitability of two dominant lecturing approaches: the case-based and the law-based lecturing approaches. It finds that a lecturer should put more emphasis on the case-based approach while lecturing to non-law background business students at the postgraduate level, provided that such emphasis is relative to the cognitive ability of the students and their motivation for learning law units.
Abstract:
Since 2009, all Australian states have required young people to be ‘earning or learning’ until age 17. Secondary schools and vocational colleges now accommodate students for whom the conventional academic pathways of the past were not designed. The paper reflects on a project designed to explore the moral orders in these institutional settings for managing such students in extended compulsory schooling. Originally designed as classroom ethnographies, the project involved observations over three to four weeks and interviews with teachers and students in five sites in towns experiencing high youth unemployment. The project aimed to support teachers to work productively in such classrooms with such students, under the assumption that teachers orchestrate classroom interactions. However, it became clear that events in these classrooms were being shaped by relations and parties above and beyond the classroom, as much as by those present. Teachers and students were observed to both comply with, and push against, the layers of policy and institutional processes regulating their behaviours. This paper re-thinks the original project through the gaze and resources of institutional ethnography, to better account for the layers of accountabilities and documentation practices that impacted on both teacher and student behaviours. By tracing the extended webs of ‘ruling relations’, it shows how teachers and students could both make trouble for the institutional moral order and then be held accountable for this trouble.
Abstract:
This study investigates the effect of non-thermal plasma technology on the abatement of particulate matter (PM) from actual diesel exhaust. Ozone (O3) strongly promotes PM oxidation, the main product of which is carbon dioxide (CO2). Oxidising PM into this less harmful product (CO2) is the main objective, while the correlation between PM, O3 and CO2 is also considered. A dielectric barrier discharge reactor driven by pulsed power technology was designed to produce plasma inside the diesel exhaust. To characterise the system under varied conditions, a range of applied voltages from 11 kVpp to 21 kVpp at repetition rates of 2.5, 5, 7.5 and 10 kHz was experimentally investigated. The results show that increasing the applied voltage and repetition rate yields higher discharge power and CO2 dissociation. A PM removal efficiency of more than 50% was achieved during the experiments, and high ozone concentrations on the order of a few hundred ppm were observed at high discharge powers. Furthermore, the time dependence of O3, CO2 and PM concentrations at different plasma states was analysed. Based on this analysis, an inverse relationship between ozone concentration and PM removal was found, highlighting the role of ozone in PM removal in plasma treatment of diesel exhaust.
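As a rough illustration of how the reported discharge power scales with the pulse settings, the sketch below (a hypothetical helper, not taken from the paper) integrates the voltage-current product over one pulse and multiplies by the repetition rate; it assumes sampled v(t) and i(t) waveforms from the pulsed power supply are available.

    import numpy as np

    def average_discharge_power(t, v, i, rep_rate_hz):
        # t: sample times over a single pulse (s)
        # v: applied voltage waveform (V); i: discharge current waveform (A)
        # rep_rate_hz: pulse repetition rate, e.g. 2.5e3 to 10e3 Hz
        p = v * i                                                        # instantaneous power (W)
        energy_per_pulse = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal integral (J)
        return energy_per_pulse * rep_rate_hz                            # average power (W)

Holding the energy per pulse fixed, raising the repetition rate raises the average power proportionally, which matches the trend the abstract reports for voltage and repetition rate.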
Abstract:
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Abstract:
Background The VEGF pathway has become an important therapeutic target in lung cancer, where VEGF has long been established as a potent pro-angiogenic growth factor expressed by many types of tumors. While Bevacizumab (Avastin) has proven successful in increasing the objective tumor response rate and in prolonging progression-free and overall survival in patients with NSCLC, the survival benefit is relatively short and the majority of patients eventually relapse. The current use of tyrosine kinase inhibitors alone and in combination with chemotherapy has been underwhelming, highlighting an urgent need for new targeted therapies. In this study, we examined the mechanisms of VEGF-mediated survival in NSCLC cells and the role of the Neuropilin receptors in this process. Methods NSCLC cells were screened for expression of VEGF and its receptors. The effects of recombinant VEGF and its blockade on lung tumor cell proliferation and cell cycle were examined. Phosphorylation of Akt and Erk1/2 proteins was examined by high content analysis and confocal microscopy. The effects of silencing VEGF on cell proliferation and survival signaling were also assessed. A Neuropilin-1 (NP1) stable-transfected cell line was generated. Cell growth characteristics, in addition to pAkt and pErk1/2 signaling, were studied in response to VEGF and its blockade. Tumor growth studies were carried out in nude mice following subcutaneous injection of NP1 over-expressing cells. Results Inhibition of the VEGF pathway with anti-VEGF and anti-VEGFR-2 antibodies or siRNA to VEGF, NP1 and NP2 resulted in growth inhibition of NP1-positive tumor cell lines, associated with down-regulation of PI3K and MAPK kinase signaling. Stable transfection of NP1-negative cells with NP1 induced proliferation in vitro, which was further enhanced by exogenous VEGF. In vivo, NP1 over-expressing cells significantly increased tumor growth in xenografts compared to controls. Conclusions Our data demonstrate that VEGF is an autocrine growth factor in NSCLC, signaling, at least in part, through NP1. Targeting this VEGF receptor may offer potential as a novel therapeutic approach and also support the evaluation of the role of NP1 as a biomarker predicting sensitivity or resistance to VEGF- and VEGFR-targeted therapies in the clinical arena.
Abstract:
This research identifies the commuting mode choice behaviour of 3537 adults living in different types of transit oriented development (TOD) in Brisbane by disentangling the effects of their “evil twin”, transit adjacent developments (TADs), and by also controlling for residential self-selection, travel attitudes and preferences, and socio-demographic effects. A TwoStep cluster analysis was conducted to identify the natural groupings of respondents’ living environments based on six built environment indicators. The analysis resulted in five types of neighbourhoods: urban TODs, activity centre TODs, potential TODs, TADs, and traditional suburbs. HABITAT survey data were used to derive the commute mode choice behaviour of people living in these neighbourhoods. In addition, statements reflecting both respondents’ travel attitudes and living preferences were collected as part of the survey. Factor analyses were conducted on these statements and the derived factors were then used to control for residential self-selection. Four binary logistic regression models were estimated, one for each of the travel modes used (i.e. public transport, active transport, less sustainable transport such as car/taxi, and other), to differentiate between the commuting behaviour of people living in the five types of neighbourhoods. The findings verify that urban TODs enhance the use of public transport and reduce car usage. No significant difference was found in the commuting behaviour of respondents living in traditional suburbs and TADs. The results confirm the hypothesis that TADs are the “evil twin” of TODs. The data indicate that TADs, and the mode choices of residents in these neighbourhoods, represent a missed transport policy opportunity. Further policy efforts are required for a successful transition of TADs into TODs in order to realise their full benefits. TOD policy should also be integrated with context-specific TOD design principles.
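A minimal sketch of one of the four binary logit models described, with neighbourhood type and attitude/preference factors as predictors. The synthetic data frame, the column names and the use of scikit-learn are illustrative assumptions only, not the authors' HABITAT workflow.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    # Synthetic stand-in for survey respondents: neighbourhood type, two
    # attitude/preference factor scores (self-selection controls) and a
    # binary indicator for commuting by public transport.
    df = pd.DataFrame({
        "neighbourhood": rng.choice(
            ["urban_tod", "activity_tod", "potential_tod", "tad", "suburb"], n),
        "attitude_pt": rng.normal(size=n),       # pro-public-transport attitude factor
        "prefers_walkable": rng.normal(size=n),  # living-preference factor
        "uses_public_transport": rng.integers(0, 2, n),
    })

    X = pd.get_dummies(df["neighbourhood"], drop_first=True)  # dropped category is the reference
    X[["attitude_pt", "prefers_walkable"]] = df[["attitude_pt", "prefers_walkable"]]
    y = df["uses_public_transport"]

    model = LogisticRegression(max_iter=1000).fit(X, y)
    odds_ratios = dict(zip(X.columns, np.exp(model.coef_[0])))  # neighbourhood effects vs reference

An odds ratio above 1 for a neighbourhood dummy would indicate higher odds of commuting by public transport than in the reference neighbourhood, all else equal.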
Abstract:
Background Physical condition, assessed through gait and other functional tasks, is a parameter to consider for frailty detection. The aim of the present study is to measure and describe the variability of acceleration, angular velocity and trunk displacement in the ten-meter Extended Timed Get-Up-and-Go test in two groups of frail and non-frail elderly people, through instrumentation with the iPhone 4® smartphone. A secondary aim is to analyze the differences and performance of the variance between the study groups (frail and non-frail). This is a cross-sectional study of 30 subjects aged over 65 years: 14 frail subjects and 16 non-frail subjects. Results The largest difference between groups in the Sit-to-Stand and Stand-to-Sit subphases was in the y axis (vertical vector). The minimum acceleration in the Stand-to-Sit phase was -2.69 (-4.17 / -0.96) m/s² in the frail elderly versus -8.49 (-12.1 / -5.23) m/s² in the non-frail elderly, p < 0.001. In the Gait Go and Gait Come subphases, the biggest differences found between the groups were in the vertical axis: -2.45 (-2.77 / -1.89) m/s² in the frail elderly versus -5.93 (-6.87 / -4.51) m/s² in the non-frail elderly, p < 0.001. Finally, with regard to the turning subphase, the statistically significant differences found between the groups were greater in the data obtained from the gyroscope than from the accelerometer (the mean maximum peak value for Yaw angular velocity was 25.60°/s in the frail elderly, compared to 112.8°/s in the non-frail elderly, p < 0.05). Conclusions The inertial sensor fitted in the iPhone 4® is capable of studying and analyzing the kinematics of the different subphases of the Extended Timed Up and Go test in frail and non-frail elderly people. For the Extended Timed Up and Go test, this device allows more sensitive differentiation between population groups than the traditionally used variable, namely time.
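A minimal sketch of how per-subphase peaks such as those reported could be extracted from the phone's raw signals. The function, the subphase boundaries and the assumption of time-aligned accelerometer and gyroscope arrays are illustrative, not taken from the study.

    import numpy as np

    def subphase_peaks(t, acc_y, gyro_yaw, phases):
        # t: time stamps (s); acc_y: vertical acceleration (m/s^2);
        # gyro_yaw: yaw angular velocity (deg/s);
        # phases: dict mapping subphase name -> (t_start, t_end), e.g.
        #         {"sit_to_stand": (0.0, 1.8), "gait_go": (1.8, 9.5)}
        out = {}
        for name, (t0, t1) in phases.items():
            m = (t >= t0) & (t < t1)
            out[name] = {
                "acc_min": float(acc_y[m].min()),              # minimum vertical acceleration
                "acc_max": float(acc_y[m].max()),
                "yaw_peak": float(np.abs(gyro_yaw[m]).max()),  # peak yaw angular velocity
            }
        return out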
Abstract:
Creative and ad-hoc work often involves non-digital artifacts, such as whiteboards and post-it notes. While this preferred method of brainstorming and idea development facilitates work among collocated participants, it makes it particularly difficult to involve remote participants, not to mention cases where live social involvement is required and the number and location of remote participants can be vast. Our work originally focused on large distributed teams in business entities; the vast majority of teams in large organizations are distributed. Our team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenarios mentioned above. This paper is an account of a research project in the area of enterprise collaboration, with a strong focus on human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. It describes a currently running corporate research project. In this paper we signal the potential use of the technology in situations where community involvement is either required or desirable. The goal of the paper is to initiate a discussion on the use of technologies initially designed to support enterprise collaboration in situations requiring community engagement. In other words, it is a contribution of technically focused research exploring the uses of the technology in areas such as social engagement and community involvement. © 2012 IEEE.
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieve resilient perception in challenging environmental conditions. However, this may lead to catastrophic fusion in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for absolute spatial distance threshold parameters as required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
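A simplified sketch of the described consistency test using scikit-learn's Gaussian process regressor. The kernel choice, the per-point normalisation of the log marginal likelihood and the function itself are assumptions made for illustration; the paper's actual formulation may differ.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def is_consistent(X, y, x_new, y_new):
        # X: (n, d) point locations already fused; y: (n,) range/surface values.
        # x_new, y_new: candidate measurement from another sensing modality.
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

        lml_without = gp.fit(X, y).log_marginal_likelihood_value_ / len(y)
        X_aug = np.vstack([X, np.atleast_2d(x_new)])
        y_aug = np.append(y, y_new)
        lml_with = gp.fit(X_aug, y_aug).log_marginal_likelihood_value_ / len(y_aug)

        # Fuse the new point only if the per-point model evidence does not degrade.
        return lml_with >= lml_without

Comparing per-point log marginal likelihoods is one pragmatic way to make "the model statistically improves" concrete without an absolute spatial distance threshold.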
Abstract:
Interfaces for children have continued to evolve in terms of complexity, with toys ranging from traditional tangible interfaces to apps with digital interfaces and hybrid toys with mixed physical and digital interfaces. However, limited research has been done to investigate their potential for intuitive use. This research study compares a tangible toy and an equivalent toy in the digital world (app) for intuitive use. Non-parametric Mann-Whitney U test results showed that the tangible toy was more intuitive than its intangible counterpart. Tangible systems are less complex to use and they require less time to encode and retrieve the associated knowledge needed to use them intuitively. They are associated with a low domain transfer distance and easily discoverable features. Intangible interfaces, on the other hand, require greater complexity and time to encode and retrieve associated experiential knowledge. Intangibles are associated with a larger domain transfer distance and undiscoverable features, which affects their intuitive use. Design implications and future work are discussed, emphasising the need to investigate the aspects that make tangible systems intuitive to use.
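A minimal sketch of the reported comparison using SciPy's Mann-Whitney U test; the scores are made-up placeholders for whatever intuitive-use measure the study actually used (e.g. percentage of intuitive first uses).

    from scipy.stats import mannwhitneyu

    # Hypothetical per-participant intuitive-use scores (not the study's data)
    tangible_toy = [82, 75, 90, 68, 88, 79, 85]
    digital_app = [55, 61, 48, 66, 52, 58, 60]

    # One-sided test: do tangible-toy scores tend to exceed the app scores?
    u_stat, p_value = mannwhitneyu(tangible_toy, digital_app, alternative="greater")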