485 results for value drivers
Abstract:
We examined differences in response latencies obtained during a validated video-based hazard perception driving test between three healthy, community-dwelling groups: 22 mid-aged (35-55 years), 34 young-old (65-74 years), and 23 old-old (75-84 years) current drivers, matched for gender, education level, and vocabulary. We found no significant difference in performance between mid-aged and young-old groups, but the old-old group was significantly slower than the other two groups. The differences between the old-old group and the other groups combined were independently mediated by useful field of view (UFOV), contrast sensitivity, and simple reaction time measures. Given that hazard perception latency has been linked with increased crash risk, these results are consistent with the idea that increased crash risk in older adults could be a function of poorer hazard perception, though this decline does not appear to manifest until age 75+ in healthy drivers.
Abstract:
This paper seeks to identify the sources of value in a government health screening service. Consumers' use of such services for their own benefit demonstrates desirable behaviour, and their continued use of these services indicates maintenance of that behaviour. There are also positive outcomes for society, as the health of its members is improved overall through this behaviour. Individual in-depth interviews with 25 women who use breast cancer screening services provided by BreastScreen (BSQ) revealed five categories of sources of value: information sources, interaction sources, service, environment, and consumer participation. These findings provide valuable insights into the value construction of consumers and contribute towards our understanding of the value concept in social marketing.
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, ‘Evaluating Information Systems’, is broad and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and that those resources need to be invested in a way that provides the greatest benefit to the organization. IT expenditure represents a substantial portion of any organization’s investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis – a ‘management’ emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal – The Journal of Strategic Information Systems – to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area. This is achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (i.e. the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic ‘Information Systems Success Measurement’, then drills deeper to become even more focused in Part 4 on ‘Benchmarking Information Systems’. In other words, the chapter drills down from Parts 1 and 2 (Value of IS), to Part 3 (Measuring Information Systems Success), to Part 4 (Benchmarking IS). While the commencing Parts (1 and 2) are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts (3 and 4) admittedly reflect the author’s more specific interests. Thus, the three chapter foci – value of IS, measuring IS success, and benchmarking IS – are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the preceding one.
Parts 1 and 2, ‘the Value of IS’, take a broad view, with much emphasis on ‘the business value of IS’, or the relationship between information technology and organizational performance. Part 3, ‘Information Systems Success Measurement’, focuses more specifically on the measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs – System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use – later suggesting a seventh construct, Service Quality (DeLone and McLean 2003). These six constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, ‘Benchmarking Information Systems’, drills deeper again, focusing more specifically on a measure of the IS that can be used as a ‘benchmark’. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems (IS). Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and system contexts.
Abstract:
The establishment of corporate objectives regarding economic, environmental, social, and ethical responsibilities, to inform business practice, has been gaining credibility in the business sector since the early 1990s. This is witnessed through (i) the formation of international forums for sustainable and accountable development, (ii) the emergence of standards, systems, and frameworks to provide common ground for regulatory and corporate dialogue, and (iii) the significant quantum of relevant popular and academic literature in a diverse range of disciplines. How, then, has this move towards greater corporate responsibility become evident in the provision of major urban infrastructure projects? The gap identified, in both the academic literature and industry practice, is a structured and auditable link between corporate intent and project outcomes. Limited literature has been discovered which makes a link between corporate responsibility, project performance indicators (or critical success factors), and major infrastructure provision. This search revealed that a comprehensive mapping framework, from an organisation’s corporate objectives through to intended, anticipated and actual outcomes and impacts, has not yet been developed for the delivery of such projects. The research problem thus explored is ‘the need to better identify, map and account for the outcomes, impacts and risks associated with economic, environmental, social and ethical outcomes and impacts which arise from major economic infrastructure projects, both now, and into the future’. The methodology being used to undertake this research is based on Checkland’s soft systems methodology, engaging in action research on three collaborative case studies. A key outcome of this research is a value-mapping framework applicable to Australian public sector agencies. This is a decision-making methodology which will enable project teams responsible for delivering major projects to better identify and align project objectives and impacts with stated corporate objectives.
Abstract:
We provide conceptual and empirical insights elucidating how organizational practices influence service staff attitudes and behaviors, and how the latter affect organizational performance drivers. Our analyses suggest that service organizations can enhance their performance by putting in place strategies and practices that strengthen the service-oriented behaviors of their employees and reduce their intentions to leave the organization. Improved performance is accomplished through both the delivery of high quality services (enhancing organizational effectiveness) and the maintenance of frontline staff (increasing organizational efficiency). Specifically, service-oriented business strategies in the form of organizational-level service orientation, and practices in the form of training, directly influence the manifest service-oriented behaviors of staff. Training also indirectly affects the intention of frontline staff to leave the organization; it increases job satisfaction, which, in turn, has an impact on affective commitment. Both affective and instrumental commitment were hypothesized to reduce the intentions of frontline staff to leave the organization; however, only affective commitment had a significant effect.
Abstract:
Problem: This study considers whether requiring learner drivers to complete a set number of hours while on a learner licence affects the number of hours of supervised practice that they undertake. It compares the amount of practice that learners in Queensland and New South Wales report undertaking. At the time the study was conducted, learner drivers in New South Wales were required to complete 50 hours of supervised practice while those from Queensland were not. Method: Participants were approached outside driver licensing centres after they had just completed their practical driving test to obtain their provisional (intermediate) licence. Those agreeing to participate were interviewed over the phone later and asked a range of questions to obtain information including socio-demographic details and the amount of supervised practice completed. Results: There was a significant difference in the amount of practice that learners reported undertaking. Participants from New South Wales reported completing a significantly greater amount of practice (M = 73.3 hours, SD = 29.12 hours) on their learner licence than those from Queensland (M = 64.1 hours, SD = 51.05 hours). However, the distribution of hours of practice among the Queensland participants was bimodal: participants from Queensland reported either completing much less or much more practice than the New South Wales average. Summary: While it appears that requiring learner drivers to complete a set number of hours may increase the average number of hours of practice obtained, it may also serve to discourage drivers from obtaining additional practice over and above the required hours. Impact on Industry: The results of this study suggest that the implications of requiring learner drivers to complete a set number of hours of supervised practice are complex. In some cases, policy makers may inadvertently limit the number of hours learners obtain to the mandated amount rather than encouraging them to obtain as much practice as possible.
Abstract:
This article rebuts the still-common assumption that managers of capitalist entities have a duty, principally or even exclusively, to maximise the monetary return to investors on their investments. It argues that this view is based on a misleadingly simplistic conception of human values and motivation. Not only is acting solely to maximise long-term shareholder value difficult, it displays, at best, banal single-mindedness and, at worst, sociopathy. In fact, real investors and managers have rich constellations of values that should be taken account of in all their decisions, including their business decisions. Awareness of our values, and public expression of our commitment to exemplify them, make for healthier investment and, in the long term, a healthier corporate world. Individuals and funds investing on the basis of such values, in companies that express their own, display humanity rather than pathology. Preamble: I always enjoyed the discussions that Michael Whincop and I had about the interaction of ethics and economics. Each of us could see an important role for these disciplines, as well as our common discipline of law. We also shared an appreciation of the institutional context within which much of the drama of life is played out. In understanding the behaviour of individuals and the choices they make, it seemed axiomatic to each of us that ethics and economics have a lot to say. This was also true of the institutions in which they operate. Michael had a strong interest in ‘the new institutional economics’ and I had a strong interest in ‘institutionalising ethics’ right through the 1990s. This formed the basis of some fascinating and fruitful discussions. Professor Charles Sampford is Director, Key Centre for Ethics, Law, Justice and Governance, Foundation Professor of Law at Griffith University and President, International Institute for Public Ethics. Dr Virginia Berry is a Research Fellow at the Key Centre for Ethics, Law, Justice and Governance, Griffith University. Oliver Williamson, one of the leading proponents of the ‘new institutional economics’, published a number of influential works: see Williamson (1975, 1995, 1996). See also Sampford (1991), pp 185-222. The primary focus of discussions on institutionalising ethics has been on public sector ethics: see, for example, Preston and Sampford (2002); Sampford (1994), pp 114-38. Some discussion has, however, moved beyond the public sector to include business: see Sampford (2004).
Abstract:
This submission has been prepared in response to the Parliamentary Travelsafe Committee's Inquiry into vehicle impoundment for drink drivers, to address research relevant to the committee’s investigation into whether:
• drink drivers in Queensland continue to drive illegally after being apprehended by police or disqualified from driving by the courts;
• the incidence of repeat drink driving undermines the effectiveness of existing penalties for drink driving offences; and
• vehicle impoundment and/or ignition key confiscation are cost-effective deterrents that will reduce drink driving recidivism, relative to other existing or potential methods of managing offenders.
Abstract:
Today more than ever, generating and managing knowledge is an essential source of competitive advantage for every organization, and particularly for multinational corporations (MNCs). However, despite the undisputed agreement about the importance of creating and managing knowledge, a large number of corporations still act unethically or illegally. Clearly, too little attention has been paid to gaining more knowledge about the management of ethical knowledge in organizations. This paper refers to value-based knowledge as the process of recognising and managing those values that stand at the heart of decision-making and action in organizations. In order to support MNCs in implementing value-based knowledge processes, the managerial ethical profile (MEP) is presented as a valuable tool to facilitate the knowledge management process at both the intra-organizational network level and the inter-organizational network level.
Abstract:
Student understanding of decimal number is poor (e.g., Baturo, 1998; Behr, Harel, Post & Lesh, 1992). This paper reports on a study which set out to determine the cognitive complexities inherent in decimal-number numeration and what teaching experiences need to be provided in order to facilitate an understanding of decimal-number numeration. The study gave rise to a theoretical model which incorporated three levels of knowledge. Interview tasks were developed from the model to probe 45 students’ understanding of these levels, and intervention episodes undertaken to help students construct the baseline knowledge of position and order (Level 1 knowledge) and an understanding of multiplicative structure (Level 3 knowledge). This paper describes the two interventions and reports on the results which suggest that helping students construct appropriate mental models is an efficient and effective teaching strategy.
Abstract:
Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver’s cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel-frequency cepstral coefficient (MFCC) based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy in the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting, together with the acoustic analysis, that it is possible to monitor the level of driver distraction directly from their speech.
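For readers wanting a concrete sense of the kind of classifier described above (frame-level MFCCs scored by per-class Gaussian mixture models), a minimal sketch follows. It assumes librosa and scikit-learn are available; the sample rate, number of coefficients, mixture count, and file names are illustrative assumptions rather than the study's actual configuration.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, n_mfcc=13, sr=16000):
    """Load an audio file and return per-frame MFCC vectors (frames x coefficients)."""
    y, rate = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=rate, n_mfcc=n_mfcc).T

def train_gmm(paths, n_components=16):
    """Fit one diagonal-covariance GMM on all frames pooled from the given files."""
    frames = np.vstack([mfcc_frames(p) for p in paths])
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          max_iter=200, random_state=0)
    return gmm.fit(frames)

def classify(path, gmm_codriver, gmm_sds):
    """Label an utterance by whichever class model gives the higher
    average per-frame log-likelihood."""
    frames = mfcc_frames(path)
    return "co-driver" if gmm_codriver.score(frames) > gmm_sds.score(frames) else "SDS"

# Hypothetical usage with per-class lists of training recordings:
# gmm_cd = train_gmm(codriver_training_wavs)
# gmm_sds = train_gmm(sds_training_wavs)
# print(classify("held_out_session.wav", gmm_cd, gmm_sds))
```

Scoring whole utterances by average frame log-likelihood is the standard GMM approach to this kind of two-way audio classification; the 91% open-set figure reported above would of course depend on the study's own features, training sessions and labelling.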
Abstract:
Purpose: To compare the eye and head movements and lane-keeping of drivers with hemianopia and quadrantanopia with those of age-matched controls when driving under real-world conditions. Methods: Participants included 22 hemianopes and 8 quadrantanopes (mean age 53 years) and 30 persons with normal visual fields (mean age 52 years) who were ≥ 6 months from the brain injury date and were either current drivers or aiming to resume driving. All participants drove an instrumented dual-brake vehicle along a 14-mile route in traffic that included non-interstate city driving and interstate driving. Driving performance was scored using a standardised assessment system by two “backseat” raters and by the Vigil Vanguard system, which provides objective measures of speed, braking and acceleration, and cornering, together with video-based footage from which eye and head movements and lane-keeping can be derived. Results: Compared to drivers with normal visual fields, drivers with hemianopia or quadrantanopia on average were significantly more likely to drive slowly, to exhibit less excessive cornering force or acceleration, and to execute more shoulder movements off the seat. Those hemianopic and quadrantanopic drivers rated as safe to drive by the backseat evaluator made significantly more excursive eye movements, exhibited more stable lane positioning and fewer sudden braking events, and drove at higher speeds than those rated as unsafe, while there was no difference between safe and unsafe drivers in head movements. Conclusions: Persons with hemianopic and quadrantanopic field defects rated as safe to drive have different driving characteristics compared to those rated as unsafe when assessed using objective measures of driving performance.
Abstract:
Purpose – The paper describes a project created to enhance e-research support activities within an Australian university, based on environmental scanning of e-research activities and funding both nationally and internationally. Participation by the university library is also described.
Design/methodology/approach – The paper uses a case study that describes the stages of a project undertaken to develop an academic library’s capacity to offer e-research support to its institution’s research community.
Findings – While the outcomes of the project have been successfully achieved, the work needs to be continued and eventually mainstreamed as core business in order to keep pace with developments in e-research. The continual skilling up of the university’s researchers and research support staff in e-research activities is imperative in reaching the goal of becoming a highly competitive research institution.
Research limitations/implications – Although a single case study, the work has been contextualised within the national research agenda.
Practical implications – The paper provides a project model that can be adapted within an academic library without external or specialist skills. It is also scalable and can be applied at a divisional or broader level.
Originality/value – The paper highlights the drivers for research investment in Australia and provides a model of how building e-research support activities can leverage this investment and contribute towards successful research activity.
Abstract:
Timberland is seen as a long-term investment which has recently received increased institutional investor attention in many countries and potentially provides added value in a mixed-asset portfolio. Using the National Council of Real Estate Investment Fiduciaries (NCREIF) timberland series, this paper analyses the risk-adjusted performance and portfolio diversification benefits of timberland in the United States over the period 1987-2007. U.S. timberland is seen to have been a strongly performing asset class with significant portfolio diversification benefits over this period, and with a significant portfolio role separate to that of real estate. However, recent years have seen reduced risk-adjusted returns, with some loss of the portfolio diversification benefits of timberland with stocks and real estate. Global drivers are likely to see increased future demand for timberland investment.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
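Because the abstract defines VaR as a loss percentile and benchmarks the GLD-based estimates against historical simulation, a minimal sketch of that baseline may help fix ideas: an empirical-percentile VaR re-estimated over a moving window of past returns. The window length, confidence level, and simulated heavy-tailed returns below are illustrative assumptions; fitting a Generalised Lambda Distribution in each window would require a dedicated estimation routine not shown here.

```python
import numpy as np

def historical_var(returns, alpha=0.01):
    """Historical-simulation VaR: the loss exceeded with probability alpha,
    read straight off the empirical return distribution (reported as a positive loss)."""
    return -np.percentile(np.asarray(returns), 100 * alpha)

def rolling_var(returns, window=250, alpha=0.01):
    """One-step-ahead VaR re-estimated on a moving window of past returns,
    echoing the semi-parametric moving-window idea in the abstract
    (empirical percentiles here, rather than fitted GLD percentiles)."""
    returns = np.asarray(returns)
    var_series = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        var_series[t] = historical_var(returns[t - window:t], alpha)
    return var_series

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated heavy-tailed daily returns (Student-t), standing in for real index data.
    r = 0.01 * rng.standard_t(df=4, size=1500)
    print(f"Full-sample 1% one-day VaR: {historical_var(r):.4f}")
    print(f"Latest rolling 1% VaR:      {rolling_var(r)[-1]:.4f}")
```

The rolling estimate lets the VaR respond to time-varying volatility and skewness in the recent window, which is exactly the behaviour the thesis seeks to capture more formally with time-varying GLD parameters.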