368 results for sector-motor


Relevance:

20.00%

Publisher:

Abstract:

This paper investigates the elements that support innovative and entrepreneurial activity in New Zealand’s state-owned enterprises (SOEs). An inductive case study design, involving interview data, textual analysis, and observation, was applied to three SOEs. Findings reveal that the aspects typically associated with entrepreneurship, such as innovation, risk acceptance, proactiveness, and growth, are often supported by a number of unexpected elements within the public sector, including culture, branding, operational excellence, cost efficiency, and knowledge transfer. The implications are twofold. First, innovative and entrepreneurial activity in the public sector can extend beyond policy-making, with SOEs representing both an important policy decision and a significant sector of the New Zealand Government. Second, the impact of several SOEs on international markets suggests that competition on the global stage will increasingly come from both public and private sector organizations.

Relevance:

20.00%

Publisher:

Abstract:

Construction sector application of lead indicators generally, and positive performance indicators (PPIs) particularly, is largely seen by the sector as not providing generalizable indicators of safety effectiveness. Similarly, safety culture is often cited as an essential factor in improving safety performance, yet there is no known reliable way of measuring it. This paper proposes that the accurate measurement of safety effectiveness and safety culture is a prerequisite for assessing safe behaviours, safety knowledge, effective communication, and safety performance. Currently there are no standard national or international safety effectiveness indicators (SEIs) accepted by the construction industry. The challenge is that the quantitative survey instruments developed for measuring safety culture and/or safety climate are methodologically flawed and do not produce reliable, representative data on attitudes to safety. Measures that combine quantitative and qualitative components are needed to give safety effectiveness indicators clear utility.

Relevance:

20.00%

Publisher:

Abstract:

Value Management (VM) has been proven to provide a structured framework, together with supporting tools and techniques, that facilitates effective decision-making in many types of projects, thus achieving ‘best value’ for clients. It is identified at the international level as a natural career progression for the construction service provider and as an opportunity to develop leading-edge skills. The services offered by contractors and consultants in the construction sector have been expanding. In an increasingly competitive and global marketplace, firms are seeking ways to differentiate their services to ever more knowledgeable and demanding clients. Traditional demarcations have given way, and the old definitions of what contractors, designers, engineers, and quantity surveyors can and cannot do in terms of their market offering have changed. Project management, design, and cost and safety consultancy services are being delivered by a diverse range of suppliers. Value management services have been developing in various sectors of industry, from manufacturing to the military and now construction. Given the growing evidence that VM delivers value for money to the client, VM appears to be gaining momentum as an essential management tool in the Malaysian construction sector. The recently issued VM Circular 3/2009 by the Economic Planning Unit Malaysia (EPU) possibly marks a new beginning in public sector client acceptance of the strength of VM in construction. This paper therefore studies the prospects of construction service providers marketing the benefits of VM, and how doing so may provide an edge in an increasingly competitive Malaysian construction industry.

Relevance:

20.00%

Publisher:

Abstract:

When asking the question, "How can institutions design science policies for the benefit of decision makers?", Sarewitz and Pielke [Sarewitz, D., Pielke Jr., R.A., this issue. The neglected heart of science policy: reconciling supply of and demand for science. Environ. Sci. Policy 10] posit the idea of "reconciling supply and demand of science" as a conceptual tool for the assessment of science programs. We apply the concept to the U.S. Department of Agriculture's (USDA) carbon cycle science program. By evaluating the information needs of decision makers, or the "demand", along with the supply of information by the USDA, we can ascertain where matches between supply and demand exist, and where science policies might miss opportunities. We report the results of contextual mapping and of interviews with scientists at the USDA to evaluate the production and use of current agricultural global change research, which has the stated goal of providing "optimal benefit" to decision makers at all levels. We conclude that the USDA possesses formal and informal mechanisms by which scientists evaluate the needs of users, ranging from individual producers to Congress and the President. National-level demands for carbon cycle science evolve as national and international policies are explored. Current carbon cycle science is largely derived from those discussions and thus anticipates the information needs of producers. However, without firm agricultural carbon policies, such information is currently unimportant to producers. (C) 2006 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts: variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, which is tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine whether the findings hold for rural and other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance is a simple function of the mean. Extra-variation appears to be a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to differ from the factors that might help to explain unaccounted-for variation in crashes across sites.
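To make the kind of specification studied here concrete, the sketch below fits a negative binomial crash-frequency model in which the dispersion parameter is itself a function of a covariate, via maximum likelihood on simulated intersection data. This is a minimal illustration only, not the study's actual estimation (the study used Bayesian MCMC with the Gibbs sampler); the simulated data, coefficient values, and the single-covariate dispersion function are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(42)

# Simulated signalized-intersection data (assumed values for illustration).
n_sites = 500
x_maj = rng.normal(0.0, 0.5, n_sites)        # centered log major-road flow
x_min = rng.normal(0.0, 0.5, n_sites)        # centered log minor-road flow
mu_true = np.exp(2.2 + 0.6 * x_maj + 0.4 * x_min)   # mean crash frequency
alpha_true = np.exp(-0.7 - 0.3 * x_maj)             # flow-dependent dispersion
y = rng.negative_binomial(1.0 / alpha_true,
                          1.0 / (1.0 + alpha_true * mu_true))

def neg_loglik(params):
    # NB2 model: log(mu) = b0 + b1*x_maj + b2*x_min,
    #            log(alpha) = g0 + g1*x_maj, variance = mu + alpha*mu**2.
    b0, b1, b2, g0, g1 = params
    mu = np.exp(b0 + b1 * x_maj + b2 * x_min)
    alpha = np.exp(g0 + g1 * x_maj)
    r = 1.0 / alpha                           # negative binomial "size"
    return -nbinom.logpmf(y, r, r / (r + mu)).sum()

fit = minimize(neg_loglik, x0=np.zeros(5), method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000})
print("MLEs [b0, b1, b2, g0, g1]:", np.round(fit.x, 3))
# Testing g1 for significance then asks whether the dispersion truly depends
# on the covariate once the mean structure is well specified.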

Relevance:

20.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe or unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a series of independent Bernoulli trials with unequal probabilities, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate it. We also present the theory behind dual-state count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
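The simulation argument can be reproduced in miniature: each site generates crashes through independent Bernoulli trials with unequal, site-specific probabilities (Poisson trials), and with low exposure the pooled data show far more zeros than a single Poisson fit predicts, even though no site is “perfectly safe”. The sketch below is an illustrative simulation under assumed sample sizes, exposure, and probability distribution, not the paper's actual experiment.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)

# Each site experiences independent Bernoulli trials (e.g., vehicle passages)
# with unequal, site-specific crash probabilities: Poisson trials.
n_sites = 10_000
exposure = 200                                   # deliberately low exposure
p_site = rng.gamma(shape=0.5, scale=0.01, size=n_sites)  # heterogeneous probs
p_site = np.minimum(p_site, 1.0)                 # defensive cap at 1

crashes = rng.binomial(exposure, p_site)         # observed crash counts

pooled_mean = crashes.mean()
observed_zeros = (crashes == 0).mean()
predicted_zeros = poisson.pmf(0, pooled_mean)    # single-Poisson prediction

print(f"pooled mean count        : {pooled_mean:.3f}")
print(f"observed share of zeros  : {observed_zeros:.3f}")
print(f"Poisson-predicted zeros  : {predicted_zeros:.3f}")
# The observed share of zeros exceeds the Poisson prediction even though no
# site is perfectly safe: the "excess" comes from low exposure plus
# heterogeneity, not from a dual-state data-generating process.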