103 results for C.H. Blomstrom Motor Company
Abstract:
- Safety psychology and workplace safety
- Motivational and attitudinal components of safety
- Psychological determinants of safety
- Addressing risk-behaviour in safety
- Case Study from Construction
- Discussion and Questions
Abstract:
Long-term loss of soil C stocks under conventional tillage and accrual of soil C following adoption of no-tillage have been well documented. No-tillage use is spreading, but it is common to occasionally till within a no-till regime or to regularly alternate between till and no-till practices within a rotation of different crops. Short-term studies indicate that substantial amounts of C can be lost from the soil immediately following a tillage event, but few field studies have investigated the impact of infrequent tillage on soil C stocks. How much of the C sequestered under no-tillage is likely to be lost if the soil is tilled? What are the longer-term impacts of continued infrequent tillage? If producers are to be compensated for sequestering C in soil following adoption of conservation tillage practices, the impacts of infrequent tillage need to be quantified. A few studies have examined the short-term impacts of tillage on soil C, and several have investigated the impacts of adopting continuous no-tillage. We present: (1) results from a modeling study carried out to address these questions more broadly than the published literature allows, (2) a review of the literature examining the short-term impacts of tillage on soil C, (3) a review of published studies on the physical impacts of tillage, and (4) a synthesis of these components to assess how infrequent tillage affects soil C stocks and how changes in tillage frequency could affect soil C stocks and C sequestration. Results indicate that soil C declines significantly following even one tillage event (1-11% of soil C lost). Longer-term losses increase as the frequency of tillage increases. Model analyses indicate that cultivating and ripping are less disruptive than moldboard plowing: soil C for those treatments averages just 6% less than under continuous no-tillage (NT), compared to 27% less for conventional tillage (CT). Most (80%) of the soil C gains of NT can be realized when NT is coupled with biannual cultivating or ripping.
Abstract:
The relationship between soil structure and the ability of soil to stabilize soil organic matter (SOM) is a key element in soil C dynamics that has either been overlooked or treated in a cursory fashion when developing SOM models. The purpose of this paper is to review current knowledge of SOM dynamics within the framework of a newly proposed soil C saturation concept. Initially, we distinguish SOM that is protected against decomposition by various mechanisms from that which is not protected. Methods of quantification and characteristics of three SOM pools defined as protected are discussed. Soil organic matter can be: (1) physically stabilized, or protected from decomposition, through microaggregation, (2) protected through intimate association with silt and clay particles, or (3) biochemically stabilized through the formation of recalcitrant SOM compounds. In addition to the behavior of each SOM pool, we discuss the implications of changes in land management for the processes by which SOM compounds undergo protection and release. The characteristics and responses to changes in land use or land management are described for the light fraction (LF) and particulate organic matter (POM). We define the LF and the POM not occluded within microaggregates (53-250 µm sized aggregates) as unprotected. Our conclusions are illustrated in a new conceptual SOM model that differs from most SOM models in that its state variables are measurable SOM pools. We suggest that physicochemical characteristics inherent to soils define the maximum protective capacity of these pools, which limits increases in SOM (i.e., C sequestration) with increased organic residue inputs.
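To make the saturation idea concrete, the short sketch below simulates a single protected pool filling toward a soil-specific capacity. The first-order form, the variable names and all parameter values are assumptions chosen for illustration; they are not taken from the paper.

```c
/* Hedged illustration of the C saturation concept (an assumed first-order
 * form, not the authors' model): a protected SOM pool fills toward a
 * soil-specific protective capacity, so constant residue inputs yield
 * progressively smaller C gains as the pool approaches saturation. */
#include <stdio.h>

int main(void) {
    double c = 10.0;            /* protected SOM stock, Mg C ha-1 (hypothetical)  */
    const double c_max = 40.0;  /* protective capacity set by soil properties     */
    const double input = 2.0;   /* annual residue-derived C input, Mg C ha-1 yr-1 */
    const double k_loss = 0.03; /* annual decomposition rate of protected C       */

    for (int year = 0; year <= 50; year++) {
        if (year % 10 == 0)
            printf("year %2d: protected C = %4.1f Mg ha-1\n", year, c);
        /* stabilization efficiency declines as the pool fills */
        c += input * (1.0 - c / c_max) - k_loss * c;
    }
    return 0;
}
```

Run as-is, the yearly increments shrink as the pool approaches an equilibrium below its capacity, mirroring the capped response to increased residue inputs described above.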
Abstract:
Advances in safety research, aimed at improving the collective understanding of motor vehicle crash causation, rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of 'low-hanging fruit': areas of inquiry that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is an important line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer significant ability to better understand the factors contributing to crashes. This study, believed to represent a unique contribution to the safety literature, develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the examined spatial factors include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors significantly improves model explanatory power, and the results also generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
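The mechanics of omitted variable bias can be pictured with the toy simulation below. It is purely illustrative: the linear set-up, the variable names and the coefficients are assumptions, not the study's crash models. When the outcome depends on traffic volume and on a correlated spatial factor, regressing on volume alone inflates the estimated volume effect.

```c
/* Illustrative omitted-variable-bias simulation (not the study's models):
 * the outcome depends on traffic volume x1 and a correlated spatial factor
 * x2; regressing on x1 alone inflates the volume coefficient by roughly
 * b2 * cov(x1, x2) / var(x1). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* crude standard-normal draw via the Box-Muller transform */
static double randn(void) {
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void) {
    const int n = 100000;
    const double b1 = 1.0, b2 = 2.0, rho = 0.6;  /* hypothetical effect sizes */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    srand(42);
    for (int i = 0; i < n; i++) {
        double x1 = randn();                 /* traffic volume (standardized)     */
        double x2 = rho * x1 + randn();      /* spatial factor, correlated w/ x1  */
        double y  = b1 * x1 + b2 * x2 + randn();
        sx += x1; sy += y; sxx += x1 * x1; sxy += x1 * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    printf("true volume effect    : %.2f\n", b1);
    printf("estimate omitting x2  : %.2f (expected about %.2f)\n",
           slope, b1 + b2 * rho);
    return 0;
}
```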
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts, i.e. variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, which is tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31-40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions, using an independent data set, and providing an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
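The link between mean specification and extra-variation can be sketched with a small Monte Carlo exercise. This is illustrative only: it is not the study's Bayesian MCMC analysis, and the covariate and coefficients are assumptions. Counts that are exactly Poisson given a site covariate appear over-dispersed once that covariate is left out of the mean.

```c
/* Sketch (illustrative; not the study's Bayesian analysis): counts are
 * Poisson given a covariate x, so the conditional variance equals the
 * conditional mean.  If x is omitted, the marginal counts look
 * over-dispersed (variance > mean), mimicking the "extra-variation"
 * that a dispersion parameter would otherwise absorb. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double unif(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }

/* Knuth's Poisson sampler, adequate for small means */
static int rpois(double lambda) {
    double L = exp(-lambda), p = 1.0;
    int k = 0;
    do { k++; p *= unif(); } while (p > L);
    return k - 1;
}

int main(void) {
    const int n = 200000;
    double sum = 0, sumsq = 0;
    srand(7);
    for (int i = 0; i < n; i++) {
        double x = unif();                    /* site covariate, e.g. traffic flow */
        double lambda = exp(0.2 + 1.5 * x);   /* well-specified mean structure     */
        int y = rpois(lambda);
        sum += y; sumsq += (double)y * y;
    }
    double mean = sum / n;
    double var  = sumsq / n - mean * mean;
    printf("marginal mean     = %.2f\n", mean);
    printf("marginal variance = %.2f (> mean: apparent over-dispersion)\n", var);
    return 0;
}
```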
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales rather than from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
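In the same spirit as the simulation experiment described above, the sketch below (an assumed parameterization, not the authors' experiment) generates crash counts as independent Bernoulli trials with small, site-specific probabilities. No site is ever "perfectly safe", yet low exposure alone makes most observed counts zero, and site-to-site heterogeneity pushes the zero fraction slightly above what a single-mean Poisson would predict.

```c
/* Excess-zeros sketch (illustrative; parameters are assumptions):
 * each site sees a small number of independent Bernoulli trials with a
 * small, site-specific crash probability.  No "safe state" exists, yet
 * most counts are zero. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double unif(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }

int main(void) {
    const int sites = 50000;
    long zeros = 0, total = 0;
    srand(11);
    for (int s = 0; s < sites; s++) {
        int trials = 50 + rand() % 151;       /* low exposure: 50-200 trials */
        double p = 0.0005 + 0.004 * unif();   /* small per-trial crash risk  */
        int crashes = 0;
        for (int t = 0; t < trials; t++)
            if (unif() < p) crashes++;
        if (crashes == 0) zeros++;
        total += crashes;
    }
    double mean = (double)total / sites;
    printf("observed zero fraction         = %.3f\n", (double)zeros / sites);
    printf("single-mean Poisson prediction = %.3f (exp(-mean), mean = %.3f)\n",
           exp(-mean), mean);
    return 0;
}
```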
Abstract:
OBJECTIVE: To examine the psychometric properties of a Chinese version of the Problem Areas In Diabetes (PAID-C) scale.
RESEARCH DESIGN AND METHODS: The reliability and validity of the PAID-C were evaluated in a convenience sample of 205 outpatients with type 2 diabetes. Confirmatory factor analysis, Bland-Altman analysis, and Spearman's correlations facilitated the psychometric evaluation.
RESULTS: Confirmatory factor analysis confirmed a one-factor structure of the PAID-C (χ2/df ratio = 1.894, goodness-of-fit index = 0.901, comparative fit index = 0.905, root mean square error of approximation = 0.066). The PAID-C was associated with A1C (rs = 0.15; P < 0.05) and diabetes self-care behaviors in general diet (rs = −0.17; P < 0.05) and exercise (rs = −0.17; P < 0.05). The 4-week test-retest reliability demonstrated satisfactory stability (rs = 0.83; P < 0.01).
CONCLUSIONS: The PAID-C is a reliable and valid measure to determine diabetes-related emotional distress in Chinese people with type 2 diabetes.
Abstract:
Caulfield, Harold William, p.131; Cowan, Alexander, p.164; Cowley, Ebenezer, p.164; East Talgai Station, p.193; Eaves, S.H., pp.193-194; Edgar, J.S., p.196; Everist, Selwyn, p.206; Experimental Farms and Gardens, pp.207-208; Government Houses - Queensland, pp.267-268
Abstract:
Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access to classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically in other cases. Based exclusively on analysis of the source code, it identifies required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, our type system was able to infer the correct type of unions in different scopes without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.
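The kind of weakness and runtime guard discussed above can be pictured with a small, self-contained example. It is written for illustration only and is not code from the paper's tool: a tagged union whose pointer member may only be read when the tag says so, with an assertion standing in for the kind of check such a framework might insert when static inference cannot resolve the type.

```c
/* Illustrative tagged union (not the paper's tool): the tag records which
 * member is currently valid, and a runtime assertion guards reads that a
 * static analysis could not prove safe. */
#include <assert.h>
#include <stdio.h>

typedef enum { AS_INT, AS_PTR } tag_t;

typedef struct {
    tag_t tag;
    union {
        int   number;
        char *text;
    } value;
} variant_t;

/* reading .text is only meaningful when the tag says so; the assertion
 * turns a silent type-confusion bug into an immediate, diagnosable failure */
static const char *as_text(const variant_t *v) {
    assert(v->tag == AS_PTR && "type confusion: union read as pointer");
    return v->value.text;
}

int main(void) {
    variant_t v = { .tag = AS_PTR, .value.text = "hello" };
    printf("%s\n", as_text(&v));

    v.tag = AS_INT;
    v.value.number = 42;
    /* calling as_text(&v) here would trip the assertion instead of
     * dereferencing 42 as a pointer */
    printf("%d\n", v.value.number);
    return 0;
}
```

Where the inferred type can be proven consistent at compile time, no check is needed; where it cannot, the dynamic assertion converts a latent type-confusion vulnerability into an immediate failure.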
Abstract:
Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods, for different particle sizes, and in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions.
Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions.
Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume).
Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.
Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which may lack funding to undertake measurements, or have insufficient measurement data upon which to derive emission factors for their region.
Recommendations and perspectives: In urban areas, motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
Abstract:
Measurements in the exhaust plume of a petrol-driven motor car showed that molecular cluster ions of both signs were present in approximately equal amounts. The emission rate increased sharply with engine speed while the charge symmetry remained unchanged. Measurements at the kerbside of nine motorways and five city roads showed that the mean total cluster ion concentration near city roads (603 cm-3) was about one-half of that near motorways (1211 cm-3) and about twice as high as that in the urban background (269 cm-3). Both positive and negative ion concentrations near a motorway showed a significant linear increase with traffic density (R2 = 0.3 at p < 0.05) and correlated well with each other in real time (R2 = 0.87 at p < 0.01). Heavy-duty diesel vehicles comprised the main source of ions near busy roads. Measurements were conducted as a function of downwind distance from two motorways carrying around 120-150 vehicles per minute. Total traffic-related cluster ion concentrations decreased rapidly with distance, falling by one-half between the closest approach of 2 m and 5 m from the kerb. Measured concentrations decreased to background at about 15 m from the kerb when the wind speed was 1.3 m s-1, this distance being greater at higher wind speeds. The number and net charge concentrations of aerosol particles were also measured. Unlike particles, which were carried downwind to distances of a few hundred metres, cluster ions emitted by motor vehicles were not present at more than a few tens of metres from the road.
Abstract:
In 1987 Landcorp was corporatised as a state-owned enterprise under New Zealand's public sector reforms and began operating as a collection of farms located throughout the country. Twenty years later, Landcorp had established a record of careful land management, productivity growth and solid financial returns, transforming from a fledgling company into one of the country's largest farmers. Landcorp was a major agribusiness with assets of more than $1.4 billion, built on a culture of continuous improvement and an innovative approach to business. The challenge going forward was to continue growing without increasing land ownership: cultivating ideas to grow in less conventional ways. This case study examines the operations, development and innovative approach to business undertaken by Landcorp Farming Limited, concentrating on the challenges faced by the company to maintain profits and growth, and its strategic direction for the future.
Abstract:
This article argues that Chinese traditional values do matter in Chinese corporate governance. The objective is to report the preliminary findings of a project supported by the General Research Fund in Hong Kong (HK). Thus far, the survey results from HK respondents support the authors' hypothesis. As such, traditional Chinese values should be on the agenda of the next round of company law reforms in China.
Abstract:
Increases in atmospheric concentrations of the greenhouse gases (GHGs) carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) due to human activities have been linked to climate change. GHG emissions from land use change and agriculture have been identified as significant contributors to both Australia's and the global GHG budget. This contribution is expected to increase over the coming decades as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exist on CO2, CH4 and N2O trace gas fluxes from subtropical or tropical soils and land uses. To develop effective mitigation strategies, a full global warming potential (GWP) accounting methodology is required that includes emissions of the three primary greenhouse gases; mitigation strategies that focus on one gas only can inadvertently increase emissions of another. For this reason, detailed inventories of GHGs from soils and vegetation under individual land uses are urgently required for subtropical Australia. This study aimed to quantify GHG emissions over two consecutive years from three major land uses: a well-established, unfertilized subtropical grass-legume pasture, a 30-year-old lychee orchard and a remnant subtropical gallery rainforest, all located near Mooloolah, Queensland. GHG fluxes were measured using a combination of high-resolution automated sampling, coarser spatial manual sampling and laboratory incubations. Comparison between the land uses revealed that land use change can have a substantial impact on the GWP of a landscape long after the deforestation event. The conversion of rainforest to agricultural land resulted in as much as a 17-fold increase in GWP, from 251 kg CO2 eq. ha-1 yr-1 in the rainforest to 889 kg CO2 eq. ha-1 yr-1 in the pasture to 2538 kg CO2 eq. ha-1 yr-1 in the lychee plantation. This increase resulted from altered N cycling and a reduction in the aerobic capacity of the soil in the pasture and lychee systems, enhancing denitrification and nitrification events and reducing atmospheric CH4 uptake in the soil. High infiltration, drainage and subsequent soil aeration under the rainforest limited N2O loss and promoted CH4 uptake of 11.2 g CH4-C ha-1 day-1. This was among the highest uptake rates reported for rainforest systems, indicating that aerated subtropical rainforests can act as a substantial sink of CH4. Interannual climatic variation resulted in significantly higher N2O emissions from the pasture during 2008 (5.7 g N2O-N ha-1 day-1) compared to 2007 (3.9 g N2O-N ha-1 day-1), despite nearly 500 mm less rainfall. Nitrous oxide emissions from the pasture were highest during the summer months and were highly episodic, related more to the magnitude and distribution of rain events than to soil moisture alone. Mean N2O emissions from the lychee plantation increased from an average of 4.0 g N2O-N ha-1 day-1 to 19.8 g N2O-N ha-1 day-1 following a split application of N fertilizer (560 kg N ha-1, equivalent to 1 kg N tree-1). The timing of the split application was found to be critical to N2O emissions, with over twice as much lost following an application in spring (emission factor (EF): 1.79%) compared to autumn (EF: 0.91%). This was attributed to the hot and moist climatic conditions and a reduction in plant N uptake during spring, creating conditions conducive to N2O loss. These findings demonstrate that land use change in subtropical Australia can be a significant source of GHGs.
Moreover, the study shows that modifying the timing of fertilizer application can be an efficient way of reducing GHG emissions from subtropical horticulture.
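As a worked illustration of the full-GWP accounting the thesis calls for, the sketch below converts daily soil CH4-C and N2O-N fluxes into an annual CO2-equivalent balance and computes an N2O emission factor. The daily fluxes are hypothetical, the GWP factors are the IPCC AR4 100-year values (25 for CH4, 298 for N2O) rather than necessarily the thesis's convention, and the extra N2O-N figure is simply back-calculated from the reported autumn emission factor of 0.91% on 560 kg N ha-1.

```c
/* Hedged GWP-accounting sketch (assumed fluxes and IPCC AR4 100-year GWP
 * factors; the thesis may follow a different inventory convention).
 * Fluxes reported as CH4-C and N2O-N are converted to full molecular
 * mass and then to CO2 equivalents per hectare per year. */
#include <stdio.h>

#define GWP_CH4  25.0   /* kg CO2-eq per kg CH4 (100-yr, IPCC AR4) */
#define GWP_N2O 298.0   /* kg CO2-eq per kg N2O (100-yr, IPCC AR4) */

/* daily fluxes in g CH4-C and g N2O-N per ha -> kg CO2-eq per ha per year */
static double co2_equivalent(double ch4_c_g_day, double n2o_n_g_day) {
    double ch4 = ch4_c_g_day * (16.0 / 12.0) * 365.0 / 1000.0;  /* kg CH4 ha-1 yr-1 */
    double n2o = n2o_n_g_day * (44.0 / 28.0) * 365.0 / 1000.0;  /* kg N2O ha-1 yr-1 */
    return ch4 * GWP_CH4 + n2o * GWP_N2O;
}

int main(void) {
    /* hypothetical pasture-like year: modest CH4 uptake (negative), some N2O loss */
    printf("net soil GHG balance: %.0f kg CO2-eq ha-1 yr-1\n",
           co2_equivalent(-2.0, 5.7));

    /* fertiliser-induced emission factor: extra N2O-N lost as a share of N applied */
    double n_applied   = 560.0;  /* kg N ha-1, as in the lychee orchard           */
    double extra_n2o_n = 5.1;    /* kg N2O-N ha-1 above background (hypothetical) */
    printf("emission factor: %.2f%%\n", 100.0 * extra_n2o_n / n_applied);
    return 0;
}
```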
Abstract:
For almost a decade before Hollywood existed, the French firm Pathe towered over the early film industry, with estimates of its share of all films sold around the world varying between 50% and 70%. Pathe was the first global entertainment company. This paper analyses its rise to market leadership by applying a theoretical framework drawn from the business literature on the causes of industry dominance, which provides insights into how firms acquire and maintain market dominance, in this case in the film industry. The paper uses evidence presented by film historians to argue that Pathe "fits" the expected theoretical model of a dominant firm because it had a marketing orientation, used an effective quality-based competitive strategy, and possessed the six critical marketing capabilities that business research shows enable the best-performing firms to consistently outperform rivals.