Abstract:
Although the aim of reducing occupational accidents is frequently cited to justify preventive drug and alcohol testing at work, there is little statistically significant evidence of the assumed causal relationship and negative correlation between exposure to testing and subsequent accidents. Data on tests and accidents involving employees of a Portuguese transportation company in recent years are mined for relations between these and other biographical variables. Preliminary results indicate that being subjected to random testing in the workplace is associated with fewer subsequent accidents than occur in the absence of such tests, and that there is an optimum testing frequency above which no reduction in accidents justifies further investment in testing.
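A minimal sketch of the kind of association analysis this abstract describes: comparing post-test accident rates against a no-test baseline and looking for a testing frequency beyond which accidents no longer decrease. The column names and figures are hypothetical placeholders, not the study's actual data or schema.

```python
# Hypothetical records: tests_per_year and accidents_after are illustrative
# stand-ins for the company data described in the abstract.
import pandas as pd

records = pd.DataFrame({
    "employee_id":     [1, 2, 3, 4, 5, 6],
    "tests_per_year":  [0, 0, 1, 2, 4, 8],
    "accidents_after": [3, 2, 1, 1, 0, 0],
})

# Accident rate for tested vs. untested employees.
tested = records["tests_per_year"] > 0
print("mean accidents, tested:   ", records.loc[tested, "accidents_after"].mean())
print("mean accidents, untested: ", records.loc[~tested, "accidents_after"].mean())

# Mean accidents per testing-frequency band; a curve that flattens would
# suggest an optimum frequency beyond which extra tests buy little.
print(records.groupby("tests_per_year")["accidents_after"].mean())
```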
Abstract:
This paper discusses a study to determine the selection of hearing protective devices that ensures optimum speech discrimination.
Abstract:
This paper is a review of a study to determine optimum lighting conditions to facilitate lipreading.
Abstract:
This paper reviews a study to determine optimum hearing aid settings based on loudness.
Abstract:
There is a pressing need for good rainfall data for the African continent, for both humanitarian and climatological purposes. Given the sparseness of ground-based observations, one source of rainfall information is Numerical Weather Prediction (NWP) model outputs. The aim of this article is to investigate the quality of two NWP products using Ethiopia as a test case. The two products evaluated are the ERA-40 and NCEP reanalysis rainfall products. Spatial, seasonal and interannual variability of rainfall have been evaluated for the Kiremt (JJAS) and Belg (FMAM) seasons at a spatial scale that reflects the local variability of the rainfall climate, using a method which makes optimum use of sparse gauge validation data. We found that the spatial pattern of the rainfall climatology is captured well by both models, especially for the main rainy season, Kiremt. However, both models tend to overestimate the mean rainfall in the northwest, west and central regions but underestimate it in the south and east. The overestimation is greater for NCEP in the Belg season and greater for ERA-40 in the Kiremt season. ERA-40 captures the annual cycle over most of the country better than NCEP, but strongly exaggerates the Kiremt peak in the northwest and west. The overestimation in Kiremt appears to have been reduced since the assimilation of satellite data increased around 1990. For both models the interannual variability is less well captured than the spatial and seasonal variability. Copyright © 2008 Royal Meteorological Society
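A minimal sketch of the seasonal-bias comparison described above: average model and gauge rainfall over the Kiremt (JJAS) and Belg (FMAM) months and report the per-season model bias. The monthly values below are synthetic stand-ins, not the Ethiopian gauge data.

```python
import numpy as np

months = np.arange(1, 13)                       # Jan..Dec
gauge  = np.array([10, 30, 60, 80, 50, 120, 200, 210, 150, 40, 15, 10.0])
model  = np.array([12, 35, 70, 95, 60, 150, 260, 270, 180, 45, 18, 12.0])

seasons = {"Kiremt (JJAS)": [6, 7, 8, 9], "Belg (FMAM)": [2, 3, 4, 5]}
for name, mm in seasons.items():
    idx = np.isin(months, mm)
    bias = model[idx].mean() - gauge[idx].mean()
    print(f"{name}: gauge mean = {gauge[idx].mean():.0f} mm, bias = {bias:+.0f} mm")
```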
Abstract:
Cascade is a multi-institution project studying the temporal and spatial organization of tropical convective systems. While cloud resolving numerical models can reproduce the observed diurnal cycle of such systems they are sensitive to the chosen resolution. As part of this effort, we are comparing results from the Met. Office Unified Model to data from the Global Earth Radiation Budget satellite instrument over the African Monsoon Interdisciplinary Analyses region of North Africa. We use a variety of mathematical techniques to study the outgoing radiation and the evolution of properties such as the cloud size distribution. The effectiveness of various model resolutions is tested with a view to determining the optimum balance between resolution and the need to reproduce the observations.
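A sketch of one diagnostic named above: a cloud-size distribution obtained by thresholding an outgoing-longwave-radiation (OLR) field and counting connected cold regions. The field, threshold and bin edges are illustrative only, not the Cascade analysis itself.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
olr = 240 + 40 * rng.standard_normal((100, 100))   # synthetic OLR, W m^-2

cold = olr < 220                                   # proxy for deep convective cloud
labels, n = ndimage.label(cold)                    # connected components
sizes = ndimage.sum(cold, labels, index=range(1, n + 1))  # pixels per cloud

hist, edges = np.histogram(sizes, bins=[1, 2, 4, 8, 16, 32, 64])
for lo, hi, c in zip(edges[:-1], edges[1:], hist):
    print(f"{int(lo):>3}-{int(hi):<3} px: {int(c)} clouds")
```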
Abstract:
Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determines their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s⁻¹) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments under light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and of the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm, and then at 10 mm resolution to 50 mm depth, and these distributions were analysed using a sorption-diffusion-degradation model. The fine resolution in the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore-water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. The compounds penetrated to a depth of 40-50 mm over a period of 42 days. (C) 2004 Society of Chemical Industry.
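A generic one-dimensional form of the sorption-diffusion-degradation model class named above, in standard textbook notation (not necessarily the paper's own formulation):

```latex
% Pore-water concentration C(z,t) with linear reversible sorption
% (retardation R), effective diffusion D_e, and first-order degradation
% whose rate k(z) differs between the oxic and anoxic zones.
\[
  R \,\frac{\partial C}{\partial t}
    = D_e \,\frac{\partial^2 C}{\partial z^2} - k(z)\, C,
  \qquad
  R = 1 + \frac{\rho_b K_d}{\theta},
  \qquad
  k(z) = \begin{cases}
           k_{\mathrm{oxic}},   & z \le z_{\mathrm{ox}},\\
           k_{\mathrm{anoxic}}, & z > z_{\mathrm{ox}}.
         \end{cases}
\]
```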
Abstract:
As part of the European Commission (EC)'s revision of the Sewage Sludge Directive and the development of a Biowaste Directive, there was recognition of the difficulty of comparing data from Member States (MSs) because of differences in sampling and analytical procedures. The 'HORIZONTAL' initiative, funded by the EC and MSs, seeks to address these differences in approach and to produce standardised procedures in the form of CEN standards. This article is a preliminary investigation into aspects of the sampling of biosolids, composts and soils with a history of biosolid application. The article provides information on the measurement uncertainty associated with sampling from heaps, large bags and pipes, and from soils in the landscape, under a limited set of conditions, using sampling approaches in space and time and sample numbers based on procedures widely used in the relevant industries and when sampling similar materials. These preliminary results suggest that considerably more information is required before the appropriate sample design, optimum number of samples, number of samples comprising a composite, and temporal and spatial frequency of sampling might be recommended to achieve consistent results of a high level of precision and confidence. (C) 2004 Elsevier Ltd. All rights reserved.
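For context, a standard estimate of the optimum number of samples, of the kind such sampling designs rest on; this is generic sampling theory, illustrative of rather than taken from the HORIZONTAL work:

```latex
% Number of samples n needed so that the sample mean lies within a
% tolerance e of the true value at confidence level 1 - alpha, given a
% sample standard deviation s. Solved iteratively, since the Student-t
% quantile depends on n; under the normal approximation t is replaced
% by z_{1-alpha/2}.
\[
  n \;\ge\; \left( \frac{t_{1-\alpha/2,\,n-1}\; s}{e} \right)^{2}
\]
```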
Abstract:
There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters (as opposed to calibrating all parameters together) was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so. (c) 2005 Elsevier B.V. All rights reserved.
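A minimal sketch of a regionalised sensitivity analysis of the kind applied to INCA: sample parameters, split runs into behavioural and non-behavioural by goodness of fit, and flag parameters whose two marginal distributions differ (Kolmogorov-Smirnov test). The toy model and behavioural threshold below are illustrative, not INCA itself.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n = 2000
theta = rng.uniform(0, 1, size=(n, 3))          # three hypothetical parameters

# Toy "model error": sensitive to theta[:,0], weakly to theta[:,1],
# insensitive to theta[:,2].
error = (theta[:, 0] - 0.7) ** 2 + 0.1 * theta[:, 1] + 0.02 * rng.random(n)

behavioural = error < np.quantile(error, 0.1)   # best 10% of runs
for j in range(3):
    d, p = ks_2samp(theta[behavioural, j], theta[~behavioural, j])
    print(f"parameter {j}: KS D = {d:.2f} (large D => sensitive)")
```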
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined. (C) 2003 Elsevier Science B.V. All rights reserved.
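A sketch of the identifiability check this abstract implies: for each parameter, examine where the best-fitting runs sit across its sampled range (a "dotty plot" summary). A flat error surface across the whole range marks a parameter as non-identifiable; an optimum at or near zero suggests the process is undetectable in the data. The toy error surface below is illustrative, not the QUASAR-type model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
k_nitr = rng.uniform(0, 2, n)                 # hypothetical nitrification rate
k_sed  = rng.uniform(0, 2, n)                 # hypothetical sedimentation rate

# Toy error surface: best fit when both rates are ~0, i.e. the processes
# are not detectable from the (synthetic) data.
error = (0.5 * k_nitr) ** 2 + (0.4 * k_sed) ** 2 + 0.05 * rng.random(n)

best = error < np.quantile(error, 0.05)       # best 5% of runs
print("nitrification optimum near:", k_nitr[best].mean().round(2))
print("sedimentation optimum near:", k_sed[best].mean().round(2))
```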
Abstract:
Two fundamental perspectives on the dynamics of midlatitude weather systems are provided by potential vorticity (PV) and the omega equation. The aim of this paper is to investigate the link between the two perspectives, which has so far received very little attention in the meteorological literature. It also aims to give a quantitative basis for discussion of quasi-geostrophic vertical motion in terms of components associated with system movement, maintaining a constant thermal structure, and with the development of that structure. The former links with the isentropic relative-flow analysis technique. Viewed in a moving frame of reference, the measured development of a system depends on the velocity of that frame of reference. The requirement that the development should be a minimum provides a quantitative method for determining the optimum system velocity. The component of vertical velocity associated with development is shown to satisfy an omega equation with forcing determined from the relative advection of interior PV and boundary temperature. The analysis carries through in the presence of diabatic heating provided the omega equation forcing is based on the interior PV and boundary thermal tendencies, including the heating effect. The analysis is shown to be possible also at the level of the semi-geostrophic approximation. The analysis technique is applied to a number of idealized problems that can be considered to be building blocks for midlatitude synoptic-scale dynamics. They focus on the influences of interior PV, boundary temperature, an interior boundary, baroclinic instability associated with two boundaries, and also diabatic heating. In each case, insights yielded by the new perspective are sought into the dynamical behaviour, especially that related to vertical motion. Copyright © 2003 Royal Meteorological Society
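One way to formalise the minimum-development criterion for the optimum system velocity described above (my notation, not necessarily the paper's): in a frame translating with velocity c, the apparent tendency of a field q (e.g. PV) is the fixed-frame tendency plus c·∇q, and the optimum c minimises its squared integral over the system.

```latex
\[
  \mathbf{c}^{*} \;=\; \arg\min_{\mathbf{c}}
    \int \left( \frac{\partial q}{\partial t}
                + \mathbf{c}\cdot\nabla q \right)^{2} \, dA ,
\]
% A linear least-squares problem: the normal equations give c* from the
% tendency and gradient fields, and the residual tendency in the moving
% frame is the "development" of the thermal/PV structure.
```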
Abstract:
A set of filters based on the sequence of semiconductor edges is described which offers continuity of short-wave infrared blocking. The rejection throughout the stop region is greater than 10³ for each filter and the transmission better than 70% through one octave with a square cutoff. The cutoff points are located at intervals of about two-thirds of an octave. Filters at 2.6 µm, 5.5 µm, and 12 µm which use a low-passing multilayer in combination with a semiconductor absorption edge are described in detail. The design of multilayers for optimum performance is discussed by analogy with the synthesis of electric-circuit filters.
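For reference, the standard characteristic-matrix formulation used in multilayer filter design (general thin-film theory, not the paper's specific designs):

```latex
% Each layer j of refractive index n_j, thickness d_j and optical
% admittance eta_j contributes
\[
  M_j \;=\;
  \begin{pmatrix}
    \cos\delta_j & \dfrac{i}{\eta_j}\sin\delta_j \\[4pt]
    i\,\eta_j\sin\delta_j & \cos\delta_j
  \end{pmatrix},
  \qquad
  \delta_j = \frac{2\pi n_j d_j \cos\theta_j}{\lambda},
\]
% and the stack's transmittance follows from the product M_1 M_2 ... M_N.
% A stop-band rejection greater than 10^3 corresponds to a transmittance
% below 10^{-3}, i.e. optical density > 3.
```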
Abstract:
Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a flood event in 2006 on the River Dee, North Wales, UK, using a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation recorded as the LiDAR DEM elevation at each point. Taking a planar water surface interpolated from the gauged upstream and downstream water elevations as an approximation, the water surface elevations at points along this flooded extent are compared to their 'expected' values. The errors between the two are roughly normally distributed; however, when plotted against coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed-data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a T-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this T-test are selected based on their error; the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
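A sketch of the observed-data screening described above: compare shoreline elevations sampled from the LiDAR DEM with a planar water surface interpolated between the gauged upstream and downstream levels, then keep only points whose error passes a criterion before using them to rank model runs. All numbers below are illustrative, not the River Dee data.

```python
import numpy as np

chainage = np.linspace(0, 5000, 50)          # m along reach (hypothetical)
planar   = np.interp(chainage, [0, 5000], [12.0, 10.0])   # gauged levels, m
observed = planar + np.random.default_rng(3).normal(0, 0.8, 50)  # DEM samples

err  = observed - planar
rmse = np.sqrt(np.mean(err ** 2))
print(f"RMSE = {rmse:.2f} m")                # the abstract reports ~0.8 m

keep = np.abs(err) < 0.5                     # error criterion (illustrative)
print(f"{keep.sum()} of {len(keep)} points retained for calibration")
```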
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can directly be measured. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
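A sketch of the first (thermohaline-threshold) method: integrate transport across the section only where water is warmer and more saline than thresholds, then pick the threshold pair whose flux time series best correlates with the float-derived leakage. The fields and the float series below are synthetic stand-ins for the nested-model output.

```python
import numpy as np

rng = np.random.default_rng(4)
nt, nz, ny = 120, 20, 30                       # months, depth cells, along-section cells
v = rng.normal(0.05, 0.1, (nt, nz, ny))        # velocity across the section, m/s
T = rng.normal(12, 4, (nt, nz, ny))            # temperature, deg C
S = rng.normal(35, 0.3, (nt, nz, ny))          # salinity, psu
area = 1.0e6                                   # m^2 per cell (illustrative)

# Synthetic "float-determined" leakage time series, in Sv (1 Sv = 1e6 m^3/s).
floats = (v * (T > 14)).sum(axis=(1, 2)) * area / 1e6 + rng.normal(0, 1, nt)

# Grid search over temperature/salinity thresholds, maximizing correlation.
best = (None, -np.inf)
for T0 in np.arange(10, 18, 1.0):
    for S0 in np.arange(34.5, 35.6, 0.25):
        flux = (v * ((T > T0) & (S > S0))).sum(axis=(1, 2)) * area / 1e6  # Sv
        r = np.corrcoef(flux, floats)[0, 1]
        if r > best[1]:
            best = ((T0, S0), r)
print("best thresholds (T0, S0):", best[0], " correlation:", round(best[1], 2))
```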
Abstract:
Discrepancies between recent global Earth albedo anomaly data obtained from climate models, space observations and ground observations call for a new and better Earth reflectance measurement technique. The SALEX (Space Ashen Light Explorer) instrument is a space-based visible and IR instrument for precise estimation of the global Earth albedo, measuring the ashen light reflected off the night side of the Moon from low Earth orbit. The instrument consists of a conventional 2-mirror telescope feeding a pair of channels: a 3-mirror visible imager and an IR bolometer. The performance of this unique multi-channel optical system is sensitive to stray light contamination due to the complex optical train incorporating several reflecting and refracting elements, associated mounts and the payload mechanical enclosure. This can be further aggravated by the very bright and extended observation target (i.e. the Moon). In this paper, we report the details of an extensive stray light analysis, including ghosts and cross-talk, leading to the optimum set of stray light precautions for the highest attainable signal-to-noise ratio.