934 results for "1 sigma standard deviation for the average"


Abstract:

In recent years, many problems have emerged due to aircraft noise in the areas surrounding airports. The solution is not easy, considering that residents ask for a reduction in the number of aircraft operations while airlines face growing demand for operations at the major airports. Airport and regulatory authorities therefore attempt a solution by imposing a fine on any aircraft whose actual trajectory differs from the nominal one by more than a given lateral deviation. But what should the value of this deviation be? The current situation is that many operators have to pay large sums for exceeding a deviation that was established without operational criteria. This paper presents the results of a research program being carried out by the authors that aims to determine the "delta" deviation to be used for this purpose. In addition, a customized method per SID and per airport is proposed for determining the maximum allowed lateral deviation: if the aircraft remains within it, no fine is imposed.
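
As an illustration of the kind of calculation involved (a minimal sketch with synthetic data; the thresholding rule and all numbers are assumptions, not the authors' method), a per-SID allowed deviation could be derived from the distribution of recorded cross-track deviations:

```python
import numpy as np

# Hypothetical cross-track deviations (metres) recorded for one SID at one
# airport; in practice these would come from radar track data.
rng = np.random.default_rng(42)
deviations = np.abs(rng.normal(loc=0.0, scale=300.0, size=5000))

# One plausible rule (an assumption, not the paper's): set the allowed
# deviation at the mean plus two standard deviations of normal operations.
mean = deviations.mean()
sigma = deviations.std(ddof=1)
threshold = mean + 2.0 * sigma

print(f"mean = {mean:.0f} m, sigma = {sigma:.0f} m, threshold = {threshold:.0f} m")
print(f"fraction of flights within threshold: {(deviations <= threshold).mean():.3f}")
```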

Abstract:

Photogrammetric reanalysis of 1985 aerial photos has revealed substantial submarine melting of the floating ice tongue of Jakobshavn Isbrae, west Greenland. The thickness of the floating tongue determined from hydrostatic equilibrium tapers from ~940 m near the grounding zone to ~600 m near the terminus. Feature tracking on orthophotos shows speeds on the July 1985 ice tongue to be nearly constant (~18.5 m/d), indicating negligible dynamic thinning. The thinning of the ice tongue is mostly due to submarine melting with average rates of 228 ± 49 m/yr (0.62 ± 0.13 m/d) between the summers of 1984 and 1985. The cause of the high melt rate is the circulation of warm seawater (thermal forcing of up to 4.2°C) beneath the tongue with convection driven by the substantial discharge of subglacial freshwater from the grounding zone. We believe that this buoyancy-driven convection is responsible for a deep channel incised into the sole of the floating tongue. A dramatic thinning, retreat, and speedup began in 1998 and continues today. The timing of the change is coincident with a 1.1°C warming of deep ocean waters entering the fjord after 1997. Assuming a linear relationship between thermal forcing and submarine melt rate, average melt rates should have increased by ~25% (~57 m/yr), sufficient to destabilize the ice tongue and initiate the ice thinning and the retreat that followed.
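
As an aside, the hydrostatic-equilibrium step can be reproduced with a one-line formula (a sketch assuming typical densities of ~917 kg/m³ for glacier ice and ~1028 kg/m³ for seawater; the paper's exact parameters and freeboard values may differ):

```python
RHO_ICE = 917.0        # kg m^-3, assumed density of glacier ice
RHO_SEAWATER = 1028.0  # kg m^-3, assumed density of fjord water

def thickness_from_freeboard(freeboard_m: float) -> float:
    """Ice thickness of a floating tongue in hydrostatic equilibrium,
    given the surface elevation (freeboard) above sea level."""
    return freeboard_m * RHO_SEAWATER / (RHO_SEAWATER - RHO_ICE)

# An assumed freeboard of ~101 m implies a thickness of ~935 m, close to
# the ~940 m quoted near the grounding zone.
print(f"{thickness_from_freeboard(101.0):.0f} m")
```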

Abstract:

Objective: To determine the test-retest reliability of measurements of thickness, fascicle length (Lf) and pennation angle (θ) of the vastus lateralis (VL) and gastrocnemius medialis (GM) muscles in older adults. Participants: Twenty-one healthy older adults (11 men and 10 women; average age 68.1 ± 5.2 years) participated in this study. Methods: Ultrasound images (probe frequency 10 MHz) of the VL at two sites (VL sites 1 and 2) were obtained with participants seated with the knee at 90° flexion. For GM measures, participants lay prone with the ankle fixed at 15° dorsiflexion. Measures were taken on two separate occasions, 7 days apart (T1 and T2). Results: The ICCs (95% CI) were: VL site 1 thickness = 0.96 (0.90–0.98); VL site 2 thickness = 0.96 (0.90–0.98); VL θ = 0.87 (0.68–0.95); VL Lf = 0.80 (0.50–0.92); GM thickness = 0.97 (0.92–0.99); GM θ = 0.85 (0.62–0.94); GM Lf = 0.90 (0.75–0.96). The 95% ratio limits of agreement (LOAs) for all measures, calculated by multiplying the standard deviation of the ratio of the results between T1 and T2 by 1.96, ranged from 10.59% to 38.01%. Conclusion: The ability of these tests to detect a real change in VL and GM muscle architecture is good at the group level but problematic at the individual level, as the relatively large 95% ratio LOAs in the current study may encompass the changes in architecture observed in other training studies. Therefore, the current findings suggest that B-mode ultrasonography can be used with confidence by researchers investigating changes in muscle architecture in groups of older adults, but its use is limited in showing changes in individuals over time.
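
The ratio limits-of-agreement calculation described above is straightforward to reproduce (a sketch with hypothetical paired measurements; the real data are in the paper):

```python
import numpy as np

# Hypothetical paired muscle-thickness measurements (cm) at T1 and T2.
t1 = np.array([2.10, 1.95, 2.30, 2.05, 2.20, 1.88, 2.15])
t2 = np.array([2.05, 2.00, 2.25, 2.10, 2.12, 1.95, 2.18])

ratio = t1 / t2
# 95% ratio limits of agreement, as described in the abstract:
# 1.96 times the standard deviation of the T1/T2 ratios, expressed in %.
ratio_loa = 1.96 * np.std(ratio, ddof=1) * 100.0
print(f"95% ratio LOA = {ratio_loa:.2f}%")
```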

Abstract:

In this paper, a theory is developed to calculate the average strain field in materials with randomly distributed inclusions. Much previous research investigating average-field behavior was based on Mori and Tanaka's idea. Because those studies were restricted to materials with uniform distributions of inclusions, they did not need detailed statistical information about the random microstructure and could replace the ensemble average with the volume average. To study more general materials with randomly distributed inclusions, a number density function is introduced in formulating the average field equation in this research. Both uniform and nonuniform distributions of inclusions are taken into account in detail.
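
Schematically (a sketch of the standard ensemble-average formalism with notation assumed here, not taken verbatim from the paper), introducing the number density n(x') of inclusion centres lets the ensemble-averaged strain be written as a weighted integral rather than a volume average:

```latex
% Ensemble-averaged strain with a (possibly nonuniform) number density n(x'):
\langle \boldsymbol{\varepsilon}(\mathbf{x}) \rangle
  = \boldsymbol{\varepsilon}^{0}(\mathbf{x})
  + \int_{V} n(\mathbf{x}')\,
      \Delta\boldsymbol{\varepsilon}(\mathbf{x} \mid \mathbf{x}')\,
      \mathrm{d}\mathbf{x}'
```

where Δε(x|x') is the average strain perturbation at x produced by an inclusion centred at x'; for a statistically uniform distribution, n(x') = n0 and the integral reduces to the volume average exploited in Mori–Tanaka-type schemes.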

Abstract:

This technical memorandum documents the design, implementation, data preparation, and descriptive results of the 2006 Annual Economic Survey of Federal Gulf Shrimp Permit Holders. The data collection was designed by the NOAA Fisheries Southeast Fisheries Science Center Social Science Research Group to track the financial and economic status and performance of vessels holding a federal moratorium permit for harvesting shrimp in the Gulf of Mexico. A two-page, self-administered mail survey collected total annual costs broken out into seven categories, along with auxiliary economic data. In May 2007, 580 vessels were randomly selected, stratified by state, from a preliminary population of 1,709 vessels with federal permits to shrimp in offshore waters of the Gulf of Mexico. The survey was implemented during the rest of 2007. After many reminder and verification phone calls, 509 surveys were deemed complete, for an ineligibility-adjusted response rate of 90.7%. The linking of each individual vessel's cost data to its revenue data from a different data collection was imperfect, and hence the final number of observations used in the analyses is 484. Based on various measures and tests of validity throughout the technical memorandum, the quality of the data is high.

The results are presented in a standardized table format, linking vessel characteristics and operations to simple balance sheet, cash flow, and income statements. In the text, results are discussed for the total fleet, the Gulf shrimp fleet, the active Gulf shrimp fleet, and the inactive Gulf shrimp fleet. Additional results for shrimp vessels grouped by state, by vessel characteristics, by landings volume, and by ownership structure are available in the appendices. The general conclusion of this report is that the financial and economic situation is bleak for the average vessel in most of the categories that were evaluated. With few exceptions, cash flow for the average vessel is positive while the net revenue from operations and the "profit" are negative. With negative net revenue from operations, the economic return for average shrimp vessels is less than zero. Only with the help of government payments does the average owner just about break even. In the short term, this will discourage any new investments in the industry. The financial situation in 2006, especially if it endures over multiple years, is also economically unsustainable for the average established business.

Vessels in the active and inactive Gulf shrimp fleet are, on average, 69 feet long, weigh 105 gross tons, are powered by 505 hp motor(s), and are 23 years old. Three-quarters of the vessels have steel hulls and 59% use a freezer for refrigeration. The average market value of these vessels was $175,149 in 2006, about one hundred thousand dollars less than the average original purchase price. Outstanding loans averaged $91,955, leading to an average owner equity of $83,194.

Based on the sample, 85% of the federally permitted Gulf shrimp fleet was actively shrimping in 2006. Of these 386 active Gulf shrimp vessels, just under half (46%) were owner-operated. On average, these vessels burned 52,931 gallons of fuel, landed 101,268 pounds of shrimp, and received $2.47 per pound of shrimp. Non-shrimp landings added less than 1% to cash flow, indicating that the federal Gulf shrimp fishery is very specialized. The average total cash outflow was $243,415, of which $108,775 was due to fuel expenses alone. The expenses for hired crew and captains averaged $54,866, which indicates the importance of the industry as a source of wage income. The resulting average net cash flow is $16,225 but has a large standard deviation. For the population of active Gulf shrimp vessels we can state with 95% certainty that the average net cash flow was between $9,500 and $23,000 in 2006. The median net cash flow was $11,843. Based on the income statement for active Gulf shrimp vessels, average fixed costs accounted for just under a quarter of operating expenses (23.1%), labor costs for just over a quarter (25.3%), and non-labor variable costs for just over half (51.6%). Fuel costs alone accounted for 42.9% of total operating expenses in 2006. It should be noted that the labor cost category in the income statement includes both the actual cash payments to hired labor and an estimate of the opportunity cost of owner-operators' time spent as captain. The average labor contribution (as captain) of an owner-operator is estimated at about $19,800. The average net revenue from operations is negative $7,429, and is statistically significantly less than zero in spite of a large standard deviation. The economic return to Gulf shrimping is negative 4%. Including non-operating activities, foremost an average government payment of $13,662, leads to an average loss before taxes of $907 for the vessel owners. The confidence interval of this value straddles zero, so we cannot reject, with 95% certainty, that the population average is zero.

The average inactive Gulf shrimp vessel is generally of a smaller scale than the average active vessel. Inactive vessels are physically smaller, are valued much lower, and are less dependent on loans. Fixed costs account for nearly three-quarters of the total operating expenses of $11,926, and only 6% of these vessels have hull insurance. With an average net cash flow of negative $7,537, the inactive Gulf shrimp fleet has a major liquidity problem. On average, net revenue from operations is negative $11,396, which amounts to a negative 15% economic return, and owners lose $9,381 on their vessels before taxes. To sustain such losses, and especially to survive the negative cash flow, many of the owners must be subsidizing their shrimp vessels with the help of other income or wealth sources or are drawing down their equity.

Active Gulf shrimp vessels in all states but Texas exhibited negative returns. The Alabama and Mississippi fleets have the highest assets (vessel values), on average, yet they generate zero cash flow and negative $32,224 net revenue from operations. Due to their high (loan) leverage ratio, the negative 11% economic return is amplified into a negative 21% return on equity. In contrast, for Texas vessels, which actually have the highest leverage ratio among the states, a 1% economic return is amplified into a 13% return on equity. From a financial perspective, the average Florida and Louisiana vessels conform roughly to the overall average of the active Gulf shrimp fleet. It should be noted that these results are averages and hence hide the variation that clearly exists within all fleets and all categories. Although the financial situation for the average vessel is bleak, some vessels are profitable.
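
The quoted 95% confidence interval for average net cash flow follows from the standard large-sample formula, mean ± 1.96·s/√n; a sketch using the reported mean and sample size (the standard deviation below is back-calculated for illustration only; this summary does not report it):

```python
import math

n = 386               # active Gulf shrimp vessels in the sample
mean_ncf = 16_225.0   # reported average net cash flow ($)
sd_ncf = 67_700.0     # assumed standard deviation ($); chosen so the
                      # interval matches the reported $9,500 to $23,000

half_width = 1.96 * sd_ncf / math.sqrt(n)  # large-sample 95% CI half-width
print(f"95% CI: {mean_ncf - half_width:,.0f} to {mean_ncf + half_width:,.0f}")
```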

Abstract:

Imagery and concreteness norms and percentage noun usage were obtained on the 1,080 verbal items from the Toronto Word Pool. Imagery was defined as the rated ease with which a word aroused a mental image, and concreteness was defined in relation to level of abstraction. The degree to which a word was functionally a noun was estimated in a sentence generation task. The mean and standard deviation of the imagery and concreteness ratings for each item are reported together with letter and printed frequency counts for the words and indications of sex differences in the ratings. Additional data in the norms include a grammatical function code derived from dictionary definitions, a percent noun judgment, indexes of statistical approximation to English, and an orthographic neighbor ratio. Validity estimates for the imagery and concreteness ratings are derived from comparisons with scale values drawn from the Paivio, Yuille, and Madigan (1968) noun pool and the Toglia and Battig (1978) norms.

Abstract:

Individual identification via DNA profiling is important in molecular ecology, particularly in the case of noninvasive sampling. A key quantity in determining the number of loci required is the probability of identity (PIave), the probability of observing two copies of any profile in the population. Previously this has been calculated assuming no inbreeding or population structure. Here we introduce formulae that account for these factors, whilst also accounting for relatedness structure in the population. These formulae are implemented in API-CALC 1.0, which calculates PIave for either a specified value or a range of values of FIS and FST.
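
For context, the baseline calculation that the abstract refers to (no inbreeding, no population structure) is the classical per-locus probability of identity multiplied across independent loci; a sketch with hypothetical allele frequencies (the F-corrected formulae themselves are given in the paper and implemented in API-CALC 1.0):

```python
from itertools import combinations

def pi_locus(freqs):
    """Probability of identity at one locus for unrelated individuals in a
    randomly mating population (no inbreeding, no structure):
    PI = sum_i p_i^4 + sum_{i<j} (2 p_i p_j)^2."""
    homo = sum(p ** 4 for p in freqs)
    het = sum((2 * p_i * p_j) ** 2 for p_i, p_j in combinations(freqs, 2))
    return homo + het

# Hypothetical allele frequencies at three microsatellite loci.
loci = [
    [0.4, 0.3, 0.2, 0.1],
    [0.5, 0.25, 0.25],
    [0.6, 0.2, 0.2],
]

pi_total = 1.0
for freqs in loci:
    pi_total *= pi_locus(freqs)  # loci assumed independent
print(f"overall PI = {pi_total:.3e}")
```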

Abstract:

We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry-breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and two-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous at α = 0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown not to be valid and a square-root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results. Finally, for α > 1, the ensemble mean of the cell area and perimeter restricted to the hexagonal cells agrees remarkably well with the full ensemble mean; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
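
A sketch of the ensemble construction (simplified: finite domain, crude boundary handling, and an assumed noise normalisation, so it is illustrative rather than a reproduction of the paper's setup):

```python
import numpy as np
from scipy.spatial import Voronoi

def hexagonal_lattice(nx, ny, a=1.0):
    """Generating points of a regular honeycomb (hexagonal) Voronoi
    tessellation: a triangular lattice of cell centres with spacing a."""
    pts = [(a * (i + 0.5 * (j % 2)), a * j * np.sqrt(3) / 2)
           for j in range(ny) for i in range(nx)]
    return np.array(pts)

def side_counts(alpha, nx=30, ny=30, seed=0):
    """Histogram of cell side numbers for noise amplitude alpha."""
    rng = np.random.default_rng(seed)
    pts = hexagonal_lattice(nx, ny)
    density = 2.0 / np.sqrt(3.0)          # points per unit area, unit spacing
    sigma = alpha / np.sqrt(density)      # assumed noise normalisation
    vor = Voronoi(pts + rng.normal(0.0, sigma, pts.shape))
    # Keep only bounded regions (crudely discards cells touching the edge).
    counts = [len(r) for r in (vor.regions[i] for i in vor.point_region)
              if r and -1 not in r]
    return np.bincount(counts)

for alpha in (0.05, 0.5, 2.0):
    print(alpha, side_counts(alpha))
```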

Abstract:

This paper presents an assessment of the impacts of climate change on a series of indicators of hydrological regimes across the global domain, using a global hydrological model run with climate scenarios constructed using pattern-scaling from 21 CMIP3 (Coupled Model Intercomparison Project Phase 3) climate models. Changes are compared with natural variability, with a significant change being defined as greater than the standard deviation of the hydrological indicator in the absence of climate change. Under an SRES (Special Report on Emissions Scenarios) A1b emissions scenario, substantial proportions of the land surface (excluding Greenland and Antarctica) would experience significant changes in hydrological behaviour by 2050; under one climate model scenario (Hadley Centre HadCM3), average annual runoff increases significantly over 47% of the land surface and decreases over 36%; only 17% therefore sees no significant change. There is considerable variability between regions, depending largely on projected changes in precipitation. Uncertainty in projected river flow regimes is dominated by variation in the spatial patterns of climate change between climate models (hydrological model uncertainty is not included). There is, however, a strong degree of consistency in the overall magnitude and direction of change. More than two-thirds of climate models project a significant increase in average annual runoff across almost a quarter of the land surface, and a significant decrease over 14%, with considerably higher degrees of consistency in some regions. Most climate models project increases in runoff in Canada and high-latitude eastern Europe and Siberia, and decreases in runoff in central Europe, around the Mediterranean, the Mashriq, Central America and Brazil. There is some evidence that projected change in runoff at the regional scale is not linear with change in global average temperature. The effect of uncertainty in the rate of future emissions is relatively small.
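
The significance criterion reduces to a per-grid-cell comparison of the projected change against the natural-variability standard deviation; a sketch with synthetic fields (all names and numbers assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (180, 360)  # illustrative lat-lon grid

# Synthetic fields: mean annual runoff change (mm/yr) and the standard
# deviation of annual runoff under natural variability, per grid cell.
delta_runoff = rng.normal(0.0, 30.0, shape)
sigma_natural = np.abs(rng.normal(25.0, 5.0, shape))

# Significant change = change larger than the natural-variability std dev.
significant = np.abs(delta_runoff) > sigma_natural
increase = (delta_runoff > sigma_natural).mean()
decrease = (delta_runoff < -sigma_natural).mean()
print(f"significant increase: {increase:.1%}, decrease: {decrease:.1%}, "
      f"no significant change: {1 - significant.mean():.1%}")
```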

Abstract:

Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and the global temperature-change potential (GTP)), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source-receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences of up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20-year GWP. The differences for the 20-year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20- and 100-year GWP and the 100-year GTP. The order of averaging has most impact on the metrics for NOx, as the net value of these quantities is the residual of a sum of terms of opposing sign. For example, the standard deviation of the 20-year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. The source of this effect is largely the construction of the input ozone fields, which overestimates the true ensemble spread. Hence, while averages of multi-model fields are normally appropriate for calculating mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these fields, and in general overestimate it.
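
The order-of-averaging effect can be demonstrated with a toy model (the nonlinear forcing functional and all numbers below are invented for illustration): applying a nonlinear RF calculation to the ensemble-mean field differs from averaging per-model RFs, and a spread estimated from mean ± σ input fields perturbs every grid point coherently and so overstates the true inter-model spread:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "ozone change" fields for 10 models over 100 grid points.
fields = rng.normal(1.0, 0.5, size=(10, 100))

def rf(field):
    """Invented nonlinear radiative-forcing functional of a field."""
    return np.mean(np.sign(field) * np.abs(field) ** 1.5)

per_model = np.array([rf(f) for f in fields])
mean_of_rf = per_model.mean()         # average of individual-model RFs
rf_of_mean = rf(fields.mean(axis=0))  # RF of the ensemble-mean field

# "Spread" from perturbing the mean field by +/- one per-point std dev,
# versus the true inter-model standard deviation of the RF.
sd_field = fields.std(axis=0, ddof=1)
spread_from_mean_fields = 0.5 * (rf(fields.mean(axis=0) + sd_field)
                                 - rf(fields.mean(axis=0) - sd_field))
true_sd = per_model.std(ddof=1)

print(f"mean of RFs {mean_of_rf:.3f} vs RF of mean {rf_of_mean:.3f}")
print(f"spread from mean+/-sd fields {spread_from_mean_fields:.3f} "
      f"vs true inter-model sd {true_sd:.3f}")
```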

Abstract:

Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between ~50°N and ~80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. That gives the least-squares stochastic model used for position computation a more realistic representation, vis-à-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement of between 17% and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches that can be used to improve the least-squares stochastic model, such as weighting according to satellite elevation angle and by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation it was observed that problems related to ambiguity resolution can be reduced by use of the proposed mitigated solution.
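
The weighting idea itself is generic weighted least squares (the paper's contribution is the scintillation-sensitive variance models; the sketch below uses an invented geometry and invented variances):

```python
import numpy as np

def wls(A, y, var):
    """Weighted least squares: minimise (y - Ax)^T W (y - Ax) with
    W = diag(1/var), i.e. each observation down-weighted by its
    modelled tracking-error variance."""
    W = np.diag(1.0 / np.asarray(var))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Toy linearised positioning problem: 6 satellites, 4 unknowns (x, y, z, dt).
rng = np.random.default_rng(3)
A = np.hstack([rng.normal(size=(6, 3)), np.ones((6, 1))])
x_true = np.array([1.0, -2.0, 0.5, 3.0])

# Satellite 0 is strongly affected by scintillation: large noise, large
# modelled variance; an 'equal weights' solution ignores this.
var = np.array([25.0, 1.0, 1.0, 1.0, 1.0, 1.0])
y = A @ x_true + rng.normal(0.0, np.sqrt(var))

print("equal weights :", wls(A, y, np.ones(6)))
print("scint-weighted:", wls(A, y, var))
print("truth         :", x_true)
```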

Abstract:

A method is proposed for the direct determination of Ni in soft drinks by graphite furnace atomic absorption spectrometry using a transversely heated graphite atomizer (THGA), Zeeman-effect background correction, and Co as the internal standard (IS). Magnesium nitrate was used to stabilize both Ni and Co. All diluted samples (1+1) in 0.2% (v/v) HNO3 and reference solutions [5.0–50 μg/L Ni in 0.2% (v/v) HNO3] were spiked with 50 μg/L Co. For a 20-μL sample dispensed into the atomizer, correlation coefficients between the ratio of the Ni absorbance to the Co absorbance and the analyte concentration were close to 0.9996. The relative standard deviation of the measurements varied from 0.5 to 3.4% and from 1.0 to 7.0% (n = 12) with and without the IS, respectively. Recoveries within 98–104% for Ni spikes were obtained using the IS. The characteristic mass was calculated as 43 pg Ni and the limit of detection was 1.4 μg/L. The accuracy of the method was checked for the direct determination of Ni in soft drinks, and the results obtained with the IS were better than those without it.
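
A sketch of the internal-standard calibration step (synthetic absorbance values; only the reference-solution concentrations follow the text): the calibration curve is fitted to the Ni/Co absorbance ratio rather than the raw Ni signal:

```python
import numpy as np

# Reference solutions: Ni concentration (ug/L), each spiked with 50 ug/L Co.
conc_ni = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])

# Synthetic absorbances (illustrative numbers only).
abs_ni = np.array([0.021, 0.041, 0.083, 0.122, 0.165, 0.204])
abs_co = np.array([0.100, 0.099, 0.101, 0.100, 0.098, 0.102])

ratio = abs_ni / abs_co  # internal-standard signal
slope, intercept = np.polyfit(conc_ni, ratio, 1)
r = np.corrcoef(conc_ni, ratio)[0, 1]
print(f"ratio = {slope:.5f}*c + {intercept:.5f}, r = {r:.4f}")

# Quantify an unknown (1+1 diluted) sample from its measured ratio,
# multiplying by 2 to undo the dilution.
sample_ratio = 0.155
c_sample = 2 * (sample_ratio - intercept) / slope
print(f"Ni in undiluted sample: {c_sample:.1f} ug/L")
```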

Abstract:

We present a search for the standard model Higgs boson in final states with a charged lepton (electron or muon), missing transverse energy, and two or three jets, at least one of which is identified as a b-quark jet. The search is primarily sensitive to WH → ℓνbb̄ production and uses data corresponding to 9.7 fb−1 of integrated luminosity collected with the D0 detector at the Fermilab Tevatron pp̄ Collider at √s = 1.96 TeV. We observe agreement between the data and the expected background. For a Higgs boson mass of 125 GeV, we set a 95% C.L. upper limit on the production of a standard model Higgs boson of 5.2×σSM, where σSM is the standard model Higgs boson production cross section, while the expected limit is 4.7×σSM.