481 results for Statistically Weighted Regularities
Abstract:
Potential impacts of plantation forestry practices on soil organic carbon and on Fe available to microorganisms were investigated in a subtropical coastal catchment. The impacts of harvesting or replanting were largely limited to the top layer of soil (0–10 cm depth). The thirty-year-old Pinus plantation showed low soil moisture content (Wc) and relatively high levels of soil total organic carbon (TOC). Harvesting and replanting increased soil Wc but reduced TOC levels. Mean dissolved organic carbon (DOC) and microbial biomass carbon (MBC) increased in harvested or replanted soils, but these changes were not statistically significant (P > 0.05). Total dithionite-citrate- and aqua regia-extractable Fe did not respond to forestry practices, but acid ammonium oxalate- and pyrophosphate-extractable (bioavailable) Fe decreased markedly after harvesting or replanting. Numbers of heterotrophic bacteria were significantly correlated with DOC levels (P < 0.05), whereas Fe-reducing bacteria and S-bacteria detected using laboratory cultivation techniques did not show strong correlations with either soil DOC or Fe content.
Abstract:
Knowledge of differences in the demographics of contact lens prescribing between nations, and of changes over time, can assist (a) the contact lens industry in developing and promoting various product types in different world regions, and (b) practitioners in understanding their prescribing habits in an international context. Data that we have gathered from annual contact lens fitting surveys conducted in Australia, Canada, Japan, the Netherlands, Norway, the UK and the USA between 2000 and 2008 reveal an ageing demographic, with Japan being the most youthful. The majority of fits are to females, with statistically significant differences between nations, ranging from 62 per cent of fits in Norway to 68 per cent in Japan. The small overall decline in the proportion of new fits, and the commensurate increase in refits, over the survey period may indicate a growing rate of conversion of lens wearers to more advanced lens types, such as silicone hydrogels. © 2009 British Contact Lens Association.
Abstract:
Landscape scale environmental gradients present variable spatial patterns and ecological processes caused by climate, topography and soil characteristics and, as such, offer candidate sites to study environmental change. Data are presented on the spatial pattern of dominant species, biomass, and carbon pools and the temporal pattern of fluxes across a transitional zone shifting from Great Basin Desert scrub, up through pinyon-juniper woodlands and into ponderosa pine forest and the ecotones between each vegetation type. The mean annual temperature (MAT) difference across the gradient is approximately 3 degrees C from bottom to top (MAT 8.5–5.5 °C) and annual precipitation averages from 320 to 530 mm/yr, respectively. The stems of the dominant woody vegetation approach a random spatial pattern across the entire gradient, while the canopy cover shows a clustered pattern. The size of the clusters increases with elevation according to available soil moisture, which in turn affects available nutrient resources. The total density of woody species declines with increasing soil moisture along the gradient, but total biomass increases. Belowground carbon and nutrient pools change from a heterogeneous to a homogeneous distribution on either side of the woodlands. Although temperature controls the seasonal patterns of carbon efflux from the soils, soil moisture appears to be the primary driving variable, but the response differs underneath the different dominant species. Similarly, decomposition of dominant litter occurs faster at the cooler and more moist sites, but differs within sites due to the litter quality of the different species. The spatial pattern of these communities provides information on the direction of future changes. The ecological processes that we documented are not statistically different in the ecotones as compared to the adjoining communities, but do differ between sites above and below the woodland. We speculate that an increase in MAT will have a major impact on C pools and C sequestering and release processes in these semiarid landscapes. However, the impact will be primarily related to moisture availability rather than direct effects of an increase in temperature. © 1998 Elsevier Science B.V.
Abstract:
There is considerable public, political and professional debate about the need for additional hospital beds in Australia. However, there is no clarity in regard to the definition, meaning and significance of hospital bed counts. Relative to population, bed availability in Australia has declined by 14.6% over the past 15 years (by 22.9% for public hospital beds). This decline is partly offset by reductions in length of stay and changes to models of care; however, the net effect is increased bed occupancy, which has in turn resulted in system-wide congestion. Future bed capability needs to be better planned to meet growing demands while at the same time continuing trends towards more efficient use. Future planning should be based in part on weighted bed capability matched to need.
Abstract:
This paper suggests an approach for finding an appropriate combination of the various parameters involved in extracting texture features (e.g. the choice of spectral band, the size of the moving window, the quantization level of the image, and the choice of texture feature) to be used in the classification process. The gray level co-occurrence matrix (GLCM) method has been used for extracting texture from a remotely sensed satellite image. Results of the classification of an Indian urban environment using a spatial property (texture), derived from spectral and multi-resolution wavelet-decomposed images, are also reported. A multivariate data analysis technique called 'conjoint analysis' has been used in the study to analyze the relative importance of these parameters. Results indicate that the choice of texture feature and window size have higher relative importance in the classification process than the quantization level or the choice of image band. In the case of texture features derived from the wavelet-decomposed image, the parameter 'decomposition level' has almost the same relative importance as the size of the moving window, and decomposition up to level one is sufficient, with no need for further decomposition. It was also observed that classification incorporating texture features improves the overall classification accuracy in a statistically significant manner in comparison with purely spectral classification.
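As a concrete illustration of the texture-extraction step, here is a minimal sketch, not the authors' code, of computing a GLCM texture feature over a moving window at a chosen quantization level using scikit-image (graycomatrix/graycoprops, version 0.19 or later). The window size, number of gray levels, and the contrast feature are illustrative placeholders for the parameters whose relative importance the paper evaluates.

```python
# A minimal sketch of moving-window GLCM texture extraction.
# Window size, quantization level, and the "contrast" feature are
# illustrative choices, not the paper's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(band, window=7, levels=32, prop="contrast"):
    """Compute a GLCM texture feature per pixel of a single image band."""
    # Quantize the band to the requested number of gray levels.
    q = np.floor(levels * (band - band.min()) / (np.ptp(band) + 1e-9))
    q = np.clip(q, 0, levels - 1).astype(np.uint8)
    half = window // 2
    out = np.zeros_like(band, dtype=float)
    for i in range(half, band.shape[0] - half):
        for j in range(half, band.shape[1] - half):
            patch = q[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, prop)[0, 0]
    return out

# Example on a random "band"; in practice this would be one spectral band
# (or a wavelet-decomposed band) of the satellite image.
texture = glcm_texture(np.random.rand(64, 64))
```

In a conjoint-analysis setting such as the paper's, a function like this would be re-run over the grid of candidate parameter combinations (band, window size, quantization level, texture feature) before classification.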
Abstract:
Predicting safety on roadways is standard practice for road safety professionals and has a correspondingly extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables: factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions under which crashes were observed have been used. Without weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways, etc.). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, or planning-level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning-level safety. Accounting for regional differences, as with microscale safety models, has been problematic during the development of planning-level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted variable bias resulting from excluding these important variables may adversely impact the coefficients of included variables, thus contributing to difficulty in model interpretation and accuracy. This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, in models of total crash frequency and of the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether they explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and from Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways that depend on rain frequency and intensity.
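The omitted-variable-bias argument can be illustrated with a small simulation (ours, not part of PLANSAFE); all variable names and coefficients below are made up. Zone-level crash counts are generated to depend on both exposure and a rainfall measure, and Poisson models are then fitted with and without the rain covariate.

```python
# A hedged sketch of omitted-variable bias in crash frequency models.
# Variable names and coefficients are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
log_vmt = rng.normal(10, 1, n)          # log exposure (e.g. zone VMT)
rain = rng.gamma(2.0, 1.0, n)           # stand-in for a rainfall measure
mu = np.exp(-8 + 0.8 * log_vmt + 0.15 * rain)
crashes = rng.poisson(mu)

X_full = sm.add_constant(np.column_stack([log_vmt, rain]))
X_omit = sm.add_constant(log_vmt)
full = sm.GLM(crashes, X_full, family=sm.families.Poisson()).fit()
omit = sm.GLM(crashes, X_omit, family=sm.families.Poisson()).fit()

# With rain omitted, its effect is absorbed by the constant (and, if rain
# were correlated with exposure, by the exposure coefficient as well).
print(full.params)
print(omit.params)
```

Because rain is generated independently of exposure here, its omission is absorbed mainly by the constant, consistent with the finding above that adding rain variables reduced the portion of the model explained by the constant.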
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, overrepresentation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to consider modeling collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
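The role of common unobserved factors can be illustrated with a minimal simulation sketch (ours, with hypothetical names and coefficients): two crash-type counts at the same intersections share a latent effect, which induces correlation beyond what the observed exposure covariate explains, and which motivates simultaneous rather than separate estimation.

```python
# A minimal sketch, assuming a shared latent effect across crash types.
# Names and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
aadt = rng.lognormal(8, 0.5, n)          # observed exposure covariate
u = rng.normal(0, 0.5, n)                # common unobserved factor
rear_end = rng.poisson(np.exp(-6.0 + 0.7 * np.log(aadt) + u))
angle = rng.poisson(np.exp(-6.5 + 0.6 * np.log(aadt) + u))

# Correlation between the two crash types beyond exposure reflects u;
# separate univariate models leave this dependence unmodeled.
print(np.corrcoef(rear_end, angle)[0, 1])
```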
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process consists of independent Bernoulli trials with unequal probabilities of events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how the "excess" zeros frequently observed in crash data arise. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, the most defensible modeling approaches for datasets with a preponderance of zeros are to select the time/space scales for analysis carefully, to include an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or to apply small-area statistical methods to observations with low exposure.
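A hedged sketch of that simulation idea (ours, not the authors' code): model crashes at each site as many Bernoulli trials with small, site-varying probabilities, and compare the observed share of zero-count sites with the share a Poisson model with the same mean would imply. With low exposure and heterogeneous probabilities, zeros exceed the Poisson benchmark even though no site is "perfectly safe".

```python
# Poisson trials with low exposure producing "excess" zeros.
# Site counts, trial counts, and the Beta parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_trials = 1000, 200            # low exposure: few trials per site
# Heterogeneous, small per-event crash probabilities across sites.
p = rng.beta(0.5, 200.0, size=n_sites)
counts = rng.binomial(n_trials, p)

zero_share = (counts == 0).mean()
poisson_zero = np.exp(-counts.mean())    # zero share a Poisson fit implies
print(f"observed zeros: {zero_share:.2f}, Poisson-implied: {poisson_zero:.2f}")
```

By Jensen's inequality the mixture's zero probability exceeds the Poisson value at the same mean, so "excess" zeros appear without any dual-state mechanism.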
Abstract:
At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, simultaneous estimation of the models was conducted. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions, it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
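A minimal single-equation sketch of this kind of model, with synthetic data and made-up variable names and coefficients (the paper itself estimates the fatal and injury equations simultaneously, which this sketch does not attempt): a negative binomial regression of zone-level fatal crash counts on planning variables via statsmodels.

```python
# A hedged sketch of a planning-level negative binomial crash model.
# All data, names, and coefficients below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "pop_density": rng.gamma(2.0, 500.0, n),
    "pct_under_18": rng.uniform(10, 35, n),
    "intersection_density": rng.gamma(2.0, 5.0, n),
})
mu = np.exp(-1.0 + 0.0003 * df["pop_density"] + 0.02 * df["pct_under_18"]
            + 0.03 * df["intersection_density"])
# Overdispersed counts: NB with mean mu.
df["fatal_crashes"] = rng.negative_binomial(2, (2 / (2 + mu)).to_numpy())

model = smf.glm(
    "fatal_crashes ~ pop_density + pct_under_18 + intersection_density",
    data=df, family=sm.families.NegativeBinomial(alpha=0.5),
).fit()
print(model.summary())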
Abstract:
Traffic conflicts at railway junctions are very common, particularly on congested rail lines. While safe passage through the junction is well maintained by the signalling and interlocking systems, minimising the delays imposed on the trains by assigning the right-of-way sequence sensibly is a bonus to the quality of service. A deterministic method has been adopted to resolve the conflict, with the objective of minimising the total weighted delay. However, the computational demand remains significant. The applications of different heuristic methods to tackle this problem are reviewed and explored, elaborating their feasibility in various aspects and comparing their relative merits for further studies. As most heuristic methods do not guarantee a global optimum, this study focuses on the trade-off between computation time and optimality of the resolution.
Abstract:
Conflict occurs when two or more trains approach the same junction within a specified time. Such conflicts result in delays. Current practices for assigning the right of way at junctions achieve orderly and safe passage of the trains, but do not attempt to reduce the delays. The traffic controller developed in this paper assigns the right of way so as to impose the minimum total weighted delay on the trains. The traffic flow model and the optimisation technique used in this controller are described. Simulation studies of the performance of the controller are given.
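To make the objective concrete, here is a small illustrative sketch (not the paper's controller or flow model): given hypothetical arrival times, a fixed junction occupation time, and priority weights, it enumerates right-of-way orders and picks the one minimising total weighted delay. Exhaustive search is only feasible for a handful of trains, which is exactly the computational burden that motivates the heuristics discussed in the preceding abstract.

```python
# Brute-force right-of-way sequencing to minimise total weighted delay.
# Train names, times, and weights are hypothetical.
from itertools import permutations

trains = [  # (name, arrival time at junction [s], priority weight)
    ("express", 0.0, 3.0),
    ("freight", 5.0, 1.0),
    ("commuter", 8.0, 2.0),
]
HEADWAY = 60.0  # junction occupation time per train [s]

def total_weighted_delay(order):
    t, cost = 0.0, 0.0
    for name, arrival, weight in order:
        start = max(t, arrival)        # wait for the junction to clear
        cost += weight * (start - arrival)
        t = start + HEADWAY
    return cost

best = min(permutations(trains), key=total_weighted_delay)
print([name for name, _, _ in best], total_weighted_delay(best))
```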
Abstract:
Purpose: To investigate the influence of convergence on axial length and corneal topography in young adult subjects. Methods: Fifteen emmetropic young adult subjects with normal binocular vision had axial length and corneal topography measured immediately before and after a 15-min period of base out (BO) prismatic spectacle lens wear. Prismatic spectacles of two different magnitudes (8 Δ BO and 16 Δ BO) were worn in turn, and for both tasks distance fixation was maintained for the duration of lens wear. Eight subjects returned on a separate day for further testing and had axial length measured before, during, and immediately after a 15-min convergence task. Results: No significant change in axial length was found either during or after the sustained convergence tasks (p > 0.6). Some small but significant changes in corneal topography were found after sustained convergence. The most significant corneal change was observed after the 16 Δ BO prism wear. The corneal refractive power spherocylinder power vector J0 changed by a small (mean change of 0.03 D after the 16 Δ BO task) but statistically significant (p = 0.03) amount as a result of the convergence task, indicative of a reduction in with-the-rule corneal astigmatism after convergence. Corneal axial power exhibited significant flattening in superior regions. Conclusions: Axial length appears largely unchanged by a period of sustained convergence. However, small but significant changes occur in the topography of the cornea after convergence.
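For reference (our addition, not part of the abstract), J0 here presumably denotes the standard power-vector component of a sphero-cylinder with sphere S, cylinder C, and axis θ, in the notation of Thibos et al.:

```latex
% Standard sphero-cylinder power-vector decomposition (Thibos et al.);
% shown for reference only, not taken from the abstract itself.
M = S + \frac{C}{2}, \qquad
J_0 = -\frac{C}{2}\cos 2\theta, \qquad
J_{45} = -\frac{C}{2}\sin 2\theta
```

Changes in J0 therefore track changes along the with-the-rule/against-the-rule astigmatism axis, which is why the 0.03 D shift is interpreted as a reduction in with-the-rule corneal astigmatism.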
Abstract:
In this paper, we follow Jegadeesh and Titman's (1993, Journal of Finance) approach to examine 25 momentum/contrarian trading strategies using monthly stock returns in China for the period from 1994 to 2007. Our results suggest that there is no momentum profitability in any of the 25 strategies. In contrast, there is some evidence of reversal effects, where past winners become losers and past losers become winners. The contrarian profit is statistically significant for strategies using short formation and holding periods, especially formation periods of 1 to 3 months combined with holding periods of 1 to 3 months. The contrarian strategies can generate about 12% per annum on average. Moreover, we follow Heston and Sadka (2008, Journal of Financial Economics) to investigate whether there is any seasonal pattern in the cross-sectional variation of average stock returns in our momentum/contrarian strategies. There is no evidence of any seasonal pattern, and the results are robust to different formation and holding periods.
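A minimal sketch of a Jegadeesh-Titman style J-month formation / K-month holding strategy, run here on synthetic returns (so no real effect should appear); the universe size, decile cut-offs, and column names are our own illustrative choices. The contrarian version buys past losers and sells past winners.

```python
# A hedged sketch of a J/K formation-holding contrarian strategy
# on synthetic monthly returns.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
months = pd.period_range("1994-01", "2007-12", freq="M")
returns = pd.DataFrame(rng.normal(0.01, 0.08, (len(months), 100)),
                       index=months)    # 100 hypothetical stocks

J, K = 3, 3   # formation and holding periods (months)
profits = []
for t in range(J, len(months) - K):
    formation = returns.iloc[t - J:t].sum()
    losers = formation.nsmallest(10).index    # decile of past losers
    winners = formation.nlargest(10).index    # decile of past winners
    holding = returns.iloc[t:t + K].sum()
    # Contrarian: long past losers, short past winners.
    profits.append(holding[losers].mean() - holding[winners].mean())

print(f"mean {K}-month contrarian profit: {np.mean(profits):.4f}")
```

The 25 strategies in the paper correspond to running this loop over all combinations of J and K in {1, 3, 6, 9, 12} months.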
Abstract:
Aim: This paper is a report of a study conducted to determine the effectiveness of a community case management collaborative education intervention in terms of satisfaction, learning and performance among public health nurses. Background: Previous evaluation studies of case management continuing professional education often failed to demonstrate effectiveness across a range of outcomes and had methodological weaknesses such as small convenience samples and lack of control groups. Method: A cluster randomised controlled trial was conducted between September 2005 and February 2006. Ten health centre clusters (5 control, 5 intervention) recruited 163 public health nurses in Taiwan to the trial. After pre-tests for baseline measurements, public health nurses in intervention centres received an educational intervention of four half-day workshops. Post-tests for both groups were conducted after the intervention. Two-way repeated measures analysis of variance was performed to evaluate the effect of the intervention on target outcomes. Results: A total of 161 participants completed the pre- and post-intervention measurements, a response rate of almost 99%. Results revealed that 97% of those in the experimental group were satisfied with the programme. There were statistically significant differences between the two groups in knowledge (p = 0.001), confidence in case management skills (p = 0.001), preparedness for case manager role activities (p = 0.001), self-reported frequency of using skills (p = 0.001), and role activities (p = 0.004). Conclusion: Collaboration between academic and clinical nurses is an effective strategy to prepare nurses for rapidly changing roles.
Abstract:
In this paper, a rate-based flow control scheme based upon per-VC virtual queuing is proposed for the Available Bit Rate (ABR) service in ATM. In this scheme, each VC in a shared buffer is assigned a virtual queue, which is a counter. To achieve a specific kind of fairness, an appropriate scheduler is applied to the virtual queues. Each VC's bottleneck rate (fair share) is derived from its virtual cell departure rate. This approach of deriving a VC's fair share is simple and accurate. By controlling each VC with respect to its virtual queue and queue build-up in the shared buffer, network congestion is avoided. The principle of the control scheme is first illustrated by max–min flow control, which is realised by scheduling the virtual queues in round-robin. Further application of the control scheme is demonstrated with the achievement of weighted fairness through weighted round robin scheduling. Simulation results show that with a simple computation, the proposed scheme achieves the desired fairness exactly and controls network congestion effectively.
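A toy sketch of the weighted-fairness mechanism described above (ours, not the paper's scheme; VC names and weights are hypothetical): per-VC virtual queues are plain counters served in weighted round robin, so each backlogged VC's virtual departure rate, from which its fair share would be derived, is proportional to its weight.

```python
# Weighted round robin over per-VC virtual queue counters.
# VC names and weights are hypothetical.
weights = {"vc1": 1, "vc2": 2, "vc3": 4}
virtual_queue = {vc: 0 for vc in weights}    # per-VC cell counters

def enqueue(vc, cells=1):
    virtual_queue[vc] += cells               # cell arrival increments counter

def wrr_round():
    """One WRR round: each VC gets up to `weight` virtual departures."""
    departures = {}
    for vc, w in weights.items():
        served = min(w, virtual_queue[vc])
        virtual_queue[vc] -= served
        departures[vc] = served              # departure rate -> fair share
    return departures

# Keep all VCs backlogged for several rounds; service splits 1:2:4.
for vc in weights:
    enqueue(vc, 100)
totals = {vc: 0 for vc in weights}
for _ in range(10):
    for vc, d in wrr_round().items():
        totals[vc] += d
print(totals)   # approximately proportional to the weights
```

Replacing the weighted schedule with plain round robin recovers the max-min case described first in the abstract.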