942 results for Assumed-strains
Abstract:
A simulation study of a custom power park (CPP) is presented. It is assumed that the park contains unbalanced and nonlinear loads in addition to a sensitive load. Two different types of compensators are used separately to protect the sensitive load against unbalance and distortion caused by the other loads. It is shown that a shunt compensator can regulate the voltage of the CPP bus, whereas the series compensator can only regulate the sensitive load terminal voltage. Additional issues, such as load transfer through a static transfer switch and detection of sag/fault, are also discussed. The concepts are validated through PSCAD/EMTDC simulation studies on a sample distribution system.
Abstract:
Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Despite the fact that hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, and ignored—by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic to compare to a standard distribution, estimate the probability of rejecting the null hypothesis, and then make claims about the importance of the finding. This is an oversimplification of the process of hypothesis testing. Hypothesis testing as applied in empirical research is examined here. The reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Then, five precautions surrounding the use and interpretation of hypothesis tests are developed; examples of each are provided to demonstrate how errors are made, and solutions are identified so similar errors can be avoided. Remedies are provided for common errors, and conclusions are drawn on how to use the results of this paper to improve the conduct of empirical research in transportation.
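The steps listed above (state the hypotheses, compute the test statistic, compare against a reference distribution, decide) can be sketched in a few lines. This is a generic illustration on synthetic data, not an example from the paper; the route names and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical example: mean travel times (minutes) on two routes.
# H0: the population means are equal; H1: they differ (two-sided).
route_a = rng.normal(loc=30.0, scale=5.0, size=50)
route_b = rng.normal(loc=33.0, scale=5.0, size=50)

# Compute the test statistic and compare it against the reference
# t distribution to obtain a p-value.
t_stat, p_value = stats.ttest_ind(route_a, route_b)

alpha = 0.05  # significance level, chosen before looking at the data
reject_h0 = bool(p_value < alpha)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, reject H0: {reject_h0}")
```

Note that rejecting (or failing to reject) H0 at a given alpha says nothing by itself about the practical importance of the difference, which is one of the precautions the abstract alludes to.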
Abstract:
The following paper proposes a novel application of Skid-to-Turn maneuvers for fixed wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed wing UAVs, following the design of manned aircraft, commonly employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Whilst effective, banking an aircraft during the inspection of ground-based features hinders data collection, with body-fixed sensors angled away from the direction of turn and a panning motion induced through roll rate that can reduce data quality. By adopting Skid-to-Turn maneuvers, the aircraft can change heading whilst maintaining wings level flight, thus allowing body-fixed sensors to maintain a downward facing orientation. An Image-Based Visual Servo controller is developed to directly control the position of features as captured by onboard inspection sensors. This improves on the indirect approach taken by other tracking controllers, where a course over ground directly above the feature is assumed to capture it centered in the field of view. Performance of the proposed controller is compared against that of a Bank-to-Turn tracking controller driven by GPS-derived cross track error in a simulation environment developed to replicate the field of view of a body-fixed camera.
Abstract:
The uncontrolled disposal of solid wastes poses an immediate threat to public health and a long-term threat to the environmental well-being of future generations. Solid waste is waste resulting from human activities that is solid and unwanted (Peavy et al., 1985). If unmanaged, dumped solid wastes generate liquid and gaseous emissions that are detrimental to the environment. This can lead to a serious form of contamination known as metal contamination, which poses a risk to human health and ecosystems. For example, some heavy metals (cadmium, chromium compounds, and nickel tetracarbonyl) are known to be highly toxic and are aggressive at elevated concentrations. Iron, copper, and manganese can cause staining, and aluminium causes depositions and discolorations. In addition, calcium and magnesium cause hardness in water, leading to scale deposition and scum formation. Though a metalloid rather than a metal, arsenic is poisonous at relatively high concentrations and causes skin cancer at low concentrations. Normally, metal contaminants are found in dissolved form in the liquid percolating through landfills. Because average metal concentrations from full-scale landfills, test cells, and laboratory studies have tended to be low, metal contamination originating from landfills is not generally considered a major concern (Kjeldsen et al., 2002; Christensen et al., 1999). However, a number of factors make it necessary to take a closer look at metal contaminants from landfills. One of these factors relates to variability. Landfill leachate can have different qualities depending on the weather and operating conditions. Therefore, at one moment in time, metal contaminant concentrations may be quite low, but at a later time these concentrations could be quite high. These conditions also affect the amount of leachate that is being generated. Another factor is biodiversity.
It cannot be assumed that a particular metal contaminant is harmless to flora and fauna (including microorganisms) just because it is harmless to human health. This has significant implications for ecosystems and the environment. Finally, there is the moral factor. Because uncertainty surrounds the potential effects of metal contamination, it is appropriate to take precautions to prevent it from taking place. Consequently, good scientific knowledge (empirically supported) is needed to adequately understand the extent of the problem and improve the way waste is being disposed of.
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences of expected crash counts are likely to be different than factors that might help to explain unaccounted for variation in crashes across sites.
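The fixed-dispersion assumption discussed above can be made concrete with a small simulation: an over-dispersed (negative binomial) count arises as a Poisson-gamma mixture whose variance is mu + alpha*mu^2, where alpha is the dispersion parameter. This is a generic sketch with hypothetical values, not data from the Toronto or Georgia studies.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 4.0      # expected crash count at a site (hypothetical)
alpha = 0.5   # fixed dispersion parameter (hypothetical)

# Poisson-gamma mixture: draw a gamma-distributed rate per site,
# then a Poisson count given that rate. Marginally the counts are
# negative binomial with Var = mu + alpha * mu**2.
n_sites = 200_000
rates = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n_sites)
counts = rng.poisson(rates)

print(f"sample mean     = {counts.mean():.2f} (theory {mu})")
print(f"sample variance = {counts.var():.2f} (theory {mu + alpha * mu**2})")
```

With mu = 4 and alpha = 0.5 the theoretical variance is 12, three times the Poisson variance; allowing alpha to vary with covariates, as Miaou and Lord proposed, amounts to letting this extra-variance term differ across sites.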
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
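The simulation argument above can be sketched as follows: a plain Poisson process with heterogeneous, low site means already yields a large share of zeros, and more zeros than a single Poisson matched to the overall mean, without any dual-state mechanism. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sites with heterogeneous, low expected crash counts
# (e.g. short segments or short observation periods -> low exposure).
n_sites = 100_000
site_means = rng.gamma(shape=2.0, scale=0.15, size=n_sites)  # mean ~0.3
counts = rng.poisson(site_means)

observed_zeros = (counts == 0).mean()
# Zero share of a single Poisson matched to the overall mean:
pure_poisson_zeros = np.exp(-site_means.mean())

print(f"observed zero share       = {observed_zeros:.3f}")
print(f"single-Poisson zero share = {pure_poisson_zeros:.3f}")
```

The observed zero share exceeds the mean-matched single-Poisson prediction even though every site is genuinely "unsafe" (positive mean), illustrating that apparent excess zeros need not imply a perfectly-safe state.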
Abstract:
In this paper, we present a microphone array beamforming approach to blind speech separation. Unlike previous beamforming approaches, our system does not require a priori knowledge of the microphone placement and speaker location, making the system directly comparable to other blind source separation methods which require no prior knowledge of recording conditions. Microphone location is automatically estimated using an assumed noise field model, and speaker locations are estimated using cross-correlation-based methods. The system is evaluated on the data provided for the PASCAL Speech Separation Challenge 2 (SSC2), achieving a word error rate of 58% on the evaluation set.
Abstract:
Since 1996, the provision of a refuge floor has been a mandatory feature for all new tall buildings in Hong Kong. These floors are designed to provide building occupants with a fire-safe environment that is also free from smoke. However, the cross ventilation on these floors assumed by the Building Codes of Hong Kong to achieve the removal of smoke is still being questioned, so a further scientific study of the wind-induced ventilation of a refuge floor is needed. This paper presents an investigation into this issue. A computational technique was adopted to study the wind-induced natural ventilation on a refuge floor. The aim of the investigation was to establish whether a refuge floor with a central core, and with cross ventilation produced by only two open opposite external side walls, would provide the required protection in all situations, taking into account the behaviour of wind due to different floor heights, wall boundary conditions and turbulence intensity profiles. The results revealed that natural ventilation can be increased by increasing the floor height provided the wind angle to the building is less than 90 degrees. The effectiveness of the solution was greatly reduced when the wind was blowing at 90 degrees to the refuge floor opening.
Abstract:
Adherence to medicines is a major determinant of the effectiveness of medicines. However, estimates of non-adherence in the older-aged with chronic conditions vary from 40 to 75%. The problems caused by non-adherence in the older-aged include residential care and hospital admissions, progression of the disease, and increased costs to society. The reasons for non-adherence in the older-aged include items related to the medicine (e.g. cost, number of medicines, adverse effects) and those related to the person (e.g. cognition, vision, depression). It is also known that there are many ways adherence can be increased (e.g. use of blister packs, cues). It is assumed that interventions by allied health professions, including a discussion of adherence, will improve adherence to medicines in the older-aged, but the evidence for this has not been reviewed. There is some evidence that telephone counselling about adherence by a nurse or pharmacist does improve adherence, short- and long-term. However, face-to-face intervention counselling at the pharmacy, or during a home visit by a pharmacist, has shown variable results, with some studies showing improved adherence and some not. Education programs during hospital stays have not been shown to improve adherence on discharge, but education programs for subjects with hypertension have been shown to improve adherence. In combination with an education program, both counselling and a medicine review program have been shown to improve adherence short-term in the older-aged. Thus, there are many unanswered questions about the most effective interventions to promote adherence. More studies are needed to determine the most appropriate interventions by allied health professions, and these need to consider the disease state, demographics, and socio-economic status of the older-aged subject, and the intensity and duration of intervention needed.
Abstract:
This chapter aims to situate values education as a core component of social science pre-service teacher education. In particular, it reflects on an experiment in embedding a values-laden Global Education perspective in a fourth year social science curriculum method unit. This unit was designed and taught by the researcher on the assumption that beginning social science teachers need to be empowered with pedagogical skills and new dispositions to deal with value-laden emerging global and regional concerns in their secondary school classrooms. Moreover, it was assumed that when pre-service teachers engage in dynamic and interactive learning experiences in their curriculum unit, they commence the process of ‘capacity building’ those skills which prepare them for their own lifelong professional learning. This approach to values education also aimed at providing pre-service teachers with opportunities to ‘create deep understandings of teaching and learning’ (Barnes, 1989, p. 17) by reflecting on the ways in which ‘pedagogy can be transformative’ (Lovat and Toomey, 2011 add page no from Chapter One). It was assumed that this tertiary experience would foster the sine qua non of teaching – a commitment to students and their learning. Central to fostering new ‘dispositions’ through this approach was the belief in the power of pedagogy to make the difference in enhancing student participation and learning. In this sense, this experiment in values education in secondary social science pre-service teacher education aligns with the Troika metaphor for a paradigm change, articulated by Lovat and Toomey (2009) in Chapter One.
Abstract:
This chapter explores the perceptions of middle years specialist teachers in the contemporary Australian schools context. Written narratives were obtained from four Australian teachers. Each has followed a distinctly different path to teaching in the middle years. However, each has a high leadership profile in the general schooling sector, assumed relatively early in their professional careers. These teachers were asked about their entry into teaching, the pathways they pursued to teaching at the middle level, the opportunities and limitations they experienced in schools, and their conceptions of the future of middle years reforms in Australia.
Abstract:
The link between measured sub-saturated hygroscopicity and cloud activation potential of secondary organic aerosol particles produced by the chamber photo-oxidation of α-pinene in the presence or absence of ammonium sulphate seed aerosol was investigated using two models of varying complexity. A simple single hygroscopicity parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict the cloud condensation nucleus (CCN) activity from the sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the measured CCN activation potential using a continuous flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the required behaviour was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of the water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data in predicting cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single parameter model to predict cloud activation behaviour was dependent on the instrument used to measure sub-saturated hygroscopicity and the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. The difference in HTDMA data from validated and extensively used instruments means that it cannot be stated with certainty what detail is required to predict the CCN activity from sub-saturated hygroscopicity.
In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of the solute concentration.
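As an illustration of the simpler model mentioned above, a single hygroscopicity parameter (kappa-Köhler style, in the spirit of Petters and Kreidenweis, 2007) calculation can be sketched. The parameter values, particle size, and the assumption of a constant pure-water surface tension below are generic assumptions for illustration, not measurements from this study.

```python
import numpy as np

# Physical constants and assumed properties of water.
sigma_w = 0.072   # surface tension of water, N/m (assumed constant)
M_w = 0.018       # molar mass of water, kg/mol
rho_w = 1000.0    # density of water, kg/m^3
R, T = 8.314, 298.15

A = 4.0 * sigma_w * M_w / (R * T * rho_w)  # Kelvin term coefficient, m

def saturation_ratio(D, D_dry, kappa):
    """Equilibrium saturation ratio over a droplet of diameter D grown
    on a dry particle of diameter D_dry with hygroscopicity kappa."""
    aw = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))
    return aw * np.exp(A / D)

# Critical supersaturation: the maximum of the Kohler curve.
D_dry = 100e-9  # 100 nm dry particle (assumed)
D = np.linspace(1.001 * D_dry, 10e-6, 200_000)
S = saturation_ratio(D, D_dry, kappa=0.1)
s_crit = (S.max() - 1.0) * 100.0  # critical supersaturation, %
print(f"critical supersaturation ~ {s_crit:.2f}%")
```

Because the single parameter kappa is typically derived from sub-saturated HTDMA growth factors, instrument-to-instrument differences of the kind described above propagate directly into the predicted critical supersaturation.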
Abstract:
Introduction The Australian Nurse Practitioner Project (AUSPRAC) was initiated to examine the introduction of nurse practitioners into the Australian health service environment. The nurse practitioner concept was introduced to Australia over two decades ago and has been evolving since. Today, however, the scope of practice, role and educational preparation of nurse practitioners are well defined (Gardner et al, 2006). Amendments to specific pre-existing legislation at a State level have permitted nurse practitioners to perform additional activities, including some once in the domain of the medical profession. In the Australian Capital Territory, for example, 13 diverse Acts and Regulations required amendments and three new Acts were established (ACT Health, 2006). Nurse practitioners are now legally authorized to diagnose, treat, refer and prescribe medications in all Australian states and territories. These extended practices differentiate nurse practitioners from other advanced practice roles in nursing (Gardner, Chang & Duffield, 2007). There are, however, obstacles for nurse practitioners wishing to use these extended practices. Restrictive access to Medicare funding via the Medicare Benefit Scheme (MBS) and the Pharmaceutical Benefit Scheme (PBS) limits the scope of nurse practitioner service in the private health sector and community settings. A recent survey of Australian nurse practitioners (n=202) found that two-thirds of respondents (66%) stated that lack of legislative support limited their practice. Specifically, 78% stated that lack of a Medicare provider number was ‘extremely limiting’ to their practice and 71% stated that no access to the PBS was ‘extremely limiting’ to their practice (Gardner et al, in press).
Changes to Commonwealth legislation are needed to enable nurse practitioners to prescribe medication so that patients have access to PBS subsidies where they exist; currently, patients with scripts originating from nurse practitioners must pay in full for prescriptions filled outside public hospitals. This report presents findings from a sub-study of Phase Two of AUSPRAC. Phase Two was designed to enable investigation of the process and activities of nurse practitioner service. Process measurements of nurse practitioner services are valuable to healthcare organisations and service providers (Middleton, 2007). Processes of practice can be evaluated through clinical audit; however, as Middleton cautions, no direct relationship between these processes and patient outcomes can be assumed.
Abstract:
Modelling of water flow and associated deformation in unsaturated reactive soils (shrinking/swelling soils) is important in many applications. The current paper presents a method to capture soil swelling deformation during water infiltration using Particle Image Velocimetry (PIV). The model soil material used is a commercially available bentonite. A swelling chamber was set up to determine the water content profile and extent of soil swelling. The test was run for 61 days, during which the soil underwent swelling averaging, across its width, about 26% of the height of the soil column. PIV analysis was able to determine the amount of swelling that occurred across the entire face of the soil box used for observations. The swelling was most apparent in the top layers, with strains in most cases over 100%.
Abstract:
Background: A bundled approach to central venous catheter care is currently being promoted as an effective way of preventing catheter-related bloodstream infection (CR-BSI). Consumables used in the bundled approach are relatively inexpensive, which may lead to the conclusion that the bundle is cost-effective. However, this fails to consider the nontrivial costs of the monitoring and education activities required to implement the bundle, or that alternative strategies are available to prevent CR-BSI. We evaluated the cost-effectiveness of a bundle to prevent CR-BSI in Australian intensive care patients. ---------- Methods and Findings: A Markov decision model was used to evaluate the cost-effectiveness of the bundle relative to remaining with current practice (a non-bundled approach to catheter care and uncoated catheters), or use of antimicrobial catheters. We assumed the bundle reduced the relative risk of CR-BSI to 0.34. Given uncertainty about the cost of the bundle, threshold analyses were used to determine the maximum cost at which the bundle remained cost-effective relative to the other approaches to infection control. Sensitivity analyses explored how this threshold alters under different assumptions about the economic value placed on bed-days and health benefits gained by preventing infection. If clinicians are prepared to use antimicrobial catheters, the bundle is cost-effective if national 18-month implementation costs are below $1.1 million. If antimicrobial catheters are not an option, the bundle must cost less than $4.3 million. If decision makers are only interested in obtaining cash-savings for the unit, and place no economic value on either the bed-days or the health benefits gained through preventing infection, these cost thresholds are reduced by two-thirds.---------- Conclusions: A catheter care bundle has the potential to be cost-effective in the Australian intensive care setting.
Rather than anticipating cash-savings from this intervention, decision makers must be prepared to invest resources in infection control to see efficiency improvements.
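The threshold logic described above can be illustrated with back-of-envelope arithmetic: given a baseline infection risk, the relative risk under the bundle, and an economic value per averted infection, the maximum cost at which the bundle stays cost-effective follows directly. Apart from the relative risk of 0.34, every number below is hypothetical and chosen only for illustration (a full analysis would use the Markov model and sensitivity analyses the abstract describes).

```python
# Hypothetical threshold analysis sketch: solve for the maximum
# bundle cost at which it remains cost-effective versus current practice.
baseline_risk = 0.02       # CR-BSI risk per patient (assumed)
rr_bundle = 0.34           # relative risk under the bundle (from the abstract)
patients = 50_000          # patients over the implementation horizon (assumed)
value_per_case = 12_000.0  # economic value of one averted CR-BSI,
                           # bed-days plus health benefits (assumed)

cases_averted = patients * baseline_risk * (1.0 - rr_bundle)
max_bundle_cost = cases_averted * value_per_case
print(f"cases averted: {cases_averted:.0f}")
print(f"bundle cost-effective below ${max_bundle_cost:,.0f}")
```

Setting value_per_case to zero reproduces the cash-savings-only perspective mentioned above: with no value placed on bed-days or health benefits, the acceptable bundle cost collapses.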