410 results for frequency measures
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals because of the large number of intersections in transportation networks, the complexity of traffic movements at these locations, which leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to model the expected frequency of crashes by collision type at intersections, to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to model collision-type frequencies simultaneously, to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
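To illustrate the statistical motivation (not the paper's actual specification), the sketch below simulates crash counts of two collision types that share an unobserved site effect; the residual correlation this induces is what a simultaneous model captures and independent models miss. All names and parameter values are hypothetical.

```python
import numpy as np

# Hypothetical illustration: crash counts of two collision types at the
# same intersections share an unobserved site effect, inducing correlation
# that separate, independent models would miss.
rng = np.random.default_rng(0)
n = 500                                  # hypothetical intersections
site_effect = rng.normal(0.0, 0.5, n)    # common unobserved factor
aadt = rng.normal(0.0, 1.0, n)           # standardised traffic exposure

angle    = rng.poisson(np.exp(0.2 + 0.4 * aadt + site_effect))
rear_end = rng.poisson(np.exp(0.5 + 0.6 * aadt + site_effect))

# Residual correlation beyond the shared covariate is what motivates a
# simultaneous (e.g. multivariate Poisson-lognormal) specification.
print(np.corrcoef(angle, rear_end)[0, 1])
```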
Abstract:
In rural low-voltage networks, distribution lines are usually highly resistive. When many distributed generators are connected to such lines, power sharing among them is difficult under conventional droop control because the real and reactive power are strongly coupled. A high droop gain can alleviate this problem but may drive the system to instability. To overcome this, two droop control methods are proposed for accurate load sharing with a frequency droop controller. The first method requires no communication among the distributed generators and regulates the output voltage and frequency, ensuring acceptable load sharing. For this purpose, the droop equations are modified with a transformation matrix based on the line R/X ratio. The second proposed method, with minimal low-bandwidth communication, modifies the reference frequency of the distributed generators based on the active and reactive power flows in the lines connected to the points of common coupling. The performance of these two proposed controllers is compared, through time-domain simulation of a test system, with that of a controller that relies on an expensive high-bandwidth communication system. The magnitudes of the power-sharing errors under the three droop control schemes are evaluated and tabulated.
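As a rough illustration of the first method's idea, the sketch below applies one common form of R/X-based power transformation before standard droop laws; the abstract does not give the paper's exact matrix, so the transformation form and all parameter values here are assumptions.

```python
import numpy as np

# A minimal sketch: rotate the measured powers with an R/X-based
# transformation, then apply conventional droop to the transformed
# ("decoupled") powers. All parameters are assumed values.
R, X = 0.8, 0.2                       # highly resistive rural line (ohm)
Z = np.hypot(R, X)
T = np.array([[X / Z, -R / Z],        # orthogonal transformation built
              [R / Z,  X / Z]])       # from the line R/X ratio

def modified_droop(P, Q, f0=50.0, V0=230.0, m=1e-4, n=1e-3):
    """Return frequency and voltage set-points from transformed powers."""
    P_t, Q_t = T @ np.array([P, Q])
    f = f0 - m * P_t                  # frequency droops on transformed P
    V = V0 - n * Q_t                  # voltage droops on transformed Q
    return f, V

print(modified_droop(P=2000.0, Q=500.0))
```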
Abstract:
As the use of renewable energy sources (RESs) increases worldwide, there is rising interest in their impacts on power system operation and control. An overview of the key issues and new challenges in frequency regulation arising from the integration of renewable energy units into power systems is presented. Following a brief survey of the existing challenges and recent developments, the impact of the power fluctuations produced by variable renewable sources (such as wind and solar units) on system frequency performance is presented. An updated LFC model is introduced, and power system frequency response in the presence of RESs, together with the associated issues, is analysed. The need to revise frequency performance standards is emphasised. Finally, non-linear time-domain simulations on the standard 39-bus and 24-bus test systems show that the simulated results agree with those predicted analytically.
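For intuition about why RES fluctuations stress frequency regulation, here is a minimal single-area load-frequency sketch, not the paper's updated LFC model: swing dynamics with governor droop, perturbed by a fluctuating renewable infeed. All constants are assumed textbook-style values.

```python
import numpy as np

# Single-area load-frequency sketch: swing dynamics plus primary control,
# perturbed by a fluctuating renewable infeed. Constants are assumptions.
H, D = 5.0, 1.0          # inertia constant (s), load damping (pu)
R_droop = 0.05           # governor speed droop (pu)
Tg = 0.5                 # combined governor-turbine time constant (s)
dt, T_end = 0.01, 60.0

f_dev, Pm = 0.0, 0.0     # frequency deviation and mechanical power (pu)
rng = np.random.default_rng(0)
worst = 0.0
for k in range(int(T_end / dt)):
    # slow sinusoid + noise standing in for wind/solar fluctuation
    P_res = 0.02 * np.sin(0.5 * k * dt) + 0.005 * rng.standard_normal()
    Pm += dt / Tg * (-Pm - f_dev / R_droop)           # primary control
    f_dev += dt / (2 * H) * (Pm + P_res - D * f_dev)  # swing equation
    worst = max(worst, abs(f_dev))
print(f"largest frequency deviation: {worst:.4f} pu")
```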
Abstract:
Aim: This paper reports a study conducted to determine the effectiveness of a community case management collaborative education intervention in terms of satisfaction, learning and performance among public health nurses. Background: Previous evaluations of case management continuing professional education often failed to demonstrate effectiveness across a range of outcomes and had methodological weaknesses such as small convenience samples and a lack of control groups. Method: A cluster randomised controlled trial was conducted between September 2005 and February 2006. Ten health centre clusters (5 control, 5 intervention) recruited 163 public health nurses in Taiwan to the trial. After pre-tests for baseline measurements, public health nurses in intervention centres received an educational intervention of four half-day workshops. Post-tests for both groups were conducted after the intervention. Two-way repeated measures analysis of variance was performed to evaluate the effect of the intervention on the target outcomes. Results: A total of 161 participants completed the pre- and post-intervention measurements, a response rate of almost 99%. Ninety-seven percent of those in the experimental group were satisfied with the programme. There were statistically significant differences between the two groups in knowledge (p = 0.001), confidence in case management skills (p = 0.001), preparedness for case manager role activities (p = 0.001), self-reported frequency of skill use (p = 0.001), and role activities (p = 0.004). Conclusion: Collaboration between academic and clinical nurses is an effective strategy to prepare nurses for rapidly changing roles.
Abstract:
Hazard perception in driving involves a number of different processes. This paper reports the development of two measures designed to separate these processes. A Hazard Perception Test was developed to measure how quickly drivers could anticipate hazards overall, incorporating detection, trajectory prediction, and hazard classification judgements. A Hazard Change Detection Task was developed to measure how quickly drivers can detect a hazard in a static image, regardless of whether they consider it hazardous. For the Hazard Perception Test, young novices were slower than mid-age experienced drivers, consistent with differences in crash risk, and test performance correlated with scores on pre-existing Hazard Perception Tests. For drivers aged 65 and over, scores on the Hazard Perception Test declined with age and correlated with both contrast sensitivity and a Useful Field of View measure. For the Hazard Change Detection Task, novices responded more quickly than the experienced drivers, contrary to crash risk trends, and test performance did not correlate with measures of overall hazard perception. However, for drivers aged 65 and over, test performance declined with age and correlated with both hazard perception and Useful Field of View. Overall, we concluded that there was support for the validity of the Hazard Perception Test for all ages, but that the Hazard Change Detection Task might only be appropriate for use with older drivers.
Abstract:
Seven endemic governance problems are shown to be currently present in governments around the globe, at every level of government (for example municipal, federal). These problems have roots that can be traced back through more than two thousand years of political, specifically ‘democratic’, history. The evidence shows that accountability, transparency, corruption, representation, campaigning methods, constitutionalism and long-term goals were problematic for the ancient Athenians, just as they are for modern international democratisation efforts encompassing every major global region. Why then, given the extended time period humans have had to deal with these problems, are they still present? At least part of the answer is that philosophers, academics, NGOs and MNOs have approached these endemic problems only in a piecemeal manner, with a skewed perspective on democracy. Their works have also been subject to the ebbs and flows of human history, which essentially started and stopped periods of thinking. In order to investigate endemic problems in relation to democracy (the overall quest of this thesis being to generate prescriptive results for the improvement of democratic government), it was necessary to delineate what exactly is being written about when using the term ‘democracy’. It is common knowledge that democracy has no one specific definition or practice, even though scholars and philosophers have been attempting to create one for generations. What is currently evident is that scholars are not approaching democracy in an overly simplified manner (that is, government for the people, by the people) but, rather, are seeking the commonalities that democracies share; in other words, those items which are common to all things democratic. Following that line of investigation, the major practiced and theoretical versions of democracy were thematically analysed. Their themes were then collapsed into larger categories, and the larger categories were comparatively analysed against the practiced and theoretical versions of democracy. Four democratic ‘particles’ (selecting officials, law, equality and communication) were found to be present in all practiced and theoretical democratic styles. The democratic particles, fused with a unique investigative perspective and in-depth political study, created a solid conceptualisation of democracy. As such, it is argued that democracy is an ever-present element of any state government, ‘democratic’ or not, and the particles are the bodies which comprise the democratic element. Frequency- and proximity-based analyses showed that democratic particles are related to endemic problems in international democratisation discourse. The linkages between democratic particles and endemic problems were also evident during the thematic analysis and the historical review. This ultimately led to the viewpoint that mitigating endemic problems may improve democratic particles, which might strengthen the element of democracy in the governing apparatus of any state. Such mitigation may actively minimise or wholly displace inefficient forms of government, leading to a government specifically tailored to the population it orders.
Once the theoretical and empirical goals were attained, this thesis provided prescriptive measures which governments, civil society, academics, professionals and/or active citizens can use to mitigate endemic problems (in any country and at any level of government) so as to improve the human condition via better democratic government.
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in research settings to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The shear forces generated across the width of the tube during flow cause the cells to align with their minimal cross-sectional area facing the direction of flow, which minimises the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates resistivity variations for constant blood flow: in this case the shear forces are constant, and the impedance remains constant during flow at a magnitude less than that of stationary blood. The research presented in this thesis, however, investigates the variations in the resistivity of blood during pulsatile flow through rigid tubes, and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood flowing through rigid tubes. The impedance of flowing blood was calculated theoretically, using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The results of the analytical theoretical model were compared with the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial assessing the suitability of bioimpedance techniques for detecting aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with haematocrit for both the experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same flow velocity. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius; this had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal, suggesting that the impedance captures many of the fluctuations of the velocity signal. Application of a theoretical steady-flow model to pulsatile flow, presented here, verified that the steady-flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady-flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180 to 250 s) is consistently larger than that determined for control subjects (τ = 50 to 130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. A non-invasive technique using the time decay constant for screening for aortic stenosis would provide information additional to that currently given by impedance cardiography techniques and improve the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
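As a qualitative illustration of the velocity/impedance relationship described above (linear during acceleration, exponential relaxation with time constant τ during deceleration), here is a minimal sketch; the functional form and all parameter values are assumptions for illustration, not the thesis' fitted model.

```python
import numpy as np

# Assumed qualitative form, not the thesis' fitted model: impedance is
# linear in velocity while the blood accelerates, and relaxes back toward
# the stationary value with time constant tau while it decelerates.
def impedance(t, v, Z0=100.0, k=5.0, tau=20.0):
    """Z(t) for a velocity waveform v(t); all parameters illustrative."""
    Z = np.empty_like(v)
    Z[0] = Z0 - k * v[0]
    dt = t[1] - t[0]
    for i in range(1, len(v)):
        if v[i] >= v[i - 1]:                 # acceleration: linear in v
            Z[i] = Z0 - k * v[i]
        else:                                # deceleration: exponential decay
            Z[i] = Z0 + (Z[i - 1] - Z0) * np.exp(-dt / tau)
    return Z

t = np.linspace(0.0, 1.0, 1000)                # one pulse cycle (s)
v = np.clip(np.sin(2 * np.pi * t), 0.0, None)  # pulsatile velocity
print(f"corr(Z, v) = {np.corrcoef(impedance(t, v), v)[0, 1]:.2f}")  # negative
```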
Abstract:
The importance of a constructively aligned curriculum is well understood in higher education. Based on the principles of constructive alignment, this research considers whether student perceptions of learning achievement can be used to gain insights into how course activities and pedagogy are assisting or hindering students in accomplishing course learning goals. Students in a Marketing Principles course were asked to complete a voluntary survey rating their own progress on the intended learning goals for the course. Student perceptions of learning achievement were correlated with actual student learning, as measured by grade, suggesting that student perception of learning achievement measures are suitable tools for higher educators. Such measures provide an alternative means of understanding whether students are learning what was intended, which is particularly useful for educators faced with large classes and the associated restrictions on assessment. Further, these measures enable educators to simultaneously gather evidence documenting the impact of teaching innovations on student learning. Implications for faculty and future research are offered.
Abstract:
A protein-truncating variant of CHEK2, 1100delC, is associated with a moderate increase in breast cancer risk. We have determined the prevalence of this allele in index cases from 300 Australian multiple-case breast cancer families, 95% of which had been found to be negative for mutations in BRCA1 and BRCA2. Only two (0.6%) index cases heterozygous for the CHEK2 mutation were identified. All available relatives in these two families were genotyped, but there was no evidence of co-segregation between the CHEK2 variant and breast cancer. Lymphoblastoid cell lines established from a heterozygous carrier contained approximately 20% of the CHEK2 1100delC mRNA relative to wild-type CHEK2 transcript. However, no truncated CHK2 protein was detectable. Analyses of expression and phosphorylation of wild-type CHK2 suggest that the variant is likely to act by haploinsufficiency. Analysis of CDC25A degradation, a downstream target of CHK2, suggests that some compensation occurs to allow normal degradation of CDC25A. Such compensation of the 1100delC defect in CHEK2 might explain the rather low breast cancer risk associated with the CHEK2 variant, compared to that associated with truncating mutations in BRCA1 or BRCA2.
Abstract:
Background: Alcohol craving is associated with greater alcohol-related problems and a less favorable treatment prognosis. The Obsessive Compulsive Drinking Scale (OCDS) is the most widely used alcohol craving instrument. The OCDS has been validated in adults with alcohol use disorders (AUDs), which typically emerge in early adulthood. This study examines the validity of the OCDS in a nonclinical sample of young adults. Methods: Three hundred and nine college students (mean age 21.8 years, SD = 4.6 years) completed the OCDS, the Alcohol Use Disorders Identification Test (AUDIT), and measures of alcohol consumption. Subjects were randomly allocated to 2 samples. Construct validity was examined via exploratory factor analysis (n = 155) and confirmatory factor analysis (n = 154). Concurrent validity was assessed using the AUDIT and the measures of alcohol consumption. A second, alcohol-dependent sample (mean age 42 years, SD 12 years) from a previously published study (n = 370) was used to assess discriminant validity. Results: A unique young adult OCDS factor structure was validated, consisting of Interference/Control, Frequency of Obsessions, Alcohol Consumption, and Resisting Obsessions/Compulsions. The young adult 4-factor structure was significantly associated with the AUDIT and alcohol consumption. The 4-factor OCDS successfully classified nonclinical subjects in 96.9% of cases and the older alcohol-dependent patients in 83.7% of cases. Although the OCDS was able to classify college nonproblem drinkers (AUDIT <13, n = 224) with 83.2% accuracy, it was no better than chance (49.4%) in classifying potential college problem drinkers (AUDIT score ≥13, n = 85). Conclusions: Using the 4-factor structure, the OCDS is a valid measure of alcohol craving in young adult populations. In this nonclinical sample of students, the OCDS classified nonproblem drinkers well but not problem drinkers. Further studies should examine the utility of the OCDS in young people with alcohol misuse.
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community, owing to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: Wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experiments show that frequency-domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
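A minimal sketch of the band-then-subspace idea using one of the named transforms (the 2-D DCT) followed by PCA; the image size, band boundaries and random stand-in data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.fftpack import dct

# Sketch: split each image into DCT frequency bands, then learn a PCA
# (eigenface-style) subspace per band and concatenate the features.
rng = np.random.default_rng(0)
faces = rng.standard_normal((100, 32, 32))   # stand-in for aligned faces

def dct2(img):
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

coeffs = np.array([dct2(f) for f in faces])
low  = coeffs[:, :8, :8].reshape(100, -1)    # low-frequency band
high = coeffs[:, 8:, 8:].reshape(100, -1)    # higher-frequency band

def pca_features(X, n=10):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T                     # project onto top components

features = np.hstack([pca_features(low), pca_features(high)])
print(features.shape)                        # (100, 20) band-wise features
```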
Abstract:
This paper develops a composite participation index (PI) to identify patterns of transport disadvantage in space and time. It is operationalised using 157 weekly activity-travel diaries collected from three case study areas in rural Northern Ireland. A review of activity space and travel behaviour research found that six dimensional indicators of activity spaces were typically used to identify transport disadvantage: the number of unique locations visited, distance travelled, area of the activity space, frequency of activity participation, types of activity participated in, and duration of participation. A combined measure was developed from six individual indices based on these dimensional indicators, taking into account the relativity of the measures for weekdays, weekends, and the week as a whole. Factor analyses were conducted to derive weights for these indices to form the PI measure. Multivariate analysis of the different indicators/indices using general linear models identified new patterns of transport disadvantage. The research found that indicator-based and index-based measures complement each other; that interactions between different factors generated new patterns of transport disadvantage; and that these patterns vary in space and time. The analysis also indicates that the transport needs of different disadvantaged groups vary.
Abstract:
This paper identifies transport disadvantage using 7-day activity-travel diary data from two rural case study areas. A composite participation index (PI) measure was developed for this study based on six indices measuring elements of travel and activity participation. Using the index, the paper then compares these results with the results obtained from other, more traditional indicators used to identify transport disadvantage. These indicators relate to the size of the activity space, such as unique network distance travelled, number of unique locations visited, activity space area, activity duration, and fullness (shape) of activity spaces. The weaknesses of these indicator-based measures are, firstly, that they do not take into account the relativity of the measure between different areas (i.e. travel distance in the wider context of the activities available within an area) and, secondly, that the indicators are multi-dimensional, each representing a different qualitative aspect of travel and activity participation. As a result, six individual indices were developed to overcome these problems: a participation count index, participation length index, participation area index, participation duration index, participation type index, and participation frequency index. These were then aggregated to assess relative performance in terms of the different indices and to identify the nature of transport disadvantage. GIS was used to visualise individual travel patterns and to derive scores for both the indicator-based and the index-based measures. Factor analysis was conducted to derive weights for the individual indices to form the composite index measure. From this analysis, two intermediate indices were also derived using the underlying factors of the data related to these indices. Using the scores of all these measures, multiple regression analyses were conducted to identify patterns of transport disadvantage.
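To make the weighting step concrete, here is a minimal sketch of combining six per-person indices into a composite index with factor-analysis-derived weights; the random data and single-factor choice are assumptions for illustration, and the paper's GIS-derived indices and weighting details may differ.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch: combine six per-person participation indices into a composite
# index using factor-analysis-derived weights. Random data stand in for
# the paper's GIS-derived indices.
rng = np.random.default_rng(0)
indices = rng.random((157, 6))   # count, length, area, duration, type, frequency

fa = FactorAnalysis(n_components=1).fit(indices)
loadings = np.abs(fa.components_[0])         # strength of each index's loading
weights = loadings / loadings.sum()          # normalise to sum to one
PI = indices @ weights                       # composite participation index
print(weights.round(3), PI[:5].round(3))
```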
Abstract:
Suburbanisation has been a major international phenomenon in recent decades. Suburb-to-suburb routes are now the most widespread road journeys, and this has resulted in an increase in the distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, which can result in decreased alertness. Driving behaviour is consequently impaired, and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways where the speed limit is high. While effective countermeasures to this decrement in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate, in real time, the level of alertness of the driver through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate the driver's alertness. Data were collected in a driving simulator instrumented with an eye tracking system, a heart rate monitor and an electrodermal activity device (N = 25 participants). Four different types of highway (driving scenarios of 40 minutes each) were implemented by varying the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). Using neural networks, we show that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess a driver's alertness with surrogate measures. The methodology could be used to warn drivers of their alertness level through an in-vehicle device that monitors drivers' behaviour on highways in real time, and could therefore improve road safety.
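The classification step might look like the sketch below: a small feed-forward network mapping the six surrogate measures to an alert/not-alert label. The synthetic data, network size and scikit-learn choice are illustrative assumptions; the paper's 92% accuracy comes from its own simulator data and model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Sketch: a small neural network classifying alertness from six surrogate
# measures. Synthetic data stand in for the simulator recordings.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))   # lane position, steering, head rotation,
                                    # blink frequency, HRV, skin conductance
y = ((X @ rng.standard_normal(6) + 0.5 * rng.standard_normal(500)) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```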
Abstract:
The focus of the present research was to investigate how Local Governments in Queensland were progressing with the adoption of delineated disaster management (DM) policies and supporting guidelines. The study consulted Local Government representatives, and hence the results reflect their views on these issues. Is adoption occurring? To what degree? Are policies and guidelines being effectively implemented so that the objective of a safer, more resilient community is being achieved? If not, what are the current barriers, and can recommendations be made to overcome them? These questions defined the basis on which the present study was designed and the survey tools developed. While it was recognised that the LGAQ and Emergency Management Queensland (EMQ) may have differing views on some reported issues, it was beyond the scope of the present study to canvass those views. The study resolved to document and analyse these questions under the broad themes of: • Building community capacity (notably via community awareness). • Council operationalisation of DM. • Regional partnerships (in mitigation/adaptation). Data were collected via a survey tool comprising two components: • an online questionnaire survey distributed via the LGAQ Disaster Management Alliance (hereafter referred to as the “Alliance”) to the DM sections of all Queensland Local Government Councils; and • a series of focus groups with selected Queensland Councils.