278 results for Serviceability limits
Abstract:
Traffic congestion is an increasing problem with high costs in financial, social and personal terms. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and an increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase the accuracy of traffic information. A simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane is proposed. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality-of-service requirements is to simulate runs with varying numbers of traffic probes. The simulated traffic represents Brisbane's typical morning traffic. The road maps used in the simulation are Brisbane's TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. Below 0.25%, probes provide little additional information, while above 5%, adding more probes has only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
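As a rough illustration of the sweep described above (not the authors' simulator), the accuracy-versus-penetration experiment can be sketched in a few lines of Python; the network size, speed noise and penetration rates below are invented placeholders:

```python
# Hypothetical sketch of the probe-penetration sweep: vary the fraction of
# probe vehicles and measure how well probe-averaged link speeds match the
# "true" (fully observed) link speeds. All numbers are synthetic.
import random
import statistics

NUM_LINKS = 200          # road links in the simulated network
VEHICLES_PER_LINK = 80   # vehicles traversing each link in the window

def simulate_accuracy(penetration, seed=0):
    rng = random.Random(seed)
    errors = []
    for _ in range(NUM_LINKS):
        true_speed = rng.uniform(20, 60)  # km/h, ground truth for the link
        # Each vehicle reports a noisy speed; only probes are observed.
        speeds = [rng.gauss(true_speed, 5) for _ in range(VEHICLES_PER_LINK)]
        probes = [s for s in speeds if rng.random() < penetration]
        if probes:  # link is covered by at least one probe
            estimate = statistics.mean(probes)
            errors.append(abs(estimate - true_speed) / true_speed)
    coverage = len(errors) / NUM_LINKS
    mape = statistics.mean(errors) if errors else float("nan")
    return coverage, mape

for pct in (0.0025, 0.005, 0.01, 0.025, 0.05):
    cov, mape = simulate_accuracy(pct)
    print(f"penetration {pct:.2%}: link coverage {cov:.0%}, MAPE {mape:.1%}")
```

Sweeping the penetration rate this way exposes the same qualitative pattern the abstract reports: coverage gains flatten once most links see at least one probe per reporting window.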
Abstract:
Aims: Influenza is commonly spread by infectious aerosols; however, existing methods for detecting viruses in aerosols are not sensitive enough to confirm the characteristics of virus aerosols. The aim of this study was to develop an assay for respiratory viruses sufficiently sensitive to be used in epidemiological studies. Method: A two-step, nested real-time PCR assay was developed for MS2 bacteriophage, and for influenza A and B, parainfluenza 1 and human respiratory syncytial virus. Outer primer pairs were designed to nest each existing real-time PCR assay. The sensitivities of the nested real-time PCR assays were compared with those of the existing real-time PCR assays. Both assays were applied in an aerosol study to compare their detection limits in air samples. Conclusions: The nested real-time PCR assays were found to be several logs more sensitive than the real-time PCR assays, detecting lower levels of virus at lower Ct values. The nested real-time PCR assay successfully detected MS2 in air samples, whereas the real-time assay did not. Significance and Impact of the Study: These sensitive assays for respiratory viruses will permit further research using air samples from naturally generated virus aerosols. This will inform current knowledge regarding the risks associated with the spread of viruses through aerosol transmission.
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville near Beaudesert, which supports a factory that processes gelatine. The plant draws water from on-site bores, as well as the Logan River, for its production processes and produces approximately 1.5 ML per day (Douglas Partners, 2004) of wastewater containing high levels of dissolved ions. At present a series of treatment ponds is used to aerate the wastewater, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study the hydrogeology is investigated, a conceptual groundwater model is produced and a numerical groundwater flow model is developed from this. On the site are several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments, with a laterally continuous aquifer comprising coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments, for reasons of both environmental monitoring and sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed from bore monitoring and rainfall data using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered, with the alluvium represented as a single hydrogeological unit. Both models were calibrated to steady-state conditions, and sensitivity analyses of the parameters demonstrated that both models are very stable for changes in the range of ±10% for all parameters, and still reasonably stable for changes of up to ±20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, and that the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady-state model based on groundwater levels during a drought period, and forecasts for varying hydrological conditions (e.g. short-term as well as prolonged dry and wet conditions) cannot reasonably be made from such a model. If future modelling is to be undertaken, it is necessary to establish a regular program of groundwater monitoring and to maintain a long-term database of water levels so that a transient model can be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by pump testing to investigate the hydrogeological properties of the aquifer.
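The site model itself was built in GMS with MODFLOW and PEST; purely for illustration, a minimal two-layer steady-state MODFLOW model with a river boundary of the kind described can be sketched with the open-source flopy package. Grid dimensions, conductivities, recharge and river conductance below are placeholders, not the calibrated Josephville values:

```python
# Illustrative two-layer steady-state MODFLOW model (flopy), echoing the
# conceptual model: a low-permeability semi-confining upper layer over a
# coarse basal aquifer in contact with the river. All values are placeholders.
import flopy

m = flopy.modflow.Modflow("josephville_sketch", exe_name="mf2005")

nrow, ncol = 40, 40
dis = flopy.modflow.ModflowDis(
    m, nlay=2, nrow=nrow, ncol=ncol, delr=25.0, delc=25.0,
    top=20.0, botm=[9.0, 5.0],  # base of upper silt/clay unit, base of aquifer
    steady=True,
)
bas = flopy.modflow.ModflowBas(m, ibound=1, strt=15.0)

# Layer 1: silts/sands/clays (low K); layer 2: coarse sands/gravels (high K).
lpf = flopy.modflow.ModflowLpf(m, hk=[0.05, 20.0], vka=[0.005, 2.0])

# River cells along one edge, connected to the basal aquifer (layer index 1).
riv_cells = [[1, r, 0, 14.0, 50.0, 12.0]  # lay, row, col, stage, cond, rbot
             for r in range(nrow)]
riv = flopy.modflow.ModflowRiv(m, stress_period_data={0: riv_cells})

rch = flopy.modflow.ModflowRch(m, rech=1.5e-4)  # drought-period recharge, m/day
pcg = flopy.modflow.ModflowPcg(m)
oc = flopy.modflow.ModflowOc(m)

m.write_input()  # run with m.run_model() if an mf2005 executable is on PATH
```

In a workflow like the one described, PEST would then adjust the hk/vka arrays against observed bore water levels until the steady-state heads match within the stated RMS tolerance.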
Abstract:
The novel manuscript Girl in the Shadows tells the story of two teenage girls whose friendship, safety and sanity are pushed to the limits when an unexplained phenomenon invades their lives. Sixteen-year-old Tash has everything a teenage girl could want: good looks, brains and freedom from her busy parents. But when she looks into her mirror, a stranger's face stares back at her. Her best friend Mal believes it's an evil spirit and enters the world of the supernatural to find answers. But spell books and Ouija boards cannot fix a problem that comes from deep within the soul. It will take a journey to the edge of madness for Tash to face the truth inside her heart and see the evil that lurks in her home, and Mal's love and courage to pull her back into life. The exegesis examines resilience and coping strategies in adolescence, in particular the relationship of trauma to brain development in children and teenagers. It draws on recent discoveries in neuroscience and psychology to provide a framework for examining the role of coping strategies in building resilience. Within this broader context, it analyses two works of contemporary young adult fiction, Freaky Green Eyes by Joyce Carol Oates and Sonya Hartnett's Surrender, their use of the split persona as a coping mechanism within young adult fiction, and the potential of young adult literature as a tool to help build resilience in teen readers.
Abstract:
Triage is a process that is critical to the effective management of modern emergency departments. Triage systems aim not only to ensure clinical justice for the patient but also to provide an effective tool for departmental organisation, monitoring and evaluation. Over the last 20 years, triage systems have been standardised in a number of countries, and efforts have been made to ensure consistency of application. However, the ongoing crowding of emergency departments resulting from access block and increased demand has led to calls for a review of systems of triage. In addition, international variance in triage systems limits the capacity for benchmarking. The aim of this paper is to provide a critical review of the literature pertaining to emergency department triage in order to inform the direction of future research. While education, guidelines and algorithms have been shown to reduce triage variation, significant inconsistency in triage assessment remains, arising from the diversity of factors determining the urgency of any individual patient. It is timely to accept this diversity and to identify what is agreed and what may be agreeable. It is time to develop and test an International Triage Scale (ITS) supported by an international collaborative approach towards a triage research agenda. This agenda would seek to further develop application and moderating tools, and to utilise the scales for international benchmarking and research programmes.
Abstract:
Purpose: To determine (a) the effect of different sunglass tint colorations on traffic signal detection and recognition for color-normal and color-deficient observers, and (b) the adequacy of coloration requirements in current sunglass standards. Methods: Twenty color-normal and 49 color-deficient males performed a tracking task while wearing sunglasses of different colorations (clear, gray, green, yellow-green, yellow-brown, red-brown). At random intervals, simulated traffic light signals were presented against a white background at 5° to the right or left, and observers were instructed to identify the signal color (red/yellow/green) by pressing a response button as quickly as possible; response times and response errors were recorded. Results: Signal color and sunglass tint had significant effects on response times and error rates (p < 0.05), with significant between-color-group differences and interaction effects. Response times of color-deficient observers were considerably slower than those of color normals for both red and yellow signals under all sunglass tints, but for green signals they were only noticeably slower with the green and yellow-green lenses. For most of the color-deficient groups, there were recognition errors for yellow signals combined with the yellow-green and green tints. In addition, deuteranopes had problems with red signals combined with red-brown and yellow-brown tints, and protanopes had problems with green signals combined with the green tint and with red signals combined with the red-brown tint. Conclusions: Many sunglass tints currently permitted for drivers and riders cause a measurable decrement in the ability of color-deficient observers to detect and recognize traffic signals. In general, combinations of signals and sunglasses of similar colors are of particular concern. This is prima facie evidence of a risk in the use of these tints for driving, and cautions against relaxing the coloration limits in sunglasses beyond those represented in the study.
Abstract:
Avatars perform a complex range of inter-related functions. They not only allow us to express a digital identity; they also facilitate the expression of physical motility and, through non-verbal expression, help to mediate social interaction in networked environments. When well designed, they can contribute to a sense of "presence" (a sense of being there) and a sense of "co-presence" (a sense of being there with others) in digital space. Because of this complexity, the study of avatars can be enriched by theoretical insights from a range of disciplines. This paper considers avatars from the perspectives of critical theory, visual communication, and art theory (on portraiture) to help elucidate the role of avatars as an expression of identity. It goes on to argue that identification with an avatar is also produced through its expression of motility, and discusses the benefits of film theory for explaining this process. Conceding the limits of this approach, the paper draws on philosophies of body image, Human-Computer Interaction (HCI) theory on embodied interaction, and fields as diverse as dance to explain the sense of identification, immersion, presence and co-presence that avatars can produce.
Abstract:
This paper develops a model for rostering ambulance crews so as to maximise coverage throughout a planning horizon while minimising the number of crew members required. Rostering ambulance services is a complex task that must consider a large number of conflicting rules relating to aspects such as limits on the number of consecutive work hours, the number of shifts worked by each ambulance staff member, and restrictions on the types of shifts assigned. A two-stage model is developed using nonlinear integer programming to solve the following sub-problems: the shift start times; the number of staff required to work each shift; and a balanced schedule for ambulance staff. The first stage solves the first two sub-problems; the second stage solves the third sub-problem using the first-stage outputs. Computational experiments with real data are conducted and the results of the models are presented.
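By way of illustration only, the first-stage problem (choosing shift start times and staffing levels to cover demand) can be sketched as a simplified linear set-covering model in PuLP; the paper's actual formulation is nonlinear and richer in constraints, and the demand and shift data below are invented:

```python
# Simplified first-stage sketch: pick staffing levels for candidate shifts so
# that hourly ambulance-coverage demand is met with as few crews as possible.
# A linear set-covering stand-in for the paper's nonlinear formulation.
import pulp

HOURS = range(24)
demand = [2, 2, 1, 1, 1, 2, 3, 5, 6, 5, 4, 4,
          5, 5, 4, 4, 5, 6, 6, 5, 4, 3, 3, 2]  # crews needed per hour

# Candidate shifts as (start hour, length). Placeholders, not the paper's data.
shifts = [(s, 8) for s in range(0, 24, 2)] + [(s, 12) for s in (6, 18)]

def covers(shift, hour):
    start, length = shift
    return (hour - start) % 24 < length  # handles shifts wrapping past midnight

prob = pulp.LpProblem("ambulance_roster_stage1", pulp.LpMinimize)
x = {sh: pulp.LpVariable(f"staff_{sh[0]:02d}h_{sh[1]}h", lowBound=0, cat="Integer")
     for sh in shifts}

prob += pulp.lpSum(x.values())  # minimise total crews on the roster
for h in HOURS:                 # coverage constraint for every hour
    prob += pulp.lpSum(x[sh] for sh in shifts if covers(sh, h)) >= demand[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for sh, var in x.items():
    if var.value():
        print(f"shift start {sh[0]:02d}:00, {sh[1]} h: {int(var.value())} crews")
```

The second stage would then take these staffing levels as fixed inputs and balance individual assignments across the roster, which is where the consecutive-hours and shift-type rules enter.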
Abstract:
Thirteen papers examine Asian and European experiences with developing national and city policy agendas around cultural and creative industries. Papers discuss policy transfer and the field of the cultural and creative industries--what can be learned from Europe; creative industries across cultural borders--the case of video games in Asia; spaces of culture and economy--mapping the cultural-creative cluster landscape; beyond networks and relations--toward rethinking creative cluster theory; the capital complex--Beijing's new creative clusters; the European creative class and regional development--how relevant Richard Florida's theory is for Europe; getting out of place--the mobile creative class taking on the local--a U.K. perspective on the creative class; Asian cities and limits to creative capital theory; the creative industries, governance, and economic development--a U.K. perspective; Shanghai's emergence into the global creative economy; Shanghai moderne--creative economy in a creative city?; urbanity as a political project--toward post-national European cities; and alternative policies in urban innovation. Contributors include economists. Kong is with the Department of Geography at the National University of Singapore. O'Connor is at Queensland University of Technology. Index.
Abstract:
This article explores the contributions of two unique Australian women, Annette Kellerman and Florence Broadhurst, to global fashion and aesthetics through their subversion of, and challenge to, female gender roles of the early twentieth century. The two women are brought together here as a means of highlighting their markedly contrasting social tactics: undressing versus layering. Kellerman's body became an instrument in her quest for global fame, engaging in daring public "undress" in swimming and diving performances around the world that served to showcase her innovative swimwear designs. In contrast, Broadhurst, through repeated reconstructions of her persona and constant relayering of identities, concocted versions of herself in order to pass through Shanghai, London and Sydney societies. Their lives exist as binaristic parallels, expressing the contrasting values of un-Australianness (the disavowal of national identity) and Australianness (the promotion of national identity). Both Kellerman and Broadhurst tested the limits of body, dress and national identity as vehicles for global recognition. Recent interest in their historical roles is evidenced in the films "The Original Mermaid" (2004) and "Unfolding Florence" (2005), in addition to numerous books and journal articles. Despite this resurgent public recognition of their lives and achievements, scholarly analysis of their legacies in the fields of fashion and design is still relatively neglected. This article explores their contributions to celebrity and modernity, fashion and gender as modern un-Australian women.
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, 'Evaluating Information Systems', is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources, and those resources need to be invested in a way that provides the greatest benefit to the organization. IT expenditure represents a substantial portion of any organization's investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis – a 'management' emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal – the Journal of Strategic Information Systems – to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area, achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic 'Information Systems Success Measurement', then drills deeper to become even more focused in Part 4 on 'Benchmarking Information Systems'. In other words, the chapter drills down from Parts 1 and 2 (the value of IS) to Part 3 (measuring IS success) to Part 4 (benchmarking IS). While the commencing Parts 1 and 2 are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts 3 and 4 admittedly reflect the author's more specific interests. Thus the three chapter foci – value of IS, measuring IS success, and benchmarking IS – are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the former.
Parts 1 and 2, 'the Value of IS', take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, 'Information System Success Measurement', focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs – System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use – later suggesting a seventh construct, Service Quality (DeLone and McLean 2003). These constructs have been used extensively, individually or in combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, 'Benchmarking Information Systems', drills deeper again, focusing more specifically on a measure of the IS that can be used as a benchmark. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems. Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and system contexts.
Abstract:
This paper reports on a study investigating the preferred driving speeds and frequency of speeding of 320 Queensland drivers. Despite growing community concern about speeding and extensive research linking it to road trauma, speeding remains a pervasive and, arguably, socially acceptable behaviour. This presents an apparent paradox in the mismatch between beliefs and behaviours, and highlights the need to better understand the factors contributing to speeding. Using self-reported behaviour and attitudinal measures, this study supports the notion of a speed paradox. Two thirds of participants agreed that exceeding the limit is not worth the risks and that it is not okay to exceed the posted limit. Despite this, more than half (58.4%) of the participants reported a preference for exceeding the 100 km/h speed limit, with one third preferring to do so by 10 to 20 km/h. Further, mean preferred driving speeds on both urban and open roads suggest a perceived enforcement tolerance of 10%, suggesting that posted limits have limited direct influence on speed choice. Factors that significantly predicted the frequency of speeding included exposure to role models who speed, favourable attitudes to speeding, experiences of punishment avoidance, and the perceived certainty of punishment for speeding. These findings have important policy implications, particularly relating to the use of enforcement tolerances.
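For readers wanting a concrete picture, the predictor analysis might resemble the following hedged sketch: an ordinary least squares regression of self-reported speeding frequency on the four significant factors, run here on synthetic data with invented variable names (the study's actual modelling may well differ):

```python
# Hypothetical sketch of the predictor analysis: regress self-reported
# speeding frequency on attitude/exposure measures. Variable names and data
# are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 320  # sample size matching the study's cohort
df = pd.DataFrame({
    "role_models_speed":   rng.integers(1, 6, n),  # exposure to speeding role models
    "attitude_favourable": rng.integers(1, 6, n),  # favourable attitudes to speeding
    "punish_avoidance":    rng.integers(1, 6, n),  # experiences of punishment avoidance
    "punish_certainty":    rng.integers(1, 6, n),  # perceived certainty of punishment
})
# Synthetic outcome with the signs the abstract reports (deterrence negative).
df["speeding_freq"] = (
    0.5 * df.role_models_speed + 0.6 * df.attitude_favourable
    + 0.4 * df.punish_avoidance - 0.3 * df.punish_certainty
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "speeding_freq ~ role_models_speed + attitude_favourable"
    " + punish_avoidance + punish_certainty", data=df).fit()
print(model.summary())
```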
Abstract:
While spatial determinants of emmetropization have been examined extensively in animal models, and the spatial processing of human myopes has also been studied, there have been few studies investigating temporal aspects of emmetropization and temporal processing in human myopia. The influence of temporal light modulation on eye growth and refractive compensation has been observed in animal models, and there is evidence of temporal visual processing deficits in individuals with high myopia or other pathologies. Given this, the aims of this work were to examine the relationships between myopia (i.e. degree of myopia and progression status) and temporal visual performance, and to consider any temporal processing deficits in terms of the parallel retinocortical pathways. Three psychophysical studies investigating temporal processing performance were conducted in young adult myopes and non-myopes: (1) backward visual masking, (2) dot motion perception and (3) phantom contour. For each experiment there were approximately 30 young emmetropes, 30 low myopes (myopia of less than 5 D) and 30 high myopes (5 to 12 D). In the backward visual masking experiment, myopes were also classified according to their progression status (30 stable myopes and 30 progressing myopes). The first study was based on the observation that the visibility of a target is reduced by a second target, termed the mask, presented soon after the first. Myopes were more affected by the mask when the task was biased towards the magnocellular pathway, showing a 25% mean reduction in performance compared with emmetropes; however, there was no difference in the effect of the mask when the task was biased towards the parvocellular system. For all test conditions, there was no significant correlation between backward visual masking task performance and either the degree of myopia or myopia progression status. The dot motion perception study measured detection thresholds for the minimum displacement of moving dots, the maximum displacement of moving dots, and the degree of motion coherence required to correctly determine the direction of motion; the visual processing of these tasks is dominated by the magnocellular pathway. Compared with emmetropes, high myopes had a reduced ability to detect the minimum displacement of moving dots for stimuli presented at the fovea (20% higher mean threshold) and possibly at the inferior nasal retina. The minimum displacement threshold was significantly and positively correlated with myopia magnitude and axial length, and significantly and negatively correlated with retinal thickness for the inferior nasal retina. The performance of emmetropes and myopes on all the other dot motion perception tasks was similar. In the phantom contour study, the highest temporal frequency of the flickering phantom pattern at which the contour was visible was determined. Myopes had significantly lower flicker detection limits (21.8 ± 7.1 Hz) than emmetropes (25.6 ± 8.8 Hz) for tasks biased towards the magnocellular pathway, for both high (99%) and low (5%) contrast stimuli. There was no difference in flicker limits for a phantom contour task biased towards the parvocellular pathway. For all phantom contour tasks, there was no significant correlation between flicker detection thresholds and the magnitude of myopia. Of the psychophysical temporal tasks studied here, those primarily involving processing by the magnocellular pathway revealed differences in the performance of the refractive error groups. While there are a number of interpretations of these data, they suggest that there may be a temporal processing deficit in some myopes that is selective for the magnocellular system. The minimum displacement dot motion perception task appears to be the most sensitive of the tests studied for investigating changes in visual temporal processing in myopia. Data from the visual masking and phantom contour tasks suggest that the alterations to temporal processing occur at an early stage of myopia development. In addition, the link between increased minimum displacement thresholds and decreasing retinal thickness suggests that there is a retinal component to the observed modifications in temporal processing.
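As a quick plausibility check on the reported flicker-limit difference (21.8 ± 7.1 Hz for myopes versus 25.6 ± 8.8 Hz for emmetropes), a two-sample t-test can be run from the summary statistics alone; the group sizes below are assumed from the cohort description (roughly 60 myopes, 30 emmetropes), not taken from the thesis:

```python
# Two-sample t-test from summary statistics for the flicker detection limits.
# Group sizes are assumptions based on the cohort description, not the
# thesis's actual analysis.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=21.8, std1=7.1, nobs1=60,   # myopes (low + high, assumed n)
    mean2=25.6, std2=8.8, nobs2=30,   # emmetropes (assumed n)
)
print(f"t = {t:.2f}, p = {p:.3f}")
```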
Abstract:
Typical high strength steels (HSS) combine exceptionally high strength with improved weldability, making them attractive for modern steel construction. However, owing to a lack of understanding of their behaviour, most current steel design standards are limited to conventional low strength steels (LSS, i.e. fy ≤ 450 MPa). This paper presents the details of full-scale experimental tests on short beams fabricated from BISPLATE80 HSS material (nominal fy = 690 MPa). The slenderness ratios of the plate elements in the test specimens were chosen to lie near the current yield slenderness limits (AS4100-1998, etc.). The experimental studies presented in this paper have produced a better understanding of the structural behaviour of HSS members subject to local instabilities. Comparisons are also presented with the design predictions of the current steel standards (AS4100-1998). The study provides a series of proposals for the proper assessment of plate slenderness limits for structural members made of representative HSS materials, and supports the inclusion of typical HSS materials in future versions of the steel design specifications for buildings and bridges. The paper also presents a model of the longitudinal residual stress distribution in typical HSS I-sections.
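For context, AS4100 normalises plate element slenderness to a 250 MPa reference yield stress, which makes clear why limits calibrated on low strength steels tighten sharply at fy = 690 MPa (a standard relationship, not a result of this paper):

```latex
% AS4100 element slenderness, normalised to a 250 MPa reference yield stress:
\lambda_e = \frac{b}{t}\sqrt{\frac{f_y}{250}}
% For a fixed yield slenderness limit \lambda_{ey}, the permissible
% width-to-thickness ratio therefore scales as
\left(\frac{b}{t}\right)_{\max} = \lambda_{ey}\sqrt{\frac{250}{f_y}},
\qquad \sqrt{250/690} \approx 0.60
% i.e. a 690 MPa plate element is allowed only about 60% of the b/t ratio
% permitted for a 250 MPa element under the same limit, hence the need to
% verify the limits experimentally for HSS.
```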
Abstract:
High density development has been seen as a contribution to sustainable development. However, a number of engineering issues play a crucial role in the sustainable construction of high-rise buildings. Non-linear deformation of concrete has an adverse impact on high-rise buildings with complex geometries through differential axial shortening. These adverse effects are caused by time-dependent volume changes known as 'shrinkage', 'creep' and 'elastic' deformation. These three phenomena govern the behaviour and performance of all concrete elements, during and after construction. Reinforcement content, variable concrete modulus, the volume-to-surface-area ratio of the elements, environmental conditions, and construction quality and sequence all influence the performance of concrete elements, and differential axial shortening will occur in all structural systems. Its detrimental effects escalate with increasing height and with the non-vertical load paths that result from geometric complexity. The magnitude of these effects has a significant impact on building envelopes, building services, secondary systems, and lifetime serviceability and performance. The analytical and test procedures available to quantify the magnitude of these effects are limited to very few parameters and are not sufficiently rigorous to capture the complexity of the true time-dependent material response. With this in mind, a research project has been undertaken to develop an accurate numerical procedure to quantify the differential axial shortening of structural elements. The procedure has been successfully applied to quantify the differential axial shortening of a high-rise building, and the important capabilities of the procedure are discussed. A new practical concept, based on the variation of the vibration characteristics of a structure during and after construction, is presented for quantifying axial shortening and assessing structural performance.
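The three deformation components named above are conventionally superposed as follows (standard concrete mechanics notation, not an equation taken from the paper):

```latex
% Total time-dependent strain of a concrete element under a sustained stress
% \sigma(t_0) applied at age t_0:
\varepsilon(t) = \underbrace{\frac{\sigma(t_0)}{E_c(t_0)}}_{\text{elastic}}
  + \underbrace{\frac{\sigma(t_0)}{E_c(t_0)}\,\varphi(t,t_0)}_{\text{creep}}
  + \underbrace{\varepsilon_{sh}(t)}_{\text{shrinkage}}
% \varphi(t,t_0) is the creep coefficient and \varepsilon_{sh}(t) the free
% shrinkage strain. Differential axial shortening between adjacent vertical
% elements arises when these terms accumulate differently, e.g. through
% different stress levels, reinforcement ratios or volume-to-surface ratios.
```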