985 results for Detail (Architektur)
Abstract:
As part of a development plan in progress spanning a total of 25 years (1996 to 2020), Malaysia’s Multimedia Super Corridor (MSC) provides a unique opportunity to witness a brief and microcosmic unfolding of the process which Lewis Mumford lays out in exhaustive detail in Technics and Civilization (Mumford, 1963). What makes it doubly interesting is the interlocking of national imagining, destiny and progress with a specific group of technologies, information and communication technologies (ICT), of which the Internet is part. This paper casts Malaysia’s development and implementation of the MSC as the core around which an enquiry into the association between the nation and the Internet is woven. I argue that three dissonances occur within the relationship between the Malaysian nation and the Internet. The first arises from the tension between the premises underlying techno-utopianism and pro-Malay affirmative action. The second is born of the discordance between the “guaranteed” freedom from online censorship and the absolute punitive powers of the state. The third lies in the contradiction between the Malaysian nation, as practised through graduated sovereignty, and its pro-Bumiputera affirmative action. Together, these three comprise the inflections that the Internet has on Malaysia. Further, I contend that aside from adding to the number of ways in which the nation is understood and experienced, these inflections also have the potential to disrupt how the nation is lived. By lived I mean to denote the realisation of the nation that occurs in and through everyday life.
Abstract:
The design and implementation of a high-power (2 MW peak) vector control drive are described. The inverter switching frequency is low, resulting in high-harmonic-content current waveforms. A block diagram of the physical system is given, and each component is described in some detail. The problem of commanded slip noise sensitivity, inherent in high-power vector control drives, is discussed, and a solution is proposed. Results are given which demonstrate the successful functioning of the system.
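The commanded-slip sensitivity mentioned above can be illustrated with the standard indirect rotor-field-orientation slip relation; this is a generic textbook relation, not necessarily this drive's exact control law, and all names and numbers below are illustrative:

```python
def commanded_slip(i_q, i_d, tau_r):
    """Indirect field-oriented control slip command (rad/s):
    omega_slip = i_q / (tau_r * i_d),
    where i_q is the torque-producing current component (A), i_d the
    flux-producing component (A) and tau_r the rotor time constant (s).
    """
    return i_q / (tau_r * i_d)

# Any noise on i_q propagates directly into the slip command; at high
# power the absolute currents (and hence the absolute noise) are large,
# which is one source of the sensitivity problem described above.
omega_slip = commanded_slip(i_q=100.0, i_d=50.0, tau_r=0.5)  # 4.0 rad/s
```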
Abstract:
Several sets of changes have been made to motorcycle licensing in Queensland since 2007, with the aim of improving the safety of novice riders. These include a requirement that a motorcycle learner licence applicant must have held a provisional or open car licence for 12 months, and the imposition of a three-year limit for learner licence renewal. Additionally, a requirement to hold an RE (250 cc limited) class licence for a period of 12 months before progressing to an R class licence was introduced for Q-RIDE. This paper presents analyses of licensing transaction data that examine the effects of the licensing changes on the duration for which the learner licence was held, the factors affecting this duration, and the extent to which the demographic characteristics of learner licence holders changed. The likely safety implications of the observed changes are discussed.
Abstract:
Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, “spillover” effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and the resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities are examined in detail and conclusions drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of the results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.
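A standard way to handle the regression-to-the-mean bias noted above is the empirical Bayes before-after method, which blends a safety-performance-function (SPF) prediction with the site's observed crash count. The abstract does not identify the four methodologies actually used, so this is an illustrative sketch only; the function name and numbers are hypothetical:

```python
def eb_expected_crashes(spf_prediction, observed, overdispersion):
    """Empirical Bayes estimate of the expected crash count at a site.

    Combines an SPF prediction with the site's observed count, weighted
    by the SPF's negative binomial overdispersion parameter, to correct
    for regression to the mean at sites selected for high crash counts.
    """
    # Weight on the SPF prediction; a noisier SPF (larger
    # overdispersion) shifts weight toward the observed count.
    w = 1.0 / (1.0 + overdispersion * spf_prediction)
    return w * spf_prediction + (1.0 - w) * observed

# A site chosen for treatment because of a high observed count (12)
# is pulled back toward its SPF prediction (5.0).
expected = eb_expected_crashes(spf_prediction=5.0, observed=12, overdispersion=0.2)  # 8.5
```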
Abstract:
In the global knowledge economy, knowledge-intensive industries and knowledge workers are widely seen as the primary factors improving the welfare and competitiveness of cities. To attract and retain such industries and workers, cities produce knowledge-based urban development strategies, and such strategising has therefore become an important development mechanism for cities and their economies. The paper discusses the critical connections between knowledge city foundations and integrated knowledge-based urban development mechanisms at both the local and regional levels. In particular, the paper investigates Brisbane’s knowledge-based urban development strategies that support gentrification and the attraction and retention of investment and talent. Furthermore, the paper develops a knowledge-based urban development assessment framework to provide a clearer understanding of the local and regional policy frameworks, and the relevant applications of Brisbane’s knowledge-based urban development experience, in becoming a prosperous knowledge city. The paper, with its knowledge-based urban development assessment framework, scrutinises Brisbane’s four development domains in detail: the economy; society; institutions; and the built and natural environments. As part of the discussion of the case study findings, the paper describes Brisbane’s global orientation within the frame of well-performing regional- and local-level knowledge-based urban development strategies. Although several good practices from Brisbane have already been internationally acknowledged, the research reveals that Brisbane is still in the early stages of its knowledge-based urban development implementation. Consequently, the development of a monitoring system for knowledge-based urban development at all levels is crucial for accurately measuring the success and failure of specific knowledge-based urban development policies, and Brisbane’s progress towards a knowledge city transformation.
Abstract:
This paper outlines a method of constructing narratives about an individual’s self-efficacy. Self-efficacy is defined as “people’s judgments of their capabilities to organise and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person—respondent or subject—associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants – four mature-aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used “to produce stories as the outcome of the research” (Polkinghorne, 1995, p. 15), with detail and depth contributing to an appreciation of the ‘lived experience’ of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants.
The research outcomes suggest an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author’s doctoral study.
Abstract:
Purpose: To compare subjective blur limits for cylinder and defocus. Method: Blur was induced with a deformable, adaptive-optics mirror when either the subjects’ own astigmatisms were corrected or when both astigmatisms and higher-order aberrations were corrected. Subjects were cyclopleged and had 5 mm artificial pupils. Black letter targets (0.1, 0.35 and 0.6 logMAR) were presented on white backgrounds. Results: For ten subjects, blur limits were approximately 50% greater for cylinder than for defocus (in diopters). While axis had considerable effects for individual subjects, overall the effect of axis was not strong, with the 0° (or 180°) axis having about 20% greater limits than oblique axes. In a second experiment with text (equivalent in angle to N10 print at a 40 cm distance), cylinder blur limits for six subjects were approximately 30% greater than those for defocus; this percentage was slightly smaller than for the three single letters. Blur limits for text were intermediate between those for 0.35 logMAR and 0.6 logMAR letters. Extensive blur limit measurements for one subject with single letters did not show the expected interactions between target detail orientation and cylinder axis. Conclusion: Subjective blur limits for cylinder are 30-50% greater than those for defocus, with the overall influence of cylinder axis being about 20%.
Abstract:
Because of the greenhouse gas emissions implications of the market-dominating electric hot water systems, governments in Australia have implemented policies and programs to encourage the uptake of solar water heaters (SWHs) in the residential market as part of climate change adaptation and mitigation strategies. The cost-benefit analysis that usually accompanies all government policy and program design could be simplistically reduced to the ratio of the expected greenhouse gas reductions of a SWH to its cost. The national Register of Solar Water Heaters specifies how many renewable energy certificates (RECs) are allocated to complying SWHs according to their expected performance, and hence greenhouse gas reductions, in different climates. Neither REC allocations nor rebates are tied to the actual performance of systems. This paper examines the performance of instantaneous gas-boosted solar water heaters installed in new residences in a housing estate in south-east Queensland in the period 2007-2010. The evidence indicates systemic failures in installation practices, resulting in zero solar performance or dramatic underperformance (estimated average 43% solar contribution). The paper details the faults identified, and how these faults were eventually diagnosed and corrected. The impacts of these system failures on end-use consumers are discussed before concluding with a brief overview of areas where further research is required in order to more fully understand whole-of-supply-chain implications.
Abstract:
The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles produced by the chamber photo-oxidation of α-pinene, in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the measured CCN activation potential using a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the behaviour required was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data in predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and the relative humidity used to provide the model input. However, for inorganic salt solution particles all instruments agreed with one another and with theory. Given the differences in data from validated and extensively used HTDMA instruments, the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty.
In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality-assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of solute concentration.
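A widely used form of a single-hygroscopicity-parameter model is the kappa parameterisation, in which water activity a_w = (gf³ − 1) / (gf³ − 1 + κ) for a diameter growth factor gf. Assuming that is the kind of model meant, and neglecting the Kelvin (curvature) term so that a_w ≈ RH, the sub-saturated link can be sketched as follows (an illustrative sketch only, with hypothetical numbers):

```python
def kappa_from_growth_factor(gf, water_activity):
    """Invert the single-parameter relation
        a_w = (gf^3 - 1) / (gf^3 - 1 + kappa)
    to recover kappa from an HTDMA growth factor measurement.
    Kelvin (curvature) effects are neglected, so a_w ~= RH/100.
    """
    v = gf ** 3 - 1.0   # water volume per unit dry-particle volume
    return v * (1.0 - water_activity) / water_activity

def water_activity(gf, kappa):
    """Forward relation: water activity of a droplet at growth factor gf."""
    v = gf ** 3 - 1.0
    return v / (v + kappa)

# Example: a growth factor of 1.4 measured at 90% RH (a_w ~ 0.90)
k = kappa_from_growth_factor(1.4, 0.90)  # ~0.19
```

The sensitivity of k to small differences in gf between instruments at a fixed RH is one way to see why the choice of HTDMA used as the water activity source matters so much.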
Abstract:
Current guidelines on clear zone selection and roadside hazard management adopt the US approach based on the likelihood of roadside encroachment by drivers. This approach is based on research conducted in the 1960s and 1970s. Over time, questions have been raised regarding the robustness and applicability of this research in Australasia in 2010 and in the Safe System context. This paper presents a review of the fundamental research relating to the selection of clear zones. Results of extensive statistical modelling of rural highway data suggest that a significant proportion of run-off-road-to-the-left casualty crashes occurs in clear zones exceeding 13 m. They also show that the risk of run-off-road-to-the-left casualty crashes was 21% lower where clear zones exceeded 8 m than where clear zones were in the 4-8 m range. The paper discusses a possible approach to the selection of clear zones based on managing crash outcomes, rather than on the likelihood of roadside encroachment, which is the basis of current practice. It is expected that this approach would encourage selection of clear zones wider than 8 m where the combination of other road features suggests a higher-than-average casualty crash risk.
Abstract:
Currently in Australia there are no decision support tools for traffic and transport engineers to assess the crash risk potential of proposed road projects at the design level. A selection of equivalent tools already exists for traffic performance assessment, e.g. aaSIDRA or VISSIM. The Urban Crash Risk Assessment Tool (UCRAT) was developed for VicRoads by ARRB Group to promote methodical identification of future crash risks arising from proposed road infrastructure, where safety cannot be evaluated on the basis of past crash history. The tool assists practitioners with key design decisions to arrive at the safest and most cost-optimal design options. This paper details the development and application of the UCRAT software. This professional tool may be used to calculate an expected mean number of casualty crashes for an intersection, a road link or a defined road network consisting of a number of such elements. The mean number of crashes provides a measure of the risk associated with the proposed functional design and allows evaluation of alternative options. The tool is based on historical data for existing road infrastructure in metropolitan Melbourne and takes into account the influence of key design features, traffic volumes, road function and the speed environment. Crash prediction modelling and risk assessment approaches were combined to develop its unique algorithms. The tool has application in projects such as road access proposals associated with land use developments, public transport integration projects and new road corridor upgrade proposals.
Abstract:
Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km² in area, and the alluvial deposits are typically 1-3 km wide and up to 20-35 m deep in the main channels, reducing in size in the subcatchments. The alluvium is configured as a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers comes largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from the underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third of the valley. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley-wide Modflow model was developed in 2002 (KBR), but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a river confluence in a central area (Wilson, 2005), with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.
Abstract:
Mary Kalantzis and Bill Cope write in the foreword: “The Multiliteracies Classroom demonstrates in convincing detail how powerful learning can be achieved. Along the way, the book seamlessly weaves cutting-edge theoretical ideas into the fabric of its narrative. In one moment, we hear the lilt of the accents of the children’s discussions. In another, this is connected to the theoretical intricacies of ‘discourse’, ‘heteroglossia’, ‘multimodality’, or ‘dialogic spaces’. We witness the triumphs of a teacher who, in Mills’ words, ‘did not regard literacy as an independent variable. Rather, she regarded it as inseparable from social practices, contextualized in certain political, economic, historic and ecological contexts’. Kathy Mills has produced a masterpiece of qualitative research.”
Abstract:
In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed distributed coding protocol exploits existing orthogonal space-time block codes to achieve higher diversity gain by repeating the code across time and space (the available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form space-time codes to cooperatively relay to the destination using a number of time slots. However, the improved diversity gain is achieved at the expense of the transmission rate. The design principles of the proposed space-time distributed code and the issues related to the trade-off between transmission rate and diversity are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
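As a concrete instance of the orthogonal space-time block codes such a protocol builds on, the classic two-branch Alamouti code (which two decoding relays could jointly emit, one playing each "antenna") can be sketched as follows. This is an illustrative sketch of the underlying STBC only, not the paper's full distributed protocol; noise is omitted and the channel coefficients are hypothetical:

```python
def alamouti_encode(s1, s2):
    """Alamouti orthogonal STBC: rows are time slots, columns are the
    two transmitting relays (antennas)."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining at the destination. Orthogonality of the code
    yields (|h1|^2 + |h2|^2) * s_i plus noise for each symbol, i.e.
    full second-order diversity from a rate-1 code."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Noise-free example: flat channels h1, h2 from the two relays.
h1, h2 = 0.8 + 0.3j, -0.4 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
X = alamouti_encode(s1, s2)
r1 = h1 * X[0][0] + h2 * X[0][1]   # received in slot 1
r2 = h1 * X[1][0] + h2 * X[1][1]   # received in slot 2
d1, d2 = alamouti_decode(r1, r2, h1, h2)   # recovers s1, s2 exactly
```

Repeating such a code across additional slots and relays raises the diversity order while lowering the rate, which is the trade-off the abstract describes.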
Abstract:
Governments around the world are facing the challenge of responding to increased expectations by their customers with regard to public service delivery. Citizens, for example, expect governments to provide better and more efficient electronic services on the Web in an integrated way. Online portals have become the approach of choice in online service delivery to meet these requirements and become more customer-focussed. This study describes and analyses existing variants of online service delivery models based upon an empirical study and provides valuable insights for researchers and practitioners in government. For this study, we have conducted interviews with senior management representatives from five international governments. Based on our findings, we distinguish three different classes of service delivery models. We describe and characterise each of these models in detail and provide an in-depth discussion of the strengths and weaknesses of these approaches.