82 results for Étape de préparation au changement
at Queensland University of Technology - ePrints Archive
Abstract:
Bi-2212 thick films on silver tape are seen as a simple, low-cost alternative to high-temperature superconducting wires produced by the Powder-in-Tube (PIT) technique, particularly in react-and-wind applications. A rig for the continuous production of Bi-2212 tapes for use in react-and-wind component manufacture has been developed and commissioned. The rig consists of several sections, each fully automated for task-specific duties in the production of HTS tape. The major sections are: tape coating, sintering and annealing. High-temperature superconducting tapes with engineering critical current densities of 10 kA/cm2 (77 K, self field) and lengths of up to 100 m have been produced using the rig. Properties of the finished tape are discussed and results are presented for current density versus bend radius and applied strain. Depending on tape content and thickness, the irreversible strain εirr varies between 0.04 and 0.1%. Cyclic bending tests in which the applied strain did not exceed εirr showed negligible reduction in Jc along the length of the tape.
Abstract:
Bi-2212 tapes were fabricated using a powder-in-tube method and their superconducting properties were measured as a function of heat treatment. The tapes were heated to a temperature T1 (884-915 °C) and kept at that temperature for 20 min to induce partial (incongruent) melting. The samples were cooled to T2 at a ramp rate of 120 °C h-1, then slowly cooled to T3 at a cooling rate R1, and from T3 to T4 at a cooling rate R2. The tapes were kept at the temperature T4 for P1 hours and then cooled to room temperature. Both R1 and R2 were chosen between 2 and 8 °C h-1. It was found that the structure and Jc of the tapes depend on the sintering conditions, i.e. T1-T4, R1, R2 and P1. The highest Jc of 5800 A cm-2 was obtained at 77 K in a self-field with a heat treatment in which T1 = 894 and 899 °C, R1 = R2 = 5 °C h-1 and P1 = 6 h were employed. When a bend strain of 0.7%, equivalent to a bend radius of 5 mm, was applied to the tape, 80% of the initial Jc was sustained.
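The quoted equivalence between bend strain and bend radius can be reproduced with the standard thin-tape estimate eps = t / (2R), where t is the tape thickness and R the bend radius. A minimal sketch, assuming this formula and inferring a ~0.07 mm thickness from the quoted figures (the abstract does not state the thickness):

```python
# Illustrative sketch (not from the paper): surface bend strain of a thin
# tape is commonly estimated as eps = t / (2 * R). The 0.07 mm thickness
# below is inferred from the quoted 0.7% strain at a 5 mm radius, not
# stated in the abstract.

def bend_strain(thickness_mm: float, radius_mm: float) -> float:
    """Surface strain of a tape bent to a given radius (thin-tape estimate)."""
    return thickness_mm / (2.0 * radius_mm)

# 0.7% strain at a 5 mm bend radius implies t = 2 * 5 mm * 0.007 = 0.07 mm:
print(bend_strain(0.07, 5.0))  # -> 0.007, i.e. 0.7%
```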
Abstract:
The Queensland Property Law Review is currently reviewing seller disclosure laws in Queensland. The review will consider whether valuable, timely information about a property offered for sale can be delivered to consumers of real estate with a minimum of red tape. This article examines the principles proposed by the first discussion paper on seller disclosure and their likely effect in practice.
Abstract:
The present study explored the effects of double counter-twisted tapes on heat transfer and fluid friction characteristics in a heat exchanger tube. The double counter-twisted tapes were used as counter-swirl flow generators in the test section. The experiments were performed with double counter-twisted tapes of four different twist ratios (y = 1.95, 3.85, 5.92 and 7.75), using air as the testing fluid in a circular tube in the turbulent flow regime, with Reynolds numbers from 6950 to 50,050. The experimental results demonstrated that the Nusselt number, friction factor and thermal enhancement efficiency increased with decreasing twist ratio. The results also revealed that the heat transfer rate in the tube fitted with double counter-twisted tape increased significantly, with a corresponding increase in pressure drop. Within the range of the present work, the heat transfer rate and friction factor were found to be around 60-240% and 91-286% higher, respectively, than the plain tube values. The maximum thermal enhancement efficiency of 1.34 was achieved by the use of double counter-twisted tapes at constant blower power. In addition, empirical correlations for the Nusselt number, friction factor and thermal enhancement efficiency were developed based on the experimental data.
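The constant-power thermal enhancement efficiency mentioned above is commonly defined in twisted-tape studies as eta = (Nu/Nu0) / (f/f0)^(1/3), comparing the enhanced tube to the plain tube at equal pumping power. A minimal sketch under that assumption (the abstract does not give the authors' exact definition, and the ratios below are illustrative, not measured values):

```python
# Sketch of the constant-pumping-power enhancement efficiency widely used
# in twisted-tape studies: eta = (Nu/Nu0) / (f/f0)**(1/3).
# Nu/Nu0 is the Nusselt number ratio and f/f0 the friction factor ratio
# relative to the plain tube; the example values are illustrative only.

def enhancement_efficiency(nu_ratio: float, f_ratio: float) -> float:
    """Thermal enhancement efficiency at constant pumping power."""
    return nu_ratio / f_ratio ** (1.0 / 3.0)

# e.g. a 60% heat-transfer gain paired with a 91% friction penalty:
print(enhancement_efficiency(1.60, 1.91))  # ~1.29, i.e. a net benefit
```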
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at the point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity: it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved; indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question arises: can we find some way of searching the space ahead? Of course there are serious problems in knowing what we are looking for, and in the vastness of the search space. It may be better to discard the term "searching" in the context of the design process altogether. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process, but the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually confound the issue further. The term search becomes a misnomer, since it carries the connotation that it is possible to find what you are looking for; in such vast spaces the term must be discarded. Thus any attempt to search for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we are left with the tantalizing possibility that, if a creative idea seems inevitable after the event, the process might somehow be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy comes from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms. It is necessary to abandon such connections with searching and to see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we can afford neither such profligate prototyping and ruthless experimentation, nor the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
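The generate-and-select loop described above (profligate variation, ruthless elimination, no claim of a single optimum) can be sketched minimally. The bit-string genotype and bit-counting score below are toy assumptions for illustration only, not the representation any particular design system uses:

```python
import random

# Minimal sketch of the evolutionary generate-and-select loop: every
# surviving design spawns a mutated variant, and only the better half of
# parents plus offspring is kept. There is no terminating "optimum found"
# test -- just continuous experiment, mirroring the analogy with nature.

def mutate(design: list[int], rate: float = 0.1) -> list[int]:
    """Flip each bit of a toy bit-string genotype with probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in design]

def evolve(pop_size: int = 20, length: int = 16, generations: int = 50) -> list[list[int]]:
    random.seed(0)  # deterministic demo run
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # profligate prototyping: one mutated offspring per parent
        offspring = [mutate(parent) for parent in population]
        # ruthless elimination: keep the better half (toy score = bit count)
        population = sorted(population + offspring, key=sum, reverse=True)[:pop_size]
    return population

best = max(evolve(), key=sum)
print(sum(best))  # bit count of the best surviving design
```

Because the best parent always survives the cull, the top score never decreases across generations, yet nothing in the loop knows what the "goal" design looks like.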
Abstract:
Recent initiatives around the world have highlighted the potential for information and communications technology (ICT) to foster better service delivery for businesses. Likewise, ICT has been applied to government services and is seen to result in improved service delivery, improved citizen participation in government, and enhanced cooperation across and between government departments. The Council of Australian Governments (COAG) (2006) identified local government development assessment (DA) arrangements as a ‘hot spot’ needing specific attention, as inconsistent policies and regulations between councils impeded regional economic activity. COAG (2006) specifically suggested that trials of various ICT mechanisms be initiated, which may well be able to improve DA processes for local government. While the authors have explored various regulatory mechanisms to improve harmonisation elsewhere (Brown and Furneaux 2007), the possibility of ICT enhancing consistency across governments is a novel notion from a public policy perspective. Consequently, this paper explores the utility of ICT initiatives for improving the harmonisation of DA across local governments. It examines as a case study the recent attempt to streamline DA in local governments in South East Queensland. This initiative was funded by the Regulation Reduction Incentive Fund (RRIF) and championed by the South East Queensland (SEQ) Council of Mayors. The RRIF program was created by the Australian government to provide incentives to local councils to reduce red tape for small and medium-sized businesses. Funding for the program was delivered through a competitive, merit-based grants process targeted at Local Government Authorities.
Grants were awarded to projects which targeted specific areas identified for reform (AusIndustry, 2007); in SEQ these focused on improving DA processes and creating transparency in environmental health policies, regulation and compliance. An important factor to note is that this case study is unusual for an eGovernment initiative: typically, individual government departments undertake eGovernment projects in order to improve their internal performance, whereas the RRIF case study examines the implementation of an eGovernment initiative across 21 autonomous local councils in South East Queensland. In order to move ahead, agreement needed to be reached between councils at the highest level. After reviewing the concepts of eGovernment and eGovernance, a literature review is undertaken to identify the typical costs and benefits, barriers and enablers of ICT projects in government. The specific case of the RRIF project is then examined to determine whether similar costs and benefits, barriers and enablers could be found. The outcomes of the project, particularly in reducing red tape by increasing harmonisation between councils, are explored.
Abstract:
Reducing “red tape” and regulatory inconsistencies is a desirable outcome for developed countries (OECD 1997). The costs normally associated with regulatory regimes are compliance costs and direct charges. Geiger and Hoffman (1998) have noted that the extent of regulation in an industry tends to be negatively associated with firm performance. Typically, approaches to estimating the cost of regulation examine direct costs, such as fees and charges, together with indirect costs, such as compliance costs. However, in a fragmented system such as Australia’s, costs can also be incurred through procedural delays, either by government or by industry having to adapt documentation for different spheres of government; through a lack of predictable outcomes, with variations occurring between spheres of government and sometimes within the same government agency; and through lost business opportunities, with delays and red tape preventing the realisation of business opportunities (OECD 1997). In this submission these costs are termed adaptation costs. The adaptation costs of complying with variations in regulations between the states have been estimated by the Building Product Innovation Council (2003) at up to $600 million per annum for building product manufacturers alone. Productivity gains from increased harmonisation of the regulatory system have been estimated in the hundreds of millions of dollars (ABCB 2003). This argument is supported by international research which found that increasing the harmonisation of legislation in a federal system of government reduces what we have termed adaptation costs (OECD 2001). Research reports into the construction industry in Australia have likewise argued that improved consistency in the regulatory environment could lead to improvements in innovation (PriceWaterhouseCoopers 2002), and that research into this area should be given high priority (Hampson & Brandon 2004).
The opinion of industry in Australia has consistently held that the current regulatory environment inhibits innovation (Manley 2004). As a first step in advancing improvements to the current situation, a summary of the current costs experienced by industry needs to be articulated. This executive summary seeks to outline these costs in the hope that the Productivity Commission would be able to identify the best tools to quantify the actual costs to industry.
Abstract:
Tested the hypothesis that level of performance and persistence in completing tasks is affected by mood. 44 female and 41 male college students received tape-recorded instructions to recall vividly happy or sad experiences or to imagine a neutral situation. Results for the primary dependent variables on which a mood difference was predicted were analyzed with a multivariate analysis of variance (MANOVA). After the induction happy Ss persisted longer at an anagrams task and solved more anagrams than sad Ss. Women were also faster at reaching solutions when happy than sad. Results support the hypothesis that positive moods promote persistence and ultimate success, but they raise questions about the role of self-efficacy and the sources of gender differences.
Abstract:
Aim: This paper is a report of a study to explore the phenomenon of resilience in the lives of adult patients of mental health services who have experienced mental illness. ---------- Background: Mental illness is a major health concern worldwide, and the majority experiencing it will continue to battle with relapses throughout their lives. However, in many instances people go on to overcome their illness to lead productive and socially engaged lives. Contemporary mental health nursing practice primarily focuses on symptom reduction, and working with resilience has not generally been a consideration. ---------- Method: A descriptive phenomenological study was carried out in 2006. One participant was recruited through advertisements in community newspapers and newsletters and the others using the snowballing method. Information was gathered through in-depth individual interviews which were tape-recorded and subsequently transcribed. Colaizzi's original seven-step approach was used for data analysis, with the inclusion of two additional steps. ---------- Findings: The following themes were identified: Universality, Acceptance, Naming and knowing, Faith, Hope, Being the fool and Striking a balance, Having meaning and meaningful relationships, and 'Just doing it'. The conceptualization identified as encapsulating the themes was 'Viewing life from the ridge with eyes wide open', which involved knowing the risks and dangers ahead and making a decision for life amid ever-present hardships. ---------- Conclusion: Knowledge about resilience should be included in the theoretical and practical education of nursing students and experienced nurses. Early intervention, based on resilience factors identified through screening processes, is needed for people with mental illness.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one whose refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 μm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 μm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 μm for accurate and reliable pattern storage.
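The fringe-counting thermometry described in this abstract relates one intensity oscillation to a fixed temperature step, dT per fringe = wavelength / (L x |d(dn)/dT|), where L is the crystal length and d(dn)/dT the thermo-optic change of the birefringence. A minimal sketch under that assumption; the wavelength, crystal length and coefficient below are assumed round numbers, not values from the thesis:

```python
# Illustrative sketch of fringe-counting thermometry: each full intensity
# oscillation of a beam through the birefringent crystal corresponds to a
# fixed temperature step dT = wavelength / (L * |d(dn)/dT|).
# All numerical values here are assumptions for illustration only.

def temperature_change(fringes: int, wavelength_m: float,
                       length_m: float, dbirefringence_dT: float) -> float:
    """Temperature change implied by a counted number of intensity fringes."""
    per_fringe = wavelength_m / (length_m * abs(dbirefringence_dT))
    return fringes * per_fringe

# e.g. 10 fringes with a 633 nm beam, 10 mm crystal, |d(dn)/dT| ~ 4e-5 / K:
print(temperature_change(10, 633e-9, 10e-3, 4e-5))  # -> ~15.8 K
```

Counting fringes at several positions of an expanded beam, as the abstract suggests, would apply the same arithmetic per location to map a temperature gradient.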
Abstract:
Background: This study investigated the effects of experimentally induced visual impairment, headlamp glare and clothing on pedestrian visibility. Methods: 28 young adults (M=27.6±4.7 yrs) drove around a closed road circuit at night while pedestrians walked in place at the roadside. Pedestrians wore either black clothing, black clothing with a rectangular vest consisting of 1325 cm2 of retroreflective tape, or the same amount of tape positioned on the extremities in a configuration that conveyed biological motion (“biomotion”). Visual impairment was induced by goggles containing either blurring lenses, simulated cataracts, or clear lenses; visual acuity for the cataract and blurred lens conditions was matched. Drivers pressed a response pad when they first recognized that a pedestrian was present. Sixteen participants drove around the circuit in the presence of headlamp glare while twelve drove without glare. Results: Visual impairment, headlamp glare and pedestrian clothing all significantly affected drivers’ ability to recognize pedestrians (p<0.05). The simulated cataracts were more disruptive than blur, even though acuity was matched across the two manipulations. Pedestrians were recognized more often and at longer distances when they wore “biomotion” clothing than either the vest or black clothing, even in the presence of visual impairment and glare. Conclusions: Drivers’ ability to see and respond to pedestrians at night is degraded by modest visual impairments even when vision meets driver licensing requirements; glare further exacerbates these effects. Clothing that includes retroreflective tape in a biological motion configuration is relatively robust to visual impairment and glare.