982 results for Tape



Relevance: 10.00%

Abstract:

Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for, and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process, but the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually further confound the issue. The term search becomes a misnomer, since it has connotations that imply that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility: if a creative idea seems inevitable after the event, then might the process somehow be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process, without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and to see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
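To make the analogy concrete, the sketch below shows a minimal, generic evolutionary loop of the kind such methods are based on: a population of candidate designs is repeatedly varied and culled, with better alternatives surviving rather than a single optimum being sought. The genome encoding, variation operators and placeholder evaluation function are illustrative assumptions, not the authors' design system.

```python
import random

GENOME_LENGTH = 16
POPULATION_SIZE = 30

def random_genome():
    # A candidate "design" encoded as a list of real-valued parameters (assumed encoding).
    return [random.random() for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.1, scale=0.2):
    # Small random variation of some genes.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def evaluate(genome):
    # Placeholder evaluation: in a design context this would be an assessment
    # (possibly interactive or simulated) of a virtual prototype.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(generations=50):
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        survivors = scored[: POPULATION_SIZE // 2]          # the better ones survive
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)))
                     for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + offspring                  # continuous experiment
    return population

population = evolve()
```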

Relevance: 10.00%

Abstract:

Recent initiatives around the world have highlighted the potential for information and communications technology (ICT) to foster better service delivery for businesses. Likewise, ICT has also been applied to government services and is seen to result in improved service delivery, improved citizen participation in government, and enhanced cooperation within and between government departments. The Council of Australian Governments (COAG) (2006) identified local government development assessment (DA) arrangements as a ‘hot spot’ needing specific attention, as the inconsistent policies and regulations between councils impeded regional economic activity. COAG (2006) specifically suggested that trials of various ICT mechanisms be initiated, as these may well be able to improve DA processes for local government. While the authors have explored various regulatory mechanisms to improve harmonisation elsewhere (Brown and Furneaux 2007), the possibility of ICT being able to enhance consistency across governments is a novel notion from a public policy perspective. Consequently, this paper will explore the utility of ICT initiatives to improve harmonisation of DA across local governments. The paper examines as a case study the recent attempt to streamline DA in local governments in South East Queensland. This initiative was funded by the Regulation Reduction Incentive Fund (RRIF) and championed by the South East Queensland (SEQ) Council of Mayors. The RRIF program was created by the Australian government with the aim of providing incentives to local councils to reduce red tape for small and medium-sized businesses. The funding for the program was facilitated through a competitive merit-based grants process targeted at Local Government Authorities. Grants were awarded to projects which targeted specific areas identified for reform (AusIndustry 2007); in SEQ this focused on improving DA processes and creating transparency in environmental health policies, regulation and compliance. An important factor to note about this case study is that it is unusual for an eGovernment initiative. Typically, individual government departments undertake eGovernment projects in order to improve their internal performance. The RRIF case study examines the implementation of an eGovernment initiative across 21 autonomous local councils in South East Queensland. In order to move ahead, agreement needed to be reached between councils at the highest level. After reviewing the concepts of eGovernment and eGovernance, a literature review is undertaken to identify the typical costs and benefits, barriers and enablers of ICT projects in government. The specific case of the RRIF project is then examined to determine whether similar costs and benefits, barriers and enablers could be found. The outcomes of the project, particularly in reducing red tape by increasing harmonisation between councils, are explored.

Relevance: 10.00%

Abstract:

The reduction of “red tape” and regulatory inconsistencies is a desirable outcome for developed countries (OECD 1997). The costs normally associated with regulatory regimes are compliance costs and direct charges. Geiger and Hoffman (1998) have noted that the extent of regulation in an industry tends to be negatively associated with firm performance. Typically, approaches to estimating the cost of regulations examine direct costs, such as fees and charges, together with indirect costs, such as compliance costs. However, in a fragmented system such as Australia's, costs can also be incurred due to procedural delays, either by government or by industry having to adapt documentation for different spheres of government; a lack of predictable outcomes, with variations occurring between spheres of government and sometimes within the same government agency; and lost business opportunities, with delays and red tape preventing the realisation of business opportunities (OECD 1997). In this submission these costs are termed adaptation costs. The adaptation costs of complying with variations in regulations between the states have been estimated by the Building Product Innovation Council (2003) as being up to $600 million per annum for building product manufacturers alone. Productivity gains from increased harmonisation of the regulatory system have been estimated in the hundreds of millions of dollars (ABCB 2003). This argument is supported by international research which found that increasing the harmonisation of legislation in a federal system of government reduces what we have termed adaptation costs (OECD 2001). Research reports into the construction industry in Australia have likewise argued that improved consistency in the regulatory environment could lead to improvements in innovation (PriceWaterhouseCoopers 2002), and that research into this area should be given high priority (Hampson & Brandon 2004). Industry opinion in Australia has consistently held that the current regulatory environment inhibits innovation (Manley 2004). As a first step in advancing improvements to the current situation, a summary of the current costs experienced by industry needs to be articulated. This executive summary seeks to outline these costs in the hope that the Productivity Commission will be able to identify the best tools to quantify the actual costs to industry.

Relevance: 10.00%

Abstract:

Tested the hypothesis that level of performance and persistence in completing tasks are affected by mood. 44 female and 41 male college students received tape-recorded instructions to recall vividly happy or sad experiences or to imagine a neutral situation. Results for the primary dependent variables on which a mood difference was predicted were analyzed with a multivariate analysis of variance (MANOVA). After the induction, happy Ss persisted longer at an anagrams task and solved more anagrams than sad Ss. Women were also faster at reaching solutions when happy than when sad. Results support the hypothesis that positive moods promote persistence and ultimate success, but they raise questions about the role of self-efficacy and the sources of gender differences.
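For readers unfamiliar with the analysis step reported above, the sketch below shows how a comparable MANOVA could be set up with statsmodels, using induced mood and gender as factors. The column names and data file are hypothetical stand-ins; the original study's data and exact variable set are not available here.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data file with one row per participant and columns:
# persistence, anagrams_solved (dependent variables), mood, gender (factors).
df = pd.read_csv("mood_persistence.csv")

# Multivariate test of mood, gender and their interaction on the two outcomes.
model = MANOVA.from_formula("persistence + anagrams_solved ~ mood * gender", data=df)
print(model.mv_test())
```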

Relevance: 10.00%

Abstract:

Aim: This paper is a report of a study to explore the phenomenon of resilience in the lives of adult patients of mental health services who have experienced mental illness. ---------- Background: Mental illness is a major health concern worldwide, and the majority of those experiencing it will continue to battle relapses throughout their lives. However, in many instances people go on to overcome their illness and lead productive and socially engaged lives. Contemporary mental health nursing practice primarily focuses on symptom reduction, and working with resilience has not generally been a consideration. ---------- Method: A descriptive phenomenological study was carried out in 2006. One participant was recruited through advertisements in community newspapers and newsletters, and the others were recruited using the snowballing method. Information was gathered through in-depth individual interviews, which were tape-recorded and subsequently transcribed. Colaizzi's original seven-step approach was used for data analysis, with the inclusion of two additional steps. ---------- Findings: The following themes were identified: Universality, Acceptance, Naming and knowing, Faith, Hope, Being the fool and Striking a balance, Having meaning and meaningful relationships, and 'Just doing it'. The conceptualization identified as encapsulating the themes was 'Viewing life from the ridge with eyes wide open', which involved knowing the risks and dangers ahead and making a decision for life amid ever-present hardships. ---------- Conclusion: Knowledge about resilience should be included in the theoretical and practical education of nursing students and experienced nurses. Early intervention, based on resilience factors identified through screening processes, is needed for people with mental illness.

Relevance: 10.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage.

Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data.

The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout.

A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium.

It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage.

A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
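For orientation, a minimal two dimensional FD-BPM step of the general kind referred to above can be sketched as follows. It uses a Crank-Nicolson finite-difference scheme for the paraxial equation with a fixed, stripe-like refractive index perturbation; all parameter values are illustrative assumptions, and a faithful model would couple the index change to the propagating write beam through the photorefractive response rather than holding it fixed.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the thesis)
wavelength = 532e-9            # write-beam wavelength [m]
n0 = 2.2                       # background index, roughly that of LiNbO3
k0 = 2 * np.pi / wavelength
k = k0 * n0

nx, nz = 256, 400              # transverse and propagation grid points
dx, dz = 1e-6, 5e-6            # grid spacings [m]
x = (np.arange(nx) - nx // 2) * dx

# Fixed stripe-like index perturbation standing in for a written pattern
delta_n = 1e-4 * (np.cos(2 * np.pi * x / 50e-6) > 0)

# Input field: Gaussian beam
E = np.exp(-(x / 40e-6) ** 2).astype(complex)

# Crank-Nicolson operators for the paraxial equation
#   dE/dz = (i/(2k)) d2E/dx2 + i k0 delta_n E
D2 = (np.diag(-2.0 * np.ones(nx)) +
      np.diag(np.ones(nx - 1), 1) +
      np.diag(np.ones(nx - 1), -1)) / dx**2
L = 1j / (2 * k) * D2 + 1j * k0 * np.diag(delta_n)
lhs = np.eye(nx) - 0.5 * dz * L
rhs = np.eye(nx) + 0.5 * dz * L

intensity = np.empty((nz, nx))
for iz in range(nz):
    E = np.linalg.solve(lhs, rhs @ E)   # one FD-BPM propagation step
    intensity[iz] = np.abs(E) ** 2      # record |E|^2 for inspection
```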
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude.

Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
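Returning briefly to the non-contact temperature measurement described earlier in this abstract, the oscillation-counting estimate reduces in outline to peak detection on the recorded intensity trace. The calibration constant (temperature change per oscillation) depends on the crystal thickness and its thermo-optic and birefringence properties, so the value below is a placeholder assumption, as is the use of SciPy's peak finder.

```python
import numpy as np
from scipy.signal import find_peaks

# Kelvin per full intensity oscillation: a placeholder calibration value.
DT_PER_OSCILLATION = 0.8

def estimate_temperature_change(intensity_trace, dt_per_osc=DT_PER_OSCILLATION):
    """Estimate the total temperature change by counting intensity oscillations."""
    trace = np.asarray(intensity_trace, dtype=float)
    # Normalise so peak detection is independent of the absolute intensity scale
    trace = (trace - trace.min()) / (np.ptp(trace) + 1e-12)
    peaks, _ = find_peaks(trace, prominence=0.2)
    return len(peaks) * dt_per_osc

# Synthetic check: a trace with 10 oscillations maps to 8.0 K with this calibration
t = np.linspace(0.0, 1.0, 2000)
print(estimate_temperature_change(0.5 + 0.5 * np.sin(2 * np.pi * 10 * t)))
```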

Relevance: 10.00%

Abstract:

Background: This study investigated the effects of experimentally induced visual impairment, headlamp glare and clothing on pedestrian visibility. Methods: 28 young adults (M=27.6±4.7 yrs) drove around a closed road circuit at night while pedestrians walked in place at the roadside. Pedestrians wore either black clothing, black clothing with a rectangular vest consisting of 1325 cm2 of retroreflective tape, or the same amount of tape positioned on the extremities in a configuration that conveyed biological motion (“biomotion”). Visual impairment was induced by goggles containing either blurring lenses, simulated cataracts, or clear lenses; visual acuity for the cataract and blurred lens conditions was matched. Drivers pressed a response pad when they first recognized that a pedestrian was present. Sixteen participants drove around the circuit in the presence of headlamp glare while twelve drove without glare. Results: Visual impairment, headlamp glare and pedestrian clothing all significantly affected drivers’ ability to recognize pedestrians (p<0.05). The simulated cataracts were more disruptive than blur, even though acuity was matched across the two manipulations. Pedestrians were recognized more often and at longer distances when they wore “biomotion” clothing than either the vest or black clothing, even in the presence of visual impairment and glare. Conclusions: Drivers’ ability to see and respond to pedestrians at night is degraded by modest visual impairments even when vision meets driver licensing requirements; glare further exacerbates these effects. Clothing that includes retroreflective tape in a biological motion configuration is relatively robust to visual impairment and glare.

Relevance: 10.00%

Abstract:

Aims and objectives.  The aim of this study was to gain an understanding of the experiences and perspectives of intensive care nurses caring for critically ill obstetric patients. Background.  Current literature suggests critically ill obstetric patients need specialised, technically appropriate care to meet their specific needs, with which many intensive care nurses are unfamiliar. Furthermore, there is little research and evidence to guide the care of this distinct patient group. Design.  This study used a descriptive qualitative design. Methods.  Two focus groups were used to collect data from 10 Australian intensive care unit nurses in May 2007. Open-ended questions were used to guide the discussion. Latent content analysis was used to analyse the data set. Each interview lasted no longer than 60 minutes and was recorded using audio tape. The full interviews were transcribed prior to in-depth analysis to identify major themes. Results.  The themes identified from the focus group interviews were competence with knowledge and skills for managing obstetric patients in the intensive care unit, confidence in caring for obstetric patients admitted to the intensive care unit, and acceptance of an expanded scope of practice perceived to include fundamental midwifery knowledge and skills. Conclusion.  The expressed lack of confidence and competence in meeting the obstetric and support needs of critically ill obstetric women indicates a clear need for greater assistance and education of intensive care nurses. This in turn may encourage critical care nurses to accept an expanded role of clinical practice in caring for critically ill obstetric patients. Relevance to clinical practice.  Recognition of the issues for nurses in successfully caring for obstetric patients admitted to an adult intensive care setting provides direction for designing education packages, ensuring specific carepaths and guidelines are in place, and ensuring that support from a multidisciplinary team, including midwifery staff, is available.

Relevance: 10.00%

Abstract:

Modelling the power system load is a challenge since the load level and composition vary with time. An accurate load model is important because there is a substantial component of load dynamics in the frequency range relevant to system stability. The composition of loads needs to be characterised because the time constants of composite loads affect the damping contributions of the loads to power system oscillations, and these effects vary with the time of day, depending on the mix of motor loads. This chapter has two main objectives: 1) to describe small-signal load modelling using on-line measurements; and 2) to present a new approach to developing models that reflect the load response to large disturbances. Small-signal load characterisation based on on-line measurements allows the composition of the load to be predicted with improved accuracy compared with post-mortem or classical load models. Rather than a generic dynamic model for small-signal modelling of the load, an explicit induction motor model is used so that the performance for larger disturbances can be more reliably inferred. The relation between power and frequency/voltage can be explicitly formulated and the contribution of induction motors extracted. One of the main features of this work is that the induction motor component can be associated with nominal powers or equivalent motors.
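To make the static part of such a characterisation concrete, the sketch below implements a standard exponential (voltage- and frequency-dependent) load model and splits the bus demand into a static portion and an assumed induction-motor share. The exponents, sensitivities and motor share are illustrative assumptions, not the values identified in the chapter, and the explicit induction motor dynamics used there are not reproduced here.

```python
def exponential_load(V, f, P0=1.0, Q0=0.3, V0=1.0, f0=1.0,
                     kpv=1.5, kpf=1.0, kqv=2.0, kqf=-1.0):
    """Static exponential load model (all quantities in per unit):
       P = P0 * (V/V0)**kpv * (1 + kpf * (f - f0))
       Q = Q0 * (V/V0)**kqv * (1 + kqf * (f - f0))
    Exponents and frequency sensitivities are assumed values."""
    P = P0 * (V / V0) ** kpv * (1 + kpf * (f - f0))
    Q = Q0 * (V / V0) ** kqv * (1 + kqf * (f - f0))
    return P, Q

# Composite bus load: an assumed 40% of nominal active power is attributed to an
# aggregate induction motor (its dynamic equations are omitted from this sketch);
# the remaining 60% is represented by the static exponential model above.
motor_share = 0.4
P_static, Q_static = exponential_load(V=0.97, f=0.998, P0=1.0 - motor_share)
print("static part of active power at V=0.97 pu, f=0.998 pu:", P_static)
```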

Relevance: 10.00%

Abstract:

At first glance, the gallery seems to be empty. Upon entering, however, 11:59 reveals itself to be masking tape placed lackadaisically in seemingly geometric forms. Enter the gallery at 11:59 and you would witness the light and shadows correlate with the gestural marks that have been made. Exploring ideas of time, space, gesture, value and mark-making, this work can be interpreted as overflowing with confidence and/or impotence. It whispers about site and encounters, over-complication and simplicity, and boldness and hesitancy.