239 results for Uncontrolled lighting
Abstract:
Actions Towards Sustainable Outcomes

Environmental Issues/Principal Impacts
The increasing urbanisation of cities brings with it several detrimental consequences, such as:
• Significant energy use for heating and cooling many more buildings, which has led to urban heat islands and increased greenhouse gas emissions.
• An increased amount of hard surfaces, which contributes not only to higher temperatures in cities but also to increased stormwater runoff.
• Degraded air quality and increased noise.
• The health and general well-being of people is frequently compromised by inadequate indoor air quality.
• Reduced urban biodiversity.

Basic Strategies
In many design situations, boundaries and constraints limit the application of Cutting EDGe actions. In these circumstances, designers should at least consider the following:
• Living walls are an emerging technology, and many Australian examples function more as internal feature walls. However, as understanding of the benefits and construction of living walls develops, this technology could become part of an exterior facade that enhances a building's thermal performance.
• Living walls should be designed to function with an irrigation system using non-potable water.

Cutting EDGe Strategies
• Living walls can be part of a design strategy that effectively improves the thermal performance of a building, thereby contributing to lower energy use and greenhouse gas emissions.
• Including living walls in the initial stages of design would provide greater flexibility in the design, especially of the facade, structural supports, mechanical ventilation and watering systems, thus lowering costs.
• Designing a building with an early understanding of living walls can greatly reduce maintenance costs.
• Including plant species and planting media that can remove air impurities could contribute to improved indoor air quality, workplace productivity and well-being.
Synergies and References
• Living walls are a key research topic at the Centre for Subtropical Design, Queensland University of Technology: http://www.subtropicaldesign.bee.qut.edu.au
• BEDP Environment Design Guide: DES 53: Roof and Facade Gardens
• BEDP Environment Design Guide: GEN 4: Positive Development – Designing for Net Positive Impacts (see green scaffolding and green space frame walls)
• Green Roofs Australia: www.greenroofs.wordpress.com
• Green Roofs for Healthy Cities USA: www.greenroofs.org
Abstract:
As Brisbane grows, it is rapidly becoming akin to any other city in the world, with typical stark grey concrete buildings rather than being characterised by its subtropical element of abundant green vegetation. Living Walls can play a vital role in restoring this distinct local element of a subtropical city. This paper will start by giving an overview of the traditional methods of greening subtropical cities with urban parks and street trees. Then, by examining a recent heat-imaging map of Brisbane, the effect of green cover on the built environment will be shown. With this information from a macro level, the paper will proceed to examine a typical urban block within the Central Business District (CBD) to demonstrate urban densification in relation to greenery in the city. It will then introduce Living Walls as a new technology with untapped potential for effectively greening a city where land is scarce and given over to high-density development. Living Walls incorporated into building design not only enhance the subtropical lifestyle that is being lost in modern cities but are also an effective means of addressing climate change. This paper serves as a preliminary investigation into the effects of incorporating Living Walls into cities. By growing Living Walls onto buildings, we can be part of an effective design solution for countering global warming, and at the same time Living Walls can return local character to subtropical cities.
Abstract:
Tested D. J. Kavanagh's (1983) depression model's explanation of response to cognitive-behavioral treatment among 19 20–60 yr old Ss who received treatment and 24 age-matched Ss who were assigned to a waiting list. Measures included the Beck Depression Inventory and self-efficacy (SE) and self-monitoring scales. Rises in SE and self-monitored performance of targeted skills were closely associated with the improved depression scores of treated Ss. Improvements in the depression of waiting list Ss occurred through random, uncontrolled events rather than via a systematic increase in specific skills targeted in treatment. SE regarding assertion also predicted depression scores over a 12-wk follow-up.
Synthesis of 4-arm star poly(L-Lactide) oligomers using an in situ-generated calcium-based initiator
Abstract:
Using an in situ-generated calcium-based initiating species derived from pentaerythritol, the bulk synthesis of well-defined 4-arm star poly(L-lactide) oligomers has been studied in detail. Substituting the traditional initiator, stannous octoate, with calcium hydride allowed the synthesis of oligomers that had both low PDIs and a comparable number of polymeric arms (3.7–3.9) to oligomers of similar molecular weight. Investigations into the degree of control observed during the course of the polymerization found that the insolubility of pentaerythritol in molten L-lactide resulted in an uncontrolled polymerization only when the feed mole ratio of L-lactide to pentaerythritol was 13. At feed ratios of 40 and greater, a pseudo-living polymerization was observed. As part of this study, in situ FT-Raman spectroscopy was demonstrated to be a suitable method to monitor the kinetics of the ring-opening polymerization (ROP) of lactide. The advantages of using this technique rather than FT-IR-ATR and 1H NMR for monitoring L-lactide consumption during polymerization are discussed.
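A pseudo-living polymerization is conventionally diagnosed by a linear first-order kinetic plot of monomer consumption. The sketch below uses synthetic conversion data (not the paper's measurements; the rate constant is arbitrary) to show the linearity check that would be applied to conversion values extracted from in situ FT-Raman band intensities:

```python
import numpy as np

# Synthetic monomer-conversion data (time in minutes, fractional conversion),
# standing in for values extracted from FT-Raman band intensities;
# k_app = 0.03 /min is an arbitrary illustrative rate.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
conversion = 1.0 - np.exp(-0.03 * t)

# For a pseudo-living ROP, the first-order plot ln([M]0/[M]t) vs t is linear,
# so a degree-1 fit recovers the apparent rate constant with zero intercept.
y = np.log(1.0 / (1.0 - conversion))
k_app, intercept = np.polyfit(t, y, 1)
```

A markedly curved first-order plot would instead point to an uncontrolled polymerization, as reported here for the 13:1 feed ratio.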
Abstract:
This paper describes the development and preliminary experimental evaluation of a vision-based docking system to allow an Autonomous Underwater Vehicle (AUV) to identify and attach itself to a set of uniquely identifiable targets. These targets, docking poles, are detected using Haar rectangular features and rotation of integral images. A non-holonomic controller allows the Starbug AUV to orient itself with respect to the target whilst maintaining visual contact during the manoeuvre. Experimental results show the proposed vision system is capable of robustly identifying a pair of docking poles simultaneously in a variety of orientations and lighting conditions. Experiments in an outdoor pool show that this vision system enables the AUV to dock autonomously from a distance of up to 4 m in relatively low visibility.
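The integral-image machinery underlying Haar rectangular features can be sketched as follows. This is a generic illustration, not the paper's detector (the rotated-integral-image variant used for the docking poles is omitted), and the function names are illustrative:

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row/column of padding, so that
    # ii[y, x] equals the sum of img[:y, :x].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    # O(1) rectangle sum from four table lookups: the core operation
    # behind Haar rectangular features.
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def haar_two_rect_vertical(ii, top, left, h, w):
    # A simple two-rectangle Haar feature: left half minus right half
    # of an h x w window, responding to vertical edges.
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 4, 4)  # whole-image sum in constant time
```

Because every rectangle sum costs four lookups regardless of size, thousands of such features can be evaluated per frame, which is what makes Haar-feature detection practical on an AUV's onboard hardware.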
Abstract:
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is therefore very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
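The per-frame likelihood evaluation that particle filters perform can be illustrated with a minimal 1D condensation-style filter. This is a generic sketch, not the Scalable Condensation Filter itself, and it assumes Gaussian motion and observation models with arbitrary noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observation, motion_std=1.0, obs_std=2.0):
    # One select-predict-measure cycle: resample particles by weight,
    # diffuse them with motion noise, then re-weight each particle by a
    # Gaussian likelihood of the observed feature at its position.
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx] + rng.normal(0.0, motion_std, size=n)
    w = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    return particles, w / w.sum()

# Track a hypothetical target moving at +1 unit per frame from noisy observations.
particles = rng.uniform(-10.0, 10.0, size=500)
weights = np.full(500, 1.0 / 500)
for frame in range(30):
    observation = float(frame) + rng.normal(0.0, 0.5)
    particles, weights = condensation_step(particles, weights, observation)
estimate = float(np.sum(particles * weights))  # weighted-mean position estimate
```

The limitations noted above are visible even in this sketch: the particle cloud tracks one dominant mode, and nothing in the filter itself records which object that mode belongs to, which is why the SCF delegates mode creation, deletion and identity to the surrounding tracking system.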
Abstract:
Social tags in Web 2.0 are becoming another important information source for profiling users' interests and preferences to make personalized recommendations. However, the uncontrolled vocabulary causes many problems for profiling users accurately, such as ambiguity, synonyms, misspellings and low information sharing. To solve these problems, this paper proposes to use popular tags to represent the actual topics of tags, the content of items, and also the topic interests of users. A novel user profiling approach is proposed that first identifies popular tags, then represents users' original tags using the popular tags, and finally generates users' topic interests based on the popular tags. A collaborative filtering based recommender system has been developed that builds the user profile using the proposed approach. The user profile generated using the proposed approach can represent user interests more accurately, and information sharing among users in the profile is also increased. Consequently, the neighborhood of a user, which plays a crucial role in collaborative filtering based recommenders, can be determined much more accurately. The experimental results, based on real-world data obtained from Amazon.com, show that the proposed approach outperforms other approaches.
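The idea of mapping an uncontrolled vocabulary onto popular tags can be sketched as follows. The tag mapping here is hand-made for illustration (the paper derives popular tags from data, not from a fixed table), and the similarity measure is plain cosine similarity:

```python
import math
from collections import Counter

# Hypothetical mapping from raw (uncontrolled) tags, including synonyms and
# misspellings, onto a small set of popular tags.
POPULAR = {
    "sci-fi": "science fiction", "scifi": "science fiction",
    "science fiction": "science fiction",
    "cookery": "cooking", "recipies": "cooking", "cooking": "cooking",
}

def profile(raw_tags):
    # Represent a user's interests as counts over popular tags only.
    return Counter(POPULAR[t] for t in raw_tags if t in POPULAR)

def cosine(p, q):
    # Cosine similarity between two popular-tag profiles, used to choose
    # a user's neighbourhood in a collaborative-filtering recommender.
    num = sum(p[t] * q[t] for t in set(p) & set(q))
    den = (math.sqrt(sum(v * v for v in p.values()))
           * math.sqrt(sum(v * v for v in q.values())))
    return num / den if den else 0.0

u1 = profile(["scifi", "sci-fi", "recipies"])
u2 = profile(["science fiction", "cookery"])
```

Note that the raw tag sets of `u1` and `u2` share no string at all, so a raw-tag profile would score their similarity as zero; the popular-tag profiles recover the overlap, which is the information-sharing gain the paper describes.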
Abstract:
Objectives: As the population ages, more people will be wearing presbyopic vision corrections when driving. However, little is known about the impact of these vision corrections on driving performance. This study aimed to determine the subjective driving difficulties experienced when wearing a range of common presbyopic contact lens and spectacle corrections.

Methods: A questionnaire was developed and piloted that included a series of items regarding difficulties experienced while driving under daytime and night-time conditions (rated on five-point and seven-point Likert scales). Participants included 255 presbyopic patients recruited through local optometry practices. Participants were categorized into five age-matched groups: those wearing no vision correction for driving (n = 50), bifocal spectacles (n = 54), progressive spectacles (n = 50), monovision contact lenses (n = 53), and multifocal contact lenses (n = 48).

Results: Overall, ratings of satisfaction during daytime driving were relatively high for all correction types. However, multifocal contact lens wearers were significantly less satisfied with aspects of their vision during night-time than daytime driving, particularly regarding disturbances from glare and haloes. Progressive spectacle lens wearers noticed more distortion of peripheral vision, whereas bifocal spectacle wearers reported more difficulties with tasks requiring changes of focus, and those who wore no optical correction for driving reported problems with intermediate and near tasks. Overall, satisfaction was significantly higher for progressive spectacles than bifocal spectacles for driving.

Conclusions: Subjective visual experiences of different presbyopic vision corrections when driving vary depending on the vision tasks and lighting level. Eye-care practitioners should be aware of the driving-related difficulties experienced with each vision correction type and the need to select corrective types that match the driving needs of their patients.
Abstract:
The care of low-vision patients is termed vision rehabilitation, and optometrists have an essential role to play in the provision of vision rehabilitation services. Ideally, if patients stay with one optometrist or practice, their low-vision care becomes part of a continuum of eye care, from the time when they had normal vision. If progressive vision loss occurs, the role of the optometrist changes from primary eye care only to one of monitoring vision loss and gradually introducing low-vision care, especially magnification and advice on lighting and contrast, in conjunction with other vision rehabilitation professionals.
Abstract:
Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions.

Design/methodology/approach: An illumination normalization approach is applied to an image, which can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, the need for special lighting and instrumental setups to detect solder joints can be reduced. These normalised images are insensitive to illumination variations and are used for the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB color space to the YIQ color space for the effective detection of solder joints from the background.

Findings: The segmentation results show that the proposed approach improves performance significantly for images under varying illumination conditions.

Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects.

Practical implications: The methodology presented in this paper can be an effective way to reduce cost and improve quality in the production of PCBs in the manufacturing industry.

Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
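The RGB-to-YIQ transform used in the segmentation stage is a standard linear colour-space conversion (the NTSC matrix); a minimal sketch, assuming linear RGB input in the range [0, 1]:

```python
import numpy as np

def rgb_to_yiq(rgb):
    # NTSC RGB -> YIQ transform. Y carries luminance while I and Q carry
    # chrominance, which helps separate bright, specular solder joints
    # from the coloured board background.
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ m.T
```

For an achromatic pixel the I and Q channels are (near) zero, so thresholding in YIQ isolates chromatic board regions from grey solder far more cleanly than thresholding raw RGB channels.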
Abstract:
The international focus on embracing daylighting for energy-efficient lighting purposes, and the corporate sector's indulgence in the perception of workplace and work practice "transparency", has spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. In order to ascertain evidence, or predict risk, of these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant's field of view. Conventional luminance meters are an expensive and time-consuming means of achieving these results. To create a luminance map of an occupant's visual field using such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant's visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that placing such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community in terms of research on critical issues in lighting such as daylight glare and visual quality and comfort.
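The calibration step the paper refers to can be sketched generically: once camera RGB has been linearised (e.g. via HDR reconstruction), relative luminance follows from standard luma coefficients (Rec. 709 here), and a single scale factor `k`, obtained by comparing one known scene patch against a luminance meter reading, converts it to photometric units. The function name and the value of `k` below are illustrative assumptions:

```python
import numpy as np

def pixel_luminance(rgb_linear, k):
    # Photometric luminance (cd/m^2) from linearised camera RGB.
    # Rec. 709 coefficients give relative luminance; k is the per-camera
    # calibration factor against a reference luminance-meter reading.
    y = (0.2126 * rgb_linear[..., 0]
         + 0.7152 * rgb_linear[..., 1]
         + 0.0722 * rgb_linear[..., 2])
    return k * y
```

Applied per pixel over a fish-eye HDR capture, this yields the scene-wide luminance map that would otherwise require thousands of individual spot-meter readings.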
Abstract:
Many surveillance applications (object tracking, abandoned object detection) rely on detecting changes in a scene. Foreground segmentation is an effective way to extract the foreground from the scene, but these techniques cannot discriminate between objects that have temporarily stopped and those that are moving. We propose a series of modifications to an existing foreground segmentation system (Butler, 2003) so that the foreground is further segmented into two or more layers. This yields an active layer of objects currently in motion and a passive layer of objects that have temporarily ceased motion, which can itself be decomposed into multiple static layers. We also propose a variable threshold to cope with variable illumination, a feedback mechanism that allows an external process (i.e. a surveillance system) to alter the motion detector's state, and a lighting compensation process and shadow detector to reduce errors caused by lighting inconsistencies. The technique is demonstrated using outdoor surveillance footage and is shown to effectively deal with real-world lighting conditions and overlapping objects.
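The idea of a variable threshold for foreground segmentation can be illustrated generically. This is not the paper's method: the noise estimate here (median absolute difference over the frame) and the multiplier `k` are illustrative assumptions standing in for whatever adaptive statistic the real system maintains:

```python
import numpy as np

def segment_foreground(frame, background, k=2.5):
    # Variable-threshold foreground mask: a pixel is foreground when it
    # deviates from the background model by more than k times the frame's
    # current noise level, so the threshold rises automatically under
    # flickering or drifting illumination instead of staying fixed.
    diff = np.abs(frame.astype(np.float64) - background)
    noise = np.median(diff) + 1e-6  # crude global noise estimate
    return diff > k * noise
```

Because the threshold scales with the observed frame-to-background deviation, a global illumination shift raises the noise estimate and suppresses the false detections that a fixed threshold would produce.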
Abstract:
Surveillance and tracking systems typically use a single colour modality for their input. These systems work well in controlled conditions but often fail with low lighting, shadowing, smoke, dust, unstable backgrounds or when the foreground object is of similar colouring to the background. With advances in technology and manufacturing techniques, sensors that allow us to see into the thermal infrared spectrum are becoming more affordable. By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using visible light only for surveillance and tracking. Thermal images are not affected by lighting or shadowing and are not overtly affected by smoke, dust or unstable backgrounds. We propose and evaluate three approaches for fusing visual and thermal images for person tracking. We also propose a modified condensation filter to track and aid in the fusion of the modalities. We compare the proposed fusion schemes with using the visual and thermal domains on their own, and demonstrate that significant improvements can be achieved by using multiple modalities.
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
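A middle (feature-level) fusion scheme can be sketched generically: per-modality feature histograms are normalised, weighted, and concatenated into a single descriptor that the tracker then matches. This is an illustrative sketch, not the thesis's specific scheme; the weighting and histogram-intersection matcher are assumptions:

```python
import numpy as np

def mid_fusion_features(vis_hist, thm_hist, w=0.5):
    # Middle fusion: weight and concatenate normalised per-modality
    # feature histograms into one descriptor, so matching uses evidence
    # from both spectra at once rather than fusing pixels or decisions.
    vis = w * vis_hist / vis_hist.sum()
    thm = (1.0 - w) * thm_hist / thm_hist.sum()
    return np.concatenate([vis, thm])

def similarity(a, b):
    # Histogram intersection between two fused descriptors (1.0 = identical).
    return np.minimum(a, b).sum()
```

When one modality degrades, say colour under low lighting, its half of the descriptor contributes little discriminative mass while the thermal half still drives the match, which is one intuition for why feature-level fusion outperforms either modality alone.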
Abstract:
Object tracking systems require accurate segmentation of objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation / optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
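The core idea of letting motion segmentation gate the flow computation can be sketched with simple block matching. This is a toy illustration of the principle, not the paper's algorithm; the block size, search radius and sum-of-absolute-differences cost are assumptions:

```python
import numpy as np

def masked_block_flow(prev, curr, mask, block=3, search=2):
    # Dense block-matching optical flow computed only at pixels flagged by
    # motion segmentation (mask), saving work and avoiding spurious flow
    # vectors in static regions of the scene.
    h, w = prev.shape
    flow = np.zeros((h, w, 2))
    r = block // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            if not mask[y, x]:
                continue  # skip static pixels entirely
            patch = prev[y - r:y + r + 1, x - r:x + r + 1]
            best, best_v = np.inf, (0.0, 0.0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        cand = curr[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cost = np.abs(patch - cand).sum()  # SAD matching cost
                        if cost < best:
                            best, best_v = cost, (dy, dx)
            flow[y, x] = best_v
    return flow
```

Restricting the search to segmented regions is what makes the hybrid approach cheap enough for per-pixel flow, and it also prevents noise-driven flow in the (typically much larger) static background.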