410 results for Haar transform
Abstract:
This paper evaluates the performance of different text recognition techniques for a mobile robot in an indoor (university campus) environment. We compared four different methods: our own approach using existing text detection methods (the Maximally Stable Extremal Regions detector and the Stroke Width Transform) combined with a convolutional neural network, two modes of the open source program Tesseract, and the experimental mobile app Google Goggles. The results show that a convolutional neural network combined with the Stroke Width Transform gives the best performance in correctly matched text on images with single characters, whereas Google Goggles gives the best performance on images with multiple words. The dataset used for this work is released as well.
Abstract:
In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby invoked) is a non-negotiable requirement for long term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting. This is the focus of this paper. Posed with the task of classifying all parts in a scene from a full colour image, we propose that lighting invariant transforms can reduce the variability of the scene, resulting in a more reliable classification. We leverage the ideas of “data transfer” for classification, beginning with full colour images for obtaining candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant invariant transform before computing the superpixel-level features, classification is significantly more robust to scene illumination effects. The approach is evaluated using three datasets. The first is our own dataset and the second is the KITTI dataset, using manually generated ground truth for quantitative analysis. We qualitatively evaluate the method on a third custom dataset over a 750 m trajectory.
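The abstract does not give the transform itself; one widely used single-channel illuminant-invariant mapping (after Maddern et al.) can be sketched as follows, where the blend parameter alpha is camera-dependent and the value used here is an assumption:

```python
import numpy as np

def illuminant_invariant(rgb, alpha=0.48):
    """One-channel illumination-invariant image:
    ii = 0.5 + log(G) - alpha*log(B) - (1 - alpha)*log(R).
    `alpha` depends on the camera's spectral response; 0.48 is an assumed value."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 1e-6, None)  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)

# A grey pixel under two illumination intensities maps to the same value:
px = np.array([[[0.4, 0.4, 0.4]]])
print(np.allclose(illuminant_invariant(px), illuminant_invariant(0.5 * px)))  # → True
```

Because the log-channel terms cancel for a uniform scaling of R, G and B, the output is unchanged by illumination intensity, which is the property that makes superpixel features computed on it more robust to lighting.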
Abstract:
During the 18th and 19th centuries, prostitution came to be understood as a potentially disruptive element in the management of society. New forms of social control developed that sought to transform the souls of prostitutes to better control their bodies. Institutions for managing prostitutes, such as Magdalen Homes and lock hospitals, were introduced or increased in number throughout the British Empire, North America, and Western Europe. Often these institutions had as their stated objective the physical purification and moral reform of prostitutes, appearing to make a dramatic break with earlier methods of social control that had relied on practices of physical punishment and spatial segregation. Emergent institutions for the social control of prostitutes used a regimen of religious training, hard labor, and medical expertise. The objective of the Magdalen Home was not to punish sin but to absolve it, while the function of the lock hospital was not simply to confine the ill, but to confine the ill to "cure" them. The role of these institutions was not only symbolic, mirroring in some way the operation of earlier forms of social control, but was also practical and transformative. The mass institutionalization of prostitutes that occurred during the 18th and 19th centuries produced and emphasized sexual, class, and gender boundaries, grounded in the broad distinction between "pure" and "impure" women. Because of its association with sin, prostitution before the 18th century had been constructed as a religious problem relating to salvation and penitence. Throughout Western Europe during the Middle Ages, prostitutes, like the medieval leper and the Jew, were subject to restrictions designed to distinguish and isolate them from other members of their communities. The repression of prostitution during the Middle Ages was neither systematic nor highly organized, although it reinforced the image of the prostitute as sinful "other".
Abstract:
Digital innovation is transforming the media and entertainment industries. The professionalization of YouTube’s platform is paradigmatic of that change. The 100 original channel initiative launched in late 2011 was designed to transform YouTube’s brand through production of a high volume of quality premium video content that would more deeply engage its audience base and in the process attract big advertisers. An unanticipated by-product has been the rapid growth of a wave of aspiring next-generation digital media companies from within the YouTube ecosystem. Fuelled by early venture capital, some have ambitious goals to become global media corporations in the online video space. A number of larger MCNs (Multi-Channel Networks) - BigFrame, Machinima, Fullscreen, AwesomenessTV, Maker Studios, Revision3 and DanceOn - have attracted interest from media incumbents like Warner Brothers, DreamWorks, Discovery, Bertelsmann, Comcast and AMC, and two larger MCNs, Alloy and Break Media, have merged. This indicates that a shakeout is underway in these new online supply chains, after rapid initial growth. The higher profile MCNs seek to rapidly develop scale economies in online distribution and facilitate audience growth for their member channels, helping channels optimize monetization, develop sustainable business models and facilitate producer-collaboration within a growing online community of like-minded content creators. Some MCNs already attract far larger online audiences than any national TV network. The speed with which these developments have occurred is reminiscent of the 1910s, when Hollywood studios first emerged and within only a few years replaced the incumbent film studios as the dominant force within the film industry.
Abstract:
This research provides an assessment tool that assists the selection process of sustainability in detached suburban housing. It investigates the implications of using different design and construction methods including architecturally designed houses, developer housing and prefabricated houses. The study simulates one example of each of the three types of houses, chosen to fulfil a real client brief on a real site on the Sunshine Coast, Queensland, Australia. Criteria for sustainability assessment are formulated based on literature reviews, exemplar designs and similar research projects, against which the houses can be adequately evaluated. These criteria cover aspects including energy use, materials and thermal performance. The data is collected using computer models and sustainability assessment software to compare and draw conclusions on the success of each house. Our study indicates that architecturally designed housing with prefabricated building techniques is a better alternative to generic developer-style housing. Our research provides an insight into the implications of three key elements of sustainability: energy use, materials and thermal performance. Designers, builders, developers and home-buyers are given an insight into some options currently available on the housing market and how the choices made during early design stages can provide a more positive environmental impact.
Abstract:
In studies of germ cell transplantation, measuring tubule diameters and counting cells from different populations using antibodies as markers are very important. Manual measurement of tubule sizes and cell counts is tedious and mind-numbing work. In this paper, we propose a new boundary-weighting-based tubule detection method. We first enhance the linear features of the input image and detect the approximate centers of tubules. Next, a boundary weighting transform is applied to the polar-transformed image of each tubule region, and a circular shortest path is used for the boundary detection. Then, ellipse fitting is carried out for tubule selection and measurement. The algorithm has been tested on a dataset consisting of 20 images, each having about 20 tubules. Experiments show that the detection results of our algorithm are very close to the results obtained manually. © 2013 IEEE.
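The polar-transform step in a pipeline like this (enhance, locate centers, transform, trace boundary) can be sketched minimally in NumPy; this is an illustration, not the authors' implementation, and the center estimate and maximum radius are assumed inputs:

```python
import numpy as np

def polar_transform(image, center, max_radius, n_theta=360, n_r=100):
    """Resample an image around `center` into polar coordinates.
    Rows index radius, columns index angle; nearest-neighbour sampling."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0, max_radius, n_r)
    ys = cy + radii[:, None] * np.sin(thetas[None, :])
    xs = cx + radii[:, None] * np.cos(thetas[None, :])
    ys = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

# A centred bright ring becomes a horizontal bright band in polar space,
# so a circular boundary can then be traced as a path across columns.
img = np.zeros((101, 101))
yy, xx = np.mgrid[:101, :101]
img[np.abs(np.hypot(yy - 50, xx - 50) - 30) < 1.5] = 1.0
polar = polar_transform(img, (50, 50), 45)
row = np.argmax(polar.mean(axis=1))
print(abs(row * 45 / 99 - 30) < 2)  # band sits at the ring's radius → True
```

In polar space a closed, roughly circular tubule wall becomes an approximately horizontal curve, which is what makes the circular-shortest-path formulation of boundary detection tractable.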
Abstract:
This book is about understanding the nature and application of reflection in higher education. It provides a theoretical model to guide the implementation of reflective learning and reflective practice across multiple disciplines and international contexts in higher education. The book presents research into the ways in which reflection is both considered and implemented in different ways across different professional disciplines, while maintaining a common purpose to transform and improve learning and/or practice. Readers will find this book innovative and new in three key ways: first, in its holistic theorisation of reflection within the pedagogic field of higher education; second, in conceptualising reflection in different modes to achieve specific purposes in different disciplines; and finally, in providing conceptual guidance for embedding reflective learning and reflective practice in a systematic way across whole programmes, faculties or institutions in higher education. The book considers important contextual factors that influence the teaching of forms and methods of reflection. It provides a functional analysis of multiple modes of reflection, including written, oral, visual, auditory, and embodied forms. Empirical chapters analyse the application of these modes across disciplines and at different stages of a programme. The theoretical model accounts for students’ stage of development in the disciplinary field, along with progressive and cyclical levels of higher order thinking, and learning and professional practice that are expected within different disciplines and professional fields. The book provides: • A conceptual model for the application of reflection across disciplines in a variety of contexts. • Empirical examples of different modes and pedagogic patterns for reflection. • Guidance and support for embedding systemic pedagogical and curriculum change.
Abstract:
Bat researchers currently use a variety of techniques that transform echolocation calls into audible frequencies and allow the spectral content of a signal to be viewed and analyzed. All techniques have limitations, and an understanding of how each works, and of its effect on the signal being analyzed, is vital for correct interpretation. The 3 most commonly used techniques for transforming frequencies of a call are heterodyne, frequency division, and time expansion. Three techniques for viewing spectral content of a signal are zero-crossing, Fourier analysis, and instantaneous frequency analysis. It is important for bat researchers to be familiar with the advantages and disadvantages of each technique.
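The zero-crossing idea mentioned above can be illustrated with a simplified sketch (not any specific detector's implementation): the frequency of a tonal call segment follows from the spacing of its rising zero crossings, with no spectral transform at all.

```python
import numpy as np

def zero_crossing_freq(signal, sample_rate):
    """Mean frequency from rising zero crossings: count the full cycles
    between the first and last crossing and divide by the elapsed time."""
    rising = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    elapsed = (rising[-1] - rising[0]) / sample_rate
    return (len(rising) - 1) / elapsed

# A synthetic 40 kHz tone (a typical echolocation frequency) at 500 kHz sampling:
fs, f0 = 500_000, 40_000
t = np.arange(0, 0.005, 1 / fs)
freq = zero_crossing_freq(np.sin(2 * np.pi * f0 * t), fs)
print(f"{freq:.0f} Hz")
```

This also shows the technique's main limitation: it tracks only the dominant frequency and discards harmonic content, unlike Fourier analysis.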
Abstract:
Particulates with specific sizes and characteristics can induce potent immune responses by promoting antigen uptake of appropriate immuno-stimulatory cell types. Magnetite (Fe3O4) nanoparticles have shown many potential bioapplications due to their biocompatibility and special characteristics. Here, superparamagnetic Fe3O4 nanoparticles (SPIONs) with a high magnetization value (70 emu g⁻¹) were stabilized with trisodium citrate and successfully conjugated with a model antigen (ovalbumin, OVA) via an N,N'-carbonyldiimidazole (CDI) mediated reaction, achieving a maximum conjugation capacity of approximately 13 μg μm⁻². It was shown that different mechanisms governed the interactions between the OVA molecules and magnetite nanoparticles at different pH conditions. We evaluated the as-synthesized SPIONs against commercially available magnetite nanoparticles. The cytotoxicity of these nanoparticles was investigated using mammalian cells. The reported CDI-mediated reaction can be considered as a potential approach for conjugating biomolecules onto magnetite or other biodegradable nanoparticles for vaccine delivery.
Abstract:
Isolated and purified organosolv eucalyptus wood lignin was depolymerized at different temperatures with and without mesostructured silica catalysts (i.e., SBA-15, MCM-41, ZrO2-SBA-15 and ZrO2-MCM-41). It was found that at 300 °C for 1 h with a solid/liquid ratio of 0.0175/1 (w/v), the SBA-15 catalyst with high acidity gave the highest syringol yield of 23.0% in a methanol/water mixture (50/50, wt/wt). Doping with ZrO2 over these catalysts did not increase syringol yield, but increased the total amount of solid residue. Gas chromatography-mass spectrometry (GC-MS) also identified other main phenolic compounds such as 1-(4-hydroxy-3,5-dimethoxyphenyl)-ethanone, 1,2-benzenediol, and 4-hydroxy-3,5-dimethoxy-benzaldehyde. Analysis of the lignin residues with Fourier transform infrared (FT-IR) spectroscopy indicated decreases in the absorption band intensities of the OH group, the C-O stretching of the syringyl ring and the aromatic C-H deformation of the syringol unit, and an increase in band intensities associated with the guaiacyl ring, confirming the type of products formed.
Abstract:
In order to protect our planet and ourselves from the adverse effects of excessive CO2 emissions and to prevent an imminent non-renewable fossil fuel shortage and energy crisis, there is a need to transform our current ‘fossil fuel dependent’ energy systems to new, clean, renewable energy sources. The world has recognized hydrogen as an energy carrier that complies with all environmental quality and energy security demands. This research aimed at producing hydrogen through anaerobic fermentation, using food waste as the substrate. Four food waste substrates were used: rice, fish, vegetable and their mixture. Bio-hydrogen production was performed in lab scale reactors, using 250 mL serum bottles. The food waste was first mixed with the anaerobic sewage sludge and incubated at 37°C for 31 days (acclimatization). The anaerobic sewage sludge was then heat treated at 80°C for 15 min. The experiment was conducted at an initial pH of 5.5 and temperatures of 27, 35 and 55°C. The maximum cumulative hydrogen production of the rice, fish, vegetable and mixed food waste substrates was highest at 37°C (rice = 26.97±0.76 mL, fish = 89.70±1.25 mL, vegetable = 42.00±1.76 mL, mixed = 108.90±1.42 mL). A comparative study of acclimatized (the different food waste substrates were mixed with anaerobic sewage sludge and incubated at 37°C for 31 days) and non-acclimatized food waste substrate (food waste that was not incubated with anaerobic sewage sludge) showed that acclimatized food waste substrate enhanced bio-hydrogen production by 90–100%.
Abstract:
It is not uncommon to hear a person of interest described by their height, build, and clothing (i.e. type and colour). These semantic descriptions are commonly used by people to describe others, as they are quick to relate and easy to understand. However, such queries are not easily utilised within intelligent surveillance systems, as they are difficult to transform into a representation that can be searched for automatically in large camera networks. In this paper we propose a novel approach that transforms such a semantic query into an avatar that is searchable within a video stream, and demonstrate state-of-the-art performance for locating a subject in video based on a description.
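As a toy illustration of the general idea (not the authors' method; the hue-based encoding, bin count, and example values are all assumptions), a colour-described query can be turned into a searchable histogram descriptor and compared against candidate detections:

```python
import numpy as np

def hue_histogram(pixels, bins=8):
    """Normalised hue histogram of an (N, 3) array of HSV pixels, hue in [0, 1)."""
    hist, _ = np.histogram(pixels[:, 0], bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical hue distributions."""
    return float(np.minimum(h1, h2).sum())

# A hypothetical "avatar" for a "red shirt" query: hue concentrated near 0.
avatar = np.zeros((100, 3))
avatar[:, 0] = 0.02
candidate_red = np.zeros((100, 3))
candidate_red[:, 0] = 0.03   # reddish detection
candidate_blue = np.zeros((100, 3))
candidate_blue[:, 0] = 0.60  # bluish detection
sim_red = histogram_intersection(hue_histogram(avatar), hue_histogram(candidate_red))
sim_blue = histogram_intersection(hue_histogram(avatar), hue_histogram(candidate_blue))
print(sim_red > sim_blue)  # → True
```

The point of an avatar-style representation is exactly this: once the verbal description is rendered into an image-space descriptor, ranking candidates reduces to ordinary feature comparison.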
Abstract:
The surfaces of natural beidellite were modified with the cationic surfactant octadecyl trimethylammonium bromide at different concentrations. The organo-beidellite adsorbent materials were then used for the removal of atrazine, with the goal of investigating the mechanism for the adsorption of organic triazine herbicide from contaminated water. Changes on the surfaces and structure of beidellite were characterised by X-ray diffraction (XRD), thermogravimetric analysis (TGA), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM) and BET surface analysis. Adsorption kinetics studies were also carried out, which show that the adsorption capacity of the organoclays increases with increasing surfactant concentration up until 1.0 CEC surfactant loading, after which the adsorption capacity greatly decreases. TG analysis reveals that although the 2.0 CEC sample has the greatest percentage of surfactant by mass, most of it is present on external sites. The 0.5 CEC sample has the highest proportion of surfactant exchanged into the internal active sites, and the 1.0 CEC sample accounts for the highest adsorption capacity. The goodness of fit of the pseudo-second-order kinetic model confirms that chemical adsorption, rather than physical adsorption, controls the adsorption rate of atrazine.
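The pseudo-second-order model referred to above is commonly fitted via its linearised form t/q_t = 1/(k·q_e²) + t/q_e, regressing t/q_t against t. A minimal sketch with assumed synthetic values (q_e = 25 mg/g, k = 0.004 g/(mg·min) are illustrative, not from the study):

```python
import numpy as np

def fit_pseudo_second_order(t, q):
    """Fit t/q = 1/(k*qe^2) + t/qe by linear regression of t/q against t.
    Returns (qe, k): slope = 1/qe, intercept = 1/(k*qe^2) => k = slope^2/intercept."""
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope
    k = slope ** 2 / intercept
    return qe, k

# Synthetic uptake data from the model's closed form q(t) = qe^2*k*t / (1 + qe*k*t):
qe_true, k_true = 25.0, 0.004
t = np.linspace(5, 300, 30)          # minutes
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)
qe_fit, k_fit = fit_pseudo_second_order(t, q)
print(round(qe_fit, 2), round(k_fit, 4))  # → 25.0 0.004
```

In practice the "goodness of fit" claim in the abstract corresponds to the R² of this t/q-versus-t regression: near-linear data supports chemisorption-controlled kinetics.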