404 results for regularization
Abstract:
This booklet was prepared from the activities of the university extension project entitled "Housing and Environment: building dialogue on the urbanization of the settlement Ilha". The study was conducted in a community located at the southern end of the city of Almirante Tamandaré, between the Barigui and Tanguá rivers. The project was carried out by a group of professors and students of the Federal University of Technology - Paraná (UTFPR), Campus Curitiba. The main objective was to investigate forms of intervention in the housing and urbanization of the settlement Ilha, aiming at the land regularization of its properties. Throughout the project, however, the group found that regularization of this settlement was not possible, given the risk of flooding on the site. The booklet therefore provides information about the area and the rivers in its surroundings and about the positive aspects of living there, tells the story of some of the residents' struggles for better living conditions, and suggests funding sources that could facilitate a possible relocation of the existing families.
Abstract:
This book describes the activities developed during the university extension project entitled "Housing and Environment: building dialogue on the urbanization of the settlement Ilha", located in the Metropolitan Region of Curitiba. The project was coordinated by professors from the Federal University of Technology - Paraná (UTFPR). Its initial objective was to investigate forms of intervention in the poor housing and urbanization conditions of the settlement Ilha, aiming at its land regularization. The book tells the story of the extension project, showing how the initial goals changed over time. It also describes the frustrations and the learning process along the way, from the point of view of the UTFPR professors and students who actively participated in the project. The book further reports the feelings that the villagers attributed to their place of residence: the joys, stumbles, and lessons of using a participatory methodology grounded in Paulo Freire's ideas on popular education. Finally, it contrasts the technical and the popular views on the regularization of the area.
Abstract:
This study presents research on affordable housing and its effects on the spatial reconfiguration of Natal/RN, aiming to identify the specificities of urban land informality. It seeks to understand how the informal housing market provides housing for the population of informal settlements, through the buying-and-selling and rental markets of irregular or illegal residential properties. This understanding is built through the neighborhood of Mãe Luiza, a Special Area of Social Interest (SASI) located between neighborhoods of high purchasing power and inserted in the coastal tourist corridor of the city. The characterization of the informal housing market in Mãe Luiza, based on buyers, sellers, and renters, will help in understanding how these informal transactions operate in the SASI and will support the development of public policies and the implementation of housing programs and land regularization for the low-income population, adequate to the dynamics and reality of housing in informal areas.
Abstract:
With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with many kinds of contextual information. Contexts can be explicit, such as the time and location where a blog article is written or the author(s) of a biomedical publication, or implicit, such as the positive or negative sentiment an author had when writing a product review; there may also be complex contexts, such as the social network of the authors. Many applications require analysis of topic patterns over different contexts. For instance, analysis of search logs in the context of the user can reveal how to improve the quality of a search engine by optimizing search results for particular users; analysis of customer reviews in the context of positive and negative sentiments can help summarize public opinion about a product; and analysis of blogs or scientific publications in the context of a social network can facilitate the discovery of more meaningful topical communities. Since context information significantly affects the choices of topics and language made by authors, it is in general very important to incorporate it into the analysis and mining of text data. Modeling context in text and discovering contextual patterns of language units and topics, a general task which we refer to as Contextual Text Mining, has widespread applications. In this thesis, we provide a novel and systematic study of contextual text mining, a new paradigm of text mining that treats context information as a "first-class citizen." We formally define the problem of contextual text mining and its basic tasks, and propose a general framework for contextual text mining based on generative modeling of text.
This conceptual framework provides general guidance on text mining problems with context information and can be instantiated into many real tasks, including the general problem of contextual topic analysis. We formally present a functional framework for contextual topic analysis, with a general contextual topic model and its various versions, which can effectively solve text mining problems in many real-world applications. We further introduce general components of contextual topic analysis: adding priors to contextual topic models to incorporate prior knowledge, regularizing contextual topic models with the dependency structure of the context, and postprocessing contextual patterns to extract refined patterns. These refinements of the general contextual topic model naturally lead to a variety of probabilistic models which incorporate different types of context and various assumptions and constraints. These special versions of the contextual topic model prove effective in a variety of real applications involving topics and explicit, implicit, and complex contexts. We then introduce a postprocessing procedure for contextual patterns that generates meaningful labels for multinomial context models, providing a general way to interpret text mining results for real users. By applying contextual text mining in the "context" of other text information management tasks, including ad hoc text retrieval and web search, we further demonstrate the effectiveness of contextual text mining techniques quantitatively, on large-scale datasets. The framework of contextual text mining not only unifies many explorations of text analysis with context information, but also opens up many new possibilities for future research directions in text mining.
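The labeling step for multinomial context models is only summarized above; as a hedged sketch of one common way such labels can be scored (an expected pointwise-mutual-information criterion; all distributions and label names below are synthetic and illustrative, not the thesis's data):

```python
import numpy as np

# A label fits a multinomial topic model well if words likely under the
# topic are also likely in the label's own word distribution, relative to
# the background distribution (an expected-PMI score).
vocab = ["river", "flood", "housing", "tensor", "rank", "noise"]
topic = np.array([0.05, 0.05, 0.05, 0.40, 0.30, 0.15])   # p(w | topic)
background = np.full(len(vocab), 1.0 / len(vocab))        # p(w)

# p(w | label) for two hypothetical candidate labels
labels = {
    "urban planning":    np.array([0.30, 0.25, 0.35, 0.04, 0.03, 0.03]),
    "low-rank recovery": np.array([0.03, 0.03, 0.04, 0.35, 0.35, 0.20]),
}

def label_score(p_topic, p_label, p_bg):
    """Expected pointwise mutual information of label words under the topic."""
    return float(np.sum(p_topic * np.log(p_label / p_bg)))

best = max(labels, key=lambda lab: label_score(topic, labels[lab], background))
print(best)  # the label whose word distribution best matches the topic
```

Here the second label wins, since the topic concentrates its mass on the same words that the label's distribution emphasizes.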
The mismatch of an experience: an evaluation of the Programa Habitar Brasil in the África Community, Natal/RN
Abstract:
This research evaluates the Programa do Governo Federal para Urbanização de Favelas Habitar Brasil (1993) as carried out in the África slum, Redinha neighbourhood, in Natal/RN. Conducted from 2005 to 2006, the study seeks to identify the effects of the actions proposed by the Program in 1993-1994 on the current urban configuration of the África community. It analyzes the effectiveness of the process of achieving the stated objectives for housing, community facilities, infrastructure, and land regularization. The evaluation process takes as references the works of Adauto Cardoso (2004), Blaine Worthen (2004), Ronaldo Garcia (2001), and Rosângela Paz (2006). On housing policy, with a focus on urban law and the right to housing, the reflections of Raquel Rolnik, Nabil Bonduki, Ermínia Maricato, Saule Júnior, Betânia de Moraes Alfonsin, and Edésio Fernandes are the main references. To gauge the execution of the objectives proposed by Habitar Brasil in 1993, the study sought, in documentary data of the period and in interviews with technicians who participated in the program, consistent references on what was proposed, what was executed, and how the Habitar Brasil intervention in the África community unfolded. The analysis of the area in 2005-2006 was based on an urban survey of the current situation along the four lines of action of the Program: housing, infrastructure, community facilities, and land regularization, yielding a current urban evaluation of África in light of the intervention carried out in 1993 and 1994. The study situates the Programa Habitar Brasil in the context of the Brazilian housing policy under which it was launched, explaining the main principles of the Program.
At the local level, it emphasizes the political and administrative factors that made the city of Natal/RN a pioneer in obtaining Habitar Brasil (1993) funds. Regarding Habitar Brasil in África, the work discusses and presents the diagnosis and the intervention proposal developed by the Program in 1993, highlighting the local problems of the time. It then offers a current reading of the area, identifying in 2006 the elements representative of Habitar Brasil (1993-1994) for the África community. It identifies significant advances in the constitution of the institutional apparatus of the planning system for Social Interest Housing in the city of Natal, and points out weaknesses in the implementation of the urban infrastructure actions and, above all, in the achievement of the land regularization objectives.
Abstract:
This research falls within the theme of social interest housing and its relation to sanitation infrastructure (sewage, water, drainage, and garbage). Taking as its study universe the "Forty" stream (igarapé do Quarenta), located in the city of Manaus, capital of Amazonas, it addresses the tensions between housing needs and the specificities of the natural environment, whose characteristics impose limits on the implantation of adequate housing. The objective is to analyze the possibilities and limits of urban regularization of the palafitas (stilt houses) along the streams of Manaus, in view of the factors of habitability and environmental protection expressed in the sanitation system: sanitary sewage, water supply, urban drainage, and garbage collection. The work initially addresses conceptual aspects of social housing in the country and its relation to habitability factors, also focusing on housing and the processes of urban informality in the city of Manaus. It then deals with the process of constitution of the palafitas in the space of the city and their relation to housing policies, analyzing their implantation with respect to sanitation infrastructure conditions (sewage, water, drainage, and garbage). In conclusion, it identifies the possibilities and limits of urban regularization of the palafitas implanted along the Forty stream, taking the sanitation infrastructure systems into consideration.
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
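The cross-validation idea behind the parameter selection methods mentioned above can be sketched in general terms; the following is a minimal, hedged illustration (plain ridge-regularized least squares on synthetic data, not the thesis's accelerated k-fold algorithm or its NMR setting):

```python
import numpy as np

# Choose a regularization parameter lam for
#     min_x ||A x - b||^2 + lam * ||x||^2
# by k-fold cross-validation over a logarithmic grid.
rng = np.random.default_rng(0)
n, p = 60, 20
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
b = A @ x_true + 0.5 * rng.standard_normal(n)   # noisy linear measurements

def ridge(A, b, lam):
    """Closed-form ridge solution via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def cv_error(A, b, lam, k=5):
    """Average held-out squared error over k folds."""
    idx = np.arange(len(b))
    folds = np.array_split(idx, k)
    err = 0.0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        x = ridge(A[train], b[train], lam)
        err += np.sum((A[fold] @ x - b[fold]) ** 2)
    return err / len(b)

lams = [10.0 ** e for e in range(-4, 3)]
best = min(lams, key=lambda lam: cv_error(A, b, lam))
print(best)  # the grid value with the smallest held-out error
```

Generalized cross-validation replaces the explicit fold loop with a closed-form estimate of the leave-one-out error, which is cheaper when the solver has an analytic form.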
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method to a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are known, these inverse problems concern the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of the Tikhonov regularization method, the so-called Hierarchical Reconstruction (HR) method. The HR method can be traced back to the Hierarchical Decomposition method in image processing. It successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all hierarchical terms, the hierarchical sum from the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
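As a hedged sketch of the multi-scale mechanism described above, assuming a simple quadratic stabilizing function and synthetic noiseless data (the thesis treats more general stabilizers and ill-posed operators):

```python
import numpy as np

# Each hierarchical term solves a Tikhonov problem on the previous
# residual with a dyadically refined regularization parameter; the
# hierarchical sum approximates the unknown. A quadratic stabilizer
# ||x||^2 stands in for the general stabilizing functions.
rng = np.random.default_rng(1)
m, n = 40, 40
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                  # noiseless observations

def tikhonov(A, b, lam):
    """Tikhonov solution with quadratic stabilizer: (A'A + lam I) x = A'b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, levels=10):
    x_sum = np.zeros(A.shape[1])
    residual = b.copy()
    lam = lam0
    for _ in range(levels):                     # ladder of finitely many scales
        term = tikhonov(A, residual, lam)       # extract info from the residual
        x_sum += term
        residual = b - A @ x_sum
        lam /= 2.0                              # finer hierarchical scale
    return x_sum

x0 = tikhonov(A, b, 1.0)                        # single Tikhonov solve
x_hr = hierarchical_reconstruction(A, b)
print(np.linalg.norm(x_hr - x_true) < np.linalg.norm(x0 - x_true))  # True
```

With the quadratic stabilizer this reduces to iterated Tikhonov with decreasing parameters, so each spectral component of the error shrinks by an extra factor per level, which is why the hierarchical sum beats the single-parameter solve here.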
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, the dissertation develops an approach to collecting object image datasets from web pages, using an analysis of the text around each image together with image appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. Results are reported on two datasets. The first is Berg's collection of 10 animal categories, on which we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. The dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy; the method obtains the text feature of an unannotated image from the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, which may not cover the new appearance. The text feature, by contrast, may not change, because the auxiliary dataset likely contains a similar picture: while the tags associated with images are noisy, they are more stable than appearance when viewing conditions change. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small.
As more and more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method is useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer a bottleneck for large-scale datasets. The dissertation applies this approach to train classifiers for Flickr groups, each with many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. The dissertation proposes an approach based on comparative object similarity. The key insight is that, given a set of similar object categories and a set of dissimilar categories, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. The dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
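The tag-borrowing construction behind the text-based image feature can be sketched as follows; all data here is synthetic and illustrative (the dissertation builds the auxiliary collection from Internet images with real tags):

```python
import numpy as np

# An unannotated image gets a text feature by averaging the (noisy) tag
# indicator vectors of its k visually nearest neighbors in an auxiliary
# collection of tagged images.
rng = np.random.default_rng(2)
n_aux, d_visual, n_tags = 200, 16, 8

aux_visual = rng.standard_normal((n_aux, d_visual))           # visual features
aux_tags = (rng.random((n_aux, n_tags)) < 0.2).astype(float)  # noisy tag vectors

def text_feature(query_visual, k=5):
    """Average tag vector of the k visually nearest auxiliary images."""
    dists = np.linalg.norm(aux_visual - query_visual, axis=1)
    nn = np.argsort(dists)[:k]
    return aux_tags[nn].mean(axis=0)

feat = text_feature(rng.standard_normal(d_visual))
print(feat.shape)  # (8,) -- one smoothed weight per tag
```

Averaging over k neighbors is what makes the feature robust to individual noisy tags: a tag must recur across several visually similar images to receive a large weight.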
Abstract:
Freshwater mussel (Mollusca, Bivalvia, Unionoida) populations are among the most endangered faunistic groups. Mussels play an important role in the functioning of aquatic ecosystems, because they are responsible for the filtration and purification of water. They have a complex life cycle, with a parasitic larval stage and a usually limited set of host fish species. The real status of these populations is still poorly understood worldwide. The objective of the present work was to study the bioecology of duck mussel (Anodonta anatina L.) populations of the Tua Basin (NE Portugal). The ecological status of the Rabaçal, Tuela, and Tua Rivers was characterized at 15 sampling sites, equally distributed among the three rivers. Samplings were made in the winter of 2016; several physico-chemical water parameters were measured, and two habitat quality indexes were calculated (the GQC and QBR indexes). Benthic macroinvertebrate communities were sampled following the protocols established by the Water Framework Directive. Host fish populations for the duck mussel were determined under laboratory conditions, testing several native and exotic fish species. The results showed that several water quality variables (e.g. dissolved oxygen, conductivity, pH, total dissolved solids, and nutrients) can be used for the classification of river typology. Other responsive metrics were also determined to identify environmental degradation. For instance, hydromorphological conditions (the GQC and QBR indexes) and biota-related metrics (e.g. composition, distribution, abundance, and diversity of invertebrate communities) contributed to the evaluation of ecological integrity. The upper zones of the Rabaçal and Tuela rivers were classified as having excellent and good ecological integrity, while lower quality was found in the downstream zones. The host fish tests showed that only native species are effective hosts, which is essential for the conservation of this mussel species.
Threats such as pollution, sedimentation, and river regularization (three large dams are under construction or in the filling phase) are the main causes of future habitat loss for native mussel and fish populations. Rehabilitation and mitigation measures are essential in these lotic ecosystems in order to preserve the priority habitats and the heavily threatened native species.
Abstract:
We consider a natural representation of solutions for Tikhonov functional equations. This is done by applying the theory of reproducing kernels to the approximate solutions of general bounded linear operator equations (defined from reproducing kernel Hilbert spaces into general Hilbert spaces), using the Hilbert-Schmidt property and tensor products of Hilbert spaces. As a concrete case, we consider generalized fractional functions formed as quotients of Bergman functions by Szegö functions, arising from the multiplication operators on the Szegö spaces.
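In hedged generic form (the operator, spaces, and parameter below are placeholders for the abstract's setting), the Tikhonov functional for a bounded linear operator L from a reproducing kernel Hilbert space H_K into a Hilbert space H, with data g, reads:

```latex
% Tikhonov functional: regularized least squares over the RKHS H_K,
% with regularization parameter \lambda > 0 and data g \in H.
\min_{f \in H_K} \; \lambda \, \| f \|_{H_K}^2 + \| L f - g \|_{H}^2 ,
\qquad \lambda > 0 .
```

The reproducing kernel theory then gives a concrete representation of the minimizer in terms of the kernel of H_K, which is the representation the abstract refers to.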
Abstract:
We study the chaos decomposition of self-intersection local times and their regularization, with a particular view towards Varadhan's renormalization for the planar Edwards model.
Abstract:
Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for recognition, generating high-quality, category-independent object proposals as input to recognition systems has recently drawn attention. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance, to be used as inputs for many computer vision tasks. We also present an efficient dictionary-based model for image classification, and further extend the work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale, greedy object proposal generation approach is presented. Motivated by the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify representative and diverse exemplar clusters within each scale. Object proposals are then obtained by selecting a subset of the multi-scale segment pool that maximizes a submodular objective function consisting of a weighted coverage term, a single-scale diversity term, and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and drawn from multiple layers of the hierarchical image segmentation. Experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals on simultaneous segmentation and detection, outperforming the state of the art.
To classify the objects in images, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery on contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation of images with respect to the constructed dictionary is then obtained. With semantic structure information and strong identification capability, this representation works well for classification even with a simple linear multi-class classifier.
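Greedy maximization of a submodular proposal-selection objective of the kind described above can be sketched as follows; the coverage masks, cluster assignments, and weights are synthetic stand-ins for the weighted coverage and single-scale diversity terms (the multi-scale reward term is omitted here):

```python
import numpy as np

# Greedily pick a budget of segments maximizing a monotone submodular
# objective: pixels newly covered (coverage) plus exemplar clusters newly
# represented (diversity). Data and weights are illustrative only.
rng = np.random.default_rng(3)
n_segments, n_pixels = 30, 100
masks = rng.random((n_segments, n_pixels)) < 0.15   # segment -> pixel coverage
clusters = rng.integers(0, 6, size=n_segments)      # exemplar cluster per segment

def objective(selected):
    covered = np.zeros(n_pixels, dtype=bool)
    for s in selected:
        covered |= masks[s]
    coverage = covered.sum()                          # coverage term
    diversity = len({clusters[s] for s in selected})  # diversity term
    return coverage + 10.0 * diversity

def greedy_select(budget=5):
    selected = []
    for _ in range(budget):
        # pick the segment with the largest marginal gain
        gains = [(objective(selected + [s]) - objective(selected), s)
                 for s in range(n_segments) if s not in selected]
        _, best_s = max(gains)
        selected.append(best_s)
    return selected

proposals = greedy_select()
print(len(proposals))  # 5 selected object proposals
```

For monotone submodular objectives, this greedy rule comes with the classical (1 - 1/e) approximation guarantee, which is what makes the subset-selection formulation practical.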
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, the latter of which finds many applications in high-resolution optical imaging, X-ray crystallography, and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
The nested sampling idea also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization-based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
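The difference-set property that nested sampling exploits can be illustrated directly; the two-level array below follows the standard nested-array construction (a hedged illustration of the underlying principle, not the GNS or PNFS variants developed in the thesis):

```python
import numpy as np

# A two-level nested array with N1 dense and N2 sparse sensors generates,
# through pairwise position differences, every lag from 0 up to
# N2*(N1+1) - 1, so a Toeplitz autocorrelation can be estimated as if the
# signal had been sampled densely at the Nyquist rate.
N1, N2 = 4, 5
level1 = np.arange(1, N1 + 1)                # dense level: 1, ..., N1
level2 = (N1 + 1) * np.arange(1, N2 + 1)     # sparse level: (N1+1), ..., N2*(N1+1)
positions = np.concatenate([level1, level2])

diffs = {abs(a - b) for a in positions for b in positions}
max_lag = N2 * (N1 + 1) - 1
print(all(lag in diffs for lag in range(max_lag + 1)))  # True
```

Here 9 physical sensors yield all 25 contiguous lags, which is the compression that the quadratic (covariance) measurement model makes possible.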