899 results for "information bottleneck method"
Abstract:
This report presents information on the development of a system for the commercial harvesting of Rastrineobola argentea (Mukene) in Lake Victoria. The objective of this work is to develop a system for commercial harvesting of Mukene with:
• a targeted output above 1,000 kg per working night per boat;
• a target area in the offshore waters of Lake Victoria;
• drying under hygienic conditions to produce high-quality poultry and animal feeds;
• a continuous supply with predictable prices.
Abstract:
The paper presents a critical analysis of the extant literature pertaining to the networking behaviours of young jobseekers in both offline and online environments. A framework derived from information behaviour theory is proposed as a basis for conducting further research in this area. Method. Relevant material for the review was sourced from key research domains such as library and information science, job search research, and organisational research. Analysis. Three key research themes emerged from the analysis of the literature: (1) social networks, and the use of informal channels of information during job search, (2) the role of networking behaviours in job search, and (3) the adoption of social media tools. Tom Wilson’s general model of information behaviour was also identified as a suitable framework to conduct further research. Results. Social networks have a crucial informational utility during the job search process. However, the processes whereby young jobseekers engage in networking behaviours, both offline and online, remain largely unexplored. Conclusion. Identification and analysis of the key research themes reveal opportunities to acquire further knowledge regarding the networking behaviours of young jobseekers. Wilson’s model can be used as a framework to provide a holistic understanding of the networking process, from an information behaviour perspective.
Abstract:
Stakeholder engagement is important for successful management of natural resources, both to make effective decisions and to obtain support. However, in the context of coastal management, questions remain unanswered on how to effectively link decisions made at the catchment level with objectives for marine biodiversity and fisheries productivity. Moreover, there is much uncertainty about how best to elicit community input in a rigorous manner that supports management decisions. A decision support process is described that uses the adaptive management loop as its basis to elicit management objectives, priorities and management options, drawing on two case studies in the Great Barrier Reef, Australia; the approach is then generalised for international interest. A hierarchical engagement model of local stakeholders, regional managers and senior managers is used. The result is a semi-quantitative, generic elicitation framework that ultimately provides a prioritised list of management options in the context of clearly articulated management objectives, and that has widespread application for coastal communities worldwide. The case studies show that demand for local input and regional management is high, but local influences affect the relative success of both the engagement process and uptake by managers. Differences between case study outcomes highlight the importance of discussing objectives before suggesting management actions, and of avoiding or minimising conflicts in the early stages of the process. Strong contributors to success are a) the provision of local information to the community group, and b) the early inclusion of senior managers and influencers in the group, to ensure the intellectual and time investment is not compromised at the final stages of the process. The project has uncovered a conundrum in the significant gap between the way managers perceive their management actions and outcomes and the community's perception of the effectiveness (and wisdom) of those same actions.
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems, in which an unknown quantity of interest is to be recovered from its observations under known observation operators. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization, called the Hierarchical Reconstruction (HR) method, whose origins trace back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The hierarchical sum of all these terms provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to Tikhonov regularization on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
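For concreteness, a minimal numerical sketch of the two ideas contrasted in this abstract is given below, assuming a standard least-squares Tikhonov step and a dyadic ladder of regularization parameters; the dyadic schedule, the identity stabilizer, and all variable names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """One classical Tikhonov step: argmin_u ||A u - b||^2 + lam * ||L u||^2."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, n_scales=6, L=None):
    """Hierarchical sum of Tikhonov solves on successive residuals,
    with regularization weakening dyadically (lam0, lam0/2, lam0/4, ...)
    so that each new term resolves a finer scale of the unknown."""
    u_sum = np.zeros(A.shape[1])
    for k in range(n_scales):
        residual = b - A @ u_sum                        # hierarchical residual at scale k
        u_sum += tikhonov(A, residual, lam0 / 2**k, L)  # add the finer-scale term
    return u_sum
```

With the same observation matrix and data, `tikhonov(A, b, lam)` plays the role of the single-scale baseline and `hierarchical_reconstruction(A, b)` the multi-scale hierarchical sum the abstract compares it against.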
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. To train visual models and evaluate different recognition algorithms, the dissertation develops an approach to collecting object image datasets from web pages, using an analysis of the text around each image together with the image's appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. Results are reported on two datasets: on Berg's collection of 10 animal categories the approach significantly outperforms previous work, and experiments on an additional set of 5 categories confirm its effectiveness.

Images are represented as features for visual recognition. The dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built from an auxiliary dataset of tagged images downloaded from the Internet. Because image tags are noisy, the text feature of an unannotated image is obtained from the tags of its k nearest neighbours in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, whereas the text feature changes little, because the auxiliary dataset likely contains a similar picture: although the tags associated with individual images are noisy, they are more stable than appearance. The feature is evaluated on the PASCAL VOC 2006 and 2007 datasets, where it consistently improves the performance of visual object classifiers and is particularly effective when the training dataset is small.

As more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. The dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). This training method is useful for many vision problems, as it produces a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory is no longer the bottleneck for large-scale datasets. The approach is applied to train classifiers for Flickr groups with large numbers of training examples per group; the resulting Flickr group prediction scores can be used to measure the similarity between two images. Experiments on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features.

Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, most objects have too few training examples, owing to the intrinsic long-tailed distribution of objects in the real world. The dissertation therefore proposes an approach based on comparative object similarity. The key insight is that, given a set of categories that are similar to the target and a set that are dissimilar, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. A regularized kernel machine algorithm is developed to exploit this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
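As a rough illustration of the tag-transfer feature described above, here is a minimal sketch: pool the tags of an image's k visually nearest neighbours in an auxiliary tagged collection into a normalised tag histogram. The descriptor choice, the value of k, the uniform pooling, and all names are assumptions for illustration, not the dissertation's exact pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_tag_feature(query_desc, aux_desc, aux_tags, vocabulary, k=50):
    """Text feature for unannotated images: a normalised histogram of the
    tags attached to the k visually nearest neighbours in an auxiliary,
    noisily tagged image collection."""
    tag_index = {t: j for j, t in enumerate(vocabulary)}
    nn = NearestNeighbors(n_neighbors=k).fit(aux_desc)   # visual neighbours
    _, neighbours = nn.kneighbors(query_desc)
    feats = np.zeros((len(query_desc), len(vocabulary)))
    for i, nbrs in enumerate(neighbours):
        for n_id in nbrs:
            for tag in aux_tags[n_id]:                    # noisy tags of one neighbour
                j = tag_index.get(tag)
                if j is not None:
                    feats[i, j] += 1.0
    return feats / k   # used alongside (or concatenated with) visual features
```

Because the histogram aggregates over k neighbours, individual tag noise is averaged out, which is the stability property the abstract relies on.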
Abstract:
International audience
Abstract:
International audience
Abstract:
In 2013, a series of posters began appearing in Washington, DC's Metro system. Each declared "The internet: Your future depends on it" next to a photo of a middle-aged black Washingtonian and an advertisement for the municipal government's digital training resources. This hopeful discourse is familiar, but where exactly does it come from? And how are our public institutions reorganized to approach the problem of poverty as a problem of technology? The Clinton administration's 'digital divide' policy program popularized this hopeful discourse about personal computing powering social mobility, positioned internet startups as the 'right' side of the divide, and charged institutions of social reproduction such as schools and libraries with closing the gap and upgrading themselves in the image of internet startups. After introducing the development regime that builds this idea into the urban landscape through what I call the 'political economy of hope', and tracing the origin of the digital divide frame, this dissertation draws on three years of comparative ethnographic fieldwork in startups, schools, and libraries to explore how this hope is reproduced in daily life, becoming the common sense that drives our understanding of and interaction with economic inequality and reproduces that inequality in turn. I show that the hope in personal computing to power social mobility becomes a method of securing legitimacy and resources for both white émigré technologists and institutions of social reproduction struggling to understand and manage the persistent poverty of the information economy. I track the movement of this common sense between institutions, showing how the political economy of hope transforms them as part of a larger development project. This dissertation models a new, relational direction for digital divide research that grounds the politics of economic inequality in an empirical focus on technologies of poverty management. It demands a conceptual shift that sees the digital divide not as a bug within the information economy, but as a feature of it.
Abstract:
Objective: To analyze the pharmaceutical interventions carried out with the support of an automated treatment-validation system versus the traditional method without computer support. Method: The automated program, ALTOMEDICAMENTOS® version 0, contains 925,052 data entries covering approximately 20,000 medicines, and analyzes doses, administration routes, treatment duration, dosing in renal and hepatic failure, interactions, similar drugs, and enteral medicines. Over eight days, in four different hospitals (a high-complexity hospital with over 1,000 beds, a 400-bed intermediate hospital, a geriatric hospital, and a monographic hospital), the same patients and treatments were analyzed with both systems. Results: 3,490 patients and 42,155 different treatments were analyzed. 238 interventions were performed with the traditional system (0.56% of treatments) vs. 580 (1.38%) with the automated one. Very significant pharmaceutical interventions accounted for 0.14% vs. 0.46%, significant for 0.38% vs. 0.90%, and non-significant for 0.05% vs. 0.01%, respectively. If both systems are used simultaneously, interventions are performed in 1.85% of treatments, vs. 0.56% with the traditional system alone. Using only the traditional model detects 30.5% of the possible interventions, whereas using only the automated one, without manual review, detects 84%. Conclusions: The automated system increases pharmaceutical interventions by a factor of 2.43 to 3.64. According to the results of this study, the traditional validation system should be revised to rely on automated systems. The automated program works correctly in different hospitals.
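The headline rates follow directly from the reported counts; a quick check is shown below (counts taken from the abstract; the upper bound of the reported 2.43-3.64 range comes from comparisons not reproduced here).

```python
treatments = 42_155            # treatments analyzed with both systems
manual, automated = 238, 580   # interventions detected by each approach

print(f"manual rate:    {manual / treatments:.2%}")     # -> 0.56%
print(f"automated rate: {automated / treatments:.2%}")  # -> 1.38%
print(f"increase:       {automated / manual:.2f}x")     # -> 2.44x, close to the reported lower bound of 2.43
```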
Abstract:
Part 17: Risk Analysis
Abstract:
Data without labels are commonly analyzed with unsupervised machine learning techniques. Such techniques provide representations that are more meaningful for understanding the problem at hand than the raw data alone. Although abundant expert knowledge exists in many areas where unlabelled data are examined, such knowledge is rarely incorporated into automatic analysis. Incorporating expert knowledge is frequently a matter of combining multiple data sources from disparate hypothetical spaces; when those spaces belong to different data types, the task becomes even more challenging. In this paper we present a novel immune-inspired method that enables the fusion of such disparate types of data for a specific set of problems. We show that our method provides a better visual understanding of one hypothetical space with the help of data from another hypothetical space. We believe that our model has implications for the field of exploratory data analysis and knowledge discovery.
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2010
Abstract:
Over recent decades, remote sensing has emerged as an effective tool for improving agricultural productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards at different scales using remotely sensed images. Since natural processes are scale dependent and most of them are hierarchically structured, determining the optimal study scales is essential for understanding these processes and their interactions. The multi-scale/multi-resolution concept inherent to OBIA methodologies allows the scale problem to be addressed, but this requires multi-scale, hierarchical segmentation algorithms. The question that remains open is how to determine the segmentation scales that allow different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The optimal multi-scale segmentation for different regions of the image is selected by evaluating the intra-variability and inter-heterogeneity of the regions obtained at each scale with respect to the parent regions defined by the coarsest scale. To achieve this, an objective function that combines weighted variance and the global Moran index is used. Two kinds of experiment have been carried out, generating the number of regions at each scale through linear and dyadic approaches. The methodology allows, on the one hand, the detection of objects at different scales and, on the other, their representation in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it, and the phenomena occurring.
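To make the scale-selection criterion concrete, below is a minimal sketch of scoring one SLIC segmentation scale with a combination of area-weighted within-segment variance (intra-variability) and the global Moran's I of segment means (inter-heterogeneity). The 4-connected adjacency, the equal weighting of the two terms, and the dyadic ladder of region counts are illustrative assumptions, not the paper's exact objective function.

```python
import numpy as np
from skimage.segmentation import slic

def weighted_variance(values, labels):
    """Area-weighted within-segment variance (intra-variability)."""
    wv = 0.0
    for lab in np.unique(labels):
        mask = labels == lab
        wv += mask.sum() / labels.size * values[mask].var()
    return wv

def global_moran(values, labels):
    """Global Moran's I of segment mean values; segments are neighbours
    when they share a pixel border (4-connectivity)."""
    labs = np.unique(labels)
    idx = {l: i for i, l in enumerate(labs)}
    means = np.array([values[labels == l].mean() for l in labs])
    W = np.zeros((labs.size, labs.size))
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for la, lb in zip(a[a != b], b[a != b]):          # label pairs across borders
            W[idx[la], idx[lb]] = W[idx[lb], idx[la]] = 1.0
    z = means - means.mean()
    return labs.size / W.sum() * (W * np.outer(z, z)).sum() / (z ** 2).sum()

def scale_score(image, n_segments, w=0.5):
    """Score one segmentation scale: homogeneous segments (low weighted
    variance) and dissimilar neighbours (low Moran's I) are preferred."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    band = image.mean(axis=-1)                 # single band for the statistics
    return w * weighted_variance(band, labels) + (1 - w) * global_moran(band, labels)

# dyadic ladder of scales, e.g. 64, 128, 256, 512 target regions:
# scores = {n: scale_score(img, n) for n in (64, 128, 256, 512)}
```

In practice the two terms operate on different numeric ranges, so they would normally be normalised (e.g. min-max over the tested scales) before being combined.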