911 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance:

30.00%

Publisher:

Abstract:

Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomized and displayed on the 2.3 MP and 5 MP monitors; they were reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rates on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur were 46, 59, 66, 77 and 78% respectively; on the 5 MP monitor they were 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates caused by blurring could be reduced if higher-resolution monitors were used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
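To make the comparison concrete, here is a minimal sketch of the study's two steps: inducing a known magnitude of simulated blur, then testing for a difference in detection rates between monitors. The Gaussian blur model, the 0.085 mm detector pixel pitch and the per-observer rates are illustrative assumptions, not the study's software or data, and the abstract does not state whether the t-test was paired.

```python
# Sketch: simulated blur of a given magnitude plus a paired t-test on
# per-observer detection rates. All numbers are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_rel

PIXEL_PITCH_MM = 0.085  # assumed FFDM detector pixel pitch

def induce_blur(image: np.ndarray, blur_mm: float) -> np.ndarray:
    """Gaussian stand-in for the study's blur simulation, sized in pixels."""
    sigma_px = blur_mm / PIXEL_PITCH_MM
    return gaussian_filter(image.astype(float), sigma=sigma_px)

image = np.random.rand(256, 256)            # placeholder for an FFDM image
blurred = induce_blur(image, blur_mm=0.4)   # 0.4 mm of simulated blur

# Hypothetical detection rates, one value per observer, same observers on
# both monitors, hence a paired test.
rates_2mp = np.array([0.62, 0.58, 0.71, 0.66, 0.64])
rates_5mp = np.array([0.74, 0.69, 0.83, 0.80, 0.77])
t_stat, p_value = ttest_rel(rates_5mp, rates_2mp)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```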

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
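As an illustration of the explicit-learning loop described above, here is a compact sketch in which the Bayesian network is reduced to independent per-variable marginals (a UMDA-style simplification); the paper's algorithm learns conditional probabilities over a network structure, and the sizes and fitness function below are placeholders.

```python
# Sketch of the estimate-sample-replace loop: fit a distribution to
# promising rule strings, sample new strings, repeat until stopping.
import random

N_NURSES, N_RULES = 30, 4          # one scheduling-rule choice per nurse
POP, ELITE = 140, 70               # population and promising-set sizes

def fitness(rule_string):          # placeholder: real cost evaluates the roster
    return -sum(rule_string)

def sample(probs):
    return [random.choices(range(N_RULES), weights=p)[0] for p in probs]

probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_NURSES)]
population = [sample(probs) for _ in range(POP)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    promising = population[:ELITE]
    # Re-estimate each variable's distribution from the promising solutions.
    for i in range(N_NURSES):
        counts = [1] * N_RULES     # Laplace smoothing
        for s in promising:
            counts[s[i]] += 1
        total = sum(counts)
        probs[i] = [c / total for c in counts]
    # Generate new rule strings; survivors replace the rest by fitness.
    population = promising + [sample(probs) for _ in range(POP - ELITE)]

print(max(population, key=fitness))
```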

Relevance:

30.00%

Publisher:

Abstract:

Pop-up archival tags (PATs) provide summary and high-resolution time series data at predefined temporal intervals. The limited battery capacity of PATs often restricts the transmission success, and thus the temporal coverage, of both data products. While summary data are usually less affected by this problem because of their smaller size, they may be less informative. Here we investigate the accuracy and feasibility of using temperature-at-depth summary data provided by PATs to describe encountered oceanographic conditions. Interpolated temperature-at-depth summary data were found to provide accurate estimates of three major indicators of thermal water-column structure: thermocline depth, stratification and ocean heat content. Such indicators are useful for interpreting the tagged animal's horizontal and vertical behaviour. The accuracy of these indicators was found to be particularly sensitive to the number of data points available in the first 100 m, which in turn depends on the vertical behaviour of the tagged animal. Based on our results, we recommend the use of temperature-at-depth summary data rather than temperature time series data for PAT studies; doing so during tag programming will help to maximize the amount of transmitted time series data for other key data types such as light levels and depth.
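A minimal sketch of deriving the three indicators from an interpolated temperature-at-depth profile follows. The abstract does not specify the exact definitions used, so those shown (maximum-gradient thermocline depth, 0-100 m temperature contrast, vertically integrated heat content) are common choices, and the profile is synthetic.

```python
# Sketch: three thermal water-column indicators from a temperature profile.
import numpy as np

RHO = 1025.0   # seawater density, kg/m^3
CP = 3985.0    # specific heat of seawater, J/(kg K)

def thermal_indicators(depth_m, temp_c):
    """depth_m, temp_c: interpolated profile on an increasing depth grid."""
    grad = np.gradient(temp_c, depth_m)
    thermocline_depth = depth_m[np.argmin(grad)]   # steepest temperature drop
    stratification = temp_c[0] - np.interp(100.0, depth_m, temp_c)  # 0-100 m
    # Heat content per unit area; reference temperature conventions vary.
    ohc = RHO * CP * np.trapz(temp_c + 273.15, depth_m)
    return thermocline_depth, stratification, ohc

depth = np.linspace(0, 300, 61)
temp = 25.0 - 12.0 / (1.0 + np.exp(-(depth - 80.0) / 15.0))  # synthetic
print(thermal_indicators(depth, temp))
```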

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.

Relevance:

30.00%

Publisher:

Abstract:

The company studied is a Finnish manufacturer and international seller of paints and lacquers. In 2010 the company adopted new production and supply chain objectives and plans, and this study is part of that overall development effort. The study examines OEE, a tool for measuring and improving the efficiency of production and maintenance, and SMED, a tool for reducing product changeover times. The theoretical part of the thesis is based mainly on academic publications, but also on interviews, books, web pages and one annual report. In the empirical part, the problems and successes of the OEE implementation were studied with a repeatable user survey. The potential and implementation of OEE were also studied by examining production and availability data collected from a production line. SMED was studied with the help of a computer program based on it; it was examined at a theoretical level and has not yet been implemented in practice. According to the results, OEE and SMED suit the case company well and hold considerable potential. OEE reveals not only the amount of availability losses but also their structure. With OEE results, the company can direct its limited production and maintenance improvement resources to the right places. The production line examined in this work produced nothing during 56% of all planned production time in April 2016. Of the line's stoppage time, 44% was caused by changeover, start-up or shutdown work. The results indicate that availability losses are a serious problem for the company's production efficiency, and that reducing changeover work is an important development target. Changeover time could be reduced by roughly 15% with simple and inexpensive changes to work order and tools identified with SMED; the improvement would be even greater with more comprehensive changes. The greatest potential of SMED may lie not in shortening changeover times but in standardizing them.
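For reference, a minimal sketch of the standard OEE decomposition (Availability x Performance x Quality) is shown below; the figures are illustrative rather than the company's data, though an availability of about 44% mirrors a line that was idle 56% of planned time.

```python
# Sketch: the conventional three-factor OEE calculation.
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    availability = run_time / planned_time          # share of planned time run
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count              # first-pass yield
    return availability * performance * quality, availability, performance, quality

# Illustrative shift: 480 min planned, 211 min actually running.
overall, a, p, q = oee(planned_time=480, run_time=211,
                       ideal_cycle_time=0.5, total_count=380, good_count=361)
print(f"OEE = {overall:.1%} (A={a:.1%}, P={p:.1%}, Q={q:.1%})")
```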

Relevance:

30.00%

Publisher:

Abstract:

With the rise of smartphones, lifelogging devices (e.g. Google Glass) and the popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted: Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. Because of this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research, the semantic gap still largely exists, and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year? This thesis proposes the exploitation of an image’s context, which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine text, content-based (e.g. number of faces present) and contextual (e.g. day of the week taken) signals for tag recommendation purposes, achieving up to a 75% improvement to precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to the (slower moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster moving, yet entirely textual, datasets. In part II, we also highlight the merits of diversifying tag recommendation lists, before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, and (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier. Finally, in chapter 9 we employ blur detection and perceptual-hash clustering to remove noisy images from lifelogs, before combining visual and geo-temporal signals to capture a user’s “key moments” within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
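As a toy illustration of the historical co-occurrence idea that underpins most PTR methods (and whose ambiguity this thesis addresses with context), the sketch below scores candidate tags by how often they co-occurred with the user's existing tags; the corpus and scoring are illustrative placeholders, not the thesis's models.

```python
# Sketch: co-occurrence-based photo tag recommendation on a toy corpus.
from collections import Counter
from itertools import combinations

historical_photos = [
    {"ny", "timessquare", "night"},
    {"ny", "timessquare", "crowd"},
    {"ny", "newyear", "fireworks"},
]

cooc = Counter()
for tags in historical_photos:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(existing_tags, k=3):
    """Rank unseen tags by total co-occurrence with the existing tags."""
    scores = Counter()
    for t in existing_tags:
        for (a, b), n in cooc.items():
            if a == t and b not in existing_tags:
                scores[b] += n
    return [tag for tag, _ in scores.most_common(k)]

print(recommend({"ny"}))  # timessquare ranks first in this toy corpus
```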

Relevance:

30.00%

Publisher:

Abstract:

Effective supplier evaluation and purchasing processes are of vital importance to business organizations, making the supplier selection problem a fundamental key issue to their success. We consider a complex supplier selection problem with multiple products, where minimum package quantities, minimum order values related to delivery costs, and discounted pricing schemes are taken into account. Our main contribution is to present a mixed integer linear programming (MILP) model for this supplier selection problem. The model is used to solve several examples, including three real case studies from an electronic equipment assembly company.
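A heavily simplified sketch of such a model is given below, covering minimum order quantities per supplier-product pair and fixed delivery costs but omitting the discounted pricing tiers of the full model; all data, and the use of the PuLP library, are illustrative assumptions rather than the paper's formulation.

```python
# Sketch: a toy MILP for supplier selection with minimum package quantities
# and a fixed delivery cost incurred when a supplier is used at all.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

suppliers, products = ["S1", "S2"], ["P1", "P2"]
price = {("S1", "P1"): 4.0, ("S1", "P2"): 6.5, ("S2", "P1"): 4.3, ("S2", "P2"): 6.0}
min_pack = {("S1", "P1"): 10, ("S1", "P2"): 5, ("S2", "P1"): 8, ("S2", "P2"): 5}
delivery_cost = {"S1": 25.0, "S2": 18.0}
demand = {"P1": 40, "P2": 30}
BIG_M = 1000

model = LpProblem("supplier_selection", LpMinimize)
qty = {(s, p): LpVariable(f"q_{s}_{p}", lowBound=0, cat=LpInteger)
       for s in suppliers for p in products}
use = {s: LpVariable(f"use_{s}", cat=LpBinary) for s in suppliers}
buy = {(s, p): LpVariable(f"buy_{s}_{p}", cat=LpBinary)
       for s in suppliers for p in products}

# Objective: purchase cost plus fixed delivery cost per supplier used.
model += lpSum(price[s, p] * qty[s, p] for s in suppliers for p in products) \
         + lpSum(delivery_cost[s] * use[s] for s in suppliers)

for p in products:
    model += lpSum(qty[s, p] for s in suppliers) >= demand[p]
for s in suppliers:
    for p in products:
        model += qty[s, p] >= min_pack[s, p] * buy[s, p]  # minimum package qty
        model += qty[s, p] <= BIG_M * buy[s, p]           # nothing unless bought
        model += buy[s, p] <= use[s]                      # triggers delivery cost

model.solve()
for (s, p), v in qty.items():
    if v.value():
        print(s, p, v.value())
```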

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse’s assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses the problem of the degrees of separation between body and soul in Plato's Phaedo, in search of both its ontological presuppositions and its epistemological consequences. Although this dialogue is usually treated as the literary and philosophical milestone for all the psycho-physical dualisms in the history of our thought, I hold that two fundamental senses can be distinguished, two different ways of thinking about this separation. The first sense would indicate an intentional separation, that is, one fundamentally dependent on what the philosopher thinks about, or on that of which the philosopher seeks to take care: the philosopher, as such, would take care of the soul but not of the body. A second way of thinking about this separation between body and soul privileges the idea of an ontological separation, according to which the soul would be so independent of the body that it could survive the body's death. Despite the success both approaches have enjoyed throughout the history of Platonism up to the present day, this duality of senses nevertheless carries an irrevocable ambiguity and tension. The aim of this paper is to propose a different solution to that ambiguity. Our proposal takes as its starting point the ontological consideration of the degrees of plasticity of the soul, which Bostock (1986, p. 119, on Phd. 79c), in his commentary on the dialogue, calls the 'chameleonic traits of the soul', that is, as if the soul could take on corporeal features in order to know sensible reality. The separation between body and soul, rather than being an ontological presupposition, seems to require a permanent effort from the individual, in both an epistemological and an ethical sense.

Relevance:

30.00%

Publisher:

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1 minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
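The difference-set property that nested sampling exploits can be checked in a few lines: a classical (N1, N2) nested sampler with N1 + N2 physical samples produces every lag from 0 to N2(N1+1) - 1 in its difference set, which is what permits Nyquist-grid autocorrelation recovery. The sketch below verifies this for one configuration; it illustrates the classical nested array, not the GNS or PNFS variants developed in the thesis.

```python
# Sketch: verify that a nested sampler's difference set covers all lags.
import numpy as np

N1, N2 = 4, 5
dense = np.arange(1, N1 + 1)                 # spacing 1: positions 1..4
sparse = (N1 + 1) * np.arange(1, N2 + 1)     # spacing N1+1: 5, 10, 15, 20, 25
positions = np.union1d(dense, sparse)        # 9 physical sample positions

diffs = np.unique(np.abs(positions[:, None] - positions[None, :]))
max_lag = N2 * (N1 + 1) - 1                  # 24 for this configuration
print(positions)
print(np.array_equal(diffs, np.arange(max_lag + 1)))  # True: every lag present
```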

Relevance:

30.00%

Publisher:

Abstract:

Dissertation for the degree of Master in Communication Design, presented at the Universidade de Lisboa - Faculdade de Arquitetura.

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Universidade de Brasília, Centro de Desenvolvimento Sustentável, 2015.

Relevance:

30.00%

Publisher:

Abstract:

The principle of GNSS positioning is based, in short, on solving a mathematical problem that involves observing the distances from the user to a set of satellites with known coordinates. The resulting position can be computed in absolute or relative mode. Absolute positioning requires only one receiver to determine the position. Relative positioning, in turn, relies on reference stations and involves receivers in addition to the user's own. The methods most commonly used to determine the position of a mobile platform with centimetre-level accuracy are based on this latter type of positioning. However, they have the disadvantage of depending on reference stations, which have limited range, and they require simultaneous observations of the same satellites by both the station and the receiver. To address this, a new GNSS positioning methodology in absolute mode was developed, based on modelling or removing the errors associated with each component of the observation equations and on using precise ephemerides and satellite clock corrections. This positioning method is called Precise Point Positioning (PPP) and maintains a high accuracy, equivalent to that of relative positioning systems. In this work, after an in-depth study of the subject, an academic PPP application was developed using the C++ class library of the GPS Toolkit, which determines the receiver's position and velocity in kinematic mode and in real time. This application was tested using observation data from a static station (processed in kinematic mode) and from a moving station installed on the NRP Auriga. The results achieved decimetre-level accuracy for position and cm/s-level accuracy for velocity.
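The mathematical core mentioned at the start, solving for receiver position and clock bias from distance observations to satellites with known coordinates, can be sketched as an iterated least-squares adjustment of linearized pseudorange equations. The sketch below uses synthetic data and omits everything that makes PPP precise (error modelling, precise ephemerides, satellite clock corrections); it is not the GPS Toolkit-based application described above.

```python
# Sketch: single-epoch position and clock-bias fix from pseudoranges.
import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    """sat_pos: (n,3) satellite ECEF coords [m]; pseudoranges: (n,) [m]."""
    x = np.zeros(4)                          # state: [x, y, z, c*dt]
    for _ in range(iterations):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        predicted = rho + x[3]
        # Geometry matrix: unit line-of-sight partials plus clock column.
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None],
                       np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3]

# Synthetic test: four satellites, a known receiver position and clock bias.
truth = np.array([4_917_000.0, -791_000.0, 4_001_000.0])
clock = 150.0
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 14e6, 18e6],
                 [20e6, -5e6, 16e6], [1e6, 20e6, 17e6]])
pr = np.linalg.norm(sats - truth, axis=1) + clock
pos, bias = solve_position(sats, pr)
print(np.round(pos - truth, 6), round(bias, 6))  # near-zero position error
```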

Relevance:

30.00%

Publisher:

Abstract:

In the 1990s, the police service in Victoria, Australia, faced a crisis of community confidence due to a spate of civilian deaths from police shootings. In that decade, twice as many civilians died at the hands of the police in Victoria as in all other Australian states combined. Most of those killed were mentally ill and affected by drugs and alcohol, and were rarely a serious threat except to themselves. The problem was also almost entirely an urban phenomenon. Shootings in rural communities, where mentally ill people were more likely to be personally known to local police, were (and remain) almost unknown. The large number of fatalities was recognised as a serious threat to public confidence, and Victoria Police introduced a ground-breaking training programme, Operation Beacon. Operating procedures and weapons training were fundamentally changed to focus on de-escalation of conflict and on avoiding or minimising police use of force. In the short term, Operation Beacon was successful: shooting incidents were dramatically reduced. However, during the first decade of the new century, the number of civilians being killed again increased. This article examines Operation Beacon, both as a successful model for reducing civilian deaths at the hands of police and as a cautionary tale for police reform. We argue that the lessons of Operation Beacon have been gradually forgotten and that old habits and attitudes have resurfaced. Fatal shootings of mentally ill civilians can be prevented, but if success is to be other than temporary, the Beacon philosophy must be continually re-emphasised by police management.