880 results for Iterative decoding
Abstract:
INTRODUCTION: Focal therapy may reduce the toxicity of current radical treatments while maintaining the oncological benefit. Irreversible electroporation (IRE) has been proposed to be tissue selective and so might have favourable characteristics compared with the prostate ablative technologies currently in use. The aim of this trial is to determine the adverse events, genito-urinary side effects and early histological outcomes of focal IRE in men with localised prostate cancer. METHODS: This is a single-centre prospective development (stage 2a) study following the IDEAL recommendations for evaluating new surgical procedures. Twenty men with MRI-visible disease localised in the anterior part of the prostate will be recruited. The sample size permits a precision estimate around key functional outcomes. Inclusion criteria are PSA ≤ 15 ng/ml, Gleason score ≤ 4 + 3, stage T2N0M0 and absence of clinically significant disease outside the treatment area. Treatment delivery will be adjusted in an adaptive, iterative manner to optimise the IRE protocol. After focal IRE, men will be followed for 12 months using validated patient-reported outcome measures (IPSS, IIEF-15, UCLA-EPIC, EQ-5D, FACT-P, MAX-PC). Early disease control will be evaluated by mpMRI and targeted transperineal biopsy of the treated area at 6 months. DISCUSSION: The NEAT trial will assess the early functional and disease-control outcomes of focal IRE using an adaptive design. Our protocol can provide guidance for designing adaptive trials to assess new surgical technologies in the challenging landscape of health technology assessment in prostate cancer treatment.
Abstract:
Situating events and traces in time is an essential problem in investigations. To date, among the typical questions addressed in forensic science, time has generally been unexplored. The reason for this can be traced to the complexity of the overall problem, addressed by several scientists in very limited projects usually stimulated by a specific case. Considering that such issues are recurrent and transcend the treatment of each trace separately, the formalisation of a framework to address dating issues in criminal investigation is undeniably needed. Through an iterative process of extracting recurrent aspects from the study of problems encountered by practitioners and reported in the literature, common mechanisms were identified that provide an understanding of the underlying factors encountered in forensic practice. Three complementary approaches are thus highlighted and described to formalise a preliminary framework that can be applied to the dating of traces, objects, persons and, indirectly, events.
Abstract:
Inference of Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
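To make the asynchronous-update idea concrete, here is a minimal mean-field-style sketch, not the paper's exact scheme: class posteriors are refreshed voxel by voxel during the sweep, so each update already sees its neighbours' latest values. The function name, the Potts-prior weighting `beta`, and the initialization are all illustrative assumptions.

```python
import numpy as np

def mrf_em_segment(img, K=3, beta=0.5, n_iter=20, seed=0):
    """Toy EM-like segmentation of a 2D image with a Potts-style prior,
    using asynchronous (in-place) voxel updates."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    mu = np.quantile(img, np.linspace(0.1, 0.9, K))   # crude class means
    sigma = np.full(K, img.std() / K + 1e-6)
    post = rng.dirichlet(np.ones(K), size=(H, W))     # soft labels q(z_s)

    for _ in range(n_iter):
        # E-like step: one asynchronous sweep over voxels.
        for i in range(H):
            for j in range(W):
                nb = np.zeros(K)                      # neighbour field
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    a, b = i + di, j + dj
                    if 0 <= a < H and 0 <= b < W:
                        nb += post[a, b]
                loglik = -0.5 * ((img[i, j] - mu) / sigma) ** 2 - np.log(sigma)
                logp = loglik + beta * nb             # data term + Potts term
                logp -= logp.max()                    # numerical stability
                post[i, j] = np.exp(logp) / np.exp(logp).sum()
        # M-step: re-estimate class means and variances from soft labels.
        w, x = post.reshape(-1, K), img.reshape(-1, 1)
        mu = (w * x).sum(0) / w.sum(0)
        sigma = np.sqrt((w * (x - mu) ** 2).sum(0) / w.sum(0)) + 1e-6
    return post.argmax(-1), mu, sigma
```

On a noisy piecewise-constant image, the spatial term typically removes isolated misclassified voxels relative to plain mixture EM.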
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute definition scripts for the JPEG standard's progressive operation mode using a quantization approach. A trial-and-error procedure is therefore no longer necessary to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard's progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimation error is usually smaller than 1 dB, and it decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
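As a rough illustration of the quantization-to-PSNR mapping, the sketch below uses the crude high-rate approximation that each uniformly quantized DCT coefficient contributes error variance q²/12 (the paper instead derives a finer relation from a Laplacian coefficient model). Since the DCT is orthonormal, the pixel-domain MSE is the mean of the per-coefficient variances, so a target PSNR can be hit by scaling a base matrix. Function names are hypothetical.

```python
import numpy as np

# Standard JPEG luminance quantization matrix (quality 50, Annex K).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def predicted_mse(Q):
    # Uniform quantization error variance q^2/12 per DCT coefficient;
    # the DCT is orthonormal, so pixel-domain MSE is the mean of these.
    return (Q.astype(float) ** 2 / 12.0).mean()

def matrix_for_target_psnr(target_psnr, base=Q50):
    # Invert PSNR = 10*log10(255^2 / MSE) and scale the base matrix.
    target_mse = 255.0 ** 2 / 10 ** (target_psnr / 10.0)
    s = np.sqrt(target_mse / predicted_mse(base))
    return np.clip(np.round(s * base), 1, 255).astype(int)

Q = matrix_for_target_psnr(38.0)
print(10 * np.log10(255 ** 2 / predicted_mse(Q)))   # ~38 dB under the model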
Abstract:
Many engineering problems that can be formulated as constrained optimization problems result in solutions given by a waterfilling structure; the classical example is the capacity-achieving solution for a frequency-selective channel. For simple waterfilling solutions with a single waterlevel and a single constraint (typically, a power constraint), some algorithms have been proposed in the literature to compute the solutions numerically. However, some other optimization problems result in significantly more complicated waterfilling solutions that include multiple waterlevels and multiple constraints. For such cases, it may still be possible to obtain practical algorithms to evaluate the solutions numerically, but only after a painstaking inspection of the specific waterfilling structure. In addition, a unified view of the different types of waterfilling solutions and the corresponding practical algorithms is missing. The purpose of this paper is twofold. On the one hand, it overviews the waterfilling results existing in the literature from a unified viewpoint. On the other hand, it bridges the gap between a wide family of waterfilling solutions and their efficient implementation in practice; to be more precise, it provides a practical algorithm to evaluate numerically a general waterfilling solution, which includes the currently existing waterfilling solutions and others that may possibly appear in future problems.
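For reference, the simplest member of this family, a single waterlevel under a single power constraint, can be evaluated by bisection on the waterlevel; the multi-level, multi-constraint solutions surveyed in the paper generalize this inner step. A minimal sketch (illustrative, not the paper's general algorithm):

```python
import numpy as np

def waterfill(g, P, tol=1e-9):
    """Classical single-constraint waterfilling:
    maximize sum(log(1 + g[i] * p[i])) s.t. sum(p) = P, p >= 0.
    Solution p[i] = max(0, mu - 1/g[i]); bisect on the waterlevel mu."""
    g = np.asarray(g, dtype=float)
    lo, hi = 0.0, P + (1.0 / g).max()          # mu is bracketed in [lo, hi]
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > P:
            hi = mu                            # pouring too much water
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

p = waterfill(g=[0.2, 1.0, 3.0], P=2.0)
print(p, p.sum())   # power goes preferentially to the strongest channels
```

Bisection is only one practical choice; sorting the inverse gains and solving for the waterlevel in closed form over the active set is the common alternative.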
Abstract:
This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments that can be calculated analytically from the known signal model. Unfortunately, in most cases, the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The paper formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally-modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods that are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to improve considerably the discrimination of near sources in medium-to-high SNR scenarios.
Abstract:
This paper addresses the estimation of the code-phase (pseudorange) and the carrier-phase of the direct signal received from a direct-sequence spread-spectrum satellite transmitter. The signal is received by an antenna array in a scenario with interference and multipath propagation. These two effects are generally the limiting error sources in most high-precision positioning applications. A new estimator of the code- and carrier-phases is derived by using a simplified signal model and the maximum likelihood (ML) principle. The simplified model consists essentially of gathering all signals, except for the direct one, in a component with unknown spatial correlation. The estimator exploits the knowledge of the direction-of-arrival of the direct signal and is much simpler than other estimators derived under more detailed signal models. Moreover, we present an iterative algorithm that is adequate for a practical implementation and explores an interesting link between the ML estimator and a hybrid beamformer. The mean squared error and bias of the new estimator are computed for a number of scenarios and compared with those of other methods. The presented estimator and the hybrid beamforming outperform the existing techniques of comparable complexity and attain, in many situations, the Cramér–Rao lower bound of the problem at hand.
Abstract:
In numerical linear algebra, students encounter early the iterative power method, which finds eigenvectors of a matrix from an arbitrary starting point through repeated normalization and multiplications by the matrix itself. In practice, more sophisticated methods are used nowadays, threatening to make the power method a historical and pedagogic footnote. However, in the context of communication over a time-division duplex (TDD) multiple-input multiple-output (MIMO) channel, the power method takes a special position. It can be viewed as an intrinsic part of the uplink and downlink communication switching, enabling estimation of the eigenmodes of the channel without extra overhead. Generalizing the method to vector subspaces, communication in the subspaces with the best receive and transmit signal-to-noise ratio (SNR) is made possible. In exploring this intrinsic subspace convergence (ISC), we show that several published and new schemes can be cast into a common framework where all members benefit from the ISC.
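The "intrinsic" use of the power method described here can be pictured as a ping-pong iteration: each downlink/uplink round effectively multiplies by H and then H^H, so the transmit vector converges to the channel's dominant right singular vector. A toy sketch under that reading (function names hypothetical; the actual schemes operate on noisy over-the-air signals rather than a known H):

```python
import numpy as np

def tdd_pingpong(H, n_iter=50, seed=0):
    """Ping-pong reading of the power method for a TDD MIMO channel H:
    one downlink/uplink round applies H and then H^H, so the normalized
    transmit vector converges to the dominant right singular vector of H,
    i.e. the best transmit eigenmode."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        x = H @ v                    # "downlink": the far end receives H v
        v = H.conj().T @ x           # "uplink": echoed back through H^H
        v /= np.linalg.norm(v)       # per-round power normalization
    return v

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
v = tdd_pingpong(H)
_, _, Vh = np.linalg.svd(H)
print(abs(v @ Vh[0].conj()))         # ~1: aligned with the top eigenmode
```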
Abstract:
In this letter, we obtain the maximum likelihood estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interferences. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
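For orientation, here is a minimal sketch of the conventional second step that the letter's direct approach replaces: a Gauss-Newton least-squares fix of position and clock bias from already-estimated pseudoranges. This is the baseline, not the proposed method; the toy geometry and all names are illustrative.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def ls_position_fix(sat_pos, rho, n_iter=10):
    """Gauss-Newton least squares for receiver position p and clock bias b
    from pseudoranges rho[i] = |sat_pos[i] - p| + b (classical second step
    of the two-step approach)."""
    x = np.zeros(4)                                   # [px, py, pz, b]
    for _ in range(n_iter):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        J = np.hstack([(x[:3] - sat_pos) / d[:, None],
                       np.ones((len(d), 1))])         # LOS vectors, d/db = 1
        x += np.linalg.lstsq(J, rho - (d + x[3]), rcond=None)[0]
    return x[:3], x[3]

# noise-free toy geometry: 5 satellites ~20000 km up, receiver near origin
rng = np.random.default_rng(0)
sats = rng.standard_normal((5, 3)) * 5e6 + np.array([0.0, 0.0, 2.0e7])
p_true, b_true = np.array([1e5, -2e5, 3e4]), 4e-3 * C
rho = np.linalg.norm(sats - p_true, axis=1) + b_true
p_hat, b_hat = ls_position_fix(sats, rho)
print(np.linalg.norm(p_hat - p_true))                 # ~0 for this toy case
```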
Abstract:
The aim of this work was to examine how materials procurement at a frame element factory is currently organised and controlled. The study sought to identify bottlenecks constraining the material process and to find development measures for the problem areas from a process-thinking perspective. The focus was the company's operative material process, from ordering items to warehousing. A qualitative research method was used, and the data for the empirical part were collected through interviews and from quality documentation. The company's current state was modelled with process charts, identifying the information and material flows of the process and the most important functions in the material chain. Based on the process analysis and the interviews, development proposals were defined to improve process performance. The current-state survey showed that the biggest problems in the material process concern managing order timing, the impact of changes on the process, and the lack of defined responsibilities and overall control. These problems stem mainly from the project-driven nature of the construction industry. Improving information management also emerged as a development target, especially automating process steps with the help of information systems. Solutions were sought through process thinking, which proved a suitable method for developing the operations. The study produced development proposals, on the basis of which a new operating model for materials control was formed. Central to the model is the use of advance information to support order planning. Preliminary material quantities are also passed on to suppliers as advance information, allowing them to plan their own production capacity better. Order planning proceeds with increasing precision, and the final material quantity and time of need are communicated with the call-off. The model also includes developing goods receiving and warehousing and managing changes by making better use of the information system. The most critical issues in the material process will be information management and the related questions of responsibility.
Abstract:
Within a developing organism, cells require information on where they are in order to differentiate into the correct cell-type. Pattern formation is the process by which cells acquire and process positional cues and thus determine their fate. This can be achieved by the production and release of a diffusible signaling molecule, called a morphogen, which forms a concentration gradient: exposure to different morphogen levels leads to the activation of specific signaling pathways. Thus, in response to the morphogen gradient, cells start to express different sets of genes, forming domains characterized by a unique combination of differentially expressed genes. As a result, a pattern of cell fates and specification emerges. Though morphogens have been known for decades, it is not yet clear how these gradients form and are interpreted in order to yield highly robust patterns of gene expression. During my PhD thesis, I investigated the properties of Bicoid (Bcd) and Decapentaplegic (Dpp), two morphogens involved in the patterning of the anterior-posterior axis of Drosophila embryo and wing primordium, respectively. In particular, I have been interested in understanding how the pattern proportions are maintained across embryos of different sizes or within a growing tissue. This property is commonly referred to as scaling and is essential for yielding functional organs or organisms. In order to tackle these questions, I analysed fluorescence images showing the pattern of gene expression domains in the early embryo and wing imaginal disc. After characterizing the extent of these domains in a quantitative and systematic manner, I introduced and applied a new scaling measure in order to assess how well proportions are maintained. I found that scaling emerged as a universal property both in early embryos (at least far away from the Bcd source) and in wing imaginal discs (across different developmental stages). Since we were also interested in understanding the mechanisms underlying scaling and how it is transmitted from the morphogen to the target genes down in the signaling cascade, I also quantified scaling in mutant flies where this property could be disrupted. While scaling is largely conserved in embryos with altered bcd dosage, my modeling suggests that Bcd trapping by the nuclei as well as pre-steady state decoding of the morphogen gradient are essential to ensure precise and scaled patterning of the Bcd signaling cascade. In the wing imaginal disc, it appears that as the disc grows, the Dpp response expands and scales with the tissue size. Interestingly, scaling is not perfect at all positions in the field. The scaling of the target gene domains is best where they have a function; Spalt, for example, scales best at the position in the anterior compartment where it helps to form one of the anterior veins of the wing. Analysis of mutants for pentagone, a transcriptional target of Dpp that encodes a secreted feedback regulator of the pathway, indicates that Pentagone plays a key role in scaling the Dpp gradient activity.
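As a textbook aside (not a result of this thesis), the scaling notion used here can be stated for the simplest exponential gradient model:

```latex
% Exponential morphogen gradient with amplitude C_0 and decay length \lambda:
C(x) = C_0\, e^{-x/\lambda}, \qquad
x^{*} = \lambda \ln\frac{C_0}{C^{*}}, \qquad
\frac{x^{*}}{L} = \text{const} \iff \lambda \propto L
% A response threshold C^* is crossed at x^*; the boundary keeps its
% relative position in a tissue of length L exactly when \lambda grows
% in proportion to L.
```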
Abstract:
The diffusion of mobile telephony began in 1971 in Finland, when the first car phones, called ARP, were taken into use. Technologies changed from ARP to NMT and later to GSM. The main application of the technology, however, was voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access from mobile phones to the Internet. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony had developed from analog to digital in fixed networks, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones, and it initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the new competitive opportunities this created for firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through understanding the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and therefore it was valuable to integrate the Dynamic Capability view of the firm into this research. Dynamic capabilities are an application of the Resource-Based View (RBV) of the firm. As is stated in the literature, strategic positioning complements the RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining these two. The theoretical contribution of the research lies in the development of a methodology integrating the theories of the RBV, dynamic capabilities and strategic positioning. The research approach has been constructive, owing to the actual managerial problems that initiated the study. The requirement for iterative and innovative progress in the research supported the chosen approach. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As the main result, the thesis presents the strategic innovation process, where new business concepts are used to describe the alternative resource configurations and scenarios as alternative competitive environments; this can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study has also produced approximately 250 new innovations for the participating firms, reduced technology uncertainty and supported strategic infrastructural decisions in the firms, and built a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999 and 2004. The methods presented in this research are also applicable to other industries.
Abstract:
Background: Current advances in genomics, proteomics and other areas of molecular biology make the identification and reconstruction of novel pathways an emerging area of great interest. One such class of pathways is involved in the biogenesis of Iron-Sulfur Clusters (ISC). Results: Our goal is the development of a new approach based on the use and combination of mathematical, theoretical and computational methods to identify the topology of a target network. In this approach, mathematical models play a central role in the evaluation of the alternative network structures that arise from literature data-mining, phylogenetic profiling, structural methods, and human curation. As a test case, we reconstruct the topology of the reaction and regulatory network for the mitochondrial ISC biogenesis pathway in S. cerevisiae. Predictions regarding how proteins act in ISC biogenesis are validated by comparison with published experimental results. For example, the predicted roles of Arh1 and Yah1, and some of the interactions we predict for Grx5, match experimental evidence. A putative role for frataxin in directly regulating mitochondrial iron import is discarded by our analysis, which also agrees with published experimental results. Additionally, we propose a number of experiments for testing other predictions and further improving the identification of the network structure. Conclusion: We propose and apply an iterative in silico procedure for the predictive reconstruction of the network topology of metabolic pathways. The procedure combines structural bioinformatics tools and mathematical modeling techniques that allow the reconstruction of biochemical networks. Using iron-sulfur cluster biogenesis in S. cerevisiae as a test case, we indicate how this procedure can be used to analyze and validate the network model against experimental results. Critical evaluation of the results obtained through this procedure allows devising new wet-lab experiments to confirm its predictions or to provide alternative explanations for further improving the models.
Abstract:
What we put into our mouths can nourish or kill us. A new study uses state-of-the-art electroencephalogram decoding to detail how we and our brains know what we taste.