880 results for Iterative decoding
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterative conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as against ground truth segmentations, using various quantitative metrics.
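As context for the baseline the abstract compares against, the following is a minimal 2-D sketch of iterative conditional modes (ICM) with a per-class Gaussian likelihood and a Potts smoothness prior. The 4-neighbourhood, the weight `beta` and the class parameters are illustrative assumptions; the paper's 3-D finite Gaussian mixture MRF model is considerably more elaborate.

```python
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_iter=10):
    """Toy ICM sketch for MRF tissue labelling (illustrative, not the paper's code).

    image: 2-D intensity array; means/variances: per-class Gaussian parameters;
    beta: Potts smoothness weight penalising label disagreement with 4-neighbours.
    """
    H, W = image.shape
    K = len(means)
    # Unary (data) term: negative Gaussian log-likelihood per class, shape (K, H, W).
    unary = np.stack([0.5 * np.log(2 * np.pi * variances[k])
                      + (image - means[k]) ** 2 / (2 * variances[k]) for k in range(K)])
    labels = unary.argmin(axis=0)          # maximum-likelihood initialisation
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(K):
                    e = unary[k, i, j]
                    # Potts pairwise term over the 4-neighbourhood.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k      # greedy local update, hence "conditional modes"
    return labels
```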
Abstract:
"Beauty-contest" is a game in which participants have to choose, typically, a number in [0,100], the winner being the person whose number is closest to a proportion of the average of all chosen numbers. We describe and analyze Beauty-contest experiments run in newspapers in UK, Spain, and Germany and find stable patterns of behavior across them, despite the uncontrollability of these experiments. These results are then compared with lab experiments involving undergraduates and game theorists as subjects, in what must be one of the largest empirical corroborations of interactive behavior ever tried. We claim that all observed behavior, across a wide variety of treatments and subject pools, can be interpretedas iterative reasoning. Level-1 reasoning, Level-2 reasoning and Level-3 reasoning are commonly observed in all the samples, while the equilibrium choice (Level-Maximum reasoning) is only prominently chosen by newspaper readers and theorists. The results show the empirical power of experiments run with large subject-pools, and open the door for more experimental work performed on the rich platform offered by newspapers and magazines.
Abstract:
Calceology is the study of recovered archaeological leather footwear and comprises the conservation, documentation and identification of leather shoe components and shoe styles. Recovered leather shoes are complex artefacts that present technical, stylistic and personal information about the culture and people that used them. The current method in calceological research for typology and chronology is comparison with parallel examples, though its use poses problems owing to an absence of basic definitions and the lack of a taxonomic hierarchy. The research findings on the primary cutting patterns, used for making all leather footwear, are integrated with the named style method and the Goubitz notation, resulting in a combined methodology as a basis for a typological organisation of recovered footwear and a chronology for named shoe styles. The history of calceological research is examined in chapter two and is accompanied by a review of methodological problems as seen in the literature. Through the examination of various documentation and research techniques used during the history of calceological studies, the reasons why a standard typology and methodology failed to develop are investigated. The variety and continual invention of a new research method for each publication of a recovered leather assemblage hindered the development of a single standard methodology. Chapter three covers the initial research with the database, through which the primary cutting patterns were identified and the named styles were defined. The chronological span of each named style was established through iterative cross-site seriation and named style comparisons. The consistent use of the primary cutting patterns is interpreted technically as a consequence of constraints imposed by the leather and the forms needed to cover the foot. Basic parts of the shoe patterns and the foot are defined, and terms are provided for identifying the key points for pattern making. Chapter four presents the seventeen primary cutting patterns and their sub-types, which are divided into three main groups: six integral soled patterns, four hybrid soled patterns and seven separately soled patterns. Descriptions of the letter codes, pattern layout, construction principle, closing seam placement and list of sub-types are included for each primary cutting pattern. The named shoe styles and their relative chronology are presented in chapter five. Nomenclature for the named styles is based on the find location of the first published example plus the primary cutting pattern code letter. The named styles are presented in chronological order from Prehistory through to the late 16th century. Short descriptions of the named styles are given and illustrated with examples of recovered archaeological leather footwear, reconstructions of archaeological shoes and iconographical sources. Chapter six presents the documentation of recovered archaeological leather using the Goubitz notation, an inventory and description of style elements and fastening methods used for defining named shoe styles, technical information about sole/upper constructions, and the consequences of the use of lasts and sewing forms for style identification and fastening placement in relation to the instep point. The chapter concludes with further technical information for researchers about shoemaking, pattern making and reconstructive archaeology.
The conclusion restates the original research question of why a group of primary cutting patterns appears to have been used consistently throughout the European archaeological record. The quantitative and qualitative results from the database show the use of these patterns, but it is the properties of the leather that impose the use of the primary cutting patterns. The combined methodology of primary pattern identification, named style and artefact registration provides a framework for calceological research.
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
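The solution concept at the centre of this experiment, iterated deletion of strictly dominated strategies, can be sketched in a few lines for a finite bimatrix game. The Prisoner's-Dilemma payoffs at the bottom are purely illustrative and are not the experimental game from the abstract.

```python
import itertools
import numpy as np

def iterated_deletion(A, B):
    """Repeatedly remove pure strategies strictly dominated by another pure strategy.
    A: row player's payoffs, B: column player's payoffs."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for r, r2 in itertools.permutations(rows, 2):
            if all(A[r2, c] > A[r, c] for c in cols):   # row r strictly dominated by r2
                rows.remove(r); changed = True; break
        for c, c2 in itertools.permutations(cols, 2):
            if all(B[r, c2] > B[r, c] for r in rows):   # column c strictly dominated by c2
                cols.remove(c); changed = True; break
    return rows, cols

# Illustrative Prisoner's-Dilemma payoffs: deletion leaves only mutual defection (1, 1).
A = np.array([[3, 0], [5, 1]])
B = np.array([[3, 5], [0, 1]])
print(iterated_deletion(A, B))
```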
Abstract:
This article evaluates the audiovisual products produced by Televisão de Cabo Verde (TCV) within the framework of Cape Verdean culture as a process of creation and transformation of cultural values, relating them to the ideological-political factor. Drawing on Stuart Hall's theory of communicative action, set out in the remarkable article Encoding/Decoding, it examines how culture and the recurring political ideology condition the emergence of new audiovisual products with a "meaningful" discourse on this public television channel.
Identification of optimal structural connectivity using functional connectivity and neural modeling.
Abstract:
The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement of the fitting of the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
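The iterative-fitting idea described above can be pictured as a loop that simulates FC from the current SC, measures the mismatch with the empirical FC, and reweights (or adds) links accordingly. In the sketch below, a linearized noise-driven system is used as a cheap FC surrogate purely to keep the example short; the study uses a dynamic mean-field model, and the function names, coupling value and update rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def simulate_fc(sc, coupling=0.5, noise=1e-3):
    """Crude FC surrogate: correlation matrix of the stationary covariance of
    dx = (coupling*SC - I) x dt + noise dW (stands in for the mean-field model)."""
    n = sc.shape[0]
    A = coupling * sc - np.eye(n)
    P = solve_continuous_lyapunov(A, -noise * np.eye(n))   # A P + P A^T = -noise I
    d = np.sqrt(np.diag(P))
    return P / np.outer(d, d)

def fit_sc(sc0, fc_emp, n_iter=50, step=0.05):
    """Iteratively reweight (and potentially add) SC entries toward the FC mismatch."""
    sc = sc0.copy()
    for _ in range(n_iter):
        fc_sim = simulate_fc(sc)
        grad = fc_emp - fc_sim                 # where simulated FC under/overshoots
        np.fill_diagonal(grad, 0.0)
        sc = np.clip(sc + step * grad, 0.0, None)   # keep connection weights non-negative
    return sc
```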
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular resolved ultra-small angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal space information is accessible by grating interferometry in addition to real space. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (i.e., Wiener, iterative Wiener and Lucy-Richardson) for realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient given the characteristics of the signals in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
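The Lucy-Richardson scheme singled out by the comparison can be written compactly for periodic 1-D data; the circular (FFT-based) convolution below matches the periodicity of the signal. This is the textbook multiplicative update under those assumptions, not the authors' implementation.

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=50, eps=1e-12):
    """Textbook Richardson-Lucy deconvolution for periodic, non-negative 1-D data.

    measured: observed (blurred, noisy) signal
    psf:      known blur kernel on the same grid, normalised to unit sum
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_ft = np.fft.fft(psf)
    estimate = np.full(len(measured), float(np.mean(measured)))   # flat start
    for _ in range(n_iter):
        # Forward blur via circular convolution (respects the periodic boundary).
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate) * psf_ft))
        ratio = measured / np.maximum(blurred, eps)
        # Back-project the ratio with the adjoint (conjugate transfer function).
        correction = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_ft)))
        estimate = estimate * correction       # multiplicative update keeps positivity
    return estimate
```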
Abstract:
In January 2006 the Centre Hospitalier Universitaire Vaudois (CHUV), a large university hospital in Lausanne, Switzerland, became the first hospital in Switzerland to allow assisted suicide (AS) in exceptional cases within its walls. However, euthanasia is illegal. This decision has posed several ethical and practical dilemmas for the hospital's palliative care consult service. To address these, the team embarked on a formal process of open dialogue amongst its members with the goal of identifying a collective response and position. This process involved meetings every 4 to 6 weeks over the course of 10 months. An iterative process unfolded. One of the principal dilemmas relates to finding a balance between the team's position against AS, the patient's autonomy and the institution's directive. Although all team members expressed opposition to AS, there were mixed opinions as to whether or not team members should be present during the act if so requested by patients. Some thought this could be misinterpreted as complicity in the act and could send out mixed messages to the public and other health professionals about palliative care. Others felt that the team's commitment to nonabandonment obliged them to be present even if they did not provide the drug or give any advice or assistance. The implications of nonabandonment are explored, as are several other questions, such as whether or not the team is obliged to provide detailed information on AS when requested by patients.
Abstract:
Game theory states that iterative interactions between individuals are necessary to adjust behaviour optimally to one another. Although our understanding of the role of begging signals in the resolution of parent-offspring conflict over parental investment rests on game theory implying repeated interactions between family members, empiricists usually consider interactions at the exact moment when parents allocate food among the brood. Therefore, the mechanisms by which siblings adjust signalling level to one another remain unclear. We tackled this issue in the barn owl, Tyto alba. In the absence of parents, hungry nestlings signal vocally to siblings their intention to contest vigorously the next, indivisible, food item. Such behaviour deters siblings from competing and begging when parents return to the nest. In experimental two-chick broods, nestlings producing the longest calls in the absence of parents, a signal of hunger level, were more successful at monopolizing the food item at the first parental feeding visit of the night. Moreover, nestlings increased (versus decreased) call duration when their sibling produced longer (versus shorter) calls, and an individual was more likely to call again if its sibling began to vocalize before or just after it had ended its previous call. These results are in agreement with the hypothesis that siblings challenge each other vocally to reinforce the honesty of sib-sib communication and to resolve conflicts over which individual will have priority of access to the next delivered food item. Siblings challenge each other vocally to confirm that the level of signalling accurately reflects motivation.
Abstract:
Today a typical embedded system (e.g. a mobile phone) requires high performance to carry out tasks such as real-time encoding/decoding; it must consume little energy so as to run for hours or days on light batteries; it must be flexible enough to integrate multiple applications and standards in a single device; and it must be designed and verified within a short time despite the increase in complexity. Designers struggle against these adversities, which call for new innovations in architectures and design methodologies. Coarse-grained reconfigurable architectures (CGRAs) are emerging as potential candidates to overcome all these difficulties. Different types of architectures have been presented in recent years. The coarse granularity greatly reduces delay, area, power consumption and configuration time compared with FPGAs. On the other hand, compared with traditional coarse-grained programmable processors, the abundant computational resources allow them to achieve a high level of parallelism and efficiency. Nevertheless, existing CGRAs are not being widely applied, mainly because of the great difficulty of programming such complex architectures. ADRES is a new CGRA designed by the Interuniversity Micro-Electronics Center (IMEC). It combines a very-long instruction word (VLIW) processor and a coarse-grained array, offering two different options in a single physical device. Among its advantages are high performance, little communication overhead and ease of programming. Finally, ADRES is a template rather than a concrete architecture. With the help of the DRESC (Dynamically Reconfigurable Embedded System Compiler) compiler, it is possible to find better architectures or application-specific architectures. This work presents the implementation of an MPEG-4 encoder for ADRES. It shows the evolution of the code towards a good implementation for a given architecture. The main features of ADRES and its compiler (DRESC) are also presented. The objectives are to reduce as much as possible the number of cycles (time) needed to implement the MPEG-4 encoder and to examine the various difficulties of working in the ADRES environment. The results show that the cycle count is reduced by 67% when comparing the initial and final code in VLIW mode, and by 84% when comparing the initial code in VLIW mode with the final code in CGA mode.
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
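For reference, the standard maximum-likelihood (ML-EM) iteration underlying the MLE baseline mentioned above can be written in a few lines; the entropy prior, contrast parameter and acceleration that define FMAPE are deliberately left out, and the variable names are illustrative. In line with the abstract's remark, the image is started from a uniform field.

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=20, eps=1e-12):
    """Standard ML-EM iteration for emission tomography (not the paper's FMAPE).

    system_matrix: (n_detectors, n_voxels) probabilities that a decay in voxel j
                   is recorded in detector i
    counts:        measured detector counts (Poisson data)
    """
    sensitivity = system_matrix.sum(axis=0)        # per-voxel detection probability
    image = np.ones(system_matrix.shape[1])        # uniform initial image
    for _ in range(n_iter):
        expected = system_matrix @ image           # forward projection
        ratio = counts / np.maximum(expected, eps)
        image *= (system_matrix.T @ ratio) / np.maximum(sensitivity, eps)
    return image
```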
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of the classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of how pertinent the initial training data, which actually belong to a different image, are to the new image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are highly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows the synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
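The first of the four strategies, the iterative labelling routine, can be pictured as an active-learning-style loop: train on the current labelled pool, score the target-image pixels, and query labels only for the most useful ones. The random-forest classifier, the uncertainty score and the batch size below are illustrative stand-ins for the thesis' actual pertinence criterion, and `oracle` is a hypothetical callable representing the analyst who supplies labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def iterative_sampling(X_src, y_src, X_tgt, oracle, n_rounds=5, batch=20):
    """Each round: fit on source + already-queried target pixels, rank the remaining
    target pixels by predictive uncertainty, and ask 'oracle' to label the top batch."""
    X_pool, y_pool = X_src.copy(), y_src.copy()
    unlabeled = np.arange(len(X_tgt))
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200).fit(X_pool, y_pool)
        proba = clf.predict_proba(X_tgt[unlabeled])
        uncertainty = 1.0 - proba.max(axis=1)        # low top-class probability = useful pixel
        query = unlabeled[np.argsort(uncertainty)[-batch:]]
        X_pool = np.vstack([X_pool, X_tgt[query]])
        y_pool = np.concatenate([y_pool, oracle(query)])
        unlabeled = np.setdiff1d(unlabeled, query)
    return clf
```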
Abstract:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated, and consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure analyzed by the finite element method, as well as to the measured thermal response of a transistor array integrated circuit (IC) in a SMD package.
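In the classical formulation of this problem, the derivative of the step response in logarithmic time z = ln t is the convolution of the time-constant spectrum with the fixed kernel w(z) = exp(z - exp(z)). The additive (Van Cittert-type) iteration below is one simple way to undo that convolution while preserving the convolution form; it is an illustrative stand-in, not necessarily the authors' exact scheme.

```python
import numpy as np

def time_constant_spectrum(dlog_response, z, n_iter=200, mu=0.5):
    """Van Cittert-style iterative deconvolution in logarithmic time z = ln(t).

    dlog_response: derivative of the thermal step response with respect to z
    z:             uniformly spaced log-time axis
    Returns an approximation of the spectrum R(z) with dlog_response ~ R (*) w.
    """
    dz = z[1] - z[0]
    zk = (np.arange(len(z)) - len(z) // 2) * dz      # kernel axis centred on zero
    kernel = np.exp(zk - np.exp(zk))                 # fixed kernel w(z) = exp(z - exp(z))
    spectrum = np.array(dlog_response, dtype=float)  # start from the data themselves
    for _ in range(n_iter):
        reconvolved = np.convolve(spectrum, kernel, mode="same") * dz
        # Additive correction toward the data; clip to keep the spectrum non-negative.
        spectrum = np.maximum(spectrum + mu * (dlog_response - reconvolved), 0.0)
    return spectrum
```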
Abstract:
We propose an iterative procedure to minimize the sum of squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
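The general idea of replacing the nonlinear MA(1) sum-of-squares problem with repeated closed-form linear least squares can be sketched as follows: treat the lagged residuals from the previous pass as a fixed regressor, so the coefficient has an OLS-slope closed form, then rebuild the residuals recursively. This is a generic, hedged illustration of the idea, not necessarily the authors' exact estimator.

```python
import numpy as np

def ma1_iterative_ls(y, n_iter=20):
    """Illustrative iterative linear least squares for theta in the MA(1) model
    y_t = e_t + theta * e_{t-1} (zero-mean series, invertible case assumed)."""
    y = np.asarray(y, dtype=float)
    e = y.copy()                                   # crude initial residuals
    theta = 0.0
    for _ in range(n_iter):
        x = np.concatenate(([0.0], e[:-1]))        # lagged residuals as a fixed regressor
        theta = float(x @ y / (x @ x))             # closed-form OLS slope
        # Rebuild the residuals recursively with the updated theta.
        e = np.empty_like(y)
        e_prev = 0.0
        for t in range(len(y)):
            e[t] = y[t] - theta * e_prev
            e_prev = e[t]
    return theta
```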
Abstract:
An efficient method is developed for an iterative solution of the Poisson and Schrödinger equations, which allows systematic studies of the properties of the electron gas in linear deep-etched quantum wires. A much simpler two-dimensional (2D) approximation is developed that accurately reproduces the results of the 3D calculations. A 2D Thomas-Fermi approximation is then derived, and shown to give a good account of average properties. Further, we prove that an analytic form due to Shikin et al. is a good approximation to the electron density given by the self-consistent methods.
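A compact 1-D illustration of the self-consistent Schrödinger-Poisson loop that such calculations rely on is given below (dimensionless units, hard-wall boundaries, lowest subband only, simple potential mixing for stability). These simplifications are assumptions made for brevity; the paper's 3-D and 2-D treatments of deep-etched wires are far more complete.

```python
import numpy as np

def schrodinger_poisson_1d(n_points=200, length=1.0, n_electrons=1.0,
                           n_iter=50, mix=0.3):
    """Self-consistent loop: solve the 1-D Schrödinger equation in the current
    potential, build the density from the lowest state, solve Poisson's equation
    for the Hartree potential, and mix the potentials until convergence."""
    x = np.linspace(0.0, length, n_points)
    dx = x[1] - x[0]
    v = np.zeros(n_points)                                     # initial potential
    lap = (np.diag(-2.0 * np.ones(n_points))
           + np.diag(np.ones(n_points - 1), 1)
           + np.diag(np.ones(n_points - 1), -1)) / dx**2       # Dirichlet Laplacian
    for _ in range(n_iter):
        # Schrödinger step: H = -1/2 d^2/dx^2 + V, hard-wall boundaries.
        H = -0.5 * lap + np.diag(v)
        energies, states = np.linalg.eigh(H)
        psi = states[:, 0] / np.sqrt(np.sum(states[:, 0] ** 2) * dx)
        density = n_electrons * psi ** 2
        # Poisson step: d^2 V_H / dx^2 = -density, with V_H = 0 at the walls.
        v_hartree = np.linalg.solve(lap, -density)
        v = (1.0 - mix) * v + mix * v_hartree                  # simple mixing for stability
    return x, density, v
```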