888 results for coded character set
Abstract:
The Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than that of the Golden code, but it is fast-decodable, i.e., it allows reduced-complexity maximum likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
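As an illustration of the inner code this abstract builds on, the following is a minimal NumPy sketch of how a 2×2 Silver code codeword is commonly assembled from four QAM symbols in the literature: an Alamouti block plus a sign-twisted Alamouti block of unitarily rotated symbols, with the 1/sqrt(7) scaling that yields the minimum determinant of 1/7. The exact matrices should be checked against [1–4]; this is a sketch, not the paper's implementation.

```python
import numpy as np

def alamouti(a, b):
    """Alamouti block used as the building brick of the Silver code."""
    return np.array([[a, -np.conj(b)],
                     [b,  np.conj(a)]])

def silver_codeword(s1, s2, s3, s4):
    """Assemble a 2x2 Silver code codeword from four QAM symbols.

    X = X_a(s1, s2) + T * X_b(z1, z2), where (z1, z2) is a unitary
    rotation of (s3, s4); the 1/sqrt(7) scaling gives the code its
    non-vanishing minimum determinant of 1/7.
    """
    T = np.diag([1, -1])
    U = (1 / np.sqrt(7)) * np.array([[1 + 1j, -1 + 2j],
                                     [1 + 2j,  1 - 1j]])
    z1, z2 = U @ np.array([s3, s4])
    return alamouti(s1, s2) + T @ alamouti(z1, z2)

# Example: one codeword from four QPSK symbols
X = silver_codeword(1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
```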
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
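To make the coding step concrete, here is a minimal sketch of fuzzy coding a continuous variable into three categories with piecewise-linear (triangular) membership functions. The choice of hinge points (minimum, median, maximum) is an illustrative assumption, not necessarily the one used in the paper.

```python
import numpy as np

def fuzzy_code(x, hinges):
    """Fuzzy-code a continuous sample into len(hinges) categories.

    Each value receives degrees of membership in [0, 1] that sum to 1,
    using triangular membership functions anchored at the hinge points.
    """
    x = np.asarray(x, dtype=float)
    z = np.zeros((x.size, len(hinges)))
    for i, v in enumerate(x):
        if v <= hinges[0]:
            z[i, 0] = 1.0
        elif v >= hinges[-1]:
            z[i, -1] = 1.0
        else:
            j = np.searchsorted(hinges, v, side="right") - 1
            t = (v - hinges[j]) / (hinges[j + 1] - hinges[j])
            z[i, j], z[i, j + 1] = 1.0 - t, t
    return z

# Hypothetical meteorological readings coded into three fuzzy categories
temps = np.array([12.3, 17.8, 21.4, 25.0, 30.1])
hinges = [temps.min(), np.median(temps), temps.max()]
Z = fuzzy_code(temps, hinges)   # each row sums to 1
```

Crisp coding would instead assign a single 1 per row; the fuzzy rows above retain the within-category position of each value.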
Abstract:
A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way the quality of fit of the biplot is measured, since it is well known that regular (i.e., crisp) multiple correspondence analysis seriously underestimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.
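Assuming the triangular coding sketched above, defuzzification and the percentage-of-variance fit measure can be illustrated as follows; here `Z_hat` stands for memberships reconstructed from a low-dimensional fuzzy MCA solution, which is assumed given.

```python
import numpy as np

def defuzzify(Z_hat, hinges):
    """Reverse triangular fuzzy coding: membership-weighted average of hinges."""
    return np.asarray(Z_hat) @ np.asarray(hinges, dtype=float)

def fit_percentage(x, x_hat):
    """Fit in the familiar percentage-of-explained-variance form."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    total = np.sum((x - x.mean()) ** 2)
    residual = np.sum((x - x_hat) ** 2)
    return 100.0 * (1.0 - residual / total)

# Usage (with Z_hat from a low-rank reconstruction and the original values x):
#   x_hat = defuzzify(Z_hat, hinges)
#   print(fit_percentage(x, x_hat))
```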
Abstract:
Percutaneous cricothyroidotomy may be a lifesaving procedure for airway obstruction that cannot be relieved by endotracheal intubation, and it can be performed with specially designed instruments. A new device, the "Quicktrach", was evaluated by anatomical preparation, flow and resistance measurements, and puncture of the cricothyroid membrane in 55 corpses. The size of the parts of the instrument (needle, plastic cannula, depth gauge) in relation to the size of the larynx is adequate, so there is little likelihood of perforating the posterior wall of the larynx. The resistance of the plastic cannula is sufficiently low to allow adequate ventilation. The time until the cannula is positioned properly in the trachea is significantly shorter when an incision is made prior to the puncture (83 +/- 88 seconds without incision versus 35 +/- 41 seconds with incision; mean +/- SD). The "Quicktrach" is easy to apply even by inexperienced persons. The incidence of damage to the larynx (lesions including fractures of the thyroid, cricoid and first tracheal cartilage in 18%; soft tissue injury in 9%) is relatively high; however, considering the life-saving character of the procedure, these numbers appear acceptable. Technical problems that occur with the use of the device are discussed and suggestions for improvement are made.
Teaching Adolescents to Think and Act Responsibly Through Narrative Film-making: A Qualitative Study
Abstract:
The current qualitative study examined an adapted version of the psychoeducational program Teaching Adolescents to Think and Act Responsibly: The EQUIP Approach (DiBiase, Gibbs, Potter, & Blount, 2012). The adapted version, referred to as the EQUIP – Narrative Film-making Program, was implemented as a means of character education. The purpose of this study was three-fold: 1) to examine how the EQUIP – Narrative Film-making Program influenced students' thoughts, feelings, and behaviours; 2) to explore the students' and the teacher's perceptions of their experience with the program; and 3) to assess whether or not the integrated EQUIP – Narrative Film-making Program addressed the goals of Ontario's character education initiative. Purposive sampling was used to select one typical Grade 9 Exploring Technologies class, consisting of 15 boys from a Catholic board of education in the southern Ontario region. The EQUIP – Narrative Film-making Program required students to create moral narrative films that first portrayed a set of self-centered cognitive distortions, with follow-up portrayals of behavioural modifications. Questionnaires were administered to the students and teacher before, during, and after the intervention. The student questionnaires invited responses to a set of cognitive distortion vignettes. In addition, data were collected through student and teacher interviews and researcher observation protocol reports. Initially, the data were coded according to an a priori set of themes that were further analyzed using emotion and values coding methods. The results indicated that while each student was unique in his thoughts, feelings, and behavioural responses to the cognitive distortion vignettes after completing the EQUIP program, the overall trends showed that students had a more positive attitude, with a decreased proclivity for the antisocial behaviour and self-serving cognitive distortions portrayed in the vignettes. Overall, the teacher's and students' learning experiences were mainly positive, and the program met the learning expectations of Ontario's character education initiative. Based on the results of the present study, it is recommended that the EQUIP – Narrative Film-making Program be further evaluated through quantitative research and a longitudinal study.
Abstract:
The absolute necessity of obtaining 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at each instant, the position of the mobile robot with respect to the scene is indispensable, and this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps because of the correspondence problem. The spatial and temporal correspondence between images can be alleviated using a method based on structured light. This relationship can be found directly by codifying the projected light: each imaged region of the projected pattern then carries the information needed to solve the correspondence problem. We present the most significant techniques used in recent years concerning the coded structured light method.
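One classic family of coded structured light is temporal binary Gray coding, where each projector column is identified by the sequence of stripe patterns in which it appears bright. The sketch below, with an assumed projector width and bit count, shows how such patterns can be generated and how a pixel's observed bit sequence decodes back to a column index; it is an illustration of the general idea, not the survey's specific codification.

```python
import numpy as np

def gray_stripe_patterns(width, n_bits):
    """One 1-D stripe pattern per bit plane of a binary-reflected Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)
    return np.array([(gray >> b) & 1 for b in reversed(range(n_bits))])

def decode_column(bit_sequence):
    """Recover a pixel's projector column from its observed bits (MSB first)."""
    g = 0
    for bit in bit_sequence:
        g = (g << 1) | int(bit)
    b = 0
    while g:              # Gray -> binary via prefix XOR
        b ^= g
        g >>= 1
    return b

patterns = gray_stripe_patterns(width=1024, n_bits=10)
assert decode_column(patterns[:, 300]) == 300   # column 300 decodes to itself
```

Gray coding is preferred over plain binary because adjacent columns differ in only one bit, so a single misread stripe causes at most a one-column error.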
Abstract:
We describe a general likelihood-based 'mixture model' for inferring phylogenetic trees from gene-sequence or other character-state data. The model accommodates cases in which different sites in the alignment evolve in qualitatively distinct ways, but does not require prior knowledge of these patterns or partitioning of the data. We call this qualitative variability in the pattern of evolution across sites "pattern-heterogeneity" to distinguish it both from a homogeneous process of evolution and from one characterized principally by differences in rates of evolution. We present studies to show that the model correctly retrieves the signals of pattern-heterogeneity from simulated gene-sequence data, and we apply the method to protein-coding genes and to a ribosomal 12S data set. The mixture model outperforms conventional partitioning in both these data sets. We implement the mixture model such that it can simultaneously detect rate- and pattern-heterogeneity. The model simplifies to a homogeneous model or a rate-variability model as special cases, and therefore always performs at least as well as these two approaches, and often considerably improves upon them. We make the model available within a Bayesian Markov-chain Monte Carlo framework for phylogenetic inference, as an easy-to-use computer program.
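The core of such a mixture model is that each site's likelihood is a weighted sum over pattern components. The following minimal sketch shows only that combination step; the per-component site likelihoods (e.g., from Felsenstein pruning under each component's substitution matrix) and the example numbers are assumed inputs for illustration.

```python
import numpy as np

def mixture_log_likelihood(site_lik, weights):
    """Total log-likelihood under a mixture of evolutionary patterns.

    site_lik : array of shape (n_sites, n_components), where
               site_lik[i, k] = P(site i | tree, component k).
    weights  : mixture weights, one per component, summing to 1.
    """
    per_site = np.asarray(site_lik) @ np.asarray(weights)   # sum_k w_k * L_ik
    return float(np.sum(np.log(per_site)))

# Hypothetical per-site likelihoods under two pattern components, three sites
L = np.array([[0.02, 0.001],
              [0.01, 0.030],
              [0.05, 0.004]])
print(mixture_log_likelihood(L, weights=[0.6, 0.4]))
```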
Abstract:
Single crystal X-ray diffraction studies and solvent-dependent H-1 NMR titrations reveal that a set of four tetrapeptides with the general formula Boc-Xx(1)-Aib(2)-Yy(3)-Zz(4)-OMe, where Xx, Yy and Zz are coded L-amino acids, adopt equivalent conformations that can be described as overlapping double turn conformations stabilized by two 4→1 intramolecular hydrogen bonds between Yy(3)-NH and Boc C=O and between Zz(4)-NH and Xx(1) C=O. In the crystalline state, the double turn structures are packed in head-to-tail fashion through intermolecular hydrogen bonds to create supramolecular helical structures. Field emission scanning electron microscopic (FE-SEM) images of the tetrapeptides in the solid state reveal that they can form flat tape-like structures. The results establish that synthetic Aib-containing supramolecular helices can form highly ordered self-aggregated amyloid-plaque-like structures, as human amylin does.
Abstract:
The Character of Christian-Muslim Encounter is a Festschrift in honour of David Thomas, Professor of Christianity and Islam, and Nadir Dinshaw Professor of Inter Religious Relations, at the University of Birmingham, UK. The Editors have put together a collection of over 30 contributions from colleagues of Professor Thomas that commences with a biographical sketch and representative tribute provided by a former doctoral student, and comprises a series of wide-ranging academic papers arranged to broadly reflect three dimensions of David Thomas’ academic and professional work – studies in and of Islam; Christian-Muslim relations; the Church and interreligious engagement. These are set in the context of a focussed theme – the character of Christian-Muslim encounters – and cast within a broad chronological framework.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
In this paper, a novel approach to character recognition is presented with the help of genetic operators, which are derived from biological genetics and help achieve highly accurate results. A genetic algorithm approach is described in which biological haploid chromosomes are implemented as a single-row bit pattern of 315 values, operated upon by various genetic operators. A set of characters is taken as an initial population, from which new generations of characters are produced through selection, crossover and mutation. Variations of the population of characters are evolved, and the fittest solution is found by subjecting these populations to a newly developed fitness function. The methodology reduces the dissimilarity coefficient, computed by the fitness function, between the character to be recognized and the members of the population; when the error derived from this dissimilarity reaches the threshold limit, the character is recognized. As each new population is generated from the older population, traits are passed on from one generation to the next. We present a methodology with which highly efficient character recognition can be achieved.
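To make the described loop concrete, here is a minimal sketch of a genetic algorithm over 315-value bit-pattern chromosomes with selection, single-point crossover and mutation. The fitness used here (fraction of mismatching bits) is a stand-in for the paper's dissimilarity coefficient, and the population size, rates and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
CHROMOSOME_LEN = 315   # single-row bit pattern per character, as in the abstract

def dissimilarity(candidate, target):
    """Stand-in fitness: fraction of mismatching bits (lower is better)."""
    return np.mean(candidate != target)

def recognize(target, population, generations=200, threshold=0.02,
              mutation_rate=0.01):
    """Evolve the population toward the target pattern; return the best match."""
    pop = population.copy()
    for _ in range(generations):
        scores = np.array([dissimilarity(c, target) for c in pop])
        if scores.min() <= threshold:                      # recognized
            break
        parents = pop[np.argsort(scores)[: len(pop) // 2]]  # selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, CHROMOSOME_LEN)            # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(CHROMOSOME_LEN) < mutation_rate  # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([dissimilarity(c, target) for c in pop])]
    return best, dissimilarity(best, target)

population = rng.integers(0, 2, size=(40, CHROMOSOME_LEN))
target = rng.integers(0, 2, size=CHROMOSOME_LEN)   # bit pattern of the unknown character
best, score = recognize(target, population)
```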
Abstract:
Changes in the television industry with regard to the development of new media technologies are having a significant impact on audience engagement with television drama. This article explores how the internet is being used to extend audience engagement onto platforms other than the television set, to the point where television drama should increasingly be reconsidered as trans-media drama. However, audience engagement with the various elements of a trans-media drama text is complex. By exploring audience attitudes towards character in the British television series Spooks and its associated online games, this article argues that in an increasingly converged media landscape audiences transfer values between platforms. Consequently, the audience's perception of control in relation to their engagement with a trans-media drama text such as Spooks becomes complicated, with values associated with television proving key to their engagement with the same fictional world in the form of games.
Abstract:
Evolving interfaces were initially focused on solving scientific problems in Fluid Dynamics. With the advent of the more robust modeling provided by the Level Set method, their original boundaries of applicability were extended. In the Geometric Modeling area specifically, works published so far relating Level Set to three-dimensional surface reconstruction have centered on reconstruction from a data cloud dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Based on this, the present work analyses the feasibility of Level Set for three-dimensional reconstruction, offering a methodology that integrates ideas already proved efficient in the literature with proposals for handling the inherent limitations of the method that have not yet been treated satisfactorily, in particular the excessive smoothing of fine contour features during Level Set evolution. As a solution, the Particle Level Set variant is suggested, for its proven intrinsic capability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to evaluate the presented three-dimensional surface reconstruction methodology qualitatively.
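For orientation, the sketch below shows one explicit time step of 2-D mean-curvature level-set evolution, which is exactly the kind of smoothing motion that erodes fine contour features and that the Particle Level Set correction (not shown here) is meant to counteract. The grid, time step and initial signed-distance field are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def curvature_step(phi, dt=0.1, eps=1e-8):
    """One explicit time step of mean-curvature level-set flow on a unit grid."""
    phi_y, phi_x = np.gradient(phi)
    phi_yy, phi_yx = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    grad2 = phi_x ** 2 + phi_y ** 2
    kappa = (phi_xx * phi_y ** 2 - 2 * phi_x * phi_y * phi_xy
             + phi_yy * phi_x ** 2) / (grad2 ** 1.5 + eps)
    return phi + dt * kappa * np.sqrt(grad2)

# Signed-distance field of a circle of radius 20 on a 100x100 grid
y, x = np.mgrid[0:100, 0:100]
phi = np.sqrt((x - 50.0) ** 2 + (y - 50.0) ** 2) - 20.0
for _ in range(50):
    phi = curvature_step(phi)   # the zero level set shrinks and smooths over time
```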