874 results for Photography -- Digital techniques
Abstract:
For pt. I see ibid., vol. 44, p. 927-36 (1997). In a digital communications system, data are transmitted from one location to another by mapping bit sequences to symbols, and symbols to sample functions of analog waveforms. The analog waveform passes through a bandlimited (possibly time-varying) analog channel, where the signal is distorted and noise is added. In a conventional system the analog sample functions sent through the channel are weighted sums of one or more sinusoids; in a chaotic communications system the sample functions are segments of chaotic waveforms. At the receiver, the symbol may be recovered by means of coherent detection, where all possible sample functions are known, or by noncoherent detection, where one or more characteristics of the sample functions are estimated. In a coherent receiver, synchronization is the most commonly used technique for recovering the sample functions from the received waveform. These sample functions are then used as reference signals for a correlator. Synchronization-based coherent receivers have advantages over noncoherent receivers in terms of noise performance, bandwidth efficiency (in narrow-band systems) and/or data rate (in chaotic systems). These advantages are lost if synchronization cannot be maintained, for example, under poor propagation conditions. In these circumstances, communication without synchronization may be preferable. The theory of conventional telecommunications is extended to chaotic communications, chaotic modulation techniques and receiver configurations are surveyed, and chaotic synchronization schemes are described.
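A minimal Python sketch of the correlator-based coherent receiver described above. The reference waveforms, symbol length and noise level are invented for illustration: sinusoids stand in for the chaotic sample functions, and perfect synchronization is assumed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical reference sample functions for bits 0 and 1 (sinusoid
    # segments here; a chaotic generator would supply these instead).
    N = 64                                    # samples per symbol
    t = np.arange(N)
    ref = np.array([np.sin(2 * np.pi * 3 * t / N),   # sample function for bit 0
                    np.sin(2 * np.pi * 5 * t / N)])  # sample function for bit 1

    bits = rng.integers(0, 2, size=100)       # bit sequence to transmit
    tx = np.concatenate([ref[b] for b in bits])
    rx = tx + 0.5 * rng.normal(size=tx.size)  # channel adds noise (distortion omitted)

    # Coherent detection: correlate each received segment against both
    # known references and decide in favour of the larger correlation.
    decoded = np.array([int(seg @ ref[1] > seg @ ref[0])
                        for seg in rx.reshape(-1, N)])
    print("bit errors:", int(np.sum(decoded != bits)))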
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to outperform Kötter’s decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes having complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
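A minimal sketch of the evaluation view of RS encoding mentioned above, assuming the prime field GF(929) purely so that plain modular arithmetic suffices (practical implementations typically work over GF(2^m); all names here are illustrative):

    P = 929                                      # field size, prime

    def rs_encode(msg, n):
        """Encode k message symbols as the evaluations of the degree-(k-1)
        message polynomial at n distinct nonzero field points."""
        return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
                for x in range(1, n + 1)]

    codeword = rs_encode([3, 270, 1, 41], n=7)   # an RS(7, 4) codeword
    print(codeword)
    # Any 4 of the 7 evaluations determine the polynomial uniquely, so the
    # code can correct up to (n - k) // 2 = 1 symbol error.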
Abstract:
Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing actual stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and less consistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices, so speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. To do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author’s hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter and that many modulating factors influence the stress response process. A model is proposed to reflect the author’s hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress, identifies further research paths and defines a roadmap for future work in this area.
Abstract:
Gemstone Team F.I.T.N.E.S.S. (Fun Interactive Techniques for New Exercise and Sport Styles)
Abstract:
Much of the contemporary concert (i.e. “classical”) saxophone literature has connections to compositional styles found in other genres like jazz, rock, or pop. Although improvisation exists as a dominant compositional device in jazz, improvisation as a performance technique is not confined to a single genre. This study looks at twelve concert saxophone pieces that are grouped into three primary categories of compositional techniques: 1) those containing unmeasured phrases, 2) those containing limited relation to improvisation but a close relationship to jazz styles, and 3) those containing jazz improvisation. In concert saxophone music, specific crossover pieces use the compositional technique of jazz improvisation. Four examples of such jazz works were composed by Dexter Morrill, Phil Woods, Bill Dobbins, and Ramon Ricker, all of which provide a foundation for this study. In addition, pieces containing varying degrees of unmeasured phrases are highlighted. As this dissertation project is based in performance, the twelve pieces were divided into three recitals that summarize a pedagogical sequence. Any concert saxophonist interested in developing jazz improvisational skills can use the pieces in this study as a method to progress toward the performance of pieces that merge jazz improvisation with the concert format. The three compositional techniques examined here will provide the performer with the necessary material to develop this individualized approach to improvisation. Specific compositional and performance techniques vary depending on the stylistic content: this study examines improvisation in the context of concert saxophone repertoire.
Abstract:
Ethnomathematical research, together with digital technologies (WebQuest) and Drama-in-Education (DiE) techniques, can create a fruitful learning environment in a mathematics classroom—a hybrid/third space—enabling increased student participation and higher levels of cognitive engagement. This article examines how ethnomathematical ideas processed within the experiential environment established by the Drama-in-Education techniques challenged students’ conceptions of the nature of mathematics, the ways in which students engaged with mathematics learning using mind and body, and the ‘dialogue’ that was developed between the Discourse situated in a particular practice and the classroom Discourse of mathematics teaching. The analysis focuses on an interdisciplinary project based on an ethnomathematical study of a designing tradition carried out by the researchers themselves, involving a search for informal mathematics and its connections with context and culture. 10th-grade students in a public school in Athens were introduced to the mathematics content via an original WebQuest based on this previous ethnomathematical study; Geometry content was further introduced and mediated using the Drama-in-Education (DiE) techniques. Students contributed to an unfolding dialogue between formal and informal knowledge, renegotiating both mathematical concepts and their perception of mathematics as a discipline.
Abstract:
The increasing availability of large, detailed digital representations of the Earth’s surface demands the application of objective and quantitative analyses. Given recent advances in the understanding of the mechanisms of formation of linear bedform features from a range of environments, objective measurement of their wavelength, orientation, crest and trough positions, height and asymmetry is highly desirable. These parameters are also of use when determining observation-based parameters for use in many applications such as numerical modelling, surface classification and sediment transport pathway analysis. Here, we (i) adapt and extend extant techniques to provide a suite of semi-automatic tools which calculate crest orientation, wavelength, height, asymmetry direction and asymmetry ratios of bedforms, and then (ii) undertake sensitivity tests on synthetic data, increasingly complex seabeds and a very large-scale (39 000 km²) aeolian dune system. The automated results are compared with traditional, manually derived measurements at each stage. This new approach successfully analyses different types of topographic data (from aeolian and marine environments) from a range of sources, with tens of millions of data points being processed in a semi-automated and objective manner within minutes rather than hours or days. The results from these analyses show there is significant variability in all measurable parameters in what might otherwise be considered uniform bedform fields. For example, the dunes of the Rub’ al Khali on the Arabian peninsula are shown to exhibit deviations in dimensions from global trends. Morphological and dune asymmetry analysis of the Rub’ al Khali suggests parts of the sand sea may be adjusting to a changed wind regime from that during their formation 100 to 10 ka BP.
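An illustrative Python sketch (not the authors' tool) of the kind of semi-automatic bedform measurement described above, applied to a synthetic 1-D profile with invented dimensions and picking thresholds:

    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic bed profile: ~100 m wavelength dunes plus noise.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 1000, 2001)                 # along-profile distance (m)
    z = 2.0 * np.sin(2 * np.pi * x / 100) + 0.3 * rng.normal(size=x.size)

    crests, _ = find_peaks(z, distance=50, prominence=1.0)    # crest positions
    troughs, _ = find_peaks(-z, distance=50, prominence=1.0)  # trough positions

    wavelength = np.mean(np.diff(x[crests]))       # mean crest-to-crest spacing
    height = z[crests].mean() - z[troughs].mean()  # mean crest-trough relief
    print(f"wavelength ~ {wavelength:.0f} m, height ~ {height:.1f} m")
    # Asymmetry would follow from comparing stoss and lee lengths between
    # each crest and its bounding troughs.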
Abstract:
The new pedagogical framework that has arisen since the Bologna Declaration, the Prague Communiqué and the introduction of the European Higher Education Area (EHEA) strongly encourages the use of new Information and Communication Technology (ICT) to evolve teaching methodologies. The different ways teachers relate to learners have undergone a staggering change, from which new educational initiatives have emerged. Many of them are based on the democratization of content through the use of ICT. The current article presents the results obtained up to the 2012/2013 academic year since the implementation of the teaching innovation project entitled “The use of ICT for the students’ autonomous learning in the university education of the course Photography. Elaboration of a virtual classroom and results’ analysis related to the acquisition of skills and competencies”, which has been developed in the course called Draw with light: Photography, belonging to the Fine Arts Degree at the University of Murcia.
Abstract:
A newly introduced inverse class-E power amplifier (PA) was designed, simulated, fabricated, and characterized. The PA operated at 2.26 GHz and delivered 20.4-dBm output power with a peak drain efficiency (DE) of 65% and a power gain of 12 dB. Broadband performance was achieved across a 300-MHz bandwidth with DE better than 50% and 1-dB output-power flatness. The concept of enhanced injection predistortion, with the capability to selectively suppress unwanted sub-frequency components and hence suitable for minimizing memory effects, is described, coupled with a new technique that facilitates accurate measurement of the phase of the third-order intermodulation (IM3) products. A robust iterative computational algorithm proposed in this paper dispenses with the need for manual tuning of the amplitude and phase of the injected IM3 signals, as commonly employed in previous publications. The constructed inverse class-E PA was subjected to a nonconstant-envelope 16-QAM (quadrature amplitude modulation) signal and was linearized using a combined lookup table (LUT) and enhanced injection technique, from which the superior properties of each technique can be adopted simultaneously. The proposed method resulted in 0.7% measured error vector magnitude (in rms) and a 34-dB improvement in adjacent channel leakage power ratio, which was 10 dB better than that achieved using LUT predistortion alone.
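A toy Python model of the idea behind iterative injection tuning; this is a hypothetical sketch, not the paper's algorithm, and the PA and injection-path gains are invented. The loop adjusts the amplitude and phase of the injected tone from the measured residual alone, with no manual tuning:

    import numpy as np

    pa_im3 = 0.2 * np.exp(1j * 1.3)       # unknown IM3 produced by the PA
    inj_path = 0.9 * np.exp(1j * 0.1)     # true (unknown) injection-path gain
    inj_path_est = 1.0 + 0j               # crude estimate used by the algorithm
    mu = 0.7                              # damping for stable convergence

    inj = 0.0 + 0.0j                      # injected IM3 tone, initially off
    for it in range(30):
        residual = pa_im3 + inj_path * inj   # IM3 a spectrum analyser would see
        print(f"iter {it}: |IM3| = {abs(residual):.2e}")
        if abs(residual) < 1e-6:
            break
        inj -= mu * residual / inj_path_est  # update from measurement only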
Abstract:
In this paper we report an empirical study of the photographic portrayal of family members at home. Adopting a social psychological approach and focusing on intergenerational power dynamics, our research explores the use of domestic photo displays in family representation. Parents and their teenagers from eight families in the south of England were interviewed at home about their interpretations of both stored and displayed photos within the home. Discussions centred on particular photographs found by the participants to portray self and family in different ways. The findings show that public displays of digital photos are still curated by the mothers of the households, but with more difficulty and less control than with analogue photos. In addition, teenagers both contribute to and comply with this curation within the home, whilst at the same time developing additional ways of presenting their families and themselves online that are 'unsupervised' by the curator. We highlight the conflict of interest that is at play within teen and parent practices and consider the challenges that this presents for supporting the representation of family through the design of photo display technology.
Abstract:
The purpose of this study was to investigate the occupational hazards within the tanning industry caused by contaminated dust. A qualitative assessment of the risk of human exposure to dust was made throughout a commercial Kenyan tannery. Using this information, high-risk points in the processing line were identified and dust sampling regimes developed. An optical set-up using microscopy and digital imaging techniques was used to determine dust particle numbers and size distributions. The results showed that chemical handling was the most hazardous area (12 mg m⁻³). A Monte Carlo method was used to estimate the concentration of the dust in the air throughout the tannery during an 8 h working day. This showed that the high-risk area of the tannery was associated with mean dust concentrations greater than the limits stipulated in UK Statutory Instrument 2002 No. 2677 (exceeding the 10 mg m⁻³ inhalable dust limit and the 4 mg m⁻³ respirable dust limit). This has implications for the provision of personal protective equipment (PPE) to tannery workers for the mitigation of occupational risk.
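A hedged Python sketch of a Monte Carlo estimate of 8 h dust exposure of the kind described above; the task list, lognormal parameters and shift fractions are invented for illustration and are not the study's data:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000                                  # simulated working days

    # (log-mean of concentration in mg/m3, log-sd, fraction of 8 h shift)
    tasks = {"chemical handling": (np.log(12.0), 0.4, 0.25),
             "buffing/shaving":   (np.log(5.0),  0.5, 0.35),
             "other":             (np.log(1.5),  0.6, 0.40)}

    twa = np.zeros(n)                            # 8 h time-weighted average
    for mu, sigma, frac in tasks.values():
        twa += frac * rng.lognormal(mu, sigma, n)

    print(f"mean TWA: {twa.mean():.1f} mg/m3")
    print(f"P(TWA > 10 mg/m3 inhalable limit): {np.mean(twa > 10):.1%}")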
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance Wave Digital Filters (WDFs) is described. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit (msb) first arithmetic. A novel VLSI architecture for implementing two-port adaptor circuits is described which embodies these ideas. The circuit in question is highly regular, uses msb first arithmetic and is implemented using simple carry-save adders.
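A minimal Python sketch of the carry-save idea underlying such adaptor circuits (illustrative only, not the paper's architecture): three operands reduce to a sum word and a carry word with no carry propagation across bit positions, which is what keeps each pipeline stage short.

    def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
        """Reduce three operands to a (sum, carry) pair, bitwise."""
        s = a ^ b ^ c                         # per-bit full-adder sum
        carry = (a & b) | (a & c) | (b & c)   # per-bit carry, weighted x2
        return s, carry << 1

    s, c = carry_save_add(13, 7, 5)
    assert s + c == 13 + 7 + 5                # one final carry-propagate add
    print(s, c)                               # 15 10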
Abstract:
A systematic design methodology is described for the rapid derivation of VLSI architectures for implementing high-performance recursive digital filters, particularly ones based on most significant digit (msd) first arithmetic. The method has been derived by undertaking theoretical investigations of msd first multiply-accumulate algorithms and by deriving important relationships governing the dependencies between circuit latency, levels of pipelining and the range and number representations of filter operands. The techniques described are general and can be applied to both bit parallel and bit serial circuits, including those based on on-line arithmetic. The method is illustrated by applying it to the design of a number of highly pipelined bit parallel IIR and wave digital filter circuits. It is shown that established architectures, which were previously designed using heuristic techniques, can be derived directly from the equations described.
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. The problems of latency in feedback loops can be significantly reduced if computations are organized most significant, as opposed to least significant, bit first and if the results are fed back as soon as they are formed. The result is that chips can be designed which offer significantly higher sampling rates than otherwise can be obtained using conventional methods. How these concepts can be extended to the more challenging problem of WDFs is discussed. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit first arithmetic.
Abstract:
This paper aims to describe the development of a 3D breast photography service managed by the Medical Illustration Department, in the Belfast Health and Social Care Trust, Northern Ireland. Dedicated 3D breast photography equipment was installed in Medical Illustration for 18 months. Women were referred for a variety of indications including pre- and post-surgical assessment. A dedicated 3D breast photography protocol was developed locally and this requires further refinement to allow reproducibility in other centres. There are image/data artefacts associated with this technology and special techniques are required to reduce these. Specialist software is necessary for clinicians and scientists to use 3D breast photography data in surgical planning and measurement of surgical outcome.