902 results for Digital techniques


Relevance:

30.00%

Publisher:

Abstract:

Much of the contemporary concert (i.e. “classical”) saxophone literature has connections to compositional styles found in other genres such as jazz, rock, or pop. Although improvisation is a dominant compositional device in jazz, improvisation as a performance technique is not confined to a single genre. This study looks at twelve concert saxophone pieces grouped into three primary categories of compositional technique: 1) those containing unmeasured phrases, 2) those bearing a limited relation to improvisation but a close relationship to jazz styles, and 3) those containing jazz improvisation. In concert saxophone music, specific crossover pieces use the compositional technique of jazz improvisation. Four examples of such jazz works were composed by Dexter Morrill, Phil Woods, Bill Dobbins, and Ramon Ricker, all of which provide a foundation for this study. In addition, pieces containing varying degrees of unmeasured phrases are highlighted. As this dissertation project is based in performance, the twelve pieces were divided into three recitals that outline a pedagogical sequence. Any concert saxophonist interested in developing jazz improvisational skills can use the pieces in this study as a method to progress toward the performance of pieces that merge jazz improvisation with the concert format. The three compositional techniques examined here provide the performer with the material necessary to develop this individualized approach to improvisation. Specific compositional and performance techniques vary depending on the stylistic content; this study examines improvisation in the context of concert saxophone repertoire.

Relevance:

30.00%

Publisher:

Abstract:

Ethnomathematical research, together with digital technologies (WebQuest) and Drama-in-Education (DiE) techniques, can create a fruitful learning environment in a mathematics classroom (a hybrid "third space"), enabling increased student participation and higher levels of cognitive engagement. This article examines how ethnomathematical ideas processed within the experiential environment established by the Drama-in-Education techniques challenged students' conceptions of the nature of mathematics, the ways in which students engaged with mathematics learning using mind and body, and the 'dialogue' that developed between the Discourse situated in a particular practice and the classroom Discourse of mathematics teaching. The analysis focuses on an interdisciplinary project based on an ethnomathematical study of a designing tradition carried out by the researchers themselves, involving a search for informal mathematics and its connections with context and culture. Tenth-grade students in a public school in Athens were introduced to the mathematics content via an original WebQuest based on this ethnomathematical study, and geometry content was further introduced and mediated using the DiE techniques. Students contributed to an unfolding dialogue between formal and informal knowledge, renegotiating both mathematical concepts and their perception of mathematics as a discipline.

Relevance:

30.00%

Publisher:

Abstract:

The increasing availability of large, detailed digital representations of the Earth’s surface demands the application of objective and quantitative analyses. Given recent advances in understanding the mechanisms of formation of linear bedform features across a range of environments, objective measurement of their wavelength, orientation, crest and trough positions, height and asymmetry is highly desirable. These parameters are also useful when determining observation-based parameters for applications such as numerical modelling, surface classification and sediment transport pathway analysis. Here, we (i) adapt and extend extant techniques to provide a suite of semi-automatic tools that calculate crest orientation, wavelength, height, asymmetry direction and asymmetry ratios of bedforms, and then (ii) undertake sensitivity tests on synthetic data, increasingly complex seabeds and a very large-scale (39 000 km2) aeolian dune system. The automated results are compared with traditional, manually derived measurements at each stage. This new approach successfully analyses different types of topographic data (from aeolian and marine environments) from a range of sources, with tens of millions of data points processed in a semi-automated and objective manner within minutes rather than hours or days. The results of these analyses show significant variability in all measurable parameters in what might otherwise be considered uniform bedform fields. For example, the dunes of the Rub’ al Khali on the Arabian peninsula are shown to exhibit deviations in dimensions from global trends. Morphological and dune asymmetry analysis of the Rub’ al Khali suggests parts of the sand sea may be adjusting to a wind regime that has changed since their formation 100 to 10 ka BP.
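As a rough illustration of the kind of semi-automatic bedform measurement described above (a minimal sketch, not the tools developed in the study), the following Python fragment finds crests and troughs along a one-dimensional topographic profile and derives a wavelength, height and simple asymmetry ratio for each bedform; the synthetic profile and the function name bedform_metrics are assumptions made for the example.

    import numpy as np
    from scipy.signal import find_peaks

    def bedform_metrics(x, z, min_separation=5.0):
        """Crest/trough positions, wavelength, height and asymmetry of a 1D profile.

        x : along-profile distance (same units as min_separation)
        z : elevation samples
        """
        dx = np.mean(np.diff(x))
        dist = max(1, int(min_separation / dx))      # minimum crest spacing in samples
        crests, _ = find_peaks(z, distance=dist)     # local maxima = crest positions
        troughs, _ = find_peaks(-z, distance=dist)   # local minima = trough positions

        metrics = []
        for c0, c1 in zip(crests[:-1], crests[1:]):
            between = troughs[(troughs > c0) & (troughs < c1)]
            if between.size == 0:
                continue
            t = between[0]
            wavelength = x[c1] - x[c0]
            height = z[c0] - z[t]
            flank_a = x[t] - x[c0]                   # horizontal length of one flank
            flank_b = x[c1] - x[t]                   # and of the other
            asym = flank_a / flank_b if flank_b > 0 else np.inf
            metrics.append((x[c0], wavelength, height, asym))
        return metrics

    # Synthetic, slightly asymmetric dune field for demonstration only.
    x = np.linspace(0, 500, 2001)
    z = 2.0 * np.sin(2 * np.pi * x / 50) + 0.5 * np.sin(4 * np.pi * x / 50)
    for crest_x, wl, h, a in bedform_metrics(x, z, min_separation=20.0)[:3]:
        print(f"crest at {crest_x:6.1f} m  wavelength {wl:5.1f} m  height {h:4.2f} m  asymmetry {a:4.2f}")

Crest orientation and asymmetry direction would require the full two-dimensional surface; this sketch is limited to along-profile quantities.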

Relevance:

30.00%

Publisher:

Abstract:

A newly introduced inverse class-E power amplifier (PA) was designed, simulated, fabricated, and characterized. The PA operated at 2.26 GHz and delivered 20.4-dBm output power with a peak drain efficiency (DE) of 65% and a power gain of 12 dB. Broadband performance was achieved across a 300-MHz bandwidth with DE better than 50% and 1-dB output-power flatness. The concept of enhanced injection predistortion, which can selectively suppress unwanted sub-frequency components and is therefore suitable for minimizing memory effects, is described, together with a new technique that enables accurate measurement of the phase of the third-order intermodulation (IM3) products. A robust iterative computational algorithm proposed in this paper dispenses with the need for manual tuning of the amplitude and phase of the injected IM3 signals, as commonly employed in previous publications. The constructed inverse class-E PA was driven with a non-constant-envelope 16-QAM (quadrature amplitude modulation) signal and was linearized using a combined lookup table (LUT) and enhanced injection technique, so that the superior properties of each technique are exploited simultaneously. The proposed method resulted in a measured error vector magnitude of 0.7% (rms) and a 34-dB improvement in adjacent channel leakage power ratio, which was 10 dB better than that achieved using the LUT predistortion alone.
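The iterative injection idea can be sketched numerically as follows (a toy model under assumed coefficients and frequencies, not the algorithm or measurement set-up of the paper): a memoryless cubic PA model is driven by two tones, the IM3 phasor at the output is measured by projecting onto the IM3 frequency, and the complex amplitude of the injected IM3 tone is updated until the residual product is suppressed, with no manual tuning of amplitude or phase.

    import numpy as np

    n = 4096                        # samples per record
    t = np.arange(n)
    k1, k2 = 400, 440               # two-tone frequencies, in cycles per record (assumed)
    k_im3 = 2 * k1 - k2             # lower third-order intermodulation product

    def pa(v):
        """Memoryless PA model: linear gain plus a weak cubic term (assumed)."""
        return 10.0 * v - 2.0 * v ** 3

    def phasor(sig, k):
        """Complex amplitude of the component of sig at k cycles per record."""
        return 2.0 / n * np.sum(sig * np.exp(-2j * np.pi * k * t / n))

    x = 0.2 * (np.cos(2 * np.pi * k1 * t / n) + np.cos(2 * np.pi * k2 * t / n))

    c = 0.0 + 0.0j                  # complex amplitude of the injected IM3 tone
    gain = 10.0                     # small-signal gain estimate used by the update
    for it in range(10):
        injected = np.real(c * np.exp(2j * np.pi * k_im3 * t / n))
        y = pa(x + injected)
        r = phasor(y, k_im3)        # residual IM3 at the PA output
        print(f"iteration {it}: |IM3| = {abs(r):.3e}")
        if abs(r) < 1e-9:
            break
        c -= r / gain               # cancel the residual through the estimated gain

Because the injected amplitude and phase are driven by the measured residual, the loop replaces manual tuning; a real set-up would measure the IM3 phasor with the phase-measurement technique described in the paper rather than with an idealized model.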

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to investigate the occupational hazards within the tanning industry caused by contaminated dust. A qualitative assessment of the risk of human exposure to dust was made throughout a commercial Kenyan tannery. Using this information, high-risk points in the processing line were identified and dust sampling regimes developed. An optical set-up using microscopy and digital imaging techniques was used to determine dust particle numbers and size distributions. The results showed that chemical handling was the most hazardous area (12 mg m(-3)). A Monte Carlo method was used to estimate the concentration of dust in the air throughout the tannery during an 8 h working day. This showed that the high-risk area of the tannery was associated with mean dust concentrations greater than the limits stipulated in UK Statutory Instrument 2002 No. 2677, exceeding 10 mg m(-3) (inhalable dust limit) and 4 mg m(-3) (respirable dust limit). This has implications for the provision of personal protective equipment (PPE) to tannery workers for the mitigation of occupational risk.
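To make the Monte Carlo step concrete, the sketch below is illustrative only: the lognormal distribution parameters and the respirable mass fraction are assumptions, while the 10 and 4 mg m(-3) limits are those quoted above. It draws hourly dust concentrations, averages them over an 8 h shift, and estimates how often the inhalable and respirable limits would be exceeded.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed lognormal parameters for the dust concentration (mg m^-3) at a
    # sampling point; in practice these would be fitted to the measured
    # particle-count and size-distribution data described above.
    GEO_MEAN = 8.0             # geometric mean concentration, mg m^-3
    GEO_SD = 1.8               # geometric standard deviation (dimensionless)

    INHALABLE_LIMIT = 10.0     # mg m^-3 (UK SI 2002 No. 2677)
    RESPIRABLE_LIMIT = 4.0     # mg m^-3
    RESPIRABLE_FRACTION = 0.3  # assumed fraction of dust mass in the respirable range

    n_days = 100_000
    hours = 8

    # One concentration draw per hour of the shift, averaged over the 8 h day.
    conc = rng.lognormal(mean=np.log(GEO_MEAN), sigma=np.log(GEO_SD), size=(n_days, hours))
    daily_mean = conc.mean(axis=1)

    p_inhalable = np.mean(daily_mean > INHALABLE_LIMIT)
    p_respirable = np.mean(daily_mean * RESPIRABLE_FRACTION > RESPIRABLE_LIMIT)

    print(f"mean 8 h concentration: {daily_mean.mean():.1f} mg m^-3")
    print(f"P(inhalable limit exceeded):  {p_inhalable:.2%}")
    print(f"P(respirable limit exceeded): {p_respirable:.2%}")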

Relevance:

30.00%

Publisher:

Abstract:

The application of fine grain pipelining techniques in the design of high performance Wave Digital Filters (WDFs) is described. It is shown that significant increases in the sampling rate of bit parallel circuits can be achieved using most significant bit (msb) first arithmetic. A novel VLSI architecture for implementing two-port adaptor circuits is described which embodies these ideas. The circuit in question is highly regular, uses msb first arithmetic and is implemented using simple carry-save adders. © 1992 Kluwer Academic Publishers.
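A small sketch of the carry-save principle mentioned above may help (a generic illustration, not the adaptor architecture of the paper): three operands are reduced to a sum word and a carry word with no carry propagation between bit positions, and only one final carry-propagate addition is needed, which is why such adders suit highly pipelined datapaths.

    def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
        """Reduce three operands to (sum, carry) with no carry propagation.

        Each bit position is handled independently: sum bit = a ^ b ^ c,
        carry bit = majority(a, b, c) shifted one place left.
        """
        s = a ^ b ^ c
        carry = ((a & b) | (a & c) | (b & c)) << 1
        return s, carry

    # Usage: accumulate several operands without propagating carries until the end.
    operands = [0b1011, 0b0110, 0b1110, 0b0101]
    s, c = carry_save_add(operands[0], operands[1], operands[2])
    s, c = carry_save_add(s, c, operands[3])
    result = s + c            # one final carry-propagate addition
    assert result == sum(operands)
    print(bin(result), result)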

Relevance:

30.00%

Publisher:

Abstract:

A systematic design methodology is described for the rapid derivation of VLSI architectures for implementing high performance recursive digital filters, particularly ones based on most significant digit (msd) first arithmetic. The method has been derived by undertaking theoretical investigations of msd first multiply-accumulate algorithms and by deriving important relationships governing the dependencies between circuit latency, levels of pipelining and the range and number representations of filter operands. The techniques described are general and can be applied to both bit parallel and bit serial circuits, including those based on on-line arithmetic. The method is illustrated by applying it to the design of a number of highly pipelined bit parallel IIR and wave digital filter circuits. It is shown that established architectures, which were previously designed using heuristic techniques, can be derived directly from the equations described.
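The constraint the methodology addresses comes from the recursive structure itself. The fragment below is not the design method, just the first-order recursion that creates the problem: each output depends on the previous one, so any latency added inside the multiply-accumulate feedback loop directly limits the achievable sampling rate, which is why the relationships between latency, level of pipelining and operand representation matter.

    from typing import Sequence

    def iir_first_order(x: Sequence[float], a: float, b: float) -> list[float]:
        """First-order recursive filter  y[n] = b*x[n] + a*y[n-1].

        The feedback term a*y[n-1] must be fully computed before y[n] can start,
        which is exactly the loop-latency constraint discussed above: pipeline
        stages inserted inside this multiply-accumulate loop add delay that
        cannot be hidden, unlike in purely feed-forward (FIR) structures.
        """
        y_prev = 0.0
        out = []
        for xn in x:
            y_prev = b * xn + a * y_prev
            out.append(y_prev)
        return out

    print(iir_first_order([1.0, 0.0, 0.0, 0.0], a=0.5, b=1.0))  # impulse response 1, 0.5, 0.25, ...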

Relevance:

30.00%

Publisher:

Abstract:

The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. The problems of latency in feedback loops can be significantly reduced if computations are organized most significant, as opposed to least significant, bit first and if the results are fed back as soon as they are formed. The result is that chips can be designed which offer significantly higher sampling rates than otherwise can be obtained using conventional methods. How these concepts can be extended to the more challenging problem of WDFs is discussed. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit first arithmetic.

Relevance:

30.00%

Publisher:

Abstract:

The past decade has witnessed an unprecedented growth in the amount of available digital content, and its volume is expected to continue to grow over the next few years. Unstructured text data generated from web and enterprise sources form a large fraction of such content. Much of it contains large volumes of reusable data, such as solutions to frequently occurring problems and general know-how that may be reused in appropriate contexts. In this work, we address issues around leveraging unstructured text data from sources as diverse as the web and the enterprise within the Case-based Reasoning framework. Case-based Reasoning (CBR) provides a framework and methodology for the systematic reuse of historical knowledge that is available in the form of problem-solution pairs, in solving new problems. Here, we consider possibilities for enhancing Textual CBR systems under three main themes: procurement, maintenance and retrieval. We adapt and build upon state-of-the-art techniques from data mining and natural language processing in addressing the various challenges therein. Under procurement, we investigate the problem of extracting cases (i.e., problem-solution pairs) from data sources such as incident/experience reports. We develop case-base maintenance methods specifically tuned to text, targeted towards retaining solutions such that the utility of the filtered case base in solving new problems is maximized. Further, we address the problem of query suggestions for textual case bases and show that exploiting the problem-solution partition can enhance retrieval effectiveness by prioritizing more useful query suggestions. Additionally, we illustrate interpretable clustering as a tool to drill down into domain-specific text collections (since CBR systems are usually very domain specific) and develop techniques for improved similarity assessment in social media sources such as microblogs. Through extensive empirical evaluations, we illustrate the improvements that we are able to achieve over state-of-the-art methods for the respective tasks.
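The retrieval theme can be illustrated with a minimal textual-CBR baseline (a sketch only; the toy cases and the use of scikit-learn TF-IDF are assumptions, not the techniques developed in this work): cases are problem-solution pairs, only the problem texts are indexed, and a new problem retrieves the most similar cases so that their solutions can be considered for reuse.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy case base of (problem, solution) pairs, e.g. mined from incident reports.
    cases = [
        ("printer does not respond over the network", "restart the print spooler service"),
        ("laptop battery drains very quickly", "reduce screen brightness and disable background apps"),
        ("cannot connect to the corporate vpn", "update the vpn client and re-import the profile"),
    ]

    problems = [p for p, _ in cases]
    vectorizer = TfidfVectorizer()
    problem_vectors = vectorizer.fit_transform(problems)   # index only the problem side

    def retrieve(query: str, k: int = 2):
        """Return the k most similar cases to the query, by cosine similarity."""
        q = vectorizer.transform([query])
        sims = cosine_similarity(q, problem_vectors).ravel()
        ranked = sims.argsort()[::-1][:k]
        return [(float(sims[i]), cases[i]) for i in ranked]

    for score, (problem, solution) in retrieve("vpn connection keeps failing"):
        print(f"{score:.2f}  problem: {problem}\n      reuse: {solution}")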

Relevance:

30.00%

Publisher:

Abstract:

Passive intermodulation (PIM) often limits the performance of communication systems with analog and digitally modulated signals, especially systems supporting multiple carriers. Since the origins of the apparently multiple physical sources of nonlinearity causing PIM are not fully understood, behavioral models are frequently used to describe the process of PIM generation. In this paper a polynomial model of memoryless nonlinearity is deduced from PIM measurements of a microstrip line with distributed nonlinearity, using two-tone CW signals. The analytical model of the nonlinearity is incorporated in Keysight Technologies' ADS simulator to evaluate the metrics of signal fidelity in the receive band for analog and digitally modulated signals. PIM-induced distortion and cross-band interference with modulated signals are compared to those with two-tone CW signals. It is shown that conventional metrics can be applied to quantify the effect of distributed nonlinearities on signal fidelity. It is found that the two-tone CW test provides a worst-case estimate of cross-band interference for two-carrier modulated signals, whereas with a three-carrier signal PIM interference in the receive band is noticeably overestimated. The simulated constellation diagrams for QPSK signals demonstrate that PIM interference exhibits the distinctive signatures of correlated distortion; this indicates that there are opportunities for mitigating PIM interference and that it cannot be treated as noise. An interesting result is that PIM distortion on a transmission line results in asymmetrical regrowth of output PIM interference for modulated signals.
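The modelling step can be reproduced in miniature (a lumped, memoryless sketch with assumed coefficients, not the model fitted to the microstrip measurements): a polynomial nonlinearity driven by two CW tones generates third- and fifth-order products at 2f1-f2, 2f2-f1 and 3f1-2f2, which is where PIM interference lands in a receive band.

    import numpy as np

    n = 4096
    t = np.arange(n)
    k1, k2 = 500, 560              # two carrier frequencies, in FFT bins (assumed)

    # Memoryless polynomial nonlinearity (assumed coefficients); the odd-order
    # terms generate the passive intermodulation products.
    def pim_nonlinearity(v):
        return v - 1e-3 * v ** 3 - 1e-5 * v ** 5

    x = np.cos(2 * np.pi * k1 * t / n) + np.cos(2 * np.pi * k2 * t / n)
    y = pim_nonlinearity(x)

    spectrum = np.abs(np.fft.rfft(y)) / n
    for label, k in [("carrier f1", k1), ("carrier f2", k2),
                     ("3rd-order PIM 2f1-f2", 2 * k1 - k2),
                     ("3rd-order PIM 2f2-f1", 2 * k2 - k1),
                     ("5th-order PIM 3f1-2f2", 3 * k1 - 2 * k2)]:
        level_db = 20 * np.log10(spectrum[k] + 1e-15)
        print(f"{label:22s} bin {k:4d}: {level_db:7.1f} dB")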

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to contribute to the study and analysis of the factors related to digital radiographic image acquisition techniques, diagnostic quality and radiation dose management in digital radiology systems. The methodology is organized into two components. The observational component is based on a retrospective, cross-sectional study design. Data collected from CR and DR systems allowed the evaluation of the technical exposure parameters used in digital radiology, the absorbed dose and the detector exposure index. Within this methodological classification (retrospective and cross-sectional), it was also possible to carry out studies of diagnostic quality in digital systems: observer studies based on images archived in the PACS. The experimental component of the thesis was based on phantom experiments to evaluate the relationship between dose and image quality. These experiments characterized the physical properties of digital radiology systems by manipulating the variables related to the exposure parameters and assessing their influence on dose and image quality. Using a contrast-detail phantom, anthropomorphic phantoms and an animal-bone phantom, it was possible to obtain objective measures of diagnostic quality and of object detectability. Several conclusions can be drawn from this investigation. Quantitative measures of detector performance are the basis of the optimization process, allowing the physical parameters of digital radiology systems to be measured and determined. The exposure parameters used in clinical practice show that practice is not in line with the European reference framework. There is a need to evaluate, improve and implement a reference standard for the optimization process, through new good-practice guidelines adjusted to digital systems. Exposure parameters influence patient dose, but the perceived quality of the digital image does not appear to be affected by variations in exposure. The studies carried out with both phantom and patient images show that overexposure is a potential risk in digital radiology. The assessment of diagnostic image quality showed that no substantial degradation of image quality was observed when the dose was reduced. The study and implementation of new diagnostic reference levels adjusted to digital radiology systems is proposed. As a contribution of the thesis, a model (STDI) is proposed for the optimization of digital radiology systems.
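The dose versus image-quality trade-off studied in the experimental component is typically quantified with simple objective metrics. The sketch below is not the STDI model; it is a hedged illustration on a synthetic contrast-detail-style image, showing how the contrast-to-noise ratio (CNR) of a low-contrast disc grows with detector exposure, the kind of quantity that supports dose-reduction decisions. The image model, contrast and exposure values are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_detail_image(mean_counts: float, detail_contrast: float = 0.05,
                              size: int = 128, detail_radius: int = 10) -> np.ndarray:
        """Synthetic detector image: uniform background with a low-contrast disc,
        with Poisson (quantum) noise whose level depends on the exposure."""
        yy, xx = np.mgrid[:size, :size]
        disc = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= detail_radius ** 2
        expected = mean_counts * (1.0 + detail_contrast * disc)
        return rng.poisson(expected).astype(float)

    def cnr(img: np.ndarray, detail_radius: int = 10) -> float:
        """Contrast-to-noise ratio between the central disc and the background."""
        size = img.shape[0]
        yy, xx = np.mgrid[:size, :size]
        disc = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= detail_radius ** 2
        background = ~disc
        return (img[disc].mean() - img[background].mean()) / img[background].std()

    for exposure in (200, 800, 3200):        # relative detector exposures (assumed)
        image = simulate_detail_image(exposure)
        print(f"exposure {exposure:5d}: CNR = {cnr(image):.2f}")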

Relevance:

30.00%

Publisher:

Abstract:

The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a number of undesirable aspects of real-time networks, such as communication latency, latency jitter and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks, from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can be from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveal the advantages of this approach. The approach is then expanded to accommodate the techniques of contemporary optimal control. However, unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two data-aggregation-based approaches, optimal control in packet-dropping fieldbuses is studied using generalized actuator output functions. This study ends with the development of a new optimal controller, together with the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance. The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm allows such classical co-design algorithms to be executed without causing an output oscillation that increases the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than existing ones, for generating task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in that it can schedule systems that its predecessor algorithms cannot. The Thesis also presents a mechanism that allows multi-path routing to be performed in wireless sensor networks while ensuring that no value is counted in duplicate. This technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications.
As mentioned before, this Thesis is centered on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two addressing the problem from an architectural standpoint and the third from more theoretical grounds. The fourth approach ensures that approaches found in the literature that pursue goals similar to those of this Thesis can do so without causing other problems that may invalidate the solutions in question. Finally, the Thesis presents an approach centered on the efficient generation of the transmission patterns used in the aforementioned approaches.
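The first of the approaches above, controller-to-actuator messages carrying estimates of several future control values, can be sketched as follows (a minimal simulation with an assumed scalar plant, gain and drop probability, not the architecture developed in the Thesis): each delivered message refills the actuator's buffer, and when a message is dropped the actuator falls back on the next value of the most recently received sequence.

    import random

    random.seed(1)

    A, B = 1.02, 0.1          # simple scalar plant  x[k+1] = A*x[k] + B*u[k]  (assumed)
    K = 5.0                   # state-feedback gain  u = -K*x                  (assumed)
    DROP_PROB = 0.3           # probability that a controller-to-actuator message is lost
    HORIZON = 4               # number of future control values packed in each message

    def control_message(x):
        """Predict the next HORIZON control values by simulating the nominal plant."""
        u_seq, xp = [], x
        for _ in range(HORIZON):
            u = -K * xp
            u_seq.append(u)
            xp = A * xp + B * u
        return u_seq

    x = 1.0
    buffer, age = [0.0] * HORIZON, 0
    for k in range(20):
        msg = control_message(x)               # controller side (assumes x is known)
        if random.random() > DROP_PROB:        # message delivered
            buffer, age = msg, 0
        else:                                  # message dropped: reuse buffered estimates
            age = min(age + 1, HORIZON - 1)
        u = buffer[age]
        x = A * x + B * u
        print(f"k={k:2d} drop_age={age} u={u:7.3f} x={x:7.4f}")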

Relevance:

30.00%

Publisher:

Abstract:

Optical networks are under constant evolution. The growing demand for dynamism requires devices that can accommodate different types of traffic, hence the study of transparent optical networks. This approach makes optical networks more "elegant", through a more efficient use of network resources. In this thesis, the author proposes devices intended as alternative approaches both within the state of the art of these technologies and in the way these technologies fit into transparent optical networks. Given that full transparency is difficult to achieve with current technology (it may become possible with more mature optical computing), the author proposes techniques with different levels of transparency. On the topic of optical network performance, the author proposes two techniques for monitoring chromatic dispersion with different levels of transparency. The technique proposed in Chapter 3 makes more sense for long-haul optical transmission links and high transmission rates, not only because of its moderate complexity but also because of its potentially moderate-to-high cost; however, it is applicable to several modulation formats, particularly those with a prominent clock component. In Chapter 4 the level of transparency was not tested for various modulation formats, but some transparency is achieved by not adding any electrical device after the receiver other than an analog-to-digital converter. This allows the technique to operate at high transmission rates, in excess of 100 Gbit/s, if electro-optical asynchronous sampling is used before the optical receiver; a low-cost, low-bandwidth photo-detector can then be used. Chapter 5 demonstrates a technique for simultaneously monitoring multiple impairments of the optical network by generating novel performance analysis diagrams and by using artificial neural networks. In Chapter 6 the author demonstrates an all-optical technique for controlling the optical state of polarization and gives an example of how all-optical signal processing can fully cooperate with optical performance monitoring.
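The neural-network element of the Chapter 5 technique can be sketched generically (everything below, including the synthetic relationship between diagram-derived features and impairments, is an assumption made so the example runs; it is not the thesis' feature set): a small regressor is trained to map features extracted from a performance-analysis diagram to impairment values such as residual chromatic dispersion and OSNR.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Synthetic data set: impairments and the "diagram features" they would produce.
    # The feature model is invented purely for illustration; real features would be
    # extracted from the measured performance-analysis diagrams.
    n_samples = 2000
    cd = rng.uniform(0, 800, n_samples)       # residual chromatic dispersion, ps/nm
    osnr = rng.uniform(10, 30, n_samples)     # optical signal-to-noise ratio, dB
    features = np.column_stack([
        np.exp(-cd / 400) + 0.02 * rng.standard_normal(n_samples),   # eye-opening proxy
        1.0 / osnr + 0.002 * rng.standard_normal(n_samples),         # noise-spread proxy
        (cd / 800) * (30.0 / osnr) + 0.02 * rng.standard_normal(n_samples),
    ])
    targets = np.column_stack([cd / 800.0, (osnr - 10.0) / 20.0])    # normalized impairments

    X_train, X_test, y_train, y_test = train_test_split(features, targets, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                       random_state=0))
    model.fit(X_train, y_train)
    print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))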

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance:

30.00%

Publisher:

Abstract:

ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in such a system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work we develop compositional techniques for the automated scheduling of such partitions and processes. At present, system designers schedule partitions manually, based on interactions with the partition vendors. This approach is not only time-consuming but can also result in under-utilization of resources. In contrast, the technique proposed in this paper is a principled approach for scheduling ARINC-653 partitions and should therefore facilitate system integration.
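As a toy illustration of automated partition scheduling (a sketch only, not the compositional analysis proposed in the paper), the fragment below abstracts each ARINC-653 partition as a period and a budget in time slots, checks the necessary utilization condition, and lays out the major frame slot by slot, always serving the released partition with the earliest deadline. The partition parameters are assumptions.

    from math import gcd
    from functools import reduce

    # Each partition abstracted as (name, period, budget) in integer time slots (assumed).
    partitions = [("P1", 25, 5), ("P2", 50, 10), ("P3", 100, 20)]

    def lcm(a, b):
        return a * b // gcd(a, b)

    major_frame = reduce(lcm, (p for _, p, _ in partitions))
    utilization = sum(b / p for _, p, b in partitions)
    assert utilization <= 1.0, "partition set is trivially unschedulable"

    # Build the major-frame schedule slot by slot, earliest-deadline-first among
    # partition instances that still have budget left in their current period.
    schedule = []
    remaining = {name: 0 for name, _, _ in partitions}
    for t in range(major_frame):
        for name, period, budget in partitions:
            if t % period == 0:
                remaining[name] = budget          # new period: replenish the budget
        ready = [(t // p * p + p, n)              # (absolute deadline, name)
                 for n, p, b in partitions if remaining[n] > 0]
        if ready:
            _, chosen = min(ready)
            remaining[chosen] -= 1
            schedule.append(chosen)
        else:
            schedule.append("idle")

    print("major frame length:", major_frame, "slots, utilization:", round(utilization, 2))
    print(" ".join(schedule[:25]))                # first 25 slots of the partition timeline

In practice the compositional techniques would also have to account for the process-level blocking and preemption overheads inside each partition window, which this slot-level sketch ignores.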