895 results for "Fundamental techniques of localization"


Relevance:

100.00%

Publisher:

Abstract:

The oxidation of lipids has long been a topic of interest in biological and food sciences, and the fundamental principles of non-enzymatic free radical attack on phospholipids are well established, although questions about the details of the mechanisms remain. The number of end products that are formed following the initiation of phospholipid peroxidation is large, and it is continually growing as new structures of oxidized phospholipids are elucidated. Common products are phospholipids with esterified isoprostane-like structures and chain-shortened products containing hydroxy, carbonyl or carboxylic acid groups; the carbonyl-containing compounds are reactive and readily form adducts with proteins and other biomolecules. Phospholipids can also be attacked by reactive nitrogen and chlorine species, further expanding the range of products to nitrated and chlorinated phospholipids. Key to understanding the mechanisms of oxidation is the development of advanced and sensitive technologies that enable structural elucidation. Tandem mass spectrometry has proved invaluable in this respect and is generally the method of choice for structural work. A number of studies have investigated whether individual oxidized phospholipid products occur in vivo, and mass spectrometry techniques have been instrumental in detecting a variety of oxidation products in biological samples such as atherosclerotic plaque material, brain tissue, intestinal tissue and plasma, although relatively few have achieved an absolute quantitative analysis. The levels of oxidized phospholipids in vivo are a critical question, as there is now substantial evidence that many of these compounds are bioactive and could contribute to pathology. The challenges for the future will be to adopt lipidomic approaches to map the profile of oxidized phospholipid formation in different biological conditions, and to relate this to their effects in vivo.
This article is part of a Special Issue entitled: Oxidized phospholipids-their properties and interactions with proteins.


Golfers, coaches and researchers alike have keyed in on golf putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment and the putting action phase), the putting action phase has attracted the most attention from coaches, players and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers' vision strategies with eye tracking equipment. Unfortunately, this research suffers from a number of shortcomings, which limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective of this thesis was to re-evaluate golfers' putting vision strategies using binocular eye tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and success. In order to facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers' binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer's vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers' vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research has been undertaken in order to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.


Internationalization of software, as a prerequisite for localization, is usually taken into account during the early phases of the software development life-cycle. However, the need to adapt software applications to different languages and cultural settings can appear once the application is finished, and even once it is on the market. In these cases, software localization implies a high cost in time and resources. This paper shows a real case of an existing software application, designed and developed without taking future localization needs into account, whose architecture and source code were modified to allow straightforward adaptation to new languages. The use of standard languages and advanced programming languages permitted the authors to adapt the software in a simple and straightforward manner.
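The core refactoring the paper describes, separating user-visible strings from program logic so new languages can be added without touching the code, can be sketched as follows. The catalogue structure, keys, and locale codes below are illustrative assumptions, not the application discussed in the paper:

```python
# Minimal sketch of string externalization for late-stage localization.
# Adding a language means adding a catalogue entry, not editing code.

CATALOGUES = {
    "en": {"greeting": "Welcome, {name}!", "exit": "Exit"},
    "es": {"greeting": "¡Bienvenido, {name}!", "exit": "Salir"},
}

DEFAULT_LOCALE = "en"

def translate(key: str, locale: str, **kwargs) -> str:
    """Look up a message by key, falling back to the default locale."""
    catalogue = CATALOGUES.get(locale, CATALOGUES[DEFAULT_LOCALE])
    template = catalogue.get(key, CATALOGUES[DEFAULT_LOCALE][key])
    return template.format(**kwargs)

print(translate("greeting", "es", name="Ana"))   # ¡Bienvenido, Ana!
print(translate("exit", "fr"))                   # unknown locale falls back to "Exit"
```

Production systems typically store such catalogues in external resource files (e.g., gettext `.po` files or XML bundles) rather than in code, which is precisely what makes late localization cheap.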


We present a comprehensive experimental and theoretical study of the localization of light in photonic lattices realized in the time domain with a random optical potential. We show that localization occurs over the whole range of disorder strengths, in full agreement with Anderson localization in the 1D model. The influence of disorder on the mode structure is also discussed.
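The 1D Anderson model invoked here can be explored numerically in a few lines: diagonalize a tight-binding Hamiltonian with random on-site energies and measure the inverse participation ratio (IPR) of the eigenstates. The system size, disorder strength, and diagnostic below are illustrative assumptions, not the parameters of the experiment:

```python
import numpy as np

# 1D Anderson model: nearest-neighbour hopping plus random on-site disorder.
# Localization is diagnosed via the inverse participation ratio (IPR):
# extended states give IPR ~ 1/N, localized states give IPR = O(1/xi).

rng = np.random.default_rng(0)
N, W = 200, 2.0                                   # sites, disorder strength
H = np.diag(rng.uniform(-W / 2, W / 2, N))        # random potential
H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # hopping

energies, states = np.linalg.eigh(H)              # columns are eigenstates
ipr = np.sum(np.abs(states) ** 4, axis=0)         # one value per eigenstate

# With disorder, the typical IPR sits well above the extended-state value 1/N.
print(f"mean IPR = {ipr.mean():.3f}, extended-state reference = {1 / N:.3f}")
```

Repeating this scan over several values of `W` reproduces the qualitative statement in the abstract: in 1D every nonzero disorder strength localizes the eigenstates.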


The Internet has become an integral part of our nation’s critical socio-economic infrastructure. With its heightened use and growing complexity, however, organizations are at greater risk of cyber crimes. To aid in the investigation of crimes committed on or via the Internet, a network forensics analysis tool pulls together needed digital evidence. It provides a platform for performing deep network analysis by capturing, recording and analyzing network events to find out the source of a security attack or other information security incidents. Existing network forensics work has mostly focused on the Internet and fixed networks. But the exponential growth and use of wireless technologies, coupled with their unprecedented characteristics, necessitates the development of new network forensic analysis tools. This dissertation fostered the emergence of a new research field in cellular and ad-hoc network forensics. It was one of the first works to identify this problem and offer fundamental techniques and tools that laid the groundwork for future research. In particular, it introduced novel methods to record network incidents and report logged incidents. For recording incidents, location is considered essential to documenting network incidents. However, in network topology spaces, location cannot be measured due to the absence of a ‘distance metric’. Therefore, a novel solution was proposed to label the locations of nodes within network topology spaces, and then to authenticate the identity of nodes in ad hoc environments. For reporting logged incidents, a novel technique based on Distributed Hash Tables (DHTs) was adopted. Although the direct use of DHTs for reporting logged incidents would result in uncontrollable recursive traffic, a new mechanism was introduced that overcomes this recursion. These logging and reporting techniques aided forensics over cellular and ad-hoc networks, which in turn increased their ability to track and trace attacks to their source.
These techniques were a starting point for further research and development that would result in equipping future ad hoc networks with forensic components to complement existing security mechanisms.
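The DHT idea underlying the reporting scheme can be illustrated with consistent hashing: each incident-log key is hashed onto a ring and stored at the first node clockwise from it. The node names and key format are hypothetical; this shows the generic lookup mechanism, not the dissertation's anti-recursion technique:

```python
import bisect
import hashlib

# DHT-style placement of forensic incident logs on a consistent-hash ring.

def ring_hash(value: str) -> int:
    """Map a string to a position on the hash ring via SHA-1."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class IncidentDHT:
    def __init__(self, nodes):
        # Sorted ring of (hash, node) pairs.
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def responsible_node(self, incident_key: str) -> str:
        """Return the first node clockwise from the key's hash."""
        h = ring_hash(incident_key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

dht = IncidentDHT(["node-A", "node-B", "node-C"])
print(dht.responsible_node("incident:2024-07-04T12:00:00"))
```

Because placement depends only on the key and the node set, any node can locate a report without flooding the network, which is the property that makes DHTs attractive for distributed incident reporting.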


With the explosive growth of the volume and complexity of document data (e.g., news, blogs, web pages), it has become a necessity to semantically understand documents and deliver meaningful information to users. Work on these problems crosses data mining, information retrieval, and machine learning. For example, document clustering and summarization are two fundamental techniques for understanding document data and have attracted much attention in recent years. Given a collection of documents, document clustering aims to partition them into different groups to provide efficient document browsing and navigation mechanisms. One open question in document clustering is how to generate a meaningful interpretation for each document cluster produced by the clustering process. Document summarization is another effective technique for document understanding, which generates a summary by selecting sentences that deliver the major or topic-relevant information in the original documents. Improving automatic summarization performance and applying it to newly emerging problems are two valuable research directions. To assist people to capture the semantics of documents effectively and efficiently, the dissertation focuses on developing effective data mining and machine learning algorithms and systems for (1) integrating document clustering and summarization to obtain meaningful document clusters with summarized interpretation, (2) improving document summarization performance and building document understanding systems to solve real-world applications, and (3) summarizing the differences and evolution of multiple document sources.
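The clustering-plus-interpretation idea can be sketched compactly: group documents by TF-IDF cosine similarity, then label each cluster with its highest-weight terms. The toy corpus and the seeded one-pass assignment are deliberate simplifications for illustration, not the dissertation's algorithms:

```python
import math
from collections import Counter

docs = [
    "stock market trading prices rise",
    "market investors stock shares fall",
    "football match goal team win",
    "team players football season coach",
]

def tfidf(doc_tokens, all_docs):
    """TF-IDF weights for one tokenized document."""
    tf = Counter(doc_tokens)
    n = len(all_docs)
    return {t: tf[t] * math.log(n / sum(t in d for d in all_docs))
            for t in tf}

tokenized = [d.split() for d in docs]
vectors = [tfidf(t, tokenized) for t in tokenized]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Assign every document to the closer of two seed documents (0 and 2).
clusters = {0: [], 2: []}
for i, v in enumerate(vectors):
    seed = max(clusters, key=lambda s: cosine(v, vectors[s]))
    clusters[seed].append(i)

# Interpret each cluster by its top aggregate TF-IDF terms.
for seed, members in clusters.items():
    merged = Counter()
    for i in members:
        merged.update(vectors[i])
    label = [t for t, _ in merged.most_common(3)]
    print(f"cluster seeded by doc {seed}: docs {members}, label {label}")
```

The printed labels play the role of the "summarized interpretation" the dissertation aims for: a few cluster-characteristic terms a user can read instead of the raw partition.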



This work explores the idea of constitutional justice in Africa with a focus on constitutional interpretation in Ghana and Nigeria. The objective is to develop a theory of constitutional interpretation based upon a conception of law that allows the existing constitutions of Ghana and Nigeria to be construed by the courts as law in a manner that best serves the collective wellbeing of the people. The project involves an examination of both legal theory and substantive constitutional law. The theoretical argument will be applied to show how a proper understanding of the ideals of the rule of law and constitutionalism in Ghana and Nigeria necessitates the conclusion that socio-economic rights in those countries are constitutionally protected and judicially enforceable. The thesis argues that this conclusion follows from a general claim that constitutions should represent a ‘fundamental law’ and must be construed as an aspirational moral ideal for the common good of the people. The argument is essentially about the inherent character of ‘legality’ or the ‘rule of law.’ It weaves together ideas developed by Lon Fuller, Ronald Dworkin, T.R.S. Allan and David Dyzenhaus, as well as the strand of common law constitutionalism associated with Sir Edward Coke, to develop a moral sense of ‘law’ that transcends the confines of positive or explicit law while remaining inherently ‘legal’ as opposed to purely moral or political. What emerges is an unwritten fundamental law of reason located between pure morality or natural law on the one hand and strict, explicit, or positive law on the other. It is argued that this fundamental law is, or should be, the basis of constitutional interpretation, especially in transitional democracies like Ghana and Nigeria, and that it grounds constitutional protection for socio-economic rights.
Equipped with this theory of law, courts in developing African countries like Ghana and Nigeria will be in a better position to contribute towards developing a real sense of constitutional justice for Africa.


Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich attempts to bridge the divide between the expert and the general user, providing participants with the expertise they need to understand the results of mathematical fire modelling. The course and associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars. Those attending also gain experience in using the methods during "hands-on" tutorial and workshop sessions.
On completion of this short course, those participating should:
- be familiar with the concept of zone and field modelling;
- be familiar with zone and field model assumptions;
- have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling;
- be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and
- be able to interpret model predictions.
The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help in the prediction of fire development, and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
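The kind of hand calculation that underlies simple zone models can be illustrated with the McCaffrey-Quintiere-Harkleroad (MQH) correlation for the hot-gas-layer temperature rise in a naturally ventilated compartment. The room geometry, fire size, and wall conductance below are illustrative assumptions, not course material:

```python
import math

# MQH correlation: dT = 6.85 * (Q^2 / (A_v * sqrt(H_v) * h_k * A_T))^(1/3),
# with Q in kW, areas in m^2, H_v in m, h_k in kW/(m^2 K), dT in K.

Q = 500.0                         # heat release rate, kW (assumed)
A_v, H_v = 0.8 * 2.0, 2.0         # door vent area (m^2) and height (m)
# Internal bounding surface area of a 4 x 3 x 2.4 m room, minus the vent.
A_T = 2 * (4 * 3) + 2 * (4 * 2.4) + 2 * (3 * 2.4) - A_v
h_k = 0.030                       # effective wall conductance, kW/(m^2 K) (assumed)

dT = 6.85 * (Q ** 2 / (A_v * math.sqrt(H_v) * h_k * A_T)) ** (1.0 / 3.0)
print(f"estimated upper-layer temperature rise: {dT:.0f} K")
```

A zone model automates exactly this style of conservation-based estimate over time; a field (CFD) model replaces the two-layer assumption with a full spatial solution, which is why the course treats the two families separately.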


The structural build-up of fresh cement-based materials has a great impact on their structural performance after casting. Accordingly, the mixture design should be tailored to adapt the kinetics of build-up to the application at hand. The rate of structural build-up of cement-based suspensions at rest is a complex phenomenon affected by both physical and chemical structuration processes. The structuration kinetics are strongly dependent on the mixture's composition, testing parameters, as well as the shear history. Accurate measurements of build-up rely on the efficiency of the applied pre-shear regime to achieve an initial well-dispersed state, as well as on the applied stress during the liquid-solid transition. Studying the physical and chemical mechanisms of build-up of cement suspensions at rest can enhance the fundamental understanding of this phenomenon and therefore allow better control of the rheological and time-dependent properties of cement-based materials. The research focused on the use of dynamic rheology in investigating the kinetics of structural build-up of fresh cement pastes. The research program was conducted in three phases. The first phase was devoted to evaluating the dispersing efficiency of various disruptive shear techniques. The investigated shearing profiles included rotational, oscillatory, and a combination of both. The initial and final states of the suspension's structure, before and after disruption, were determined by applying a small-amplitude oscillatory shear (SAOS). The difference between the viscoelastic values before and after disruption was used to express the degree of dispersion. An efficient technique to disperse concentrated cement suspensions was developed. The second phase aimed to establish a rheometric approach to dissociate and monitor the individual physical and chemical mechanisms of build-up of cement paste.
In this regard, non-destructive dynamic rheometry was used to investigate the evolutions of both the storage modulus and the phase angle of inert calcium carbonate and cement suspensions. Two independent build-up indices were proposed. The structural build-up of various cement suspensions made with different cement contents, silica fume replacement percentages, and high-range water reducer dosages was evaluated using the proposed indices. These indices were then compared to the well-known thixotropic index (Athix.). Furthermore, the proposed indices were correlated to the decay in lateral pressure determined for various cement pastes cast in a pressure column. The proposed pre-shearing protocol and build-up indices (phases 1 and 2) were then used to investigate the effect of mixture parameters on the kinetics of structural build-up in phase 3. The investigated mixture parameters included cement content and fineness, alkali sulfate content, and temperature of the cement suspension. Zeta potential, calorimetric, and spectrometric measurements were performed to explore the corresponding microstructural changes in cement suspensions, such as inter-particle cohesion, rate of Brownian flocculation, and nucleation rate. A model linking the build-up indices and the microstructural characteristics was developed to predict the build-up behaviour of cement-based suspensions. The results showed that oscillatory shear may have a greater effect on dispersing concentrated cement suspensions than rotational shear. Furthermore, the increase in induced shear strain was found to enhance the breakdown of the suspension's structure until a critical point, after which thickening effects dominate. An effective dispersing method is then proposed.
This consists of applying a rotational shear around the transitional value between the linear and non-linear variations of the apparent viscosity with shear rate, followed by an oscillatory shear at the crossover shear strain and a high angular frequency of 100 rad/s. Investigating the evolution of the viscoelastic properties of inert calcite-based and cement suspensions allowed two independent build-up indices to be established. The first one (the percolation time) represents the rest time needed to form the elastic network. The second one (the rigidification rate) describes the increase in the stress-bearing capacity of the formed network due to cement hydration. In addition, results showed that combining the percolation time and the rigidification rate can provide deeper insight into the structuration process of cement suspensions. Furthermore, these indices were found to be well correlated to the decay in the lateral pressure of cement suspensions. The variations of the proposed build-up indices with mixture parameters showed that the percolation time is most likely controlled by the frequency of Brownian collisions, the distance between dispersed particles, and the intensity of cohesion between cement particles. On the other hand, a higher rigidification rate can be secured by increasing the number of contact points per unit volume of paste, the nucleation rate of cement hydrates, and the intensity of inter-particle cohesion.
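The two indices, percolation time and rigidification rate, can be sketched as operations on a storage-modulus record G'(t). The synthetic data, threshold factor, and fitting window below are illustrative assumptions, not the thesis protocol:

```python
# Extracting build-up indices from a G'(t) record:
#  - percolation time: onset of the elastic network (G' leaves its floor);
#  - rigidification rate: least-squares slope of G'(t) after percolation.

# Synthetic SAOS record: G' sits near a floor, then grows linearly.
times = [float(t) for t in range(0, 31)]                       # min
G = [5.0 if t < 8 else 5.0 + 12.0 * (t - 8) for t in times]    # Pa

floor = G[0]
# Percolation time: first time G' exceeds the floor by a set factor (assumed 2x).
t_perc = next(t for t, g in zip(times, G) if g > 2.0 * floor)

# Rigidification rate: slope of G'(t) over the post-percolation window.
pts = [(t, g) for t, g in zip(times, G) if t >= t_perc]
n = len(pts)
mt = sum(t for t, _ in pts) / n
mg = sum(g for _, g in pts) / n
rate = (sum((t - mt) * (g - mg) for t, g in pts)
        / sum((t - mt) ** 2 for t, _ in pts))                  # Pa/min

print(f"percolation time ~ {t_perc:.0f} min, rigidification rate ~ {rate:.1f} Pa/min")
```

On real data the floor and the post-percolation window would be chosen from the measured noise level and the linear regime of G'(t) rather than fixed constants.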


Hybrid halide perovskites have emerged as promising active constituents of next generation solution processable optoelectronic devices. During their assembling process, perovskite components undergo very complex dynamic equilibria, starting in solution and progressing throughout film formation. Finding a methodology to control and affect these equilibria, which are responsible for the unique morphological diversity observed in perovskite films, constitutes a fundamental step towards reproducible material processability. Here we propose the exploitation of polymer matrices as cooperative assembling components of novel perovskite CH3NH3PbI3:polymer composites, in which the control of the chemical interactions in solution allows a predictable tuning of the final film morphology. We reveal that the nature of the interactions between perovskite precursors and polymer functional groups, probed by Nuclear Magnetic Resonance (NMR) spectroscopy and Dynamic Light Scattering (DLS) techniques, allows the control of aggregates in solution whose characteristics are strictly maintained in the solid film, and permits the formation of nanostructures that are inaccessible to conventional perovskite depositions. These results demonstrate how the fundamental chemistry of perovskite precursors in solution has a paramount influence on controlling and monitoring the final morphology of CH3NH3PbI3 (MAPbI3) thin films, foreseeing the possibility of designing perovskite:polymer composites targeting diverse optoelectronic applications.


Projective geometry is the branch of mathematics that studies the geometric properties that are invariant under projection. It emerged in the seventeenth century from attempts to understand mathematically the perspective drawing techniques employed by Renaissance artists. Descriptive geometry, in turn, also uses projections to represent three-dimensional objects on a two-dimensional plane. Projective geometry thus connects with artistic drawing through the rules of perspective, and with technical drawing through descriptive geometry. Building on the relationships among these three fields of knowledge, we developed a didactic proposal for teaching projective geometry to 9th-grade students (ensino fundamental). This work presents that proposal and seeks to ground it mathematically, relating it to the main foundations of projective geometry.
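The projective invariant underlying both perspective drawing and descriptive geometry can be stated compactly; the following is standard material added for illustration, not taken from the thesis:

```latex
% Cross-ratio of four collinear points A, B, C, D (signed lengths):
\[
  (A,B;C,D) \;=\; \frac{\overline{AC}/\overline{BC}}{\overline{AD}/\overline{BD}}
\]
% Under any central projection sending A, B, C, D to A', B', C', D',
% distances and even ratios of distances change, but the cross-ratio
% is preserved:
\[
  (A,B;C,D) \;=\; (A',B';C',D')
\]
```

This invariance is why measurements can still be recovered from a perspective image: lengths are distorted, but cross-ratios of collinear points survive the projection.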


The selective solar absorber surface is a fundamental part of a solar thermal collector, as it is responsible for solar radiation absorption and for the reduction of radiative heat losses. The surface's optical properties, the solar absorption (α) and the emittance (ε), have a great impact on the solar thermal collector's efficiency. In this work, two types of coatings were studied: coatings obtained by physical vapor deposition (PVDs) and coatings obtained by application of different paints (PCs) on aluminum substrates. The most common industrial high-performing solar selective absorbers are nowadays produced by vacuum deposition methods, which show some disadvantages, such as lower durability, lower resistance to corrosion, adhesion and scratching, higher cost, and complex production techniques. Currently, spectrally selective paints are a potential alternative for absorbing surfaces in low temperature applications, with attractive features such as ease of processing, durability and commercial availability at low cost. Solar absorber surfaces were submitted to the accelerated ageing tests specified in ISO 22975-3. This standard is applicable to the evaluation of the long term behavior and service life of selective solar absorbers for solar collectors working under typical domestic hot water system conditions. The studied coatings have, in the case of the PVDs, solar absorptions between 0.93 and 0.96 and emittances between 0.07 and 0.10, and, in the case of the PCs, solar absorptions between 0.91 and 0.93 and emittances between 0.40 and 0.60. In addition to evaluating long term behavior based on artificial ageing tests, it is also important to know the degradation mechanism of the different coatings that are currently on the market. Electrochemical impedance spectroscopy (EIS) allows for the assessment of mechanistic information concerning the degradation processes, providing quantitative data as output, which can easily be related to the kinetic parameters of the system.
EIS measurements were carried out on a Gamry FAS2 Femtostat coupled with a PCL4 controller. Two electrolytes were used, 0.5 M NaCl and 0.5 M Na2SO4, and the surfaces were tested at different immersion times of up to 4 weeks. The following types of specimens were tested: aluminium with/without surface treatment, 3 selective paint coatings (one with a poly(urethane) binder and two with silicone binders) and 2 PVD coatings. Based on the behaviour of the specimens throughout the 4 weeks of immersion, it is possible to conclude that the coating showing the best protective properties is the selective paint coating with a polyurethane resin, followed by the other paint coatings, whereas both PVD coatings do not confer any protection to the substrate, having a deleterious effect as compared to the untreated aluminium reference.
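The practical meaning of the reported α and ε ranges can be illustrated with a back-of-envelope net-flux comparison, q = α·G − ε·σ·(Ts⁴ − Tamb⁴), using mid-range values from the abstract; the operating temperature and irradiance are illustrative assumptions:

```python
import math

# Net useful flux of a selective absorber: absorbed solar minus radiated loss.
# PVD coating: alpha ~ 0.95, eps ~ 0.08; selective paint: alpha ~ 0.92, eps ~ 0.50.

SIGMA = 5.670e-8              # Stefan-Boltzmann constant, W/(m^2 K^4)
G = 1000.0                    # solar irradiance, W/m^2 (assumed)
Ts, Tamb = 353.15, 293.15     # absorber at 80 C, surroundings at 20 C (assumed)

def net_flux(alpha, eps):
    """Absorbed solar flux minus net thermal radiation to the surroundings."""
    return alpha * G - eps * SIGMA * (Ts ** 4 - Tamb ** 4)

q_pvd = net_flux(0.95, 0.08)
q_paint = net_flux(0.92, 0.50)
print(f"PVD: {q_pvd:.0f} W/m^2, paint: {q_paint:.0f} W/m^2")
```

The gap between the two results shows why the much higher paint emittance matters mainly at elevated operating temperatures, consistent with the abstract's framing of paints as a low-temperature alternative.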


The ability to grow ultrathin films layer-by-layer with well-defined epitaxial relationships has allowed research groups worldwide to grow a range of artificial films and superlattices, first for semiconductors, and now with oxides. In the oxide thin film research community, there have been concerted efforts recently to develop a number of epitaxial oxide systems grown on single crystal oxide substrates that display a wide variety of novel interfacial functionality, such as enhanced ferromagnetic ordering, increased charge carrier density, increased optical absorption, etc., at interfaces. The magnitude of these novel properties is dependent upon the structure of the thin films, especially interface sharpness, intermixing, defects, and strain, the layering sequence in the case of superlattices, and the density of interfaces relative to the film thicknesses. To understand the relationship between the interfacial thin film oxide atomic structure and its properties, atomic scale characterization is required. Transmission electron microscopy (TEM) offers the ability to study interfaces of films at high resolution. Scanning transmission electron microscopy (STEM) allows for real space imaging of materials with directly interpretable atomic number contrast. Electron energy loss spectroscopy (EELS), together with STEM, can probe the local chemical composition as well as local electronic states of transition metals and oxygen. Both techniques have been significantly improved by aberration correctors, which reduce the probe size to 1 Å or less. Aberration correctors have thus made it possible to resolve individual atomic columns, and possibly probe the electronic structure at atomic scales. Separately, using electron probe forming lenses, structural information such as the crystal structure, strain, lattice mismatches, and superlattice ordering can be measured by nanoarea electron diffraction (NED).
The combination of STEM, EELS, and NED techniques allows us to gain a fundamental understanding of the properties of oxide superlattices and ultrathin films and their relationship with the corresponding atomic and electronic structure. In this dissertation, I use the aforementioned electron microscopy techniques to investigate several oxide superlattice and ultrathin film systems. The major findings are summarized below. These results were obtained with stringent specimen preparation methods that I developed for high resolution studies, which are described in Chapter 2. The essential materials background and description of electron microscopy techniques are given in Chapters 1 and 2. In a LaMnO3-SrMnO3 superlattice, we demonstrate that the LaMnO3-SrMnO3 interface is sharper than the SrMnO3-LaMnO3 interface. Extra spectral weights in EELS are confined to the sharp interface, whereas at the rougher interface, the extra states are either not present or are not confined to the interface. Both the structural and electronic asymmetries correspond to asymmetric magnetic ordering at low temperature. In a short period LaMnO3-SrTiO3 superlattice for optical applications, we discovered a modified band structure in SrTiO3 ultrathin films relative to thick films and a SrTiO3 substrate, due to charge leakage from LaMnO3 into SrTiO3. This was measured by chemical shifts of the Ti L and O K edges using atomic scale EELS. The interfacial sharpness of LaAlO3 films grown on SrTiO3 was investigated by the STEM/EELS technique together with electron diffraction. This interface, when prepared under specific conditions, is conductive with high carrier mobility. Several explanations for the conductive interface have been proposed, including a polar catastrophe model, where a large built-in electric field in LaAlO3 films results in electron charge transfer into the SrTiO3 substrate. Other suggested possibilities include oxygen vacancies at the interface and/or oxygen vacancies in the substrate.
The abruptness of the interface, as well as the extent of intermixing, has not been thoroughly investigated at high resolution, even though this can strongly influence the electrical transport properties. We found clear evidence for cation intermixing across the LaAlO3-SrTiO3 interface with high spatial resolution EELS and STEM, which contributes to the conduction at the interface. We also found structural defects, such as misfit dislocations, which lead to increased intermixing relative to coherent interfaces.
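The strain state driving the misfit dislocations discussed above follows from a one-line estimate. The bulk (pseudocubic) lattice parameters used here are standard literature values, quoted as assumptions rather than measurements from this work:

```python
# Misfit-strain estimate for LaAlO3 grown epitaxially on SrTiO3.

a_LAO = 3.789   # LaAlO3 pseudocubic lattice parameter, angstrom (literature value)
a_STO = 3.905   # SrTiO3 cubic lattice parameter, angstrom (literature value)

# In-plane strain of a LaAlO3 film grown coherently on the SrTiO3
# substrate: positive means the film is stretched (tensile).
strain = (a_STO - a_LAO) / a_LAO
print(f"coherent in-plane strain in LaAlO3: {strain * 100:.1f} %")
```

A tensile mismatch of this size (around 3%) is large for oxide epitaxy, which is why thicker films relax through misfit dislocations instead of remaining coherent.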


The electric bass and double bass are two different instruments sharing a common function: they link harmony with rhythm, especially in jazz. The capacity of a bassist to fully support an ensemble is something that can be achieved playing either electric or double bass. However, there are some bassists who, despite the technical differences between these two instruments, choose to play both. Some of these performers are true masters, using and switching between electric and double bass according to the different musical settings. It is possible to define similarities and differences between the electric and double bass, but is it viable to use similar approaches too? In order to investigate this field, I focus my research on one exemplary player who combines all the qualities needed to play both electric and double bass: John Patitucci, an inspiration for bassists of all generations and a musician who synthesizes all the fundamental characteristics of an ideal bass player. This dissertation is inspired by Patitucci's example and by the urge to fill a gap in the specialized literature concerning the history and application of different left- and right-hand techniques on the electric and double bass. The main purpose of this study is to create the backbone of a bass program for teaching both instruments using John Patitucci as an example. His technical approach on both instruments and his soloing vocabulary are the points of departure of this dissertation. I begin my study with the historical origins of Patitucci's techniques, ending with the development of exercises created in order to teach his techniques and vocabulary to those who aspire to play electric and double bass.