949 results for Electromagnetic bandgap


Relevance:

10.00%

Publisher:

Abstract:

We look for minimal chiral sets of fermions beyond the standard model that are anomaly free and, simultaneously, vectorlike with respect to color SU(3) and electromagnetic U(1). We then study whether the addition of such particles to the standard model particle content allows for the unification of gauge couplings at a high energy scale, above 5.0 × 10^15 GeV, so as to be safely consistent with proton decay bounds. The possibility of unification at the string scale is also considered. Inspired by grand unified theories, we also search for minimal chiral fermion sets that belong to SU(5) multiplets, restricted to representations up to dimension 50. It is shown that, in various cases, it is possible to achieve gauge unification provided that some of the extra fermions decouple at relatively high intermediate scales.
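The unification test described above amounts to running the three inverse gauge couplings with the one-loop renormalization-group equations and checking whether they meet below the proton-decay bound. A minimal sketch (standard one-loop formula with approximate inputs at M_Z; the shifts to the beta coefficients from the extra fermion sets are the paper's subject and are not reproduced here):

```python
import numpy as np

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - b_i/(2*pi) * ln(mu/MZ).
# SM one-loop beta coefficients in GUT normalization (5/3 factor on U(1)_Y).
b = np.array([41 / 10, -19 / 6, -7])          # b1, b2, b3
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])    # approximate 1/alpha_i at MZ
MZ = 91.19                                    # GeV

def alpha_inv(mu):
    """1/alpha_i at scale mu (GeV), one loop."""
    return alpha_inv_MZ - b / (2 * np.pi) * np.log(mu / MZ)

# In the pure SM the three couplings fail to meet at a single point;
# extra (vectorlike) multiplets shift the b_i and can restore unification.
print(alpha_inv(2e16))
```

Evaluating near 2 × 10^16 GeV shows the three inverse couplings still spread over several units, which is the mismatch the added fermion sets are meant to cure.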


A swift chemical route to synthesize Co-doped SnO2 nanopowders is described. Pure and highly stable Sn1−xCoxO2−δ (0 ≤ x ≤ 0.15) crystalline nanoparticles were synthesized, with mean grain sizes below 5 nm and the dopant element homogeneously distributed in the SnO2 matrix. The UV-visible diffuse reflectance spectra of the Sn1−xCoxO2−δ samples reveal red shifts, with the optical bandgap energies decreasing with increasing Co concentration. The samples' Urbach energies were calculated and correlated with their bandgap energies. The photocatalytic activity of the Sn1−xCoxO2−δ samples was investigated for the degradation of 4-hydroxybenzoic acid (4-HBA). Complete photodegradation of a 10 ppm 4-HBA solution was achieved using 0.02% (w/w) of Sn0.95Co0.05O2−δ nanoparticles within 60 min of irradiation.
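The Urbach energy mentioned above is conventionally extracted from the exponential absorption tail, α(E) = α₀ exp(E/E_U), so ln α is linear in photon energy with slope 1/E_U. A minimal sketch on synthetic data (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

# Recover the Urbach energy E_U from the exponential absorption tail
# alpha(E) = alpha0 * exp(E / E_U) via a linear fit of ln(alpha) vs E.
E = np.linspace(3.2, 3.6, 50)           # photon energies (eV), in the tail
E_U_true = 0.12                         # assumed Urbach energy (eV)
alpha = 1e3 * np.exp(E / E_U_true)      # synthetic absorption coefficient

slope, intercept = np.polyfit(E, np.log(alpha), 1)
E_U = 1.0 / slope
print(f"recovered E_U = {E_U:.3f} eV")
```

On real reflectance data one would first convert to an absorption-like quantity (e.g. via the Kubelka-Munk function) before fitting the tail.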


We present modeling efforts on antenna design and frequency selection to monitor brain temperature during prolonged surgery using noninvasive microwave radiometry. A tapered log-spiral antenna design is chosen for its wideband characteristics, which allow higher power collection from deep brain. Parametric analysis with the software HFSS is used to optimize antenna performance for deep brain temperature sensing. Radiometric antenna efficiency (η) is evaluated as the ratio of power collected from the brain to the total power received by the antenna. Anatomical information extracted from several adult computed tomography scans is used to establish design parameters for constructing an accurate layered 3-D tissue phantom. This head phantom includes separate brain and scalp regions, with tissue-equivalent liquids circulating at independent temperatures on either side of an intact skull. The optimized frequency band is 1.1-1.6 GHz, producing an average antenna efficiency of 50.3% for a two-turn log-spiral antenna. The entire sensor package is contained in a lightweight, low-profile assembly, 2.8 cm in diameter and 1.5 cm high, that can be held in place over the skin with an electromagnetic interference shielding adhesive patch. The calculated radiometric equivalent brain temperature tracks within 0.4 °C of the measured brain phantom temperature when the brain phantom is lowered by 10 °C and then returned to the original temperature (37 °C) over a 4.6-h experiment. The numerical and experimental results demonstrate that the optimized 2.5-cm log-spiral antenna is well suited for noninvasive radiometric sensing of deep brain temperature.
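A zeroth-order way to see what the 50.3% efficiency implies for the measurement is to treat the radiometric reading as an efficiency-weighted mix of brain and scalp temperatures (our simplification; the authors' model uses full spatial weighting functions rather than a two-term mix):

```python
# Toy model of the radiometric measurement: the antenna's equivalent
# temperature is a weighted mix of brain and scalp contributions,
# with eta the fraction of received power collected from the brain.
def radiometric_temperature(t_brain, t_scalp, eta=0.503):
    """eta defaults to the paper's average antenna efficiency (~50.3%)."""
    return eta * t_brain + (1 - eta) * t_scalp

# If the brain phantom cools by 10 C while the scalp stays at 33 C,
# the radiometric reading moves by only eta * 10 C.
print(radiometric_temperature(37.0, 33.0) - radiometric_temperature(27.0, 33.0))
```

This is why a higher η matters: the lower the efficiency, the more the deep-brain signal is diluted by superficial tissue.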


To increase the amount of logic available to users of SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making these devices more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration-memory bit-flips caused by power fluctuations, strong electromagnetic fields and radiation. The issue is particularly sensitive because of the growing number of configuration memory cells needed to define device functionality. A short survey of the most recent publications is presented to support the options assumed during the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.
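The abstract does not detail the redundant infrastructure; a classic building block for masking configuration bit-flips is triple modular redundancy with a majority voter, which both masks the fault and tells the detection-and-fix controller which replica to repair. A generic illustration (not the paper's framework):

```python
# Triple modular redundancy sketch: three replicas of a module produce
# outputs; a bitwise 2-of-3 vote masks a single upset, and the dissenting
# replica identifies which configuration region needs to be fixed.
def vote(a, b, c):
    """Bitwise 2-of-3 majority of three replica outputs."""
    return (a & b) | (a & c) | (b & c)

def faulty_replica(a, b, c):
    """Index of the dissenting replica, or None if all agree."""
    m = vote(a, b, c)
    for i, r in enumerate((a, b, c)):
        if r != m:
            return i
    return None

out = vote(0b1010, 0b1010, 0b0010)   # one replica hit by a bit-flip
```

In an FPGA the "fix" step typically means partial reconfiguration (scrubbing) of the flagged replica's configuration frames.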


The Maxwell equations constitute a formalism for the development of models describing electromagnetic phenomena. The four Maxwell laws have been adopted successfully in many applications and involve only integer-order differential calculus. Recently, a closer look at transmission lines, electrical motors and transformers, which reveal the so-called skin effect, has motivated a new perspective: the replacement of classical models by fractional-order mathematical descriptions. Bearing these facts in mind, this paper addresses the concept of a static fractional electric potential. The fractional potential was suggested some years ago, but the idea was not fully explored and practical methods of implementation were not proposed. In this line of thought, we develop a new approximation algorithm for establishing the fractional-order electric potential and analyze its characteristics.
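The paper's own approximation algorithm is not given in the abstract; as background, a standard discrete approximation used throughout fractional-order modelling is the Grünwald-Letnikov construction, sketched here under the usual definitions (a generic example, not the proposed method):

```python
import math

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights c_k = (-1)^k * C(alpha, k), by recurrence."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1 - (alpha + 1) / k))
    return c

def gl_derivative(f, t, alpha, h=1e-3):
    """Approximate the fractional derivative D^alpha f at t (lower limit 0)."""
    n = round(t / h)
    c = gl_weights(alpha, n)
    return sum(c[k] * f(t - k * h) for k in range(n + 1)) / h**alpha

# Check against the exact result D^0.5 t = t^0.5 / Gamma(1.5) at t = 1.
approx = gl_derivative(lambda x: x, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

For alpha = 1 the weights collapse to the ordinary backward difference (1, -1, 0, ...), which is the sense in which the fractional operator generalizes the classical one.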


Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Mechanical Engineering.


Dissertation submitted for the degree of Master in Electrical Engineering, Energy branch.


Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
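The pure-pixel geometry that PPI, N-FINDR and VCA rely on can be demonstrated in a few lines: under the linear mixing model, any linear projection of the data attains its extremes at simplex vertices, i.e., at pure pixels. A toy PPI-style sketch with synthetic data (all sizes and counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear mixing model y = M a: 3 endmembers in a 10-band space,
# abundances on the simplex, with one guaranteed pure pixel per endmember.
M = rng.random((10, 3))                     # endmember signatures (bands x endmembers)
A = rng.dirichlet(np.ones(3), size=500)     # mixed abundance fractions
A[:3] = np.eye(3)                           # pure pixels at indices 0, 1, 2
Y = A @ M.T                                 # observed spectra (pixels x bands)

# PPI-style purity scoring: project every pixel onto random skewers and
# count how often each pixel is an extreme of a projection.
scores = np.zeros(len(Y), dtype=int)
for _ in range(200):
    skewer = rng.standard_normal(10)
    proj = Y @ skewer
    scores[proj.argmin()] += 1
    scores[proj.argmax()] += 1

purest = np.argsort(scores)[-3:]            # candidate endmember pixels
```

Because mixed pixels are strict convex combinations of the vertices, the extreme of every skewer projection is a pure pixel, so the purity counts concentrate entirely on indices 0-2.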


Rail transport is a mode of transport in which movement takes place on railway tracks, carrying, among other things, people and goods. It is one of the oldest modes of transport, and its origin is directly linked to the First Industrial Revolution, the historical event that took place in Europe at the end of the 18th century and the beginning of the 19th century. A railway network is a unique system from the point of view of the use of electric traction, as well as in the way it fits into society, being a safe, fast means of transport widely used by the population. The power supply networks (transmission and distribution) and the high-speed network have dictated new solutions for railway electric power supply, contributing to its technical evolution, to safety and also to electromagnetic compatibility, in the sense of establishing criteria for controlling and preventing the undesirable effects caused by magnetic interference. The present work aims to analyse and study, from a technical standpoint, how the networks that feed electric traction vehicles behave, from the substations to the supply of the locomotives. Given the complexity of this analysis, more or less complex simulation tools are required; in the present work MATLAB™ was used, namely MATLAB™/Simulink. The main electrical quantities were analysed in distinct scenarios for the 1×25 kV and 2×25 kV catenary power supply systems.
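As a much simpler companion to the Simulink studies, the order of magnitude of the catenary voltage drop in a 1×25 kV feed can be checked with a single-phasor calculation (all impedance and load values below are assumed for illustration, not taken from the thesis):

```python
# Phasor estimate of the pantograph voltage for a train drawing current I
# at distance d from the substation, fed at 25 kV through a catenary of
# per-km series impedance z (assumed value).
z = complex(0.2, 0.4)     # assumed catenary impedance (ohm/km)
V_ss = 25_000.0           # substation voltage (V, taken as reference phasor)

def pantograph_voltage(d_km, I_amps):
    """Complex pantograph voltage after the line drop z * d * I."""
    return V_ss - z * d_km * I_amps

# ~300 A at power factor 0.9 (lagging), 20 km from the substation:
V = pantograph_voltage(20, 300 * complex(0.9, -0.436))
print(abs(V))
```

The 2×25 kV autotransformer scheme exists precisely to reduce this kind of drop (and the rail return current) by feeding the load from both the catenary and a -25 kV feeder.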


The internal impedance of a wire is a function of frequency. In a conductor whose conductivity is sufficiently high, the displacement current density can be neglected; in this case, the conduction current density is given by the product of the electric field and the conductivity. One of the high-frequency effects is the skin effect (SE). The fundamental problem with the SE is that it attenuates the higher-frequency components of a signal. The SE was first verified by Kelvin in 1887. Since then many researchers have worked on the subject, and a comprehensive physical model, based on the Maxwell equations, is now well established. The Maxwell formalism plays a fundamental role in electromagnetic theory, leading to mathematical descriptions useful in many applications in physics and engineering. Maxwell is generally regarded as the 19th century scientist with the greatest influence on 20th century physics, having contributed to the fundamental models of nature. The Maxwell equations involve only integer-order calculus, and it is therefore natural that the resulting classical models adopted in electrical engineering reflect this perspective. Recently, a closer look at some phenomena present in electrical systems, together with the motivation to develop precise models, seems to point to the need for a fractional calculus approach. Bearing these ideas in mind, in this study we address the SE and re-evaluate the results, demonstrating its fractional-order nature.
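The frequency dependence discussed above is captured by the classical skin depth δ = √(2/(ωμσ)); its 1/√ω scaling is precisely the half-order behaviour that motivates the fractional description, since the internal impedance grows like √(jω). A quick numerical check (textbook formula; copper conductivity assumed):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

# Copper (sigma ~ 5.8e7 S/m): about 9.3 mm at 50 Hz, about 66 um at 1 MHz.
d50 = skin_depth(50, 5.8e7)
print(d50)
```

Each decade of frequency shrinks the conducting layer by √10, which is why the effective resistance of a wire rises with √ω at high frequency.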


Magnetic levitation has been a widely investigated topic, mainly because of its use in railway transport systems. It is the ideal method when an application requires the absence of physical contact, or when it is convenient, in energy terms, to eliminate friction. The working principle is simple: an electromagnet creates a force on a ferromagnetic object that counteracts gravity. However, an attraction-type levitation system is unstable and nonlinear, which means a controller must be implemented to satisfy the desired stability characteristics. This project describes the theoretical and practical steps taken to build an electromagnetic levitation system, from the physical design of the system (sensor selection, signal conditioning, construction of the electromagnet) to the mathematical procedures that enabled the system to be modelled and controllers to be designed. The classical controllers, such as the PID and the phase-lead compensator, were designed using the root-locus technique. The design of the fuzzy controller, by contrast, made no use of the system model or of any mathematical relation between the variables. This control technique stood out for its simplicity and speed of implementation, while providing good system performance. In the final part of the report, the results obtained with the different control methods are analysed and the respective conclusions presented. These results show that, for this system, the fuzzy controller offers the best performance relative to the other methods, both in the transient response and in steady state.
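A minimal numerical illustration of why attraction-type levitation needs a controller: the linearized dynamics have an unstable open-loop pole, and a PD law (the core of the PID used in the project) relocates it into the stable half-plane. The plant constants below are generic textbook values, not the ones identified in this project:

```python
# Linearized levitation dynamics about the operating point:
#   x'' = a*x - b*u   (open-loop unstable: pole at +sqrt(a))
# stabilized by a PD law u = kp*x + kd*v, which gives the closed loop
#   x'' = (a - b*kp)*x - b*kd*v   (stable when b*kp > a and kd > 0).
a, b = 900.0, 30.0            # assumed linearization constants
kp, kd = 60.0, 2.0            # PD gains (here: double pole at s = -30)

x, v = 1e-3, 0.0              # 1 mm initial gap error, at rest
dt = 1e-4
for _ in range(50_000):       # simulate 5 s with explicit Euler
    u = kp * x + kd * v       # PD control effort
    acc = a * x - b * u       # closed-loop acceleration
    x += v * dt
    v += acc * dt
```

With the control loop removed (u = 0) the same simulation diverges exponentially, which is the instability the root-locus design has to tame.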


Organic photovoltaic cells, or Grätzel cells (after their discoverer), are devices for harvesting solar energy that combine an inorganic semiconductor with an organic molecule. This organic molecule can be excited in the presence of electromagnetic radiation and transfer this energy by donating electrons to the semiconductor. Although these structures and their fabrication process are relatively inexpensive, the solar energy conversion is still very low. Besides this shortcoming, synthetic dyes suffer from bleaching, or are easily reduced or oxidized when they cannot transfer the absorbed energy or when it is difficult for them to return to their original state because the electron circuit cannot be completed. This work therefore studies the behaviour of molecules, and of complex mixtures of molecules, capable of being excited by sunlight. Since this excitation promotes the transfer of an electron, the process is followed by cyclic voltammetry. As light-absorbing substances we use pure natural compounds (mainly flavonoids), or natural complexes extracted from plants. These dye mixtures are aqueous extracts (infusions) of orange and lemon peel, as well as extracts of cherry-tree leaves, with the aim of providing alternatives to the flavonoids used in this study. The voltammetric characterization of the cell is carried out under different illumination conditions: the cell is first exposed to fluorescent-lamp light, then to ultraviolet light, and finally kept without any incident light. The most classical variant of these cells is based on the semiconductor titanium oxide (TiO2), a very common and inexpensive substance with remarkable semiconducting properties.
A common way to improve the efficiency of this material is to introduce dopants in order to improve the efficiency of the electron-transfer process. A second objective of this work is the study of semiconductor/photoactive-molecule systems. Semiconductors such as ZnO, TiO2 and doped TiO2 are therefore studied. The TiO2 or doped-TiO2 gels are deposited on common glass slides on which an aluminium film, serving as conductor (negative electrode), was previously deposited. Another variant uses zinc oxide, a low-cost semiconductor, deposited on commercial aluminium sheets. Our photoelectrochemical cell is thus formed by dye molecules, a slide and a semiconductor (acting as working electrode), with or without electrolyte/catalyst (iodine/iodide solution), an Ag/AgCl reference electrode and a graphite auxiliary electrode. A further objective is a brief study of the influence of the I2/ethylenediamine catalyst on the electrochemical behaviour of the cell, so that the solvent (ethylenediamine), less volatile than water, can be used in place of the water employed with the I2/I3− couple. The importance of this lies in the limited lifetime of these cells once the electrolyte/solvent is evaporated by the high temperatures caused by the incident radiation.
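Since the excitation/electron-transfer process is followed by cyclic voltammetry, peak currents can be sanity-checked against the Randles-Sevcik relation for a reversible couple at 25 °C (all parameter values below are illustrative assumptions, not measurements from this work):

```python
import math

def peak_current(n, area_cm2, D_cm2_s, conc_mol_cm3, scan_V_s):
    """Randles-Sevcik peak current (A) for a reversible couple at 25 C:
    i_p = 2.69e5 * n^(3/2) * A * sqrt(D) * C * sqrt(v)."""
    return (2.69e5 * n**1.5 * area_cm2
            * math.sqrt(D_cm2_s) * conc_mol_cm3 * math.sqrt(scan_V_s))

# Hypothetical example: 1 mM couple, 0.1 cm2 electrode,
# D = 1e-5 cm2/s, 100 mV/s scan rate -> tens of microamperes.
ip = peak_current(1, 0.1, 1e-5, 1e-6, 0.1)
```

The √v dependence of the peak current is also the usual diagnostic that the response is diffusion-controlled rather than adsorption-controlled.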


ABSTRACT: Introduction: The kinematic relationship between the joints of the shoulder complex (CAO) is of great importance to upper-limb (MS) function, and is therefore increasingly investigated and described. Scapular positioning plays an important role in understanding shoulder-complex dysfunctions (DCAO). The importance of the scapula in upper-limb dynamics, and of scapular positioning as a clinical parameter of shoulder-complex dysfunction, is beyond dispute. All these factors create the need to develop instruments for assessing the position of the scapulothoracic (ET) joint. Most assessment tests restrict their evaluation to glenohumeral (GU) joint dysfunctions and do not incorporate a more dynamic and interactive assessment that respects the theoretical assumptions underlying the scapulohumeral rhythm (REU). It is also important that assessment methods be easy to apply clinically and measure the outcomes reliably and validly. Objectives: To contribute to the development of a methodology for assessing the scapula at different upper-limb elevation angles, through the study of concurrent validity and of intra- and inter-observer reliability. Methodology: The sample consisted of 20 subjects selected by convenience from the student body of ESS-IPS, with no history of shoulder-complex dysfunction. A kinematic analysis of each subject's upper limb was performed using an electromagnetic tracking device, the FOB. The scapular distances under study were also measured on each subject with a tape measure. Each measurement method comprised two moments (test and retest); at each moment the measures were collected by two different investigators. Results and Discussion: Results above the 0.5 threshold, which classifies a correlation as moderate to excellent, were considered positive. The validity results show that, for investigator 1, measures M1 and M2 correlated at excellent-to-moderate values only up to 30° of upper-limb elevation. For M3, only at 30° in the scapular plane did the correlation fail to approach the cut-point. For M4 no value correlated significantly with the FOB values, and a negative correlation even appeared at 120° in the scapular plane. For M5, only 0° showed excellent-to-moderate correlation values. For investigator 2, as for investigator 1, M1 and M2 showed significant correlation values only up to 30° of upper-limb elevation. For M3, all values showed excellent-to-moderate correlation except 120° in the frontal plane. For M4 this investigator obtained poor results; for M5 the correlation values were moderate at 0° and 90°. Regarding the intra-observer ICC results, M5 was the measure whose values came closest to the cut-point, while M1 and M2 yielded the least satisfactory results. The most satisfactory values occurred at 60°, followed by 0° and 30°; at higher elevations, such as 90° and 120°, the values tended to decrease. As for inter-observer reliability, for M1 only 90° and 120° showed correlation values below the cut-point. For M2, only 60° in the sagittal plane failed to exceed the cut-point; for M3, only 30° in the sagittal plane and 30° in the scapular plane; and for M5, 30° and 120° in the frontal plane. Conclusion: The results of this study indicate that the methodology in question has a high degree of inter-observer reliability, whereas for intra-observer reliability the degree of agreement is not as high. The measurement error did not exceed 1.5 cm, which is considered low. Regarding concurrent validity, we conclude that at 0° the methodology becomes a valid option for measuring the distances, with good-to-excellent agreement with the FOB. The measures considered valid options at the different elevation angles can serve as clinical parameters for characterizing scapular positioning, and may further contribute to characterizing scapular orientation.


Cryocoolers have been progressively replacing stored cryogens in the cryogenic chains used for detector cooling, thanks to their ever-increasing reliability. However, the mechanical vibrations, electromagnetic interference and temperature fluctuations inherent to their operation can reduce a sensor's sensitivity. To minimize this problem, compact thermal energy storage units (ESU) are studied: devices able to store thermal energy without a significant temperature increase. These devices can be used as a temporary cold source, making it possible to turn the cryocooler OFF and thus provide a proper environment for the sensor. A heat switch is responsible for thermally decoupling the ESU from the cryocooler, whose temperature increases when it is turned OFF. In this work, several prototypes working around 40 K were designed, built and characterized. They consist of a low-temperature cell containing liquid neon, connected to an expansion volume at room temperature for gas storage during the liquid evaporation phase. To make the system insensitive to the direction of gravity, the liquid is retained in the low-temperature cell by capillary effect in a porous material. Thanks to pressure regulation of the liquid neon bath, 900 J were stored at 40 K. The high latent heat of the liquid and the absence of triple-point transitions at 40 K make pressure control during evaporation a versatile and compact alternative to an ESU working at a triple-point transition. A quite compact second prototype ESU, directly connected to the cryocooler cold finger, was tested as a temperature stabilizer. This device was able to stabilize the cryocooler temperature (≈ 40 K ± 1 K) despite sudden heat bursts corresponding to twice the cooling power of the cryocooler. This thesis describes the construction of these devices as well as the tests performed.
It is also shown that the thermal model developed to predict the thermal behaviour of these devices, implemented as software, describes the experimental results quite well. Solutions to improve these devices are also proposed.
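The 900 J figure can be related to the amount of liquid neon through the latent heat, E = m·L. A rough sizing sketch (the latent-heat value at 40 K is our assumption; neon's latent heat falls well below its roughly 86 kJ/kg normal-boiling-point value as the 44.5 K critical point is approached):

```python
# Rough sizing of the latent-heat buffer: mass of liquid neon that must
# evaporate to absorb the target energy, E = m * L.
L_40K = 50e3        # assumed latent heat of neon near 40 K (J/kg) - illustrative
E_target = 900.0    # energy to store (J), as reported in the abstract

mass_g = E_target / L_40K * 1e3
print(f"~{mass_g:.0f} g of liquid neon evaporated")
```

The same arithmetic shows why pressure regulation matters: holding the bath on its (pressurized) saturation line at 40 K lets the full latent heat be absorbed at nearly constant temperature.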
