995 results for Proper
Abstract:
Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering from the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa.
Abstract:
Considering that high blood pressure is a major risk factor for cardiovascular disease, its association with high salt intake, and the fact that schools are ideal environments for acquiring good eating habits and promoting health, the objective of this study was to evaluate the salt content of school meals and consumers' perception of salty taste. Salt was quantified with a portable salt meter (PAL ES2). To assess consumer perception, a questionnaire was developed and applied to students of middle and secondary schools and to the staff responsible for preparing and cooking the meals. A total of 898 meal components were analysed, including school meals and standardized (chain-restaurant) meals. Bread presented the highest value, with a mean of 1.35 g/100 g (SD = 0.12). Salt in soups ranged from 0.72 to 0.80 g/100 g (p = 0.05) and in main courses from 0.71 to 0.97 g/100 g (p = 0.05). Salt in school meals is high, averaging 2.83 to 3.82 g per serving (p = 0.05), which is two to five times the recommended intake for children and young people. The components of standardized meals have a mean salt content between 0.8 and 2.57 g per serving (p = 0.05), which can add up to a higher total salt value per meal than in school meals. Most students perceive the taste of the meals as neither salty nor lacking in salt, which suggests habituation to the intensity/amount of salt consumed. Those responsible for the meals, although knowledgeable about salt and the need to limit it, report barriers, limitations and perceptions that hinder its reduction.
Making healthy and adequate food choices is only possible in an environment that makes such choices accessible. Given the impact of salt intake on health, in particular on chronic disease, salt reduction strategies aimed at the food industry and at catering and restaurant services should be implemented, with children and young people as a priority target.
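As a back-of-the-envelope illustration of the "two to five times" comparison above, a minimal Python sketch; the reference daily intakes and the lunch share are assumptions for illustration, since the abstract does not state the exact reference values used in the study:

    # Assumed reference daily salt intakes (g/day) and lunch share; illustrative only.
    DAILY_REFERENCE_G = {"4-6 years": 3.0, "7-10 years": 5.0, "11-17 years": 6.0}
    LUNCH_SHARE = 0.30  # fraction of daily intake a school lunch is expected to supply

    def excess_factor(salt_per_meal_g: float, age_group: str) -> float:
        """How many times a meal exceeds its share of the daily reference."""
        return salt_per_meal_g / (DAILY_REFERENCE_G[age_group] * LUNCH_SHARE)

    for salt in (2.83, 3.82):  # mean salt per school meal reported above
        print(salt, {g: round(excess_factor(salt, g), 1) for g in DAILY_REFERENCE_G})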
Abstract:
Nowadays, 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the fabrication and the study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to existing scientific and technological knowledge, served as the starting point. Subsequently, the data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32- and 128-line PSD array sensors. Appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be obtained. All control, data acquisition and 3D rendering software was implemented for the existing systems. All these components were combined into several integrated systems for the 32- and 128-line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128-line PSD array sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. Overall, the results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement on the sensor. Both simple and more complex objects, such as a rubber eraser and a plastic fork, can be rendered in 3D properly, accurately and at high resolution using this sensor and system platform. The nip-structure sensor system can detect primary and even derived colours of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometre-sized objects using the 32-line PSD sensor system. This kind of setup can detect whether a micro-object is moving, its dimensions, and its position in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
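For context, a one-dimensional lateral-effect PSD infers the light-spot position from the ratio of its two electrode photocurrents; a minimal Python sketch of that standard readout relation follows (the generic textbook formula, not the specific conditioning electronics built in this thesis):

    def psd_position_mm(i1: float, i2: float, active_length_mm: float) -> float:
        """Light-spot position on a 1D lateral-effect PSD, measured from the centre.
        i1, i2: photocurrents collected at the two end electrodes (A)."""
        return 0.5 * active_length_mm * (i2 - i1) / (i1 + i2)

    # Example: a spot slightly off-centre on a 10 mm sensor line
    print(psd_position_mm(1.0e-6, 1.2e-6, 10.0))  # ~0.45 mm toward electrode 2

In a scanning setup, one such position reading per line, combined with the known scanning angle, yields the depth of each surface point by triangulation.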
Abstract:
The processes of land mobilization for public- and private-domain infrastructures are governed by their own legal frameworks and are systematically confronted with the country's poor situation regarding cadastral identification and regularization, which leads to major inefficiencies, sometimes with a very negative impact on overall effectiveness. This project report describes the Ferbritas Cadastre Information System (FBSIC) project and tools, which, in conjunction with other applications, allow managing the entire life-cycle of land acquisition and cadastre, including support for field activities with the integration of information collected in the field, the development of multi-criteria analysis information, the monitoring of all information in the exploration stage, and the automated generation of outputs. The benefits are evident at the level of operational efficiency, including tools that enable process integration and standardization of procedures, facilitate analysis and quality control, and maximize performance in the acquisition, maintenance and management of cadastral and expropriation information (expropriation projects). The implemented system therefore achieves levels of robustness, comprehensiveness, openness, scalability and reliability suitable for a structural platform. The resulting solution, FBSIC, is a fit-for-purpose cadastre information system rooted in the field of railway infrastructures. The integrating nature of FBSIC allows it: to meet present needs and scale to future services; to collect, maintain, manage and share all information in one common platform and transform it into knowledge; to interoperate with other platforms; and to increase the accuracy and productivity of business processes related to land property management.
Abstract:
During drilling operations, cuttings are produced downhole and must be removed to avoid issues that can lead to Non-Productive Time (NPT). Most stuck-pipe events, and the Bottom-Hole Assembly (BHA) losses that follow, are related to poor hole cleaning. Many parameters help determine hole cleaning conditions, but a proper selection of the key parameters facilitates the monitoring of hole cleaning conditions and interventions. The aim of hole cleaning monitoring is to keep track of borehole conditions, including hole cleaning efficiency and wellbore stability issues, during drilling operations. Adequate hole cleaning is one of the main concerns in underbalanced drilling operations, especially for directional and horizontal wells. This dissertation addresses some hole cleaning fundamentals, which serve as the basis for recommended practice during drilling operations. It examines how parameters such as Flowrate, Rotations per Minute (RPM), Rate of Penetration (ROP) and Mud Weight can be used to improve hole cleaning performance, and how Equivalent Circulating Density (ECD), Torque & Drag (T&D) and the volume of cuttings returned from downhole indicate how clean and stable the well is. In the case study, hole cleaning performance (cuttings removal) is monitored through real-time measurements of the cuttings volume returned from downhole over a given time, taking into account Flowrate, RPM, ROP and drilling fluid (mud) properties; this is plotted and compared with the volume expected from the drilled interval. ECD monitoring indicates hole stability conditions, while T&D and the returned cuttings volume indicate how clean the well is. The T&D modelling software provides theoretically calculated T&D trends, which are plotted and compared with the real-time measurements, and uses the measured hookloads to back-calculate friction factors along the wellbore.
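A minimal sketch of the expected-versus-measured volume comparison described above (illustrative only: the bit diameter, rates and threshold are assumed values, not data from the dissertation):

    import math

    def expected_cuttings_volume_m3(bit_diameter_m: float, rop_m_per_h: float, hours: float) -> float:
        """Rock volume drilled over an interval, assuming a gauge (in-size) hole."""
        return math.pi / 4.0 * bit_diameter_m ** 2 * rop_m_per_h * hours

    def hole_cleaning_ratio(measured_m3: float, expected_m3: float) -> float:
        """Measured cuttings returns over expected volume; values well below 1.0 flag poor cleaning."""
        return measured_m3 / expected_m3

    expected = expected_cuttings_volume_m3(0.2159, 15.0, 1.0)  # 8 1/2 in bit, 15 m/h ROP, 1 h
    print(f"expected {expected:.2f} m3, ratio {hole_cleaning_ratio(0.40, expected):.2f}")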
Abstract:
Epistemology in philosophy of mind is a difficult endeavor. Those who believe that our phenomenal life is different from other domains suggest that self-knowledge about phenomenal properties is certain and therefore privileged. Usually, this so-called privileged access is explained by the idea that we have direct access to our phenomenal life. This means that, in contrast to perceptual knowledge, self-knowledge is non-inferential. It is widely believed that this kind of directness involves two different senses: an epistemic sense and a metaphysical sense. Proponents of this view often claim that this is due to the fact that we are acquainted with our current experiences. The acquaintance thesis, therefore, is the backbone of the justification of privileged access. Unfortunately, the whole approach has a profound flaw. For the thesis to work, acquaintance has to be a genuine explanation. Since it is usually assumed that any knowledge relation between judgments and the corresponding objects is merely causal and contingent (e.g. in perception), the proponent of the privileged access view needs to show that acquaintance can do the job. In this thesis, however, I claim that the latter cannot be done. Based on considerations introduced by Levine, I conclude that this approach involves either the introduction of ontologically independent properties or a rather obscure knowledge relation. A proper explanation, however, cannot employ either of the two options. The acquaintance thesis is, therefore, bound to fail. Since the privileged access intuition seems to be vital to epistemology within the philosophy of mind, I will explore alternative justifications. After discussing a number of options, I will focus on the so-called revelation thesis. This approach states that simply by having an experience with phenomenal properties, one is in a position to know the essence of those phenomenal properties. I will argue that, after finding a solution for the controversial essence claim, this thesis is a successful replacement explanation which maintains all the virtues of the acquaintance account without introducing ontologically independent properties or an obscure knowledge relation. The overall solution consists in qualifying the essence claim in the relevant sense, leaving us with an appropriate ontology for phenomenal properties. On the one hand, this avoids employing mysterious independent properties, since this ontological view is physicalist in nature. On the other hand, this approach has the right kind of structure to explain privileged self-knowledge of our phenomenal life. My final conclusion is that the privileged access intuition is in fact veridical. It cannot, however, be justified by the popular acquaintance approach, but rather is explained by the controversial revelation thesis.
Abstract:
The goal of this thesis is the study of a tool that can help analysts find sequential patterns, with a focus on financial markets. A study will be made of how new and relevant knowledge can be mined from real-life information, potentially giving investors, market analysts and economists a new basis for informed decisions. The Ramex Forum algorithm will be used as the basis for the tool, due to its ability to find sequential patterns in financial data. To adapt it further to the needs of the thesis, relevant improvements to the algorithm will be studied. Another important aspect of this algorithm is the way it displays the patterns found: even with good results, it is difficult to find relevant patterns among all the studied samples without a proper result visualization component. Accordingly, different combinations of parameterizations and ways of visualizing data will be evaluated, and their influence on the analysis of those patterns will be discussed. In order to properly evaluate the utility of this tool, case studies will be performed as a final test: real information will be used to produce results, and those will be evaluated with regard to their accuracy, interest and relevance.
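As a rough illustration of the family of approach involved, the sketch below aggregates pairwise event successions into a weighted directed graph, from which the heaviest chains can then be read as candidate sequential patterns; this is a deliberate simplification for illustration, not the published Ramex Forum algorithm:

    from collections import defaultdict

    def succession_graph(sequences):
        """Count how often event b directly follows event a across all sequences."""
        weights = defaultdict(int)
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                weights[(a, b)] += 1
        return weights

    # Toy daily market events: U = moved up, D = moved down (hypothetical tickers)
    sequences = [["OIL_U", "AIR_D", "FX_U"], ["OIL_U", "AIR_D"], ["FX_U", "OIL_U"]]
    for (a, b), w in sorted(succession_graph(sequences).items(), key=lambda kv: -kv[1]):
        print(f"{a} -> {b}: {w}")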
Abstract:
Conventionally, the problem of the best path in a network refers to the shortest path problem. However, for the vast majority of today's networks this solution has limitations that directly affect their proper functioning and lead to an inefficient use of their capabilities. Problems in large networks, where highly complex graphs are common, as well as the appearance of new services and their respective requirements, are intrinsically related to the inadequacy of this solution. To overcome the needs of these networks, a new approach to the best path problem must be explored. The solution that has aroused the most interest in the scientific community considers the use of multiple paths between two network nodes, all of which can be regarded as best paths between those nodes. Routing is therefore no longer performed by minimizing a single metric, with only one path chosen between nodes, but by selecting one of many paths, thereby allowing the use of a greater diversity of the available paths (provided, obviously, that the network allows it). Establishing multi-path routing in a given network has several advantages for its operation. Its use may improve the distribution of network traffic, improve failure recovery time, and offer the administrator greater control of the network. These factors are even more relevant when networks are large and highly complex, as in the Internet, where multiple networks managed by different entities are interconnected. A large part of the growing need for multi-path protocols is associated with policy-based routing, whereby paths with different characteristics can be considered with an equal level of preference and thus be part of the solution to the best path problem. Performing multi-path routing with protocols based only on the destination address has some limitations, but it is possible. Concepts from graph theory and algebraic structures can be used to describe how routes are computed and ranked, making it possible to model the routing problem. This thesis studies and analyzes multi-path routing protocols from the known literature and derives a new algebraic condition that allows the correct operation of these protocols without any network restriction. It also develops a set of software tools for the planning and the verification/validation of new protocol models according to the study made.
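To make the algebraic viewpoint concrete, the sketch below checks strict monotonicity (extending a path strictly worsens its signature) on a toy routing algebra; this is an illustrative stand-in, not the specific condition derived in the thesis:

    from itertools import product

    # Toy routing algebra: a link label is a "grade" (1 = preferred ... 3 = last resort);
    # a path signature is (worst grade on the path, hop count); lower tuples are preferred.
    LABELS = (1, 2, 3)
    SIGNATURES = [(g, h) for g in LABELS for h in range(1, 4)]

    def extend(link_grade, signature):
        """Compose a link label with a path signature."""
        worst, hops = signature
        return (max(link_grade, worst), hops + 1)

    def strictly_monotone():
        """If extending any path strictly worsens it, path-vector computation
        converges to consistent (multi-)path solutions on any topology."""
        return all(extend(l, s) > s for l, s in product(LABELS, SIGNATURES))

    print(strictly_monotone())  # True for this toy algebra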
Abstract:
Throughout the brain, patterns of activity in postsynaptic neurons influence the properties of synaptic inputs. Such feedback regulation is central to the neural network stability that underlies proper information processing and feature representation in the central nervous system. At the cellular level, tight coupling of presynaptic and postsynaptic function is fundamental to neural computation and synaptic plasticity. The cohort of protein complexes at the pre- and postsynaptic membranes allows for tight synapse-specific segregation and integration of diverse molecular and electrical signals. (...)
Abstract:
Unlike injury to the peripheral nervous system (PNS), where injured neurons can trigger a regenerative program that leads to axonal elongation and, in some cases, proper reinnervation, neurons fail to produce the same response after injury to the central nervous system (CNS). The regenerative program includes the activation of several injury signals that lead to the expression of genes associated with axonal regeneration. As a consequence, the ensuing somatic response ensures the supply of molecular components required for axonal elongation. The capacity of some neurons to trigger a regenerative response has motivated investigation of the mechanisms underlying neuronal regeneration. Thus, non-regenerative models (like injury to the CNS) and regenerative models (such as injury to the PNS) were used to understand the differences underlying these two responses to injury. To do so, the regenerative properties of dorsal root ganglion (DRG) neurons were addressed. This particular type of neuron possesses two branches: a central axon, which has a limited capacity to regenerate, and a peripheral axon, where regeneration can occur over long distances. In the first paradigm used to understand neuronal regeneration mechanisms, we evaluated the activation of injury signals in a non-regenerative model. Injury signals include the positive injury signals, which are described as enhancers of axonal regeneration through the activation of several transcription factors. The currently known positive injury signals are ERK, JNK and STAT3. To evaluate whether the lack of regeneration following injury to the central branch of DRG neurons was due to inactivation of these signals, the activation of the transcription factors pELK-1, p-c-jun (downstream targets of ERK and JNK, respectively) and pSTAT3 was examined. The results showed no impairment in the activation of these signals. We therefore proceeded to evaluate other candidates that could participate in the failure of axonal regeneration. By comparing the protein profiles triggered by injury to either the central or the peripheral branch of DRG neurons, we identified high levels of GSK3-β, ROCKII and HSP-40 after injury to the central branch. While in vitro knockdown of HSP-40 in DRG neurons proved toxic to the cells, evaluation of pCRMP2 (a GSK3-β downstream target) and pMLC (a ROCKII downstream target), both known to impair axonal regeneration, revealed higher levels of both proteins following injury to the central branch than following injury to the peripheral one. Altogether, these results suggest that activation of positive injury signals is not sufficient to elicit axonal regeneration; that HSP-40 is likely to participate in the cell survival program; and that GSK3-β and ROCKII activity may condition the regenerative capacity following injury to the nervous system. (...)
Abstract:
Background. Bladder cancer is a common malignancy, representing the 6th and the 5th most incident cancer in Portugal and in Italy, respectively. More than half of the cases relapse within one year, thus requiring lifelong follow-up. Intravesical instillation of Bacillus Calmette-Guérin (BCG), an attenuated strain of Mycobacterium bovis, is an effective immunotherapy for bladder cancer, although many aspects of the interaction of BCG with cancer cells and host immune cells remain obscure. Bladder cancer cells often express the sialylated forms of the Thomsen-Friedenreich (TF) antigens, i.e., sialyl-T (sT) and sialyl-Tn (sTn). However, the significance of such expression for tumour malignancy and for the efficacy of BCG therapy remains unknown. Aim of the study. To investigate the role of the sT and sTn antigens in the malignant phenotype of bladder cancer cells and in the immune-mediated response to BCG therapy. Experimental. We used populations of the bladder cancer cell lines HT1376 and MCR, genetically modified by transduction with the sialyltransferases ST3GAL1 or ST6GALNAC1 to homogeneously express the sT or sTn antigens, respectively. The level of internalized BCG was assessed by flow cytometry. The whole gene expression profile of BCG-challenged or unchallenged bladder cancer cell lines was studied by microarray technology. The profile of cytokines secreted by BCG-challenged bladder cancer cells, and by macrophages challenged with the secretome of BCG-challenged bladder cancer cells, was studied by a multiplex immune-bead assay. Results. Transcriptome analysis of the sialyltransferase-transduced cells revealed that groups of genes involved in specific functions were regulated in parallel in the two cell lines, regardless of the sialyltransferase expressed. Namely, in sialyltransferase-expressing cells, genes involved in proper chromosomal segregation and in DNA repair were consistently down-regulated, while genes reported in the literature as markers for bladder cancer were modulated. BCG challenge induced a tendency toward up-regulation of genes preserving genomic stability and reducing malignancy, but only in cells expressing either sT or sTn. Among the ten cytokines tested, only IL-6 and IL-8 were expressed by the bladder cancer cell lines and up-regulated by BCG challenge, mainly in sialyltransferase-expressing cells. In macrophages, inflammatory cytokines such as IL-1β, IL-6 and TNFα, as well as the anti-inflammatory IL-10, were induced only by the secretome of BCG-challenged bladder cancer cells, particularly when these expressed either sialyltransferase, predicting the stimulation of M1-like macrophages and a better response to BCG therapy. Conclusions. The general effect of the expression of the two sialyltransferases and of their products in bladder cancer cells is toward a more malignant phenotype.
However, the stronger ability of sialyltransferase-expressing cells to produce inflammatory cytokines upon BCG challenge and to stimulate macrophages predicts a more effective response to BCG in tumours expressing the sialylated TF antigens. This is fully consistent with our recent clinical data, obtained in collaboration, showing that patients with bladder cancer expressing sTn respond better to BCG therapy.
Abstract:
The pulmonary form of histoplasmosis presents lesions limited to the lungs, with symptoms that are clinically and radiologically similar to those of chronic pulmonary tuberculosis. This paper describes the clinical features of four cases of pulmonary histoplasmosis. Diagnostic aspects and clinical, epidemiological, laboratory and imaging findings are discussed, in addition to the clinical status of the individuals five years after disease onset. The treatment of choice was oral medication, following which all the patients improved. It is important to understand the clinical picture and the difficulties concerning the differential diagnosis of histoplasmosis, to support the proper recognition of cases and thus reduce potential confusion with other diseases.
Abstract:
This study investigates the groundwater quality for irrigation purposes, the vulnerability of the aquifer system to pollution, and the aquifer potential for sustainable water resources development in the Kobo Valley development project. The groundwater quality is evaluated by predicting the best possible distribution of hydrogeochemical parameters using a geostatistical method and comparing them with the water quality guidelines for irrigation. The hydrogeochemical parameters considered are SAR, EC, TDS, Cl⁻, Na⁺, Ca²⁺, SO₄²⁻ and HCO₃⁻. The spatial variability maps reveal that these parameters fall into safe, moderate, and severe (increasing-problem) classes. To present this clearly, an aggregated Water Quality Index (WQI) map is constructed using the Weighted Arithmetic Mean method. It is found that the Kobo-Gerbi sub-basin suffers from poor water quality for irrigation purposes, the Waja Golesha sub-basin is moderate, and Hormat Golena is the best sub-basin in terms of water quality. The groundwater vulnerability assessment of the study area is made using the GOD rating system. The whole area is found to experience moderate to high vulnerability, a clear warning that proper management of the resource is needed. The highest vulnerability is observed in the Hormat Golena and Waja Golesha sub-basins. The aquifer potential of the study area is obtained using weighted overlay analysis: 73.3% of the total area is a good site for future water well development, while the remaining 26.7% is not considered a good site for locating groundwater wells. Most of the latter falls within the Kobo-Gerbi sub-basin.
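For reference, the Weighted Arithmetic Mean WQI mentioned above combines per-parameter quality ratings q_i = 100 (C_i − C_ideal)/(S_i − C_ideal) with weights w_i inversely proportional to the standards S_i; a minimal Python sketch with hypothetical values (the abstract does not give the weights and standards actually used):

    def wqi_weighted_arithmetic(concentrations, standards, ideal=None):
        """WQI = sum(q_i * w_i) / sum(w_i), with w_i = 1 / S_i."""
        ideal = ideal or {p: 0.0 for p in concentrations}
        weights = {p: 1.0 / standards[p] for p in concentrations}
        q = {p: 100.0 * (concentrations[p] - ideal[p]) / (standards[p] - ideal[p])
             for p in concentrations}
        return sum(q[p] * weights[p] for p in q) / sum(weights.values())

    # Hypothetical sample against assumed irrigation limits (mg/L)
    print(wqi_weighted_arithmetic({"TDS": 900.0, "Cl": 250.0},
                                  {"TDS": 2000.0, "Cl": 350.0}))  # ~67.5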
Abstract:
Cryocoolers have been progressively replacing stored cryogens in cryogenic chains used for detector cooling, thanks to their ever-increasing reliability. However, the mechanical vibrations, electromagnetic interference and temperature fluctuations inherent to their operation can reduce the sensor's sensitivity. To minimize this problem, compact thermal energy storage units (ESUs) are studied: devices able to store thermal energy without a significant temperature increase. These devices can be used as a temporary cold source, making it possible to turn the cryocooler OFF and thus provide a proper environment for the sensor. A heat switch is responsible for thermally decoupling the ESU from the cryocooler, whose temperature increases when it is turned OFF. In this work, several prototypes working around 40 K were designed, built and characterized. They consist of a low-temperature cell containing liquid neon, connected to an expansion volume at room temperature that stores the gas during the liquid evaporation phase. To make the system insensitive to the direction of gravity, the liquid is retained in the low-temperature cell by capillary effect in a porous material. Thanks to pressure regulation of the liquid neon bath, 900 J were stored at 40 K. The high latent heat of the liquid and the absence of triple-point transitions at 40 K make pressure control during evaporation a versatile and compact alternative to an ESU working at a triple-point transition. A rather compact second prototype ESU, directly connected to the cryocooler cold finger, was tested as a temperature stabilizer. This device was able to stabilize the cryocooler temperature (≈ 40 K ± 1 K) despite sudden heat bursts corresponding to twice the cooling power of the cryocooler. This thesis describes the construction of these devices as well as the tests performed. It is also shown that the thermal model developed to predict the thermal behaviour of these devices, implemented in software, describes the experimental results quite well. Solutions to improve these devices are also proposed.
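As a rough order-of-magnitude check on the 900 J figure, a minimal sketch using neon's latent heat of vaporization near its normal boiling point (~27 K); the true value at 40 K is lower, since neon's critical point is near 44 K, so this gives only a lower bound on the neon mass involved:

    L_VAP_NEON = 86e3  # J/kg, approximate latent heat near the normal boiling point

    def neon_mass_for_energy(energy_j: float, latent_heat_j_per_kg: float = L_VAP_NEON) -> float:
        """Liquid mass whose evaporation absorbs the given energy: m = Q / L."""
        return energy_j / latent_heat_j_per_kg

    print(f"{neon_mass_for_energy(900.0) * 1e3:.1f} g of liquid neon")  # ~10.5 g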