Abstract:
Cardiovascular diseases (CVDs) are the leading cause of mortality and morbidity in Portugal. Their high impact stems from lack of awareness, underdiagnosis, high prevalence, and poor control of their main risk factors (classical factors and newer biochemical markers). For diagnosing one facet of cardiovascular disease, ischaemic heart disease, the exercise stress test (EST) is the non-invasive examination most widely used in clinical practice, being low-cost, easy to perform, and carrying a low complication rate. The aim of this study is to investigate whether a relationship exists between the exercise stress test, cardiovascular risk factors (RFs), and some of their biochemical markers. To this end, a prospective, longitudinal, descriptive study was carried out at Esferasaúde (Maia) between January and May 2011. Data were collected by questionnaire on demographics, anthropometry, RFs, medication, EST results, and clinical laboratory tests. All individuals (aged ≥ 18 years) who had undergone both an exercise stress test and laboratory tests at that unit, with a maximum interval of 2 months between them, were included through purposive, intentional sampling. The sample comprised 30 subjects, 19 of whom were male. The mean age was 49.43 ± 15.39 years. The prevalence of RFs and of individuals with abnormal biochemical marker values was estimated. Two subjects had a history of CVD and three had a positive EST. Several associations between the study variables were tested: CVD and RFs; EST and RFs; EST and biochemical markers; exercise capacity and RFs; and sex and EST result. None proved significant except in two cases: the relationship between CVD and the appearance of EST abnormalities (p = 0.002) and the association between EST and HDL cholesterol (p = 0.040), at α = 5%. It is concluded that there is no apparent relationship between the exercise stress test, the presence of cardiovascular disease, its risk factors, and biochemical markers.
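With n = 30 and only a handful of positive outcomes, associations such as the one reported between CVD history and EST abnormalities are typically checked with a small-sample exact test. The sketch below shows how such a 2×2 association could be assessed with Fisher's exact test; the counts are hypothetical and are not the study's actual cross-tabulation.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table, for illustration only (NOT the study's data):
# rows = CVD history (yes/no), columns = EST abnormality (yes/no).
table = [[2, 0],
         [1, 27]]

odds_ratio, p_value = fisher_exact(table)
print(p_value)  # compare against alpha = 0.05, as in the abstract
```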
Abstract:
Master's in Geotechnical and Geoenvironmental Engineering
Abstract:
Geomechanical classifications are among the most widely recognised approaches for estimating rock mass quality, given their simplicity and their ability to handle uncertainty. Geological and geotechnical uncertainties can be assessed effectively using appropriate classifications. This study seeks to emphasise the importance of geomechanical classifications and indices, such as the Rock Mass Rating (RMR), the Rock Tunnelling Quality Index (Q-system), the Geological Strength Index (GSI), and the Hydro-Potential (HP) Value, for judging the quality of the granitic rock mass of the underground galleries of Paranhos (Carvalhido–Burgães sector; urban area of Porto). In particular, the Hydro-Potential value (HP-value) is a semi-quantitative classification applied to rock masses that allows groundwater seepage into rock excavations to be estimated. For this assessment, an extensive geological-geotechnical and geomechanical database was compiled and integrated, supported by the scanline sampling technique applied to discontinuities on exposed surfaces. To refine the geotechnical zoning of the granitic rock mass, previously carried out in 2010, rock samples were collected at key points in order to assess their strength by means of the Point Load Test (PLT). The geomechanical classifications were applied in a balanced manner, establishing different scenarios and always taking into account the in situ characteristics of the rock mass. A hydrogeomechanical zoning proposal is presented with the aim of better understanding the geo-hydraulic circulation of the granitic rock mass. This methodology is intended to deepen knowledge of the Porto rock substratum, particularly with regard to its geomechanical behaviour.
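As an illustration of how a geomechanical classification condenses field data into a quality class, the sketch below sums the five Bieniawski (1989) RMR parameter ratings plus an orientation adjustment and maps the total to its rock mass class. The input ratings are hypothetical, not values from the Paranhos survey.

```python
def rmr(strength, rqd, spacing, condition, groundwater, orientation_adj=0):
    """Bieniawski (1989) RMR: sum of five parameter ratings plus a
    (non-positive) discontinuity-orientation adjustment."""
    total = strength + rqd + spacing + condition + groundwater + orientation_adj
    classes = [(81, "I (very good)"), (61, "II (good)"), (41, "III (fair)"),
               (21, "IV (poor)"), (0, "V (very poor)")]
    label = next(name for lower, name in classes if total >= lower)
    return total, label

# Hypothetical ratings for a moderately jointed granite outcrop:
print(rmr(strength=7, rqd=13, spacing=10, condition=20, groundwater=10,
          orientation_adj=-5))  # -> (55, 'III (fair)')
```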
Abstract:
The immobilization of biological molecules is one of the most important steps in the construction of a biosensor. In the case of DNA, the way it exposes its bases determines whether the electrochemical signals reach acceptable levels. A self-assembled monolayer (SAM), which anchors a thiol group to the gold surface and binds DNA to an aldehyde ligand, made it possible to detect DNA hybridization. Single-stranded DNA (ssDNA) from calf thymus was immobilized on a pre-formed alkanethiol film, obtained by incubating the electrode in a solution of 2-aminoethanethiol (Cys) followed by glutaraldehyde (Glu). Cyclic voltammetry (CV) was used to characterize the self-assembled monolayer on the gold electrode and also to study the immobilization of the ssDNA probe and its hybridization with the complementary sequence (target ssDNA). The ssDNA probe presents a well-defined oxidation peak at +0.158 V. When hybridization occurs, this peak disappears, which confirms the efficacy of the annealing and the formation of the DNA double helix without the presence of electroactive indicators. The use of the SAM resulted in stable immobilization of the ssDNA probe, enabling label-free detection of hybridization. This study represents a promising approach for molecular biosensors with sensitive and reproducible results.
Abstract:
Background: Temporal lobe epilepsy (TLE) is a neurological disorder that directly affects cortical areas responsible for auditory processing. The resulting abnormalities can be assessed using event-related potentials (ERP), which have high temporal resolution. However, little is known about TLE in terms of dysfunction of early sensory memory encoding or possible correlations between EEGs, linguistic deficits, and seizures. Mismatch negativity (MMN) is an ERP component, elicited by introducing a deviant stimulus while the subject is attending to a repetitive behavioural task, which reflects pre-attentive sensory memory function and indexes neuronal auditory discrimination and perceptual accuracy. Hypothesis: We propose an MMN protocol for future clinical application and research based on the hypothesis that children with TLE may have abnormal MMN for speech and non-speech stimuli. The MMN can be elicited with a passive auditory oddball paradigm, and the abnormalities might be associated with the location and frequency of epileptic seizures. Significance: The suggested protocol might contribute to a better understanding of the neuropsychophysiological basis of MMN. We suggest that in TLE the central representation of sound may be degraded for both speech and non-speech stimuli. Discussion: MMN arises as a difference wave in response to speech and non-speech stimuli across electrode sites. TLE in childhood might be a good model for studying topographic and functional auditory processing and its neurodevelopment, pointing to MMN as a possible clinical tool for prognosis, evaluation, follow-up, and rehabilitation in TLE.
Abstract:
The tribological response of multilayer micro/nanocrystalline diamond coatings grown by the hot-filament CVD technique is investigated. These multigrade systems were tailored to comprise a starting microcrystalline diamond (MCD) layer with high adhesion to a silicon nitride (Si3N4) ceramic substrate and a top nanocrystalline diamond (NCD) layer with reduced surface roughness. Tribological tests were carried out in a reciprocating sliding configuration without lubrication. Such composite coatings exhibit a superior critical load before delamination (130–200 N) when compared to the mono- (60–100 N) and bilayer coatings (110 N), considering ∼10 µm thick films. Regarding the friction behaviour, a short-lived initial high-friction stage was followed by low-friction regimes (friction coefficients between 0.02 and 0.09) as a result of the surfaces being polished by the sliding contact. Very mild to mild wear regimes (wear coefficient values between 4.1×10⁻⁸ and 7.7×10⁻⁷ mm³ N⁻¹ m⁻¹) governed the wear performance of the self-mated multilayer coatings when subjected to high-load short-term tests (60–200 N; 2 h; 86 m) and medium-load endurance tests (60 N; 16 h; 691 m).
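The wear coefficient values quoted above follow from Archard-type normalisation of the wear volume by load and sliding distance, k = V/(F·s). A minimal sketch, assuming a hypothetical wear volume for the endurance-test conditions quoted in the abstract:

```python
def wear_coefficient(volume_mm3, load_N, distance_m):
    """Specific wear coefficient k = V / (F * s), in mm^3 N^-1 m^-1."""
    return volume_mm3 / (load_N * distance_m)

# Hypothetical wear volume; load and distance match the 60 N / 691 m endurance test.
k = wear_coefficient(volume_mm3=3.0e-3, load_N=60.0, distance_m=691.0)
print(f"{k:.2e} mm^3 N^-1 m^-1")  # ~7.2e-08, within the 'very mild' range reported
```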
Abstract:
A new general fitting method based on the Self-Similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems whose recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological, and other complex data when a mathematical model is absent but a reduced description in terms of a universal set of fitting parameters is needed. The fitting function is verified on economic data (the price of a commodity versus time) and weather data (the Earth's mean surface temperature versus time); for these nontrivial cases a very good fit of the initial data set is obtained. The general conditions for applying this fitting method to describe the response of many complex systems, and its forecasting possibilities, are discussed.
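The paper's exact SS fitting function is not reproduced in this abstract; as a minimal stand-in, the sketch below fits a power law with log-periodic modulation, a common self-similar ansatz, to a synthetic series using SciPy's curve_fit. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def ss_model(t, A, nu, B, w, phi):
    """Self-similar ansatz: power law modulated by a log-periodic term."""
    return A * t**nu * (1.0 + B * np.cos(w * np.log(t) + phi))

# Synthetic 'recorded data' with known parameters plus noise.
t = np.linspace(1.0, 100.0, 400)
y = ss_model(t, 2.0, 0.5, 0.2, 3.0, 0.7)
y += np.random.default_rng(1).normal(0.0, 0.05, t.size)

params, _ = curve_fit(ss_model, t, y, p0=[1.0, 0.4, 0.1, 3.0, 0.0])
print(params)  # recovered (A, nu, B, w, phi) for the synthetic series
```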
Abstract:
The hidden-node problem has been shown to be a major source of Quality-of-Service (QoS) degradation in Wireless Sensor Networks (WSNs) due to factors such as the limited communication range of sensor nodes, link asymmetry, and the characteristics of the physical environment. In wireless contention-based Medium Access Control protocols, if two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision, usually called a hidden-node or blind collision. This problem greatly affects network throughput, energy efficiency, and message transfer delays, which might be particularly dramatic in large-scale WSNs. This technical report tackles the hidden-node problem in WSNs and proposes H-NAMe, a simple yet efficient distributed mechanism to overcome it. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes, and then scales to multiple clusters via a cluster-grouping strategy that guarantees no transmission interference between overlapping clusters. We also show that the H-NAMe mechanism can be easily applied to the IEEE 802.15.4/ZigBee protocols with only minor add-ons while ensuring backward compatibility with the standard specifications. We demonstrate the feasibility of H-NAMe via an experimental test-bed, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. We believe that the results in this technical report will be quite useful in efficiently enabling IEEE 802.15.4/ZigBee as a WSN protocol.
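The actual H-NAMe group-formation mechanism is distributed and message-based; the sketch below is a hypothetical, centralised greedy approximation of the core idea, partitioning a cluster into disjoint groups in which every pair of nodes is mutually visible, so no intra-group hidden-node collisions can occur. The links relation is invented for illustration.

```python
# Hypothetical visibility relation: B and C are hidden from each other.
links = {("A", "B"), ("B", "A"), ("A", "C"), ("C", "A")}

def hears(n, m):
    return (n, m) in links

def split_into_nonhidden_groups(nodes):
    """Greedy partition: place each node in the first group where it hears,
    and is heard by, every current member; open a new group otherwise."""
    groups = []
    for n in nodes:
        for g in groups:
            if all(hears(n, m) and hears(m, n) for m in g):
                g.append(n)      # n is mutually visible with every member
                break
        else:
            groups.append([n])   # no compatible group: start a new one
    return groups

print(split_into_nonhidden_groups(["A", "B", "C"]))  # -> [['A', 'B'], ['C']]
```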
Abstract:
In this work, an experimental study was performed on the influence of plug-filling, loading rate, and temperature on the tensile strength of single-strap (SS) and double-strap (DS) repairs on aluminium structures. Whilst the main purpose of this work was to evaluate the feasibility of plug-filling for improving the strength of these repairs, a parallel study was carried out to assess the sensitivity of the adhesive to external factors that can affect repair performance, such as the rate of loading and the environmental temperature. The experimental programme included repairs with different values of overlap length (LO = 10, 20 and 30 mm), with and without plug-filling, whose results were interpreted in light of experimental evidence of the fracture modes and of typical stress distributions for bonded repairs. The influence of testing speed on repair strength was also addressed (considering 0.5, 5 and 25 mm/min). Regarding temperature effects, tests were carried out at room temperature (≈23 °C), 50 °C and 80 °C. This permitted a comparative evaluation of the adhesive tested below and above its glass transition temperature (Tg), established by the manufacturer as 67 °C. The combined influence of these two parameters on repair strength was also analysed. Based on the results obtained in this work, design guidelines for repairing aluminium structures were proposed.
Abstract:
Solving systems of nonlinear equations is a very important task, since such problems emerge mostly through the mathematical modelling of real problems arising naturally in many branches of engineering and in the physical sciences. The problem can be naturally reformulated as a global optimization problem. In this paper, we show that a self-adaptive combination of a metaheuristic with a classical local search method is able to converge on some difficult problems that are not solved by Newton-type methods.
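The specific metaheuristic and local search used in the paper are not detailed in this abstract; the sketch below only illustrates the reformulation itself, minimising the merit function ||F(x)||² with a stock metaheuristic (differential evolution) and then polishing the result with a Newton-type local solver, on an invented two-equation system.

```python
import numpy as np
from scipy.optimize import differential_evolution, root

# Example system: F(x, y) = (x^2 + y^2 - 4, e^x + y - 1) = 0
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

def merit(v):
    # Global-optimization reformulation: minimize ||F(v)||^2
    r = F(v)
    return float(r @ r)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
coarse = differential_evolution(merit, bounds, seed=0)  # metaheuristic phase
polished = root(F, coarse.x, method="hybr")             # Newton-type local phase
print(polished.x, F(polished.x))                        # residuals ~ 0 at the root
```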
Abstract:
With the advancement of computer science and information technology, computing systems are becoming increasingly complex, with a growing number of heterogeneous components. They are thus becoming more difficult to monitor, manage, and maintain, a process well known to be labour-intensive and error-prone. In addition, traditional approaches to system management struggle to keep up with rapidly changing environments. There is a need for automatic and efficient approaches to monitoring and managing complex computing systems. In this paper, we propose an innovative framework for scheduling system management that combines the Autonomic Computing (AC) paradigm, Multi-Agent Systems (MAS), and Nature-Inspired Optimization Techniques (NIT). Additionally, we consider the resolution of realistic problems: the scheduling of a Cutting and Treatment Stainless Steel Sheet Line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
Abstract:
This paper proposes a novel agent-based approach to the self-configuration of meta-heuristics. Meta-heuristics are algorithms with parameters that need to be set up as efficiently as possible in order to ensure good performance. A learning module for the self-parameterization of meta-heuristics (MH) in a Multi-Agent System (MAS) for the resolution of scheduling problems is proposed in this work. The learning module is based on Case-based Reasoning (CBR), and two different integration approaches are proposed. A computational study is carried out to compare the two CBR integration perspectives. Finally, some conclusions are reached and future work is outlined.
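As a rough illustration of the CBR idea, assuming hypothetical problem features and parameter sets (none of which come from the paper), the retrieve step can be sketched as a nearest-neighbour lookup over past cases:

```python
import numpy as np

# Hypothetical case base: (problem features, MH parameters that worked well).
# Features here: (n_jobs, n_machines, due-date tightness).
case_base = [
    (np.array([20.0, 5.0, 0.3]), {"pop_size": 50, "mutation": 0.10}),
    (np.array([100.0, 10.0, 0.7]), {"pop_size": 200, "mutation": 0.05}),
]

def retrieve_parameters(features, cases=case_base):
    """CBR retrieve phase: return the parameters of the most similar past
    case (Euclidean distance over feature columns scaled to comparable range)."""
    feats = np.asarray(features, dtype=float)
    scale = np.abs(np.vstack([c[0] for c in cases])).max(axis=0)
    best = min(cases, key=lambda c: np.linalg.norm((c[0] - feats) / scale))
    return best[1]

print(retrieve_parameters([90, 12, 0.6]))  # -> parameters from the closest case
```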
Abstract:
To increase the amount of logic available in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce prices. However, nanometric scales are highly vulnerable to radiation-induced faults that affect values stored in memory cells. Since the functional definition of FPGAs relies on memory cells, they become highly prone to this type of fault. Fault-tolerant implementations, based on triple modular redundancy (TMR) infrastructures, help to keep the circuit operating correctly. However, TMR alone is not sufficient to guarantee the safe operation of a circuit; other issues, such as the effects of multi-bit upsets (MBU) and fault accumulation, also have to be addressed. Furthermore, when a fault occurs, the correct operation of the affected module must be restored and the current state of the circuit coherently re-established. This paper presents a solution that enables the autonomous, correct restoration of the functional definition of the affected module, avoiding fault accumulation and re-establishing the correct circuit state in real time, while keeping the circuit in normal operation.
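The paper targets FPGA fabric, but the majority-voting logic at the heart of TMR can be sketched in a few lines; the Python below is purely illustrative, voting bitwise across three replica outputs and flagging the module whose state would need to be restored.

```python
def tmr_vote(a, b, c):
    """Bitwise majority voter: each output bit takes the value held by
    at least two of the three redundant replicas."""
    return (a & b) | (a & c) | (b & c)

def disagreeing_module(a, b, c):
    """Identify which replica (if any) disagrees with the voted output,
    i.e. the module whose state must be restored."""
    v = tmr_vote(a, b, c)
    for name, value in (("A", a), ("B", b), ("C", c)):
        if value != v:
            return name
    return None

print(bin(tmr_vote(0b1010, 0b1010, 0b0010)))       # -> 0b1010 (fault masked)
print(disagreeing_module(0b1010, 0b1010, 0b0010))  # -> 'C'
```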
Abstract:
The new generations of SRAM-based FPGA (field-programmable gate array) devices are the preferred choice for implementing reconfigurable computing platforms intended to accelerate processing in real-time systems. However, the vulnerability of FPGAs to hard and soft errors is a major weakness for robust configurable system design. In this paper, a novel built-in self-healing (BISH) methodology, based on run-time self-reconfiguration, is proposed. A soft microprocessor core implemented in the FPGA is responsible for the management and execution of all BISH procedures. Fault detection and diagnosis are followed by repair actions that take advantage of the dynamic reconfiguration features offered by new FPGA families. Meanwhile, modular redundancy ensures that the system continues to work correctly.
Abstract:
One of the most important measures to prevent wild forest fires is the use of prescribed and controlled burning actions, as they reduce the available fuel mass. The impact of these management activities on soil physical and chemical properties varies according to the type of both soil and vegetation. Decisions in forest management plans are often based on the results of soil-monitoring campaigns, which tend to be labour-intensive and expensive. In this paper we successfully use the multivariate statistical technique Robust Principal Component Analysis (ROBPCA) to investigate the effectiveness of the sampling procedure for two different methodologies, in order to reflect on the possibility of simplifying and reducing the sample collection process and its auxiliary laboratory analysis work, towards a cost-effective and competent forest soil characterization.
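ROBPCA itself is usually run through dedicated statistical packages; as a minimal Python approximation of robust PCA, the sketch below derives principal axes from a Minimum Covariance Determinant (MCD) scatter estimate, which resists the kind of outliers that distort classical PCA. The data are synthetic, not soil measurements.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))   # stand-in for soil-property measurements
X[:3] += 8.0                   # a few gross outliers

# Robust location/scatter via MCD, then principal axes of the robust scatter.
mcd = MinCovDet(random_state=0).fit(X)
eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigvals)[::-1]

scores = (X - mcd.location_) @ eigvecs[:, order]  # robust PC scores
print(eigvals[order] / eigvals.sum())             # variance explained per component
```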