878 results for Observational techniques and algorithms
Abstract:
In GaAs-based pseudomorphic high-electron-mobility transistor device structures, the strain and composition of the InxGa1-xAs channel layer are very important, as they influence the electronic properties of these devices. In this context, transmission electron microscopy techniques such as (002) dark-field imaging, high-resolution transmission electron microscopy (HRTEM) imaging, scanning transmission electron microscopy-high-angle annular dark-field (STEM-HAADF) imaging, and selected-area diffraction are useful. A quantitative comparative study using these techniques is relevant for assessing the merits and limitations of each. In this article, we have investigated the strain and composition of the InxGa1-xAs layer with the aforementioned techniques and compared the results. The HRTEM images were investigated with strain-state analysis. The indium content in this layer was quantified by HAADF imaging and correlated with STEM simulations. The studies showed that the InxGa1-xAs channel layer was grown pseudomorphically, leading to tetragonal strain along the [001] growth direction, and that the average indium content (x) in the epilayer is approximately 0.12. We found the results obtained using the various methods of analysis to be consistent.
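The tetragonal strain mentioned above can be written down with the standard linear-elasticity relations for a (001)-oriented pseudomorphic layer; these are textbook expressions assuming Vegard behaviour and known elastic constants, not formulas taken from the article:

```latex
% Relaxed lattice constant of In_xGa_{1-x}As (Vegard's law):
a(x) = x\, a_{\mathrm{InAs}} + (1-x)\, a_{\mathrm{GaAs}}
% In-plane strain imposed by the GaAs substrate:
\varepsilon_{\parallel} = \frac{a_{\mathrm{GaAs}} - a(x)}{a(x)}
% Out-of-plane (tetragonal) strain along [001]:
\varepsilon_{\perp} = -\frac{2\,C_{12}}{C_{11}}\, \varepsilon_{\parallel}
```

With a measured tetragonal distortion, these relations can be inverted to estimate x, which is the kind of cross-check the compared techniques enable.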
Abstract:
Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. Contrastingly, today the B-mode (or parity odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe's history.
In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. Similarly, this work describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04±0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
Abstract:
Surface mass loads come in many different varieties, including the oceans, atmosphere, rivers, lakes, glaciers, ice caps, and snow fields. The loads migrate over Earth's surface on time scales that range from less than a day to many thousands of years. The weights of the shifting loads exert normal forces on Earth's surface. Since the Earth is not perfectly rigid, the applied pressure deforms the shape of the solid Earth in a manner controlled by the material properties of Earth's interior. One of the most prominent types of surface mass loading, ocean tidal loading (OTL), comes from the periodic rise and fall in sea-surface height due to the gravitational influence of celestial objects, such as the moon and sun. Depending on geographic location, the surface displacements induced by OTL typically range from millimeters to several centimeters in amplitude, which may be inferred from Global Navigation Satellite System (GNSS) measurements with sub-millimeter precision. Spatiotemporal characteristics of observed OTL-induced surface displacements may therefore be exploited to probe Earth structure. In this thesis, I present descriptions of contemporary observational and modeling techniques used to explore Earth's deformation response to OTL and other varieties of surface mass loading. With the aim to extract information about Earth's density and elastic structure from observations of the response to OTL, I investigate the sensitivity of OTL-induced surface displacements to perturbations in the material structure. As a case study, I compute and compare the observed and predicted OTL-induced surface displacements for a network of GNSS receivers across South America. The residuals in three distinct and dominant tidal bands are sub-millimeter in amplitude, indicating that modern ocean-tide and elastic-Earth models well predict the observed displacement response in that region.
Nevertheless, the sub-millimeter residuals exhibit regional spatial coherency that cannot be explained entirely by random observational uncertainties and that suggests deficiencies in the forward-model assumptions. In particular, the discrepancies may reveal sensitivities to deviations from spherically symmetric, non-rotating, elastic, and isotropic (SNREI) Earth structure due to the presence of the South American craton.
Abstract:
Several small scleractinian coral colonies were collected from a remote reef and transferred to the Louisiana Universities Marine Center (LUMCON) for in vitro reproductive and larval studies. The species used here were Porites astreoides and Diploria strigosa. Colony size was ~20 cm in diameter. Colonies were brought to the surface by liftbag and stored in modified ice coolers. They were transported from Freeport, TX to Cocodrie, LA by truck over nearly 15 hours, to waiting aquaria in which field conditions were simulated. This document describes the techniques and equipment that were used, how to outfit such aquaria, proper handling techniques for coral colonies, and several eventualities that the mariculturist should be prepared for in undertaking this endeavor. It will hopefully prevent many mistakes from being made.
Abstract:
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques will be examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and to local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
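As a concrete illustration of the local (patchwise) differential approach discussed above, the sketch below solves the brightness-constancy equations over a patch in the least-squares (Lucas-Kanade-style) sense. It is a minimal sketch, not the implementation evaluated in the abstract, and its crude finite-difference derivatives only gesture at the matched-derivative issue the abstract raises:

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Estimate a single (u, v) displacement for an image patch by the
    local least-squares differential method. Valid only for small
    motions (< ~2 px), as the abstract notes for differential methods."""
    # Spatial derivatives of the first frame, temporal derivative between frames.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    # Stack one brightness-constancy equation per pixel: Ix*u + Iy*v = -It.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # Least-squares solution over the whole patch (handles rank deficiency).
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)
```

On a linear intensity ramp shifted rightward by one pixel, this recovers u ≈ 1 exactly, since the derivatives of a ramp are constant.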
Abstract:
Background and purpose: To compare external beam radiotherapy techniques for parotid gland tumours using conventional radiotherapy (RT), three-dimensional conformal radiotherapy (3DCRT), and intensity-modulated radiotherapy (IMRT). To optimise the IMRT techniques, and to produce an IMRT class solution. Materials and methods: The planning target volume (PTV), contra-lateral parotid gland, oral cavity, brain-stem, brain and cochlea were outlined on CT planning scans of six patients with parotid gland tumours. Optimised conventional RT and 3DCRT plans were created and compared with inverse-planned IMRT dose distributions using dose-volume histograms. The aim was to reduce the radiation dose to organs at risk and improve the PTV dose distribution. A beam-direction optimisation algorithm was used to improve the dose distribution of the IMRT plans, and a class solution for parotid gland IMRT was investigated. Results: 3DCRT plans produced an equivalent PTV irradiation and reduced the dose to the cochlea, oral cavity, brain, and other normal tissues compared with conventional RT. IMRT further reduced the radiation dose to the cochlea and oral cavity compared with 3DCRT. For nine- and seven-field IMRT techniques, there was an increase in low-dose radiation to non-target tissue and the contra-lateral parotid gland. IMRT plans produced using three to five optimised intensity-modulated beam directions maintained the advantages of the more complex IMRT plans, and reduced the contra-lateral parotid gland dose to acceptable levels. Three- and four-field non-coplanar beam arrangements increased the volume of brain irradiated, and increased PTV dose inhomogeneity.
A four-field class solution consisting of paired ipsilateral coplanar anterior and posterior oblique beams (15°, 45°, 145° and 170° from the anterior plane) was developed which maintained the benefits without the complexity of individual patient optimisation. Conclusions: For patients with parotid gland tumours, reduction in the radiation dose to critical normal tissues was demonstrated with 3DCRT compared with conventional RT. IMRT produced a further reduction in the dose to the cochlea and oral cavity. With nine and seven fields, the dose to the contra-lateral parotid gland was increased, but this was avoided by optimisation of the beam directions. The benefits of IMRT were maintained with three or four fields when the beam angles were optimised, but were also achieved using a four-field class solution. Clinical trials are required to confirm the clinical benefits of these improved dose distributions.
Abstract:
The Growth, Learning and Development (GLAD) study aimed to examine how a broad range of factors influence child weight during the first year of life. Assessments were undertaken within a multidisciplinary team framework. The sample was drawn from the community and data collection was undertaken in the four Greater Belfast Trusts. Two hundred and thirty-four families took part, each receiving a total of five home visits during which physical growth, oral-motor skills and development were assessed. Psychosocial evaluation examined parent-child interaction, feeding and other parental and child characteristics using quantitative and observational techniques. This paper outlines the main findings and recommendations from the GLAD study.
Abstract:
The role of rhodopsin as a structural prototype for the study of the whole superfamily of G protein-coupled receptors (GPCRs) is reviewed in an historical perspective. Discovered at the end of the nineteenth century, fully sequenced since the early 1980s, and with direct three-dimensional information available since the 1990s, rhodopsin has served as a platform to gather indirect information on the structure of the other superfamily members. Recent breakthroughs have elicited the solution of the structures of additional receptors, namely the beta 1- and beta 2-adrenergic receptors and the A(2A) adenosine receptor, now providing an opportunity to gauge the accuracy of homology modeling and molecular docking techniques and to perfect the computational protocol. Notably, in coordination with the solution of the structure of the A(2A) adenosine receptor, the first "critical assessment of GPCR structural modeling and docking" has been organized, the results of which highlighted that the construction of accurate models, although challenging, is certainly achievable. The docking of the ligands and the scoring of the poses clearly emerged as the most difficult components. A further goal in the field is certainly to derive the structure of receptors in their signaling state, possibly in complex with agonists. These advances, coupled with the introduction of more sophisticated modeling algorithms and the increase in computer power, raise the expectation for a substantial boost of the robustness and accuracy of computer-aided drug discovery techniques in the coming years.
Abstract:
We present three natural language marking strategies based on fast and reliable shallow parsing techniques, and on widely available lexical resources: lexical substitution, adjective conjunction swaps, and relativiser switching. We test these techniques on a random sample of the British National Corpus. Individual candidate marks are checked for goodness of structural and semantic fit, using both lexical resources, and the web as a corpus. A representative sample of marks is given to 25 human judges to evaluate for acceptability and preservation of meaning. This establishes a correlation between corpus based felicity measures and perceived quality, and makes qualified predictions. Grammatical acceptability correlates with our automatic measure strongly (Pearson's r = 0.795, p = 0.001), allowing us to account for about two thirds of variability in human judgements. A moderate but statistically insignificant (Pearson's r = 0.422, p = 0.356) correlation is found with judgements of meaning preservation, indicating that the contextual window of five content words used for our automatic measure may need to be extended. © 2007 SPIE-IS&T.
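The Pearson product-moment correlations reported above (r = 0.795 and r = 0.422) can be reproduced for any paired samples with the standard formula; the sketch below is a generic plain-Python version run on made-up numbers, not on the study's corpus measures or human judgements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance numerator and the two standard-deviation factors.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Squaring r gives the fraction of variance explained, which is how an r of about 0.8 accounts for roughly two thirds of the variability in the human judgements.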
Abstract:
Landslides and debris flows, commonly triggered by rainfall, pose a geotechnical risk, causing disruption to transport routes and incurring significant financial expenditure. With infrastructure maintenance budgets becoming ever more constrained, this paper provides an overview of some of the developing methods being implemented by Queen's University, Belfast in collaboration with the Department for Regional Development to monitor the stability of two distinctly different infrastructure slopes in Northern Ireland. In addition to the traditional, intrusive ground investigative and laboratory testing methods, aerial LiDAR, terrestrial LiDAR, geophysical techniques and differential Global Positioning Systems have been used to monitor slope stability. Finally, a comparison between terrestrial LiDAR, pore water pressure and soil moisture deficit (SMD) is presented to outline the processes for a more informed management regime and to highlight the seasonal relationship between landslide activity and the aforementioned parameters.
Abstract:
BACKGROUND: Diabetic retinopathy is an important cause of visual loss. Laser photocoagulation preserves vision in diabetic retinopathy but is currently used at the stage of proliferative diabetic retinopathy (PDR).
OBJECTIVES: The primary aim was to assess the clinical effectiveness and cost-effectiveness of pan-retinal photocoagulation (PRP) given at the non-proliferative stage of diabetic retinopathy (NPDR) compared with waiting until the high-risk PDR (HR-PDR) stage was reached. There have been recent advances in laser photocoagulation techniques, and in the use of laser treatments combined with anti-vascular endothelial growth factor (VEGF) drugs or injected steroids. Our secondary questions were: (1) If PRP were to be used in NPDR, which form of laser treatment should be used? and (2) Is adjuvant therapy with intravitreal drugs clinically effective and cost-effective in PRP?
ELIGIBILITY CRITERIA: Randomised controlled trials (RCTs) for efficacy but other designs also used.
REVIEW METHODS: Systematic review and economic modelling.
RESULTS: The Early Treatment Diabetic Retinopathy Study (ETDRS), published in 1991, was the only trial designed to determine the best time to initiate PRP. It randomised one eye of 3711 patients with mild-to-severe NPDR or early PDR to early photocoagulation, and the other to deferral of PRP until HR-PDR developed. The risk of severe visual loss after 5 years for eyes assigned to PRP for NPDR or early PDR, compared with deferral of PRP, was reduced by 23% (relative risk 0.77, 99% confidence interval 0.56 to 1.06). However, the ETDRS did not provide results separately for NPDR and early PDR. In economic modelling, the base case found that early PRP could be more effective and less costly than deferred PRP. Sensitivity analyses gave similar results, with early PRP continuing to dominate or having a low incremental cost-effectiveness ratio. However, there are substantial uncertainties. For our secondary aims we found 12 trials of lasers in diabetic retinopathy (DR), with 982 patients in total (individual trials ranging from 40 to 150 patients). Most were in PDR but five included some patients with severe NPDR. Three compared multi-spot pattern lasers against argon laser. RCTs comparing laser applied in a lighter manner (less-intense burns) with conventional methods (more intense burns) reported little difference in efficacy but fewer adverse effects. One RCT suggested that selective laser treatment targeting only ischaemic areas was effective. Observational studies showed that the most important adverse effect of PRP was macular oedema (MO), which can cause visual impairment, usually temporary. Ten trials of laser and anti-VEGF or steroid drug combinations were consistent in reporting a reduction in risk of PRP-induced MO.
LIMITATION: The current evidence is insufficient to recommend PRP for severe NPDR.
CONCLUSIONS: There is, as yet, no convincing evidence that modern laser systems are more effective than the argon laser used in ETDRS, but they appear to have fewer adverse effects. We recommend a trial of PRP for severe NPDR and early PDR compared with deferring PRP until the HR-PDR stage. The trial would use modern laser technologies, and investigate the value of adjuvant prophylactic anti-VEGF or steroid drugs.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005408.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
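The relative-risk arithmetic in the results above is simple to reproduce. The sketch below uses hypothetical event counts chosen only to illustrate the calculation (the ETDRS reported the ratio, RR = 0.77, not these counts); the 23% figure is just 1 − RR, the relative risk reduction:

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A versus group B from event counts and group sizes."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    return risk_a / risk_b

# Hypothetical counts that happen to yield the reported ratio of 0.77:
rr = relative_risk(77, 1000, 100, 1000)   # early-PRP arm vs deferred arm
rrr = 1.0 - rr                            # relative risk reduction (23%)
```

Note that the reported 99% confidence interval (0.56 to 1.06) crosses 1.0, which is why the ETDRS result alone does not settle the timing question the review addresses.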
Abstract:
The development of computing systems is a complex, multi-stage process that requires a thorough analysis of the problem, taking into account the applicable constraints and requirements. This task involves exploring alternative techniques and computational algorithms to optimise the system and satisfy the established requirements. In this context, one of the most important stages is the analysis and implementation of computational algorithms. Enormous technological advances in FPGAs (Field-Programmable Gate Arrays) have made it possible to develop extremely complex engineering systems. However, the number of transistors available per chip is growing faster than our capacity to develop systems that take advantage of that growth. This well-known limitation was already apparent with ASICs (Application-Specific Integrated Circuits) before it manifested itself with FPGAs, and it has been growing steadily. The development of systems based on high-capacity FPGAs involves a wide variety of tools, including methods for the efficient implementation of computational algorithms. This thesis aims to contribute to this area by exploiting reuse, higher levels of abstraction, and clearer, more automated algorithmic specifications. More specifically, a study is presented that was carried out to derive criteria for the hardware implementation of recursive versus iterative algorithms. After presenting some of the most significant strategies for implementing recursion in hardware, a set of algorithms for solving combinatorial search problems (taken as application examples) is described in detail. Recursive and iterative versions of these algorithms were implemented and tested on FPGA. Based on the results obtained, a careful comparative analysis is made.
New research tools and techniques developed within the scope of this thesis are also discussed and demonstrated.
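The recursive-versus-iterative comparison at the heart of the thesis can be illustrated in software. The sketch below (a hypothetical analogue in Python, not the thesis's FPGA implementations) shows a small combinatorial search written recursively and then rewritten with an explicit stack, the transformation typically applied when mapping recursion onto hardware that lacks a call stack:

```python
def count_subsets_rec(weights, capacity, i=0, total=0):
    """Recursive backtracking: count subsets of `weights` whose sum <= capacity."""
    if total > capacity:          # prune this branch
        return 0
    if i == len(weights):         # all items decided: one feasible subset
        return 1
    # Branch: exclude item i, then include item i.
    return (count_subsets_rec(weights, capacity, i + 1, total) +
            count_subsets_rec(weights, capacity, i + 1, total + weights[i]))

def count_subsets_iter(weights, capacity):
    """Same search with an explicit stack instead of recursion."""
    count = 0
    stack = [(0, 0)]              # frames of (next index, running total)
    while stack:
        i, total = stack.pop()
        if total > capacity:
            continue
        if i == len(weights):
            count += 1
            continue
        stack.append((i + 1, total))
        stack.append((i + 1, total + weights[i]))
    return count
```

Both versions explore the same search tree; the iterative one makes the storage for pending branches explicit, which is what a hardware implementation must budget for.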
Abstract:
This work stems from the interest in replacing optical network nodes based mostly on electronics with nodes based on optical technology. Optical technology is expected to enable higher bit rates in the network, greater transparency, and greater efficiency through new switching paradigms. Following this vision, the MZI-SOA, a hybridly integrated semiconductor device, was used to perform optical signal processing functions required in next-generation optical network nodes. New optical networks use advanced, phase-managed modulation formats, so the impact of these formats on the performance of the MZI-SOA in wavelength and format conversion was studied experimentally and by simulation under various operating conditions. Guidelines for optimal operation were derived. The impact of the signal pulse shape on device performance was also studied. The MZI-SOA was then used to perform time-domain functions at the bit and packet level. The operation of a converter from wavelength-division multiplexing to optical time-division multiplexing was investigated experimentally and by simulation, and that of a packet compressor and decompressor by simulation. For the latter, operation was investigated with the MZI-SOA based on semiconductor optical amplifiers with quantum-well and quantum-dot geometries. A time-slot interchanger was also demonstrated experimentally, exploiting the MZI-SOA as a wavelength converter and using a bank of optical delay lines to introduce a selectable delay into the signal. Finally, the impact of crosstalk in optical networks was studied analytically, experimentally, and by simulation in several situations. An analytical performance-estimation model was extended to cover signals that are distorted and affected by crosstalk.
The case of strongly filtered signals affected by crosstalk was studied, and it was shown that, to determine the resulting penalties correctly, both effects must be considered simultaneously rather than separately. The crosstalk-limited scalability of a time-slot interchanger based on the MZI-SOA operating as a space switch was studied. It was also shown that signals strongly affected by nonlinearities can cause higher crosstalk penalties than signals unaffected by nonlinearities. This work demonstrated that the MZI-SOA enables the construction of several relevant optical circuits, acting as a fundamental building block, and its performance was analysed from the component level up to the system level. Taking into account the advantages and disadvantages of the MZI-SOA and recent developments in other technologies, research topics were suggested with a view to evolving towards next-generation optical networks.
Abstract:
The main object of this thesis is the study of algorithms for the automatic processing and representation of data, in particular information obtained from sensors mounted on board vehicles (2D and 3D), with application in the context of driver assistance systems. The work addresses some of the problems that both automated driving (AD) systems and advanced driver assistance systems (ADAS) face today. The document consists of two parts. The first describes the design, construction, and development of three robotic prototypes, including details of the sensors mounted on board the robots, the algorithms, and the software architectures. These robots were used as test platforms to evaluate and validate the proposed techniques. They also took part in several autonomous driving competitions, with very good results. The second part of the document presents several algorithms used to generate intermediate representations of sensor data. These can be used to improve existing pattern recognition, detection, or navigation techniques, and thereby contribute to future AD or ADAS applications. Since autonomous vehicles carry a large number of sensors of different natures, intermediate representations are particularly suitable, as they can handle problems related to the diverse natures of the data (2D, 3D, photometric, etc.), the asynchronous character of the data (multiple sensors sending data at different rates), and data alignment (calibration problems, different sensors providing different measurements of the same object).
In this context, new techniques are proposed for computing a multi-camera, multi-modal inverse perspective mapping representation, for performing colour correction between images in order to obtain high-quality mosaics, and for generating a scene representation based on polygonal primitives, capable of handling large amounts of 3D and 2D data and of refining the representation as new sensor data are received.
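The core operation behind an inverse perspective mapping of the kind proposed above is remapping pixel coordinates through a plane-induced 3x3 homography. The sketch below shows only that coordinate transform on a few points, with an assumed (made-up) matrix H; it is an illustration of the mechanism, not the thesis's multi-camera, multi-modal method:

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D points through a 3x3 homography H (projective transform)."""
    pts = np.asarray(pts, dtype=float)
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    # Divide out the projective scale to return to Cartesian coordinates.
    return mapped[:, :2] / mapped[:, 2:3]
```

In a full IPM pipeline, H would be derived from each camera's calibration against the ground plane, so that all cameras' pixels land in one common bird's-eye frame.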