990 results for easy


Relevance:

10.00%

Publisher:

Abstract:

Most existing pure-amplitude or pure-phase pupil filters in superresolution technology achieve only axial or transverse superresolution, not three-dimensional (3D) superresolution, yet 3D superresolution plays an important role in 3D imaging systems. To improve the 3D resolving power of imaging systems, a complex-amplitude pupil filter was therefore designed and its 3D superresolution performance studied. The influence of the filter's first-zone radius and transmittance on the Strehl ratio, on the axial and transverse superresolution factors, and on the sidelobe energy was analyzed in detail. A series of simulations shows that 3D superresolution can be achieved with a complex-amplitude pupil filter. The advantages of this filter are that 3D superresolution is easy to realize and the Strehl ratio is comparatively high; the drawback is that 3D superresolution is always accompanied by an increase in sidelobe energy, but…
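The abstract does not give the filter's exact parameterization, but the on-axis quantity it trades against resolution is easy to state. As a minimal sketch (assuming a normalized two-zone pupil with inner-zone radius `a` and complex inner-zone transmittance `t`, a simplification of the actual design), the Strehl ratio follows from the area-weighted pupil average:

```python
def strehl_ratio(a, t):
    """On-axis Strehl ratio of a two-zone pupil filter.

    Assumed pupil (illustrative, not the paper's design):
    P(rho) = t for rho < a and 1 for a <= rho <= 1, with rho the
    normalized pupil radius. The focal amplitude is the area-weighted
    average of P, so S = |t * a**2 + (1 - a**2)|**2.
    """
    u = t * a**2 + (1 - a**2)
    return abs(u) ** 2

print(strehl_ratio(0.5, 1.0))   # clear aperture: S = 1
print(strehl_ratio(0.5, 0.0))   # opaque inner zone: S = 0.5625
print(strehl_ratio(0.5, -0.3))  # negative-amplitude inner zone
```

Lowering |t| or driving t negative sharpens the focal spot at the cost of Strehl ratio and sidelobe energy, which is exactly the trade-off the abstract describes.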

Relevance:

10.00%

Publisher:

Abstract:

Different public and private organizations collect and publish a mass of data on the socio-economic reality of different nations. The Brazilian government today has a manifest interest in disseminating a diverse range of information to the most varied user profiles. A series of limitations to a more massive and democratic dissemination persists, however, among them the heterogeneity of the data sources, their dispersion, and their unfriendly presentation formats. Owing to the inherent complexity of the geographic information involved, which produces incompatibilities at several levels, data interchange between geographic information systems is not a trivial problem. For Web applications, one solution is Web Services, which allow new applications to interact with existing ones and make systems developed on different platforms compatible. In this context, the goal of this work is to show the possibilities of building portals using free software, Web Services technology, and the Open Geospatial Consortium (OGC) standards for the dissemination of spatial data. To evaluate and test the selected technologies and demonstrate their effectiveness, an example portal of socio-economic data was developed, comprising information from a local server and from remote servers. The contributions of this work are the provision of dynamic maps, the generation of maps by composing maps published on remote and local servers, and the use of the OGC WMC standard. Analysis of the portal prototype shows, however, that locating and requesting Web Services are not easy tasks for a typical Internet user. In this direction, future work in the domain of geographic information portals could adopt Representational State Transfer (REST) technology.
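The map composition described above rests on OGC Web Service requests. As an illustration (the server URL and layer names below are hypothetical, not from the portal described), a WMS GetMap request is just a parameterized URL:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, size=(512, 512),
                   srs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.1.1 GetMap request URL from its
    required parameters (layer list, bounding box, output size)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical local server publishing socio-economic layers:
url = wms_getmap_url("http://localhost:8080/geoserver/wms",
                     ["municipalities", "income_per_capita"],
                     bbox=(-74.0, -34.0, -34.0, 5.0))
print(url)
```

Composing maps from several servers, as the portal does, amounts to issuing one such request per server and overlaying the returned images.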

Relevance:

10.00%

Publisher:

Abstract:

A method using two prisms for the measurement of small dynamic angles is proposed. The measurement is based on a simple tangent equation, and a phase-modulating interferometer with a laser diode measures the dynamic optical path differences with high accuracy. Owing to the simple tangent equation, the symmetry requirement on the two prisms in the optical configuration is eliminated, and the separations between the two parallel beams are easily measured with a position-sensitive detector. Small-dynamic-angle measurements are experimentally demonstrated with high accuracy. (C) 2007 Society of Photo-Optical Instrumentation Engineers.
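The abstract does not reproduce the tangent equation itself, so the sketch below shows only the generic form such a measurement takes; the relation and every number in it are illustrative assumptions, not the paper's actual configuration. The interferometer phase yields an optical path difference, and dividing by the measured beam separation gives the angle:

```python
import math

def small_angle(delta_phi, wavelength, separation):
    """Recover a small tilt angle from interferometric data.

    Illustrative relation only (the paper's geometry differs):
    the phase delta_phi gives an optical path difference
    opd = delta_phi * wavelength / (2*pi), and with the separation
    of the two parallel beams from the position-sensitive detector,
    tan(theta) = opd / separation.
    """
    opd = delta_phi * wavelength / (2 * math.pi)
    return math.atan(opd / separation)

# 0.1 rad of phase at 650 nm over a 10 mm beam separation:
theta = small_angle(0.1, 650e-9, 10e-3)
print(theta)  # ~1e-6 rad: microradian tilts resolved from phase
```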

Relevance:

10.00%

Publisher:

Abstract:

A dynamic optical coupler based on a Dammann grating is proposed. By controlling the displacement of the Dammann grating in the device, the incident beams can be split or combined, with dynamic switching between the two states. By choosing an appropriate type of Dammann grating, arbitrary N×M dynamic optical coupling can be realized. In the experiment, a 1×8 Dammann dynamic optical coupler was measured at a wavelength of 1550 nm: the insertion loss was 0.43 dB in the optical-switching mode, while in the beam-splitting mode the uniformity reached 0.03 with a mean per-channel insertion loss of 10.5 dB. The experimental setup is easy to align, compact, and low in power consumption, and the key element, the Dammann grating, has a mature fabrication process well suited to mass production. The scheme is particularly advantageous for medium- and large-scale optical switching arrays, and has practical…
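The loss figures quoted above are in decibels; the standard conversion (not specific to this paper) shows what they mean in power terms:

```python
import math

def insertion_loss_db(p_out, p_in):
    """Insertion loss in dB from output and input power."""
    return -10 * math.log10(p_out / p_in)

def power_ratio(loss_db):
    """Transmitted power fraction for a given loss in dB."""
    return 10 ** (-loss_db / 10)

# Figures quoted in the abstract:
print(power_ratio(0.43))   # switch mode: ~0.906 of the power transmitted
print(power_ratio(10.5))   # splitting mode: ~0.089 per channel
```

For comparison, an ideal lossless 1×8 split would give 1/8 of the power per channel (about 9.0 dB), so 10.5 dB per channel corresponds to roughly 1.5 dB of excess loss.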

Relevance:

10.00%

Publisher:

Abstract:

Building on an existing angular-displacement interferometric technique, an improved angular-displacement measurement method is proposed. By choosing a suitable initial angle of incidence, the two beams reflected from the front and rear surfaces of a plane-parallel plate are made to produce shearing interference. A one-dimensional position-sensitive detector measures the offset of the light spot focused onto its photosensitive surface by a lens. The angular displacement of the measured object is then calculated from the phase of the interference signal and the spot offset. In this scheme, a plane mirror introduced together with the reflecting surface of the measured object forms an optical-path-difference amplification system, which increases the sensitivity of the angular-displacement measurement. The influence of the initial angle of incidence on the shear ratio is analyzed, and the measurement accuracy of the scheme is discussed. Experimental results show that the repeatability of angular-displacement measurements based on this technique reaches the order of 10⁻⁸ rad.

Relevance:

10.00%

Publisher:

Abstract:

Both chemical and biological methods are used to assess the water quality of rivers. Many standard physical and chemical methods are now established, but biological procedures of comparable accuracy and versatility are still lacking. This is unfortunate because the biological assessment of water quality has several advantages over physical and chemical analyses. Several groups of organisms have been used to assess water quality in rivers and these include Bacteria, Protozoa, Algae, macrophytes, macroinvertebrates and fish. Hellawell (1978) provides an excellent review of the advantages and disadvantages of these groups, and concludes that macroinvertebrates are the most useful for monitoring water quality. Although macroinvertebrates are relatively easy to sample in shallow water (depth < 1m), quantitative sampling poses more problems than qualitative sampling because a large number of replicate sampling units are usually required for accurate estimates of numbers or biomass per unit area. Both qualitative and quantitative sampling are difficult in deep water (depth > 1m). The present paper first considers different types of samplers with emphasis on immediate samplers, and then discusses some problems in choosing a suitable sampler for benthic macroinvertebrates in deep rivers.
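The replicate burden mentioned above can be made concrete with a standard sample-size rule (a textbook design formula, not one given in this paper): choose the number of sample units n so that the standard error of the mean density stays within a chosen fraction of the mean.

```python
def replicates_needed(mean, std, precision=0.2, t=2.0):
    """Replicate sample units needed for a target precision.

    Standard design formula: choose n so the standard error is
    within `precision` * mean (roughly 95% confidence when t = 2):
    n = (t * std / (precision * mean)) ** 2.
    """
    return (t * std / (precision * mean)) ** 2

# Benthic counts are typically aggregated (std near or above the
# mean), so 20% precision already demands many replicates:
print(replicates_needed(mean=50.0, std=60.0))  # 144 sample units
```

Relaxing the precision target shrinks n quadratically, which is why qualitative sampling is so much cheaper than quantitative sampling.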

Relevance:

10.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
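The OCT-specific implementation is described later in the thesis; the variance-reduction idea behind importance sampling can, however, be sketched on a toy analogue of the deep-photon problem (all numbers here are illustrative): estimating the rare probability that an exponentially distributed path exceeds a given depth, by drawing from a slower-decaying proposal and reweighting each hit.

```python
import math
import random

def naive_estimate(n, depth=5.0, rng=None):
    """Monte Carlo fraction of unit-mean exponential paths deeper
    than `depth`. Deep events are rare, so most samples are wasted."""
    rng = rng or random.Random(0)
    return sum(rng.expovariate(1.0) > depth for _ in range(n)) / n

def importance_estimate(n, depth=5.0, rate=0.2, rng=None):
    """Same probability via importance sampling: draw from a
    slower-decaying exponential (rate < 1) so deep events are common,
    and reweight each hit by the likelihood ratio p(x)/q(x)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate)                      # proposal q
        if x > depth:
            total += math.exp(-x) / (rate * math.exp(-rate * x))
    return total / n

exact = math.exp(-5.0)  # true tail probability, ~0.0067
print(naive_estimate(2000), importance_estimate(2000), exact)
```

At the same sample count, the weighted estimator concentrates its samples where the signal is, which is the same reason importance sampling makes deep-tissue OCT signals tractable to simulate.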

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, which predicts the width of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
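The classify-then-regress hierarchy can be illustrated with a deliberately toy stand-in (a nearest-centroid "classifier" and constant "regressors"; the thesis uses properly trained models on much richer features):

```python
class CommitteeOfExperts:
    """Toy stand-in for the classify-then-regress hierarchy: a
    top-level model picks the structure type of an image, then an
    expert trained only on that type predicts its layer widths.
    Here the classifier is nearest-centroid and each expert simply
    returns the mean widths seen for its structure type."""

    def __init__(self):
        self.centroids = {}  # structure label -> mean feature vector
        self.experts = {}    # structure label -> width predictor

    def fit(self, images, labels, widths):
        groups = {}
        for x, y, w in zip(images, labels, widths):
            groups.setdefault(y, []).append((x, w))
        for y, pairs in groups.items():
            xs = [x for x, _ in pairs]
            ws = [w for _, w in pairs]
            self.centroids[y] = [sum(col) / len(xs) for col in zip(*xs)]
            mean_w = [sum(col) / len(ws) for col in zip(*ws)]
            self.experts[y] = lambda image, m=mean_w: m

    def predict(self, image):
        label = min(self.centroids, key=lambda y: sum(
            (a - b) ** 2 for a, b in zip(image, self.centroids[y])))
        return label, self.experts[label](image)

model = CommitteeOfExperts()
model.fit(images=[(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 9.0)],
          labels=["one-layer", "one-layer", "two-layer", "two-layer"],
          widths=[[5.0], [7.0], [3.0, 4.0], [1.0, 2.0]])
print(model.predict((0.0, 0.5)))  # ('one-layer', [6.0])
```

Note how the regression output dimension is allowed to differ per structure type; this is exactly why the classification step must come first.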

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance:

10.00%

Publisher:

Abstract:

An n×n crossbar optical switching network integrated on a single crystal is designed. By jointly exploiting the crystal's birefringence and total internal double reflection together with its electro-optic effect, all the unit devices making up the n×n crossbar network are integrated on a single birefringent crystal possessing an electro-optic effect. A control algorithm for the network is also given: by controlling the working states of the switches, non-blocking connections between arbitrary input and output channels can be realized. This single-crystal integrated crossbar network offers low energy loss, non-blocking operation, easy installation, and immunity to interference, meeting the needs of all-optical network development.

Relevance:

10.00%

Publisher:

Abstract:

A conventional beam splitter based on a multilayer coating is not easy to achieve with a wideband spectral response. We describe the realization of a wideband two-port transmission beam splitter based on a binary fused-silica phase grating. To achieve high efficiency and equality between the diffracted 0th and -1st orders, the grating profile parameters are optimized using rigorous coupled-wave analysis at a wavelength of 1550 nm. Holographic recording and inductively coupled plasma dry etching are used to fabricate the fused-silica beam splitter grating. A measured efficiency of (45% × 2) = 90% diffracted into the two orders is obtained with the fabricated grating under Littrow mounting. The physical mechanism of such a wideband two-port beam splitter grating is well explained by the modal method, based on two-beam interference of the modes excited by the incident wave. With its high damage threshold, low coefficient of thermal expansion, and wideband high efficiency, the presented beam splitter etched in fused silica should be a useful optical element for a variety of practical applications. (C) 2008 Optical Society of America.
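Under the Littrow mounting mentioned above, the -1st order retraces the incident beam, and the grating equation fixes the mounting angle. The period used below is a hypothetical value near the 1550 nm design point, since the fabricated period is not stated in the abstract:

```python
import math

def littrow_angle(wavelength, period, order=1):
    """Littrow incidence angle in degrees: the order-`order`
    diffracted beam retraces the incident beam when
    sin(theta) = order * wavelength / (2 * period)."""
    return math.degrees(math.asin(order * wavelength / (2 * period)))

# Hypothetical 1.0 um grating period at the 1550 nm design wavelength:
angle = littrow_angle(1550e-9, 1.0e-6)
print(angle)  # ~50.8 degrees
```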

Relevance:

10.00%

Publisher:

Abstract:

This paper presents the design and characterization of a fiber Fabry-Perot interferometer (FFPI) acoustic wave detector with an actively stabilized Q point. The relationship between the reflectivity of the F-P cavity facets and the cavity length was analyzed theoretically, and a visibility of 100% was realized through optimized design of the F-P cavity. To prevent drifting of the Q point, a new stabilization method based on active feedback control of the diode laser is proposed and demonstrated; the method is simple and easy to operate. Measurements show that good tracking of the Q point was achieved. (c) 2008 Elsevier B.V. All rights reserved.
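The 100% visibility quoted above comes from matching the two interfering beams. In a two-beam picture of a low-finesse fiber F-P cavity (a simplification for illustration; the paper's full analysis also involves the cavity length), the fringe visibility is:

```python
import math

def visibility(i1, i2):
    """Two-beam fringe visibility V = 2*sqrt(I1*I2) / (I1 + I2);
    V = 1 exactly when the two beam intensities are equal."""
    return 2 * math.sqrt(i1 * i2) / (i1 + i2)

# In a low-finesse cavity the two beams are roughly R1 (front facet)
# and (1 - R1)**2 * R2 (back facet, losses ignored) -- matching them
# is what yields 100% visibility:
r1 = 0.04
r2_matched = r1 / (1 - r1) ** 2
print(visibility(r1, (1 - r1) ** 2 * r2_matched))  # ~1.0
print(visibility(0.04, 0.01))                      # mismatched: 0.8
```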

Relevance:

10.00%

Publisher:

Abstract:

The spatial behavior of individuals is a key component for understanding the population dynamics of organisms and for clarifying the migration and dispersal potential of species. Several factors affect the locomotor activity of land snails, such as temperature, light, humidity, season of the year, shell size, sex, reproductive strategy, age, density of conspecifics, and food availability. One of the methods used to study the displacement of terrestrial gastropods is mark-recapture. Terrestrial gastropods lend themselves to this type of study because of (1) their small size, (2) easy handling, (3) easy capture, and (4) short displacement distances and, consequently, small home ranges. These organisms serve as models for the study of spatial ecology and dispersal. Population studies investigating space use, spatial distribution, population density, and home range are scarce for land snails and even rarer in tropical natural areas. Our study subject is Hypselartemon contusulus (Férussac, 1827), a carnivorous land snail of the family Streptaxidae that is very abundant in the leaf litter along flat stretches of secondary forest on the Parnaioca Trail, Ilha Grande, Rio de Janeiro. The species is endemic to the state of Rio de Janeiro. It reaches up to 7.2 mm in shell height, with 6 to 7 whorls. In this study we measured ambient temperature, soil temperature, air humidity, luminosity, leaf-litter depth, animal size, density of conspecifics, and density of prey, relating these ecological data to the displacement observed in Hypselartemon contusulus. One working hypothesis is that these variables affect its displacement. The work was carried out on Ilha Grande, off the southern coast of the state of Rio de Janeiro, in the municipality of Angra dos Reis.
The animals were captured and marked with an individual code painted on the shell with liquid correction fluid and India ink. Displacement distances, in cm, were recorded by measuring the distances between subsequent markers. The results indicate that this method is effective for individually marking Hypselartemon contusulus in medium-term studies (up to nine months). We suggest this marking method for studies of endangered terrestrial gastropods, such as some species of the families Bulimulidae, Megalobulimidae, Streptaxidae, and Strophocheilidae. Hypselartemon contusulus does not keep a minimum distance from its neighbors and is active throughout the year and throughout the day, showing locomotor and predatory activity. No animals were found sheltering under stones or dead wood. No distinct activity sites, as opposed to resting/shelter sites, were observed. Beckianum beckianum (Pfeiffer, 1846) was the preferred prey. Population density varied from 0.57 to 1.2 individuals/m² between sampling campaigns. The species moves, on average, 26.57 ± 17.07 cm/24 h on the Parnaioca Trail, Ilha Grande. The home range of H. contusulus is small: at most 0.48 m² over three days and 3.64 m² over 79 days. Displacement varied over the year, but this variation was not explained by the ecological variables studied. This is therefore a plastic behavior in H. contusulus, probably controlled by endogenous factors.
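The displacement and home-range figures above come from repeated position fixes of marked individuals. A sketch of the two computations, using the minimum convex polygon as the home-range estimator (a common choice; the study's exact estimator is not stated) and hypothetical coordinates in cm:

```python
import math

def displacement(p, q):
    """Straight-line distance between two marker positions."""
    return math.dist(p, q)

def convex_hull_area(points):
    """Area of the minimum convex polygon around the position fixes,
    via Andrew's monotone chain hull and the shoelace formula."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    n = len(hull)
    area = sum(hull[i][0] * hull[(i + 1) % n][1] -
               hull[(i + 1) % n][0] * hull[i][1] for i in range(n))
    return abs(area) / 2

# Hypothetical successive fixes of one snail, in cm:
print(displacement((0.0, 0.0), (20.0, 15.0)))  # 25.0 cm in one interval
print(convex_hull_area([(0, 0), (40, 0), (40, 30), (0, 30), (20, 10)]))
# 1200.0 cm^2, i.e. 0.12 m^2 -- the same order as the areas reported
```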

Relevance:

10.00%

Publisher:

Abstract:

Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. Results: We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through a comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval but simultaneously provides protection against resource-intensive queries. Conclusions: We show, through a variety of clients and examples of varying complexity, that data from the myriad OpenLifeData endpoints can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
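For contrast with the service-mediated access described above, this is the shape of the federated SPARQL that a client like SHARE composes on the user's behalf. The endpoint URL, prefix, and predicates below are invented placeholders, not actual OpenLifeData terms:

```python
def federated_query(endpoint, gene_uri):
    """Assemble a federated SPARQL query of the kind that SHARE
    hides from the user; all terms here are illustrative."""
    return f"""
PREFIX ex: <http://example.org/vocab#>
SELECT ?protein ?label
WHERE {{
  SERVICE <{endpoint}> {{
    <{gene_uri}> ex:encodes ?protein .
    ?protein ex:label ?label .
  }}
}}
""".strip()

q = federated_query("http://sparql.example.org/openlifedata",
                    "http://example.org/gene/BRCA1")
print(q)
```

Writing such queries by hand requires knowing each endpoint's vocabulary in advance, which is exactly the prior knowledge the SADI services eliminate.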

Relevance:

10.00%

Publisher:

Abstract:

In this Final Degree Project, carried out at the company On4U, a module for Magento was implemented whose main function is the dynamic generation of product grids based on analysis of the weather, taking the customer's location into account. In addition, the module automatically stores completed purchases together with the external information, for possible later analysis relating purchasing habits to the weather. Although it is centered on this use case, the module was developed with a modular approach so that other open data sources could easily be integrated into it. To carry out the project, it was necessary to delve into several concepts related to the Magento eCommerce platform, among them the Model-View-Controller pattern and the life cycle of a request.

Relevance:

10.00%

Publisher:

Abstract:

This work presents a general architecture for the evolution of analog electronic circuits based on genetic algorithms. The logical organization favors the interoperability of its main components, including the possibility of replacing them or improving their internal functionality. The implemented platform uses extrinsic evolution, that is, evolution based on circuit simulation, and aims at ease and flexibility of experimentation. It makes it possible to interconnect various components to the nodes of an electronic circuit to be synthesized or adapted. The genetic algorithm technique is used to search for the best way of interconnecting the components to implement the desired function. This version of the platform uses the MATLAB environment with a genetic algorithms toolbox and PSpice as the circuit simulator. The case studies carried out produced results that demonstrate the platform's potential for the development of adaptive electronic circuits.
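The GA-plus-simulator loop can be caricatured without MATLAB or PSpice. In this stand-in (entirely illustrative: the genome encoding, parts list, and fitness are invented), the "circuit" is a series chain of resistors and the "simulation" is just summing them; the platform replaces that sum with a PSpice run:

```python
import random

def evolve_series_resistance(parts, target, pop_size=40, gens=60, seed=1):
    """Toy extrinsic evolution: a genome is one bit per available part
    (1 = part wired into the series chain); fitness is the error
    between the chain's total resistance and the target value."""
    rng = random.Random(seed)
    n = len(parts)

    def fitness(g):  # the "circuit simulation" step
        return abs(sum(r for bit, r in zip(g, parts) if bit) - target)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.2:                  # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

parts = [100, 220, 470, 1000, 2200, 4700]  # available resistors, ohms
best = evolve_series_resistance(parts, target=1320)
# Typically converges on {100, 220, 1000} = 1320 ohms or very close:
print(best, sum(r for b, r in zip(best, parts) if b))
```

Swapping the fitness function for a call to a circuit simulator, and the bitstring for a richer netlist encoding, gives the structure of the platform described above.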

Relevance:

10.00%

Publisher:

Abstract:

Background: In recent years Galaxy has become a popular workflow management system in bioinformatics, due to its ease of installation, use, and extension. The availability of Semantic Web-oriented tools in Galaxy, however, is limited. This is also the case for Semantic Web Services such as those provided by the SADI project, i.e., services that consume and produce RDF. Here we present SADI-Galaxy, a tool generator that deploys selected SADI services as typical Galaxy tools. Results: SADI-Galaxy is a Galaxy tool generator: through SADI-Galaxy, any SADI-compliant service becomes a Galaxy tool that can participate in other outstanding features of Galaxy such as data storage, history, workflow creation, and publication. Galaxy can also be used to execute and combine SADI services as it does with other Galaxy tools. Finally, we have semi-automated the packing and unpacking of data into RDF such that other Galaxy tools can easily be combined with SADI services, plugging the rich SADI Semantic Web Service environment into the popular Galaxy ecosystem. Conclusions: SADI-Galaxy bridges the gap between Galaxy, an easy-to-use but "static" workflow system with a wide user base, and SADI, a sophisticated, semantic, discovery-based framework for Web Services, thus benefiting both user communities.