6 results for Computer Reading Program
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Aiming to compare the effect of different light sources for dental bleaching on the vascular permeability of dental pulps, forty-eight incisors were used. The bleaching agent (35% hydrogen peroxide) was activated by halogen light, LED (light-emitting diode), or LED followed by laser phototherapy (LPT) (λ = 780 nm; 3 J/cm²). After the bleaching procedures, the animals received an intra-arterial dye injection and were sacrificed one hour later. The teeth were diaphanized and photographed. The amount of blue stain in each dental pulp was quantified using a computer imaging program, and the data were statistically compared (p < 0.05). The results showed a significantly higher (p < 0.01) dye content in the group bleached with halogen light compared with the control, LED, and LED plus LPT groups. Thus, tooth bleaching activated by LED or LED plus LPT induces a smaller increase in vascular permeability than halogen light.
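The abstract does not say how the imaging program quantified the blue stain; below is a minimal sketch of one plausible approach, counting blue-dominant pixels in a photograph of a diaphanized tooth with Pillow and NumPy. The file names, threshold rule, and channel-dominance criterion are illustrative assumptions, not the procedure used in the study.

```python
# Hypothetical sketch: estimate blue dye content in a tooth photograph.
# Threshold rule and file names are illustrative assumptions.
import numpy as np
from PIL import Image

def blue_stain_fraction(path, margin=30):
    """Fraction of pixels whose blue channel exceeds red and green by `margin`."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=int)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    blue_dominant = (b > r + margin) & (b > g + margin)
    return blue_dominant.mean()

if __name__ == "__main__":
    # Compare dye content between two hypothetical specimens.
    for name in ["halogen_specimen.png", "led_specimen.png"]:
        print(name, f"blue fraction = {blue_stain_fraction(name):.3f}")
```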
Abstract:
We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized by their target input material (either single texts or collections of texts) and by their focus, which may be displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users handle results from a query posed to a search engine. We describe the approaches adopted by the different techniques, briefly review the strategies they employ to obtain meaningful text models, discuss how they extract the information required to produce representative visualizations, the tasks they intend to support, and the interaction issues involved, and point out their strengths and limitations. Finally, we present a summary of the techniques, highlighting their goals and distinguishing characteristics, and briefly discuss some open problems and research directions in the fields of visual text mining and text analytics.
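As a concrete illustration of the "meaningful text models" that most of these visualization techniques start from, the sketch below builds a simple TF-IDF representation of a small document collection with scikit-learn. The sample sentences and the choice of TF-IDF are illustrative assumptions, not a specific technique from the survey.

```python
# Illustrative sketch: a TF-IDF text model, the kind of vector representation
# many text visualizations (projections, tag clouds) are built on.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "visual summaries of a single document",
    "exploratory analysis of document collections",
    "temporal evolution of topics in a collection",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs)          # rows: documents, cols: terms
terms = vectorizer.get_feature_names_out()

# Each row could feed a 2D projection or a content summary of the document.
for i, row in enumerate(matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print(f"doc {i}:", [t for t, w in top if w > 0])
```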
Abstract:
This qualitative, exploratory, descriptive study was performed with the objective of understanding the perceptions of nurses working in the medical-surgical units of a university hospital regarding the strategies developed to pilot-test the PROCEnf-USP electronic system, whose purpose is to computerize clinical nursing documentation. Eleven nurses from a theoretical-practical training program were interviewed, and the data obtained were analyzed using the Content Analysis Technique. The following categories were discussed based on the frameworks of participative management and planned change: aspects favorable to the implementation; aspects unfavorable to the implementation; and expectations regarding the implementation. According to the nurses' perceptions, preliminary use of the electronic system allowed them to show their potential and to propose improvements, encouraging them to become partners of the group manager in disseminating the system to other nurses of the institution.
Abstract:
BACKGROUND: Photogrammetry is a widespread technique in the health field and, despite methodological precautions, angular readings from photographic images are subject to distortion. OBJECTIVE: To measure the error of angular measurements in photographic images taken at different digital resolutions of an object with pre-marked angles. METHODS: A rubber ball with a circumference of 52 cm was used. The object was pre-marked with angles of 10°, 30°, 60°, and 90°, and the photographic records were taken with the camera's focal axis perpendicular to the object at a distance of three meters, without optical zoom, at resolutions of 3, 5, and 10 megapixels (Mp). All photographic records were stored, and the angular values were analyzed by a previously trained examiner using the ImageJ program. The measurements were taken twice, with a 15-day interval between them. Accuracy, relative error and error in degrees, precision, and the Intraclass Correlation Coefficient (ICC) were then calculated. RESULTS: For the 10° angle, the mean accuracy of the measurements was higher for records at 3 Mp resolution than at 5 and 10 Mp. The ICC was considered excellent for the three image resolutions analyzed and, across the angles analyzed in the photographic records, greater accuracy, smaller relative error and error in degrees, and greater precision were observed for the 90° angle, regardless of image resolution. CONCLUSION: Photographic records taken at 3 Mp resolution yielded measurements with higher accuracy and precision and lower error, suggesting that this is the most suitable resolution for imaging 10° and 30° angles.
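The abstract names the reliability statistics but not their formulas; a minimal sketch of how error in degrees, relative error, and a consistency ICC(3,1) could be computed from two repeated readings of a pre-marked angle is shown below. The readings are invented numbers, and the ICC variant is an assumption rather than the one necessarily used in the study.

```python
# Hypothetical sketch: error and ICC(3,1) for two repeated angle readings.
# The readings are invented; the study's exact formulas are not given.
import numpy as np

true_angle = 10.0                      # pre-marked angle, in degrees
readings = np.array([                  # rows: photographs, cols: sessions 1 and 2
    [10.4,  9.8],
    [10.9, 10.6],
    [ 9.5,  9.9],
    [10.2, 10.1],
])

mean_measured = readings.mean()
error_deg = abs(mean_measured - true_angle)
relative_error = 100.0 * error_deg / true_angle

# Two-way mean squares for ICC(3,1) = (MSR - MSE) / (MSR + (k-1) * MSE)
n, k = readings.shape
grand = readings.mean()
ss_rows = k * ((readings.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((readings.mean(axis=0) - grand) ** 2).sum()
ss_total = ((readings - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)
ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_3_1 = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

print(f"error = {error_deg:.2f} deg, relative error = {relative_error:.1f}%, ICC = {icc_3_1:.2f}")
```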
Abstract:
Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to those obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently, one needs the expertise of hardware developers to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have open issues to address before broader efficient results can be obtained. Bearing in mind an FPGA's available resources, LALP (Language for Aggressive Loop Pipelining), a novel language to program FPGA-based accelerators, has been developed together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
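LALP's actual syntax is not given in this abstract; the sketch below only illustrates, in plain Python, the scheduling idea behind loop pipelining: with an initiation interval of one cycle, successive iterations overlap, so total latency drops from roughly N·S cycles to about N + S − 1 cycles for a loop body of S stages. The cycle counts and loop sizes are illustrative assumptions, not figures from the paper.

```python
# Illustrative latency model of loop pipelining (not LALP code).
# A loop body with S single-cycle stages, executed for N iterations.
def sequential_cycles(n_iters, n_stages):
    # Each iteration finishes all stages before the next one starts.
    return n_iters * n_stages

def pipelined_cycles(n_iters, n_stages, initiation_interval=1):
    # A new iteration enters the pipeline every `initiation_interval` cycles;
    # the last iteration still needs the full n_stages to drain.
    return (n_iters - 1) * initiation_interval + n_stages

if __name__ == "__main__":
    N, S = 1000, 5   # hypothetical loop: 1000 iterations, 5 pipeline stages
    print("sequential:", sequential_cycles(N, S), "cycles")
    print("pipelined :", pipelined_cycles(N, S), "cycles")
```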