971 results for Computer Reading Program
Abstract:
Dynamic analysis is an increasingly important means of supporting software validation and maintenance. To date, developers of dynamic analyses have used low-level instrumentation and debug interfaces to realize their analyses. Many dynamic analyses, however, share multiple common high-level requirements, e.g., capture of program data state as well as events, and efficient and accurate event capture in the presence of threading. We present SOFYA – an infrastructure designed to provide high-level, efficient, concurrency-aware support for building analyses that reason about rich observations of program data and events. It provides a layered, modular architecture, which has been successfully used to rapidly develop and evaluate a variety of demanding dynamic program analyses. In this paper, we describe the SOFYA framework and the challenges it addresses, and survey several such analyses.
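As a rough illustration of the kind of event-capture layer such an infrastructure exposes, a listener/dispatcher sketch is shown below. The names and structure are hypothetical, not SOFYA's actual API; the point is only that analyses subscribe to a common stream of thread-aware program events.

```cpp
// Hypothetical sketch of an event-capture layer in the spirit of the
// infrastructure described above; illustrative names, not SOFYA's API.
#include <iostream>
#include <string>
#include <vector>

struct ProgramEvent {
    std::string kind;      // e.g. "method-enter", "field-write"
    std::string location;  // source location of the event
    long threadId;         // thread that produced the event (concurrency-aware capture)
};

class EventListener {
public:
    virtual ~EventListener() = default;
    virtual void onEvent(const ProgramEvent& e) = 0;
};

class EventDispatcher {
    std::vector<EventListener*> listeners_;
public:
    void subscribe(EventListener* l) { listeners_.push_back(l); }
    void dispatch(const ProgramEvent& e) {
        for (auto* l : listeners_) l->onEvent(e);  // fan out to all registered analyses
    }
};

// A trivial analysis built on top of the capture layer: count observed events.
class EventCounter : public EventListener {
public:
    long count = 0;
    void onEvent(const ProgramEvent&) override { ++count; }
};

int main() {
    EventDispatcher dispatcher;
    EventCounter counter;
    dispatcher.subscribe(&counter);
    dispatcher.dispatch({"method-enter", "Foo.java:42", 1});
    std::cout << "events observed: " << counter.count << "\n";
}
```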
Abstract:
We enacted a bill in Ohio this year, Senate Bill 445, that has to do with the application of pesticides. It is a very broad bill as bills go, with most of the meat to come from the regulations that are presently being written under it. In other words, the framework was developed and accepted by the two houses in our state legislature, and it empowered the Director of Agriculture to establish the regulations, the so-called teeth of this bill. The governor signed the bill in June and it became effective in September. The committees are meeting as of this time to develop philosophies and regulations that will be promulgated, brought into hearings and sifted through, and eventually, with a target date of December of this year (1970), brought to the Director of Agriculture's office for acceptance. There is a committee established for rodent and bird control which is very well represented by our industry here in Ohio. John Beck (Rose Exterminator Company) is the chairman of the committee, and William B. Jackson (Bowling Green State University) and Robert Yaeger (Cincinnati) are also on the committee. The important feature of this new law, in terms of pest control operators, is the examinations that will be required. We operators and our service people will both be tested and licensed, if sufficient proficiency is demonstrated on the tests. For your information, the bill uses somewhat different terminology than we in the industry normally use. We think of an applicator in the industry as a service person, but in the bill an applicator is defined as an operator. Therefore, in reading the law, the word operator means the man who does the job, the service man. Just the reverse is true in the industry: we think of the operator as the man who owns or manages the company, while these people are referred to in the bill as applicators. The bill calls for the development of schools for the training of our people throughout the state. Those of us who are in bird control should begin to prepare ourselves to meet this request, to be available for the schooling, to have our people available for the schooling, and to give this program all the co-operation that we can.
Abstract:
We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized considering their target input material (either single texts or collections of texts) and their focus, which may be on displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users to handle results from a query posed to a search engine. We describe the approaches adopted by distinct techniques and briefly review the strategies they employ to obtain meaningful text models, discuss how they extract the information required to produce representative visualizations, the tasks they intend to support and the interaction issues involved, and their strengths and limitations. Finally, we show a summary of techniques, highlighting their goals and distinguishing characteristics. We also briefly discuss some open problems and research directions in the fields of visual text mining and text analytics.
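The simplest kind of text model underlying many of the surveyed visualizations is a term-frequency table; a minimal sketch is given below (illustrative only, the survey covers far richer models and pipelines).

```cpp
// Minimal sketch of a term-frequency text model built from a single document.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    const std::string document = "text mining turns text into models of text";
    std::map<std::string, int> termFrequency;
    std::istringstream words(document);
    std::string token;
    while (words >> token) ++termFrequency[token];  // count each whitespace-separated token
    for (const auto& [term, count] : termFrequency)
        std::cout << term << ": " << count << "\n";
}
```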
Abstract:
This qualitative, exploratory, descriptive study was performed with the objective of understanding the perception of the nurses working in medical-surgical units of a university hospital regarding the strategies developed to perform a pilot test of the PROCEnf-USP electronic system, whose purpose is to computerize clinical nursing documentation. Eleven nurses from a theoretical-practical training program were interviewed, and the data obtained were analyzed using the Content Analysis Technique. The following categories were discussed based on the frameworks of participative management and planned change: favorable aspects for the implementation; unfavorable aspects for the implementation; and expectations regarding the implementation. According to the nurses' perceptions, the preliminary use of the electronic system allowed them to show their potential and to propose improvements, encouraging them to become partners of the managing group in disseminating the system to other nurses of the institution.
Abstract:
BACKGROUND: Biophotogrammetry is a widely used technique in the health field and, despite methodological care, there are distortions in the angular readings of photographic images. OBJECTIVE: To measure the error of angular measurements in photographic images of different digital resolutions, taken of an object with pre-marked angles. METHODS: A rubber ball with a circumference of 52 cm was used. The object was previously marked with angles of 10°, 30°, 60° and 90°, and the photographic records were made with the focal axis of the camera perpendicular to the object at a distance of three meters, without optical zoom, at resolutions of 3, 5 and 10 megapixels (Mp). All photographic records were stored, and the angular values were analyzed by a previously trained examiner using the ImageJ program. The measurements were taken twice, with an interval of 15 days between them. Accuracy, relative error, error in degrees, precision and the Intraclass Correlation Coefficient (ICC) were then calculated. RESULTS: For the 10° angle, the mean accuracy of the measurements was higher for records at 3 Mp resolution than at 5 and 10 Mp. The ICC was considered excellent for the three image resolutions analyzed and, among the angles analyzed in the photographic records, the 90° angle showed higher accuracy, lower relative error and error in degrees, and higher precision, regardless of image resolution. CONCLUSION: Photographic records at 3 Mp resolution provided measurements with higher accuracy and precision and lower error, suggesting that this is the most suitable resolution for imaging angles of 10° and 30°.
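As a small worked example of the error measures used above, the error in degrees and the relative error of one angle reading might be computed as follows (the numbers are invented for illustration, not taken from the study):

```cpp
// Hypothetical example: absolute error in degrees and relative error of a
// photographic angle reading against the pre-marked reference angle.
#include <cmath>
#include <iostream>

int main() {
    const double reference = 10.0;  // pre-marked angle in degrees
    const double measured  = 10.6;  // angle read from the image with ImageJ (assumed value)
    const double errorDegrees  = std::fabs(measured - reference);
    const double relativeError = errorDegrees / reference * 100.0;  // in percent
    std::cout << "error: " << errorDegrees << " deg, " << relativeError << " %\n";
}
```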
Abstract:
Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to the ones obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently, one needs the expertise of hardware developers to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have to address open issues before broadly efficient results can be obtained. Bearing in mind the resources available on an FPGA, we have developed LALP (Language for Aggressive Loop Pipelining), a novel language to program FPGA-based accelerators, together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. Those features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
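The benefit of loop pipelining can be sketched with a simple cycle-count model (illustrative only; LALP is a dedicated language, not C++, and the parameters below are invented): once the pipeline is full, a new iteration starts every initiation interval instead of waiting for the previous iteration to traverse all stages.

```cpp
// Illustrative cycle-count model for a loop body of a given pipeline depth,
// executed sequentially versus fully pipelined (not LALP code).
#include <iostream>

long sequentialCycles(long iterations, long stageDepth) {
    // Each iteration waits for the previous one to finish all stages.
    return iterations * stageDepth;
}

long pipelinedCycles(long iterations, long stageDepth, long initiationInterval) {
    // After the pipeline fills, a new iteration starts every II cycles.
    return stageDepth + (iterations - 1) * initiationInterval;
}

int main() {
    const long n = 1000, depth = 5, ii = 1;  // hypothetical loop parameters
    std::cout << "sequential: " << sequentialCycles(n, depth) << " cycles\n"
              << "pipelined:  " << pipelinedCycles(n, depth, ii) << " cycles\n";
}
```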
Abstract:
The Stroop Effect Detector (SED) is an assistive software tool, developed through the Social Technological Development research program of the Universidad de Las Palmas de Gran Canaria, that helps professionals in the neuropsychology field identify problems in an individual's orbitofrontal cortex, using the technique devised by Schenker in 1998. As a methodological basis, it draws on knowledge acquired in the various subjects of the adaptation course for the degree in Computer Engineering, such as Software Management, Software Architecture and User Interface Development, as well as knowledge acquired earlier in the subjects Programming and Software Engineering I and II. Since computing knowledge alone was not sufficient for this project, I carried out research on the problem, gathering information from other scientific documents addressing the subject and consulting professionals in the field, such as Dr. Ayoze Nauzet González Hernández, neurologist at the Doctor Negrín hospital in Las Palmas de Gran Canaria, and the psychologist José Manuel Rodríguez Pellejero, who discussed this problem in a class of the Master's in Teacher Training that I am currently taking. This work presents the Stroop test with Schenker's two versions: RCN (Reading Color Names) and NCW (Naming Colored Words). As a general rule, both tests present the study subjects with words (names of colors) written in ink of different colors. In the RCN, the subject reads the written word while ignoring the color of its font and trying not to be influenced by it. Conversely, the NCW requires naming the color of the ink in which the word is written, without being influenced by the fact that the word itself is the name of a color.
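A minimal sketch of the two task variants described above is shown below (hypothetical data structure, not SED code): an incongruent trial pairs a color name with a different ink color, and the expected answer differs between RCN and NCW.

```cpp
// Illustrative sketch of a Stroop trial and the expected answer under the two
// versions described above: RCN (read the word) and NCW (name the ink color).
#include <iostream>
#include <string>

struct StroopTrial {
    std::string word;      // the printed color name, e.g. "RED"
    std::string inkColor;  // the ink it is printed in, e.g. "blue"
};

std::string expectedAnswerRCN(const StroopTrial& t) { return t.word; }      // ignore the ink
std::string expectedAnswerNCW(const StroopTrial& t) { return t.inkColor; }  // ignore the word

int main() {
    const StroopTrial trial{"RED", "blue"};  // incongruent trial
    std::cout << "RCN expects: " << expectedAnswerRCN(trial) << "\n"
              << "NCW expects: " << expectedAnswerNCW(trial) << "\n";
}
```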
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a likewise increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computations. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
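GiNaC's use as an ordinary C++ library can be illustrated with a small example using its standard classes (symbol, ex); the expression itself is arbitrary and chosen only for demonstration.

```cpp
// Small example of symbolic manipulation with the GiNaC C++ library.
// Compile with: g++ example.cpp -o example -lginac -lcln
#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x"), y("y");
    ex e = pow(x + y, 3);             // build a symbolic expression
    std::cout << e.expand() << "\n";  // x^3+3*x^2*y+3*x*y^2+y^3
    std::cout << e.diff(x) << "\n";   // derivative with respect to x
    ex s = sin(x).series(x == 0, 6);  // truncated series expansion around x = 0
    std::cout << s << "\n";
    return 0;
}
```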
Abstract:
This work presents algorithms for the calculation of the electrostatic interaction in partially periodic systems. The framework for these algorithms is provided by the simulation package ESPResSo, of which the author was one of the main developers. The prominent features of the program are listed and its internal structure is described. Algorithms for the calculation of the Coulomb sum in three-dimensionally periodic systems are then described; these methods are the foundations for the algorithms for partially periodic systems presented in this work. Starting from the MMM2D method for systems with one non-periodic coordinate, the ELC method for these systems is developed. This method consists of a correction term which makes it possible to apply methods for three-dimensional periodicity also to the case of two periodic coordinates. The computation time of this correction term is negligible for large numbers of particles. The performance of MMM2D and ELC is demonstrated by results from the implementations contained in ESPResSo. It is also discussed how different dielectric constants inside and outside of the simulation box can be realized. For systems with one periodic coordinate, the MMM1D method is derived from the MMM2D method. This method is applied to the problem of the attraction of like-charged rods in the presence of counterions, and the predictions of the strong coupling theory for the equilibrium distance of the rods at infinite counterion coupling are checked against results from computer simulations. The degree of agreement between the simulations at finite coupling and the theory can be characterized by a single parameter gamma_RB. In the special case of T=0, one finds under certain circumstances flat configurations, in which all charges are located in the rod-rod plane. The energetically optimal configuration and its stability are determined analytically; they depend on only one parameter, gamma_z, similar to gamma_RB. These findings are in good agreement with results from computer simulations.
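For reference, the quantity that methods such as MMM2D, ELC and MMM1D evaluate efficiently in the presence of periodic images is the plain Coulomb pair sum; a naive O(N^2) evaluation for an open, non-periodic system is sketched below (illustrative only, not ESPResSo code, and it ignores all periodic images).

```cpp
// Naive O(N^2) Coulomb energy of a set of point charges in an open system.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Particle { double x, y, z, q; };

double coulombEnergy(const std::vector<Particle>& p, double prefactor = 1.0) {
    double energy = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        for (std::size_t j = i + 1; j < p.size(); ++j) {
            const double dx = p[i].x - p[j].x;
            const double dy = p[i].y - p[j].y;
            const double dz = p[i].z - p[j].z;
            const double r  = std::sqrt(dx * dx + dy * dy + dz * dz);
            energy += prefactor * p[i].q * p[j].q / r;  // pairwise 1/r interaction
        }
    }
    return energy;
}

int main() {
    const std::vector<Particle> charges = {{0, 0, 0, +1}, {1, 0, 0, -1}, {0, 1, 0, +1}};
    std::cout << "Coulomb energy: " << coulombEnergy(charges) << "\n";
}
```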
Abstract:
Lint-like program checkers are popular tools that ensure code quality by verifying compliance with best practices for a particular programming language. The proliferation of internal domain-specific languages and models, however, poses new challenges for such tools. Traditional program checkers produce many false positives and fail to accurately check constraints, best practices, common errors, possible optimizations and portability issues particular to domain-specific languages. We advocate the use of dedicated rules to check domain-specific practices. We demonstrate the implementation of domain-specific rules, the automatic fixing of violations, and their application to two case studies: (1) Seaside defines several internal DSLs through a creative use of the syntax of the host language; and (2) Magritte adds meta-descriptions to existing code by means of special methods. Our empirical validation demonstrates that domain-specific program checking significantly improves code quality when compared with general-purpose program checking.
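The flavor of such a dedicated rule can be conveyed with a small sketch. The rule, data model and fix below are hypothetical (the paper's case studies, Seaside and Magritte, are Smalltalk-based), but they show the structure: match a domain-specific pattern, report a violation, and offer a suggested fix that a tool could apply automatically.

```cpp
// Hypothetical domain-specific checking rule over a simplified code model.
#include <iostream>
#include <string>
#include <vector>

struct Method {
    std::string name;
    std::string body;
};

struct Violation {
    std::string method;
    std::string message;
    std::string suggestedFix;  // basis for automatic fixing of the violation
};

// Example domain rule: description-building methods must use a builder,
// mirroring the idea of checking DSL-specific best practices.
std::vector<Violation> checkDescriptionMethods(const std::vector<Method>& methods) {
    std::vector<Violation> found;
    for (const auto& m : methods) {
        const bool isDescription = m.name.rfind("description", 0) == 0;  // name prefix check
        if (isDescription && m.body.find("builder") == std::string::npos) {
            found.push_back({m.name,
                             "description method does not use the builder",
                             "wrap the body in a builder call"});
        }
    }
    return found;
}

int main() {
    const std::vector<Method> code = {{"descriptionTitle", "return label"},
                                      {"descriptionAge", "builder.number()"}};
    for (const auto& v : checkDescriptionMethods(code))
        std::cout << v.method << ": " << v.message << " (" << v.suggestedFix << ")\n";
}
```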
Abstract:
The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires, using means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means 1) that the privacy issues that we face online are not radically different from those of the pre-Internet era, and 2) that we need to reconceive of close reading as an activity of which both humans and computer algorithms are capable.
Abstract:
We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and, optionally, an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomic morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if they had been obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoroacetabular impingement or developmental dysplasia. Hip(2)Norm is written in the object-oriented programming language C++ using the cross-platform Qt framework (Trolltech, Oslo, Norway) for the graphical user interface (GUI) and is portable to any platform.
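The cone beam projection model mentioned above can be sketched in a few lines (hypothetical geometry and names, not Hip(2)Norm code): a 3D landmark is projected onto the detector plane along the ray from the X-ray source, scaling its coordinates by similar triangles.

```cpp
// Minimal sketch of a cone beam (central) projection of a 3D landmark onto a
// detector plane at z = 0, with the X-ray source on the z axis at height d.
#include <iostream>

struct Point3 { double x, y, z; };
struct Point2 { double x, y; };

Point2 coneBeamProject(const Point3& p, double sourceDistance) {
    // Similar triangles: scale by sourceDistance / (sourceDistance - p.z).
    const double s = sourceDistance / (sourceDistance - p.z);
    return {p.x * s, p.y * s};
}

int main() {
    const double d = 1200.0;                  // source-to-detector distance in mm (assumed)
    const Point3 landmark{30.0, 45.0, 80.0};  // landmark 80 mm above the detector plane
    const Point2 proj = coneBeamProject(landmark, d);
    std::cout << "projected landmark: (" << proj.x << ", " << proj.y << ")\n";
}
```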
Abstract:
BACKGROUND: Faculties face the permanent challenge of designing training programs with well-balanced educational outcomes and of offering various organised and individual learning opportunities. AIM: To apply our original model to a postgraduate training program in rheumatology in general, and to various learning experiences in particular, in order to analyse the balance between different educational objectives. METHODS: Learning times for the various educational activities were reported by the junior staff as the targeted learners. The suitability of different learning experiences for achieving cognitive, affective and psychomotor learning objectives was estimated. Learning points with respect to efficacy were calculated by multiplying the estimated learning times by the perceived appropriateness of the educational strategies. RESULTS: Out of 780 hours of professional learning per year (17.7 hours/week), 37.7% of the time was spent under individual supervision of senior staff, 24.4% in organised structured learning, 22.6% in self-study, and 15.3% in organised patient-oriented learning. The balance between the different types of learning objectives was appropriate for the overall program, but not for each particular learning experience. Acquisition of factual knowledge and problem solving was readily aimed for during organised teaching sessions of different formats and by personal targeted reading. Attitudes, skills and competencies, as well as behavioural and performance changes, were mostly learned while caring for patients under interactive supervision by experts. CONCLUSION: We encourage other faculties to apply this approach to any other curriculum of undergraduate education, postgraduate training or continuing professional development in order to foster the development of well-balanced learning experiences.
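A tiny numeric example of the learning-point calculation described above is given below; the values and the 0-to-1 appropriateness scale are invented for illustration and are not taken from the study.

```cpp
// Hypothetical example of the learning-point calculation: estimated learning
// time multiplied by the perceived appropriateness of the educational strategy.
#include <iostream>

int main() {
    const double learningTimeHours = 120.0;  // e.g. time spent in supervised patient care (assumed)
    const double appropriateness   = 0.8;    // perceived suitability on an assumed 0..1 scale
    std::cout << "learning points: " << learningTimeHours * appropriateness << "\n";
}
```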