993 results for RAY-TRACING ALGORITHM
Abstract:
Purpose: To investigate whether standard X-ray acquisition factors for orbital radiographs are suitable for the detection of ferromagnetic intra-ocular foreign bodies in patients undergoing MRI. Method: 35 observers, at varied levels of education in radiography, attending a European Dose Optimisation EURASMUS Summer School were asked to score 24 images of varying acquisition factors against a clinical standard (reference image) using a two-alternative forced choice method. The observers were provided with 12 questions and a 5-point Likert scale. Statistical tests were used to validate the scale, and scale reliability was also measured. The images which scored equal to, or better than, the reference image (36) were ranked alongside their corresponding effective dose (E); the image with the lowest dose scoring equal to or better than the reference was considered to define the new optimum acquisition factors. Results: Four images emerged as equal to, or better than, the reference in terms of image quality. The images were then ranked in order of E. Only one image that scored the same as the reference had a lower dose. The reference image had a mean E of 3.31 μSv; the image that scored the same had an E of 1.8 μSv. Conclusion: Against the current clinical standard exposure factors of 70 kVp, 20 mAs and the use of an anti-scatter grid, one image proved to have a lower E whilst maintaining the same level of image quality and lesion visibility. It is suggested that the new exposure factors should be 60 kVp, 20 mAs, still including the use of an anti-scatter grid.
Abstract:
Several phenomena present in electrical systems have motivated the development of comprehensive models based on the theory of fractional calculus (FC). Bearing these ideas in mind, this work applies FC concepts to define and evaluate an electrical potential of fractional order, based on a genetic algorithm optimization scheme. The feasibility and the convergence of the proposed method are evaluated.
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
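To make the random-key encoding concrete, the sketch below decodes a chromosome of random keys into a schedule using a simple serial schedule-generation scheme, treating each gene as the priority of its activity. The instance data (durations, precedences, resource demands) and the single-resource model are illustrative assumptions; the paper's heuristic, which generates parameterized active schedules, is not reproduced here.

```python
import random

def decode_random_keys(keys, durations, predecessors, demand, capacity):
    """Decode a random-key chromosome into a schedule (serial SGS sketch).

    keys[i]         -- priority gene in [0, 1] taken from the chromosome
    durations[i]    -- processing time of activity i
    predecessors[i] -- set of activities that must finish before i starts
    demand[i]       -- units of the single renewable resource used by i
    capacity        -- resource units available per time period
    """
    n = len(keys)
    start, finish = {}, {}
    usage = {}  # resource usage per time period
    scheduled = set()
    while len(scheduled) < n:
        # eligible activities: all predecessors already scheduled
        eligible = [i for i in range(n)
                    if i not in scheduled and predecessors[i] <= scheduled]
        # the chromosome defines the priority: highest key is scheduled first
        j = max(eligible, key=lambda i: keys[i])
        # earliest precedence-feasible start time
        t = max((finish[p] for p in predecessors[j]), default=0)
        # push the start forward until the resource profile admits the activity
        while any(usage.get(t + d, 0) + demand[j] > capacity
                  for d in range(durations[j])):
            t += 1
        for d in range(durations[j]):
            usage[t + d] = usage.get(t + d, 0) + demand[j]
        start[j], finish[j] = t, t + durations[j]
        scheduled.add(j)
    return start, max(finish.values(), default=0)

# Tiny hypothetical instance: 4 activities, one resource with capacity 4.
durations = [3, 2, 4, 2]
predecessors = [set(), {0}, {0}, {1, 2}]
demand = [2, 3, 2, 1]
keys = [random.random() for _ in durations]  # one gene per activity
starts, makespan = decode_random_keys(keys, durations, predecessors, demand, 4)
print(starts, makespan)
```

In a genetic algorithm, the keys themselves would be evolved and each chromosome evaluated by the makespan returned by this decoder.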
Abstract:
This work addresses the signal propagation and the fractional-order dynamics during the evolution of a genetic algorithm (GA). In order to investigate the phenomena involved in the evolution of the GA population, the mutation is exposed to excitation perturbations during some generations and the corresponding fitness variations are evaluated. Three distinct fitness functions are used to study their influence on the GA dynamics. The input and output signals are studied, revealing a fractional-order dynamic evolution characteristic of a long-term system memory.
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724 – 727, Seattle, USA
Abstract:
Naturally occurring radioactive materials (NORM) can, under certain conditions, reach hazardous radiological levels, contributing an additional exposure dose to ionizing radiation. Most environmental concerns are associated with uranium mining and milling sites, but the same concerns apply to natural near-surface occurrences of uranium as well as to man-made sources such as technologically enhanced naturally occurring radioactive materials (TENORM) resulting from the phosphates industry, the ceramics industry and energy production activities, in particular coal-fired power plants, which are one of the major sources of increased human exposure to enhanced naturally occurring materials. This work describes the methodology developed to assess the environmental radiation by in situ gamma spectrometry in the vicinity of a Portuguese coal-fired power plant. The current investigation is part of a research project under way in the vicinity of the Sines Coal-Fired Power Plant (south of Portugal) until the end of 2013.
Abstract:
Acta Crystallographica F64 (2008) 636-638
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance efficiency and area efficiency, where the designer tries to achieve the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to design a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementing the proposed architecture in a 65 nm technology is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
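As a rough illustration of the kind of analytical model the abstract refers to, the sketch below estimates execution cycles for a blocked dense matrix multiplication from a few architectural parameters (number of cores, local memory per core, external memory bandwidth). The formula, parameter names and constants are hypothetical simplifications, not the equation derived in the paper.

```python
def estimated_cycles(n, cores, local_mem_words, bandwidth_words_per_cycle,
                     flops_per_core_per_cycle=2):
    """Coarse cycle estimate for an n x n x n blocked matrix multiplication.

    Assumes each core keeps one b x b tile of A, B and C in local memory,
    and that computation and external-memory traffic overlap perfectly, so
    the runtime is the maximum of compute time and transfer time. All names
    and constants here are illustrative assumptions.
    """
    # Largest square tile such that three b x b tiles fit in local memory.
    b = int((local_mem_words / 3) ** 0.5)

    # Total floating-point operations: 2 * n^3 (multiply + add).
    compute_cycles = 2 * n**3 / (cores * flops_per_core_per_cycle)

    # Words moved to/from external memory with b x b blocking:
    # roughly 2 * n^3 / b for the A and B tiles plus n^2 for C.
    traffic_words = 2 * n**3 / b + n**2
    transfer_cycles = traffic_words / bandwidth_words_per_cycle

    return max(compute_cycles, transfer_cycles)

# Example: 2048 x 2048 matrices, 64 cores, 32 K words of local memory per core,
# 4 words per cycle from external memory (all numbers are made up).
print(estimated_cycles(2048, cores=64, local_mem_words=32 * 1024,
                       bandwidth_words_per_cycle=4))
```

A model of this shape makes explicit which parameter (compute throughput or memory bandwidth) limits performance for a given tile size, which is the kind of trade-off the paper's design exploration relies on.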
Abstract:
An adaptive antenna array combines the signals of its elements, subject to constraints, to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements so as to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. A comparison of the algorithms' performance in terms of runtime and accuracy is made; these characteristics depend on the SNR of the incoming signal.
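For illustration, the NumPy sketch below applies one of the simplest DOA estimators, the conventional (Bartlett) spectrum, to a hypothetical 4 x 4 planar array and then steers beamforming weights towards the estimated direction. The array geometry, source parameters and noise level are assumptions; the paper compares several DOA and beamforming algorithms that are not reproduced here.

```python
import numpy as np

def steering_vector(positions, az, el, wavelength):
    """Steering vector of a planar array for azimuth `az` and elevation `el`.

    positions -- (M, 2) array of element (x, y) coordinates in metres
    Angles in radians; elevation measured from the array normal.
    """
    k = 2 * np.pi / wavelength
    u = np.sin(el) * np.cos(az)
    v = np.sin(el) * np.sin(az)
    return np.exp(1j * k * (positions[:, 0] * u + positions[:, 1] * v))

def bartlett_doa(snapshots, positions, wavelength, grid):
    """Conventional (Bartlett) DOA spectrum over a grid of (az, el) pairs."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    power = []
    for az, el in grid:
        a = steering_vector(positions, az, el, wavelength)
        power.append(np.real(a.conj() @ R @ a) / (a.conj() @ a).real)
    return np.asarray(power)

# Hypothetical 4 x 4 planar array with half-wavelength spacing.
wavelength = 1.0
d = wavelength / 2
xs, ys = np.meshgrid(np.arange(4) * d, np.arange(4) * d)
positions = np.column_stack([xs.ravel(), ys.ravel()])

# Simulate one source at (az = 30 deg, el = 40 deg) plus noise, then scan a grid.
rng = np.random.default_rng(0)
true_a = steering_vector(positions, np.radians(30), np.radians(40), wavelength)
signal = np.exp(1j * 2 * np.pi * rng.random(200))
snapshots = np.outer(true_a, signal) + 0.1 * (
    rng.standard_normal((16, 200)) + 1j * rng.standard_normal((16, 200)))

grid = [(np.radians(a), np.radians(e)) for a in range(0, 91, 2)
        for e in range(0, 91, 2)]
spectrum = bartlett_doa(snapshots, positions, wavelength, grid)
az_hat, el_hat = grid[int(np.argmax(spectrum))]

# Beamforming weights steered towards the estimated direction.
w = steering_vector(positions, az_hat, el_hat, wavelength) / 16
print(np.degrees([az_hat, el_hat]))
```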
Abstract:
Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous-silicon-based optical sensors and a pair of switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results from the optoelectronic characterization of the device and their dependence on the scanning-beam parameters are presented and discussed. Preliminary results of line scans are also presented.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
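A minimal sketch of a static stability check in the spirit of a stability evaluation function follows: each box must rest on the floor, or have its centre of gravity projected onto a box that supports it at its base height. This simplified support criterion and the data model are assumptions for illustration, not the paper's full static mechanical equilibrium conditions.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # lower-left-front corner coordinates
    y: float
    z: float  # height of the bottom face
    w: float  # size along x
    d: float  # size along y
    h: float  # size along z

def overlap_1d(a0, a1, b0, b1):
    """Length of the overlap between intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def is_statically_placed(box, placed, tol=1e-9):
    """Simplified stability test: the box rests on the container floor, or its
    centre of gravity (assumed at the geometric centre) lies over one of the
    boxes in contact with it at its base height."""
    if box.z <= tol:
        return True  # resting on the container floor
    cx, cy = box.x + box.w / 2, box.y + box.d / 2
    supported_area = 0.0
    cog_supported = False
    for other in placed:
        if abs(other.z + other.h - box.z) > tol:
            continue  # not directly underneath at the contact height
        ox = overlap_1d(box.x, box.x + box.w, other.x, other.x + other.w)
        oy = overlap_1d(box.y, box.y + box.d, other.y, other.y + other.d)
        if ox > 0 and oy > 0:
            supported_area += ox * oy
            if (other.x <= cx <= other.x + other.w and
                    other.y <= cy <= other.y + other.d):
                cog_supported = True
    # require the centre of gravity to be supported and some contact area
    return cog_supported and supported_area > 0

# Hypothetical check: a box whose centre hangs over a gap is rejected.
base = Box(0, 0, 0, 10, 10, 5)
top = Box(8, 0, 5, 10, 10, 5)   # only 2 units of overlap, CoG over the gap
print(is_statically_placed(top, [base]))  # False under this simplified rule
```

A predicate of this kind can be called by a constructive heuristic or metaheuristic each time a candidate placement is generated, which is how the abstract describes embedding the stability evaluation in CLP algorithms.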
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to compute the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter and safe estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP represent two special cases of BPC, when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
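To give the flavour of a recursive-calculus-style WCTT bound, the sketch below charges a flow its contention-free latency plus the recursively computed bounds of the flows that can delay it on shared links. The flow set is hypothetical and the interference relation is assumed acyclic, a strong simplification of real NoC contention; it is not the BP or BPC algorithm proposed in the paper.

```python
def wctt_bound(flow, base_latency, interferers, _memo=None):
    """Recursive upper bound on the worst-case traversal time of `flow`.

    base_latency[f] -- contention-free traversal time of flow f (cycles)
    interferers[f]  -- flows that can delay f on some shared link; the
                       relation is assumed acyclic in this sketch.
    """
    if _memo is None:
        _memo = {}
    if flow not in _memo:
        _memo[flow] = base_latency[flow] + sum(
            wctt_bound(g, base_latency, interferers, _memo)
            for g in interferers[flow])
    return _memo[flow]

# Hypothetical flow set: f1 shares links with f2 and f3; f2 shares with f3.
base_latency = {"f1": 20, "f2": 15, "f3": 10}
interferers = {"f1": ["f2", "f3"], "f2": ["f3"], "f3": []}
print(wctt_bound("f1", base_latency, interferers))  # 20 + (15 + 10) + 10 = 55
```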
Abstract:
This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
Abstract:
This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step towards the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as their suitability for use in a real-time system.
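As an illustration of one common low-cost approach to step counting, the sketch below detects rising crossings of a threshold on the smoothed accelerometer magnitude, with a refractory period to avoid double counts. The threshold, smoothing factor and timing parameters are assumptions chosen for the synthetic example; the two approaches compared in the paper are not reproduced here.

```python
import math

def count_steps(accel_samples, fs, threshold=1.2, min_step_interval=0.3):
    """Count steps from 3-axis accelerometer samples.

    accel_samples     -- iterable of (ax, ay, az) in units of g
    fs                -- sampling frequency in Hz
    threshold         -- magnitude (in g) that must be exceeded to count a step
    min_step_interval -- minimum time between consecutive steps, in seconds
    All parameter values here are illustrative assumptions.
    """
    min_gap = int(min_step_interval * fs)
    steps = 0
    last_step_index = -min_gap
    prev_mag = 0.0
    for i, (ax, ay, az) in enumerate(accel_samples):
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        # simple exponential smoothing to suppress high-frequency noise
        mag = 0.7 * prev_mag + 0.3 * mag
        # count a rising crossing of the threshold, with a refractory period
        if prev_mag < threshold <= mag and i - last_step_index >= min_gap:
            steps += 1
            last_step_index = i
        prev_mag = mag
    return steps

# Synthetic example: 1 Hz "steps" riding on gravity, sampled at 50 Hz.
fs = 50
samples = [(0.0, 0.0, 1.0 + 0.5 * math.sin(2 * math.pi * 1.0 * i / fs))
           for i in range(10 * fs)]
print(count_steps(samples, fs))  # roughly 10 steps over 10 seconds
```

The per-sample work is a handful of multiplications and comparisons with constant memory, which is the kind of computational budget a real-time PDR front end on an embedded device can afford.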