907 results for computationally efficient algorithm
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA
Abstract:
In search of an efficient but simple, low-cost procedure for the serodiagnosis of toxoplasmosis, especially suited for routine laboratories facing technical and budget limitations, as in less developed countries, the diagnostic capability of Hematoxo®, a hemagglutination test for toxoplasmosis, was evaluated against a battery of tests including IgG- and IgM-immunofluorescence tests, hemagglutination, and an IgM-capture enzymatic assay. Detecting as little as 5 I.U. of IgG anti-Toxoplasma antibodies, Hematoxo® showed complete agreement as to reactivity and non-reactivity for the 443 non-reactive and the 387 reactive serum samples included in this study. In 23 cases presenting a serological pattern of acute toxoplasmosis and showing IgM antibodies, Hematoxo® could detect IgM antibodies in 18, indicated by negativation or a significant decrease in titers after treating samples with 2-mercaptoethanol. However, a marked increase in sensitivity for IgM-specific antibodies could be achieved by first removing IgG from the sample, as demonstrated in a series of acute toxoplasmosis sera. A simple procedure was developed for this purpose, by reconstituting a lyophilized suspension of protein A-rich Staphylococcus with the lowest serum dilution to be tested. Of low cost and easy to perform, Hematoxo® affords not only a practical qualitative procedure for screening reactors and non-reactors, as in prenatal services, but also quantitative assays that permit titration of antibodies as well as identification of IgM antibodies.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
An adaptive antenna array combines the signals of its elements, subject to certain constraints, to produce the antenna's radiation pattern while maximizing system performance. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements that create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy, characteristics that depend on the SNR of the incoming signal.
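As a minimal illustration of the two algorithm families compared above, the following sketch builds a steering vector for a uniform rectangular (planar) array and scans a conventional (Bartlett) DOA spectrum; the geometry, half-wavelength spacing, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def steering_vector(theta, phi, nx=4, ny=4, d=0.5):
    # Uniform rectangular array with nx*ny elements spaced d wavelengths
    # apart; theta = elevation, phi = azimuth (radians).
    m, n = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    phase = 2j * np.pi * d * np.sin(theta) * (m * np.cos(phi) + n * np.sin(phi))
    return np.exp(phase).ravel()

def bartlett_spectrum(R, grid):
    # Conventional (delay-and-sum) spatial spectrum: scan candidate
    # directions and measure the array output power a^H R a.
    return [float((a.conj() @ R @ a).real)
            for a in (steering_vector(th, ph) for th, ph in grid)]
```

The DOA estimate is the grid point that maximizes the spectrum, and pointing the array at it amounts to using the matching (normalized) steering vector as the beamforming weights.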
Abstract:
The need to increase agricultural yield led, among other factors, to an increase in the consumption of nitrogen-based fertilizers. As a consequence, there are excessive concentrations of nitrates, the most abundant of the reactive nitrogen (Nr) species, in several areas of the world. The demographic changes and projected population growth for the next decades, and the economic shifts which are already shaping the near future, are powerful drivers for a further intensification in the use of fertilizers, with a predicted increase of nitrogen loads in soils. Nitrate diffuses easily in subsurface environments, displaying high mobility in soils. Moreover, high nitrate loads in water have the potential to cause an array of health dysfunctions, such as methemoglobinemia and several cancers. Permeable Reactive Barriers (PRBs), placed strategically relative to the nitrate source, constitute an effective technology to tackle nitrate pollution. PRBs thus avoid various adverse impacts resulting from the displacement of reactive nitrogen downstream along water bodies. A four-stage literature review was carried out across 34 databases. Initially, a set of pertinent keywords was identified to perform the initial database searches. Then, the synonyms of those initial keywords were used to carry out a second set of database searches. The third stage comprised the identification of additional relevant terms from the research papers identified in the previous two stages; again, database searches were performed with this third set of keywords. The final step consisted of identifying relevant papers from the bibliographies of the relevant papers found in the previous three stages. The set of papers identified as relevant for in-depth analysis was then assessed against a set of characterization variables.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
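The paper's stability algorithm is based on static mechanical equilibrium; as a point of reference, here is a minimal sketch of the simpler full-base-support check that such evaluation functions refine, with an illustrative box representation (x, y, z, width, depth, height):

```python
def supported(box, placed, tol=1e-9):
    # A box passes the full-support heuristic if it rests on the container
    # floor or if the top faces of boxes below it jointly cover its base.
    x, y, z, w, d, h = box
    if z <= tol:                       # on the container floor
        return True
    area = 0.0
    for px, py, pz, pw, pd, ph in placed:
        if abs(pz + ph - z) <= tol:    # top face level with this base
            ox = max(0.0, min(x + w, px + pw) - max(x, px))
            oy = max(0.0, min(y + d, py + pd) - max(y, py))
            area += ox * oy
    return area >= w * d - tol
```

An equilibrium-based evaluation such as the paper's accepts more arrangements than this check, since a box can be stable with only partial base support as long as forces and moments balance.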
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and experiments conducted on a multicore machine show that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²) thanks to much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its numbers of preemptions and migrations, and the time spent taking scheduling decisions, increase linearly as the time resolution of the system improves.
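To make the "boundary fair" principle concrete, here is a toy sketch (not the paper's pseudocode; names and simplifications are ours) of the kind of allocation BF-style algorithms perform at each boundary:

```python
from math import floor

def bf_allocate(utils, allocated, b, b_next, m):
    # At boundary b, grant each task the integer units needed to keep its
    # allocation within one time unit of the fluid schedule u_i * b_next,
    # then hand the leftover processor units in [b, b_next) to the tasks
    # with the largest fractional remainder (at most one extra unit each).
    units = [max(0, floor(u * b_next) - a) for u, a in zip(utils, allocated)]
    spare = max(0, m * (b_next - b) - sum(units))
    order = sorted(range(len(utils)),
                   key=lambda i: utils[i] * b_next - floor(utils[i] * b_next),
                   reverse=True)
    for i in order[:spare]:
        units[i] += 1
    return [a + g for a, g in zip(allocated, units)]
```

BF² keeps this boundary-only decision structure but, unlike BF, updates the boundaries and allocations as sporadic jobs actually arrive.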
Abstract:
Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make a low processor utilization available to the system. Moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and to an exact but exponential running time test.
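For reference, the utilization bound quoted above is Liu and Layland's classical RM bound evaluated for two tasks:

```latex
U_{\mathrm{RM}}(n) = n\left(2^{1/n} - 1\right), \qquad
U_{\mathrm{RM}}(2) = 2\left(\sqrt{2} - 1\right) \approx 0.828.
```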
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
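To see where the pessimism that BP prunes comes from, consider a deliberately naive recursive-calculus-style bound (illustrative code, not the paper's formulation): a packet's traversal bound is its contention-free latency plus the bounds of every flow it can directly contend with on a shared link.

```python
def wctt(flow, flows, links_of, base, blocked=frozenset()):
    # Naive recursive upper bound: direct contenders contribute their own
    # (recursively computed) bounds. `links_of` maps a flow to the set of
    # NoC links it crosses; `base` is its contention-free traversal time.
    # The `blocked` set only guards against infinite mutual recursion.
    delay = base[flow]
    for g in flows:
        if g is not flow and g not in blocked and links_of[flow] & links_of[g]:
            delay += wctt(g, flows, links_of, base, blocked | {flow})
    return delay
```

Every level of this recursion assumes the worst everywhere at once; exploiting the release patterns of the tasks that generate the packets, as the extended model does, lets many of these recursive branches be pruned.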
Abstract:
In this work, we present the explicit series solution of a specific mathematical model from the literature, the Deng bursting model, which mimics the glucose-induced electrical activity of pancreatic beta-cells (Deng, 1993). To this end, we use a technique developed to find analytic approximate solutions for strongly nonlinear problems. This analytical algorithm involves an auxiliary parameter which provides an efficient way to ensure rapid and accurate convergence to the exact solution of the bursting model. Using the homotopy solution, we investigate the dynamical effect of a biologically meaningful bifurcation parameter rho, which increases with the glucose concentration. Our analytical results are found to be in excellent agreement with the numerical ones. This work illustrates how our understanding of biophysically motivated models can be directly enhanced by the application of a new analytic method.
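The "auxiliary parameter" mentioned above is the convergence-control parameter of the homotopy analysis method; schematically (standard notation, not taken from the paper), the series is built from the zeroth-order deformation equation

```latex
(1 - q)\,\mathcal{L}\left[\phi(t;q) - u_0(t)\right]
  = q\,\hbar\,\mathcal{N}\left[\phi(t;q)\right], \qquad q \in [0,1],
```

which continuously deforms the initial guess u_0(t) (at q = 0) into the solution of the nonlinear problem N[u] = 0 (at q = 1); the freedom in choosing ħ is what allows the convergence region of the resulting series to be tuned.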
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs through shared memory and coalesced accesses to memory. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
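Schematically, and in generic notation not taken from the paper, compressive unmixing of the HYCA type rests on the linear mixing model combined with per-pixel random measurements:

```latex
\mathbf{x}_i = \mathbf{E}\,\boldsymbol{\alpha}_i, \qquad
\mathbf{y}_i = \mathbf{H}_i\,\mathbf{x}_i,
```

where x_i is the full spectrum of pixel i, the columns of E are the (few) endmember signatures, α_i are the abundances, and H_i is the measurement operator; because the number of endmembers is small and the abundances are spatially correlated, far fewer measurements than spectral bands suffice for reconstruction.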
Abstract:
This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step towards the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as their suitability for use in a real-time system.
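As a minimal sketch of the kind of low-cost step counter referred to above (thresholds and structure are illustrative, not the paper's tuned approaches): detect peaks of the acceleration magnitude that exceed a threshold and are separated by a minimum gap.

```python
import numpy as np

def count_steps(acc, fs, thresh=1.2, min_gap=0.3):
    # acc: (N, 3) accelerometer samples in g; fs: sampling rate in Hz.
    # Count local maxima of |a| above `thresh` that are at least
    # `min_gap` seconds apart (a crude debounce against double counts).
    mag = np.linalg.norm(acc, axis=1)
    gap = int(min_gap * fs)
    steps, last = 0, -gap
    for i in range(1, len(mag) - 1):
        if (mag[i] > thresh and mag[i] >= mag[i - 1]
                and mag[i] >= mag[i + 1] and i - last >= gap):
            steps, last = steps + 1, i
    return steps
```

Because it touches each sample once and keeps constant state, a counter of this shape fits the real-time, low-computational-power requirement.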
Abstract:
This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. This distance is estimated from the number of steps taken by the user. The proposed method relies on force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
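As an illustration of how step counts translate into distance, here is a hedged sketch using a frequency-dependent step-length model (the linear model L = a + b·f and its coefficients are hypothetical and would need per-user calibration; the observation above that error varies with step frequency is what motivates making L depend on f):

```python
def traveled_distance(step_times, a=0.37, b=0.23):
    # step_times: timestamps (s) of detected steps, e.g. from the force
    # sensors. Step length grows with step frequency: L = a + b * f (m).
    dist = 0.0
    for t0, t1 in zip(step_times, step_times[1:]):
        f = 1.0 / (t1 - t0)            # instantaneous step frequency (Hz)
        dist += a + b * f
    return dist
```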
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is thus one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented-Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory.
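At its core (omitting the data projection and constraint handling of the full method), SISAL fits a minimum-volume simplex to the data with a soft positivity penalty; schematically:

```latex
\min_{\mathbf{Q}} \; -\log\left|\det \mathbf{Q}\right|
  + \lambda \sum_{i,j} \max\left(0,\, -(\mathbf{Q}\mathbf{Y})_{ij}\right),
```

where Y collects the (projected) spectral vectors and the endmember estimates are recovered from Q⁻¹. The hinge term tolerates abundance-constraint violations instead of enforcing them exactly, which is what lets SISAL unmix scenes where the pure-pixel assumption fails; both the log-determinant and hinge computations map naturally onto the GPU kernels described above.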
Abstract:
Master's degree in Mechanical Engineering – Industrial Management