989 results for Term Splitting Algorithm
Abstract:
Background & aims - Patients who undergo percutaneous endoscopic gastrostomy (PEG) present protein-energy malnutrition, but little is known about trace elements (TE): zinc (Zn), copper (Cu), selenium (Se), iron (Fe) and chromium (Cr). Our aim was to evaluate serum TE in patients undergoing PEG and its relationship with serum proteins, BMI and the nature of the underlying disorder. Methods - A prospective observational study was performed, collecting patients' age, gender, underlying disorder, NRS-2002, BMI, serum albumin, transferrin and TE concentrations. We used the ferrozine colorimetric method for Fe, inductively coupled plasma atomic emission spectroscopy for Zn/Cu, and furnace atomic absorption spectroscopy for Se/Cr. The patients were divided into head and neck cancer (HNC) and neurological dysphagia (ND) groups. Results - 146 patients (89 males), aged 21–95 years: HNC, 56; ND, 90. Low BMI in 78. Low values were found mostly for Zn (n = 122) and Fe (n = 69), and less often for Se (n = 31), Cu (n = 16) and Cr (n = 7); albumin was low in 77 patients, transferrin in 94, and both proteins in 66. Significant differences between the underlying-disease groups were found only for Zn (t(140.326) = −2.642, p < 0.01), along with correlations between proteins and TE, namely albumin and Zn (r = 0.197, p = 0.025) and albumin and Fe (r = 0.415, p < 0.001). Conclusions - At the time of gastrostomy, patients displayed low serum TE, namely Zn but also Fe, with less striking deficits for the other TE. This was related to prolonged fasting, whatever the underlying disease. Low serum proteins were associated with low TE. Teams caring for PEG patients should use Zn supplementation and include the evaluation of other TE in the nutritional assessment of PEG candidates.
Abstract:
Several phenomena present in electrical systems motivated the development of comprehensive models based on the theory of fractional calculus (FC). Bearing these ideas in mind, in this work the FC concepts are applied to define and evaluate the electrical potential of fractional order, based on a genetic algorithm optimization scheme. The feasibility and the convergence of the proposed method are evaluated.
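As a rough illustration only: a minimal sketch, assuming the fractional-order potential takes the form V(r) = k/r^α and a simple genetic algorithm fits the exponent α to sampled data (the fitness function, GA parameters and data below are hypothetical, not taken from the paper):

```python
import random

# Hypothetical measured potential samples (r, V), generated with a true
# exponent of 0.7 so the GA has a known target to recover.
samples = [(r / 10.0, 1.0 / (r / 10.0) ** 0.7) for r in range(1, 50)]

def fitness(alpha, k=1.0):
    """Sum of squared errors of V(r) = k / r**alpha against the samples."""
    return sum((k / r**alpha - v) ** 2 for r, v in samples)

def evolve(pop_size=40, generations=100, mut=0.05):
    pop = [random.uniform(0.1, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]              # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = (a + b) / 2 + random.gauss(0.0, mut)  # crossover + mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

print(evolve())  # should approach the true exponent 0.7
```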
Abstract:
In this paper, a mixed-integer quadratic programming approach is proposed for the short-term hydro scheduling problem, considering head-dependency, discontinuous operating regions and discharge ramping constraints. As new contributions to earlier studies, market uncertainty is introduced in the model via price scenarios, and risk aversion is also incorporated by limiting the volatility of the expected profit through the conditional value-at-risk. Our approach has been applied successfully to solve a case study based on one of the main Portuguese cascaded hydro systems, requiring a negligible computational time.
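For concreteness, a minimal sketch of how conditional value-at-risk can be computed from profit scenarios, assuming a discrete scenario set (the scenario values, probabilities and confidence level below are illustrative, not from the case study):

```python
import numpy as np

def cvar(profits, probs, alpha=0.95):
    """Conditional value-at-risk of a profit distribution:
    the expected profit over the worst (1 - alpha) tail of scenarios."""
    order = np.argsort(profits)                   # worst scenarios first
    p, pr = np.asarray(profits)[order], np.asarray(probs)[order]
    cum = np.cumsum(pr)
    k = int((cum <= (1 - alpha)).sum())           # scenarios fully in the tail
    weights = pr[: k + 1].copy()
    weights[-1] -= cum[k] - (1 - alpha)           # trim the boundary scenario
    return float(p[: k + 1] @ weights) / (1 - alpha)

# Hypothetical equally likely profit scenarios (EUR).
print(cvar([120.0, 80.0, 150.0, 40.0, 100.0], [0.2] * 5, alpha=0.8))  # 40.0
```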
Abstract:
In this paper, a novel hybrid approach is proposed for electricity price forecasting in a competitive market, considering a time horizon of 1 week. The proposed approach is based on the combination of particle swarm optimization and an adaptive-network-based fuzzy inference system. Results from a case study based on the electricity market of mainland Spain are presented. A thorough comparison is carried out, taking into account the results of previous publications, to demonstrate its effectiveness regarding forecasting accuracy and computation time. Finally, conclusions are duly drawn.
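As a sketch of the particle swarm component only: a minimal PSO loop of the kind that could tune a forecasting model's parameters, with a stand-in objective in place of the ANFIS training error used in the paper (all names and constants below are assumptions):

```python
import random

def objective(x):
    """Stand-in forecasting error; in the paper's setting this would be
    the validation error of an ANFIS model parameterized by x."""
    return sum((xi - 1.0) ** 2 for xi in x)

def pso(dim=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]                    # personal bests
    gbest = min(best, key=objective)[:]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(best[i]):
                best[i] = pos[i][:]
                if objective(best[i]) < objective(gbest):
                    gbest = best[i][:]
    return gbest

print(pso())  # should approach [1.0, 1.0, 1.0]
```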
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
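A minimal sketch of the random-keys idea under simplifying assumptions (no resource constraints; the toy project and priorities below are illustrative): each gene in [0, 1) acts as the priority of one activity, and a serial schedule builder repeatedly starts the eligible activity with the highest key:

```python
# Hypothetical project: activity -> (duration, set of predecessors).
activities = {
    "A": (3, set()), "B": (2, {"A"}), "C": (4, {"A"}), "D": (1, {"B", "C"}),
}

def decode(chromosome):
    """Serial schedule generation: repeatedly start the eligible activity
    with the highest random-key priority (resource checks omitted)."""
    priority = dict(zip(activities, chromosome))
    finish, done = {}, set()
    while len(done) < len(activities):
        eligible = [a for a in activities
                    if a not in done and activities[a][1] <= done]
        a = max(eligible, key=priority.get)        # highest-priority activity
        start = max((finish[p] for p in activities[a][1]), default=0)
        finish[a] = start + activities[a][0]
        done.add(a)
    return finish

print(decode([0.9, 0.3, 0.7, 0.1]))  # one chromosome -> one schedule
```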
Abstract:
This article aims to contribute to the discussion of long-term dependence, focusing on the behavior of the main Belgian stock index. Non-parametric analyses of the general characteristics of temporal frequency show that daily returns are non-ergodic and non-stationary. Therefore, we use rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA), under the fractional Brownian motion approach, and find slight evidence of long-term dependence. These results refute the random walk hypothesis with i.i.d. increments, which is the basis of the EMH in its weak form, and call into question some theoretical models of asset pricing. A further, more localized study, identifying the evolution of the degree of dependence over time windows, showed that the index has become less persistent since 2010. This may indicate a maturing market, by extension of the effects of the current financial crisis.
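For reference, a minimal sketch of the rescaled-range (R/S) estimate of the Hurst exponent, where H ≈ 0.5 indicates no long-term dependence and H > 0.5 indicates persistence (window sizes and the test series are illustrative):

```python
import numpy as np

def rs_hurst(returns, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent as the slope of log(R/S) vs log(n)."""
    x = np.asarray(returns, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())            # cumulative deviations
            r = z.max() - z.min()                  # range of the deviations
            s = w.std(ddof=1)                      # standard deviation
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    return np.polyfit(log_n, log_rs, 1)[0]         # slope = H estimate

print(rs_hurst(np.random.normal(size=4096)))       # i.i.d. noise: H near 0.5
```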
Abstract:
Prepared for presentation at the Portuguese Finance Network International Conference 2014, Vilamoura, Portugal, June 18-20
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA
Abstract:
The computations performed by the brain ultimately rely on the functional connectivity between neurons embedded in complex networks. It is well known that the neuronal connections, the synapses, are plastic, i.e. the contribution of each presynaptic neuron to the firing of a postsynaptic neuron can be independently adjusted. The modulation of effective synaptic strength can occur on time scales that range from tens or hundreds of milliseconds, to tens of minutes or hours, to days, and may involve pre- and/or postsynaptic modifications. These mechanisms are generally believed to underlie learning and memory and, hence, it is fundamental to understand their consequences for the behavior of neurons. (...)
Abstract:
In the current context of serious climate change, where the increased frequency of some extreme events can extend the periods prone to high-intensity forest fires, the National Forest Authority often implements, in several Portuguese forest areas, a regular set of measures to control the amount of available fuel mass (PNDFCI, 2008). In the present work we present a preliminary analysis of the consequences of prescribed fire measures to control fuel mass for soil recovery, in particular in terms of water retention capacity, organic matter content, pH and iron content. This work is included in a larger study (Meira-Castro, 2009(a); Meira-Castro, 2009(b)). Given the established practice of data collection, embodied in multidimensional matrices of n columns (variables under analysis) by p rows (areas sampled at different depths), and the quantitative nature of the data in this study, we chose a methodological approach based on multivariate statistical analysis, in particular Principal Component Analysis (PCA) (Góis, 2004). The experiments were carried out on a soil cover over a natural site of andalusitic schist in Gramelas, Caminha, NW Portugal, which had remained untouched by prescribed burnings for four years and was subjected to a prescribed fire in March 2008. The soil samples were collected from five different plots at six different time periods. The adopted methodological option allowed us to identify the most relevant relational structures within the n variables, within the p samples, and in both sets at the same time (Garcia-Pereira, 1990). Consequently, in addition to the traditional outputs produced by PCA, we analyzed the influence of both sampling depth and geomorphological environment on the behavior of all variables involved.
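A minimal sketch of PCA applied to a p-samples by n-variables matrix of the kind described (the data here are random placeholders for the soil measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # p = 30 samples (rows) x n = 5 variables

# Center each variable, then diagonalize the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)             # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                              # sample coordinates (scores)
explained = eigvals / eigvals.sum()                # variance explained per PC
print(explained[:2])   # share of variance carried by the first two components
```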
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Informatics Engineering
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to achieve the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
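A minimal sketch of the style of analytical model described, assuming a blocked matrix multiplication in which per-core local memory fixes the tile size and the chip is either compute-bound or memory-bound (all parameter values and the traffic formula below are illustrative assumptions, not the paper's equation):

```python
import math

def matmul_cycles(n, cores, local_mem_words, flops_per_cycle,
                  bytes_per_word, bandwidth_bytes_per_cycle):
    """Execution cycles for an n x n x n blocked matrix multiplication:
    the architecture is compute-bound or memory-bound, whichever dominates."""
    # Tile size b chosen so three b x b tiles fit in local memory.
    b = int(math.sqrt(local_mem_words / 3))
    compute = 2 * n**3 / (cores * flops_per_cycle)   # total FLOPs / chip rate
    traffic = 2 * n**3 / b * bytes_per_word          # off-chip bytes moved
    memory = traffic / bandwidth_bytes_per_cycle
    return max(compute, memory)

# Illustrative configuration: 64 cores, 32K words of local memory each.
print(matmul_cycles(n=4096, cores=64, local_mem_words=32768,
                    flops_per_cycle=2, bytes_per_word=8,
                    bandwidth_bytes_per_cycle=16))
```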
Abstract:
An adaptive antenna array combines the signals of its elements, using some constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. A comparison of the performance of the algorithms used is made in terms of runtime and accuracy. These characteristics depend on the SNR of the incoming signal.
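To make the geometry concrete, a minimal sketch of a planar-array steering vector and delay-and-sum beamforming weights (array size, spacing and angles are illustrative assumptions):

```python
import numpy as np

def steering_vector(xy, theta, phi, wavelength):
    """Phase response of planar-array elements at positions xy (meters)
    for a plane wave from elevation theta and azimuth phi (radians)."""
    k = 2 * np.pi / wavelength
    ux = np.sin(theta) * np.cos(phi)               # direction cosines
    uy = np.sin(theta) * np.sin(phi)
    return np.exp(1j * k * (xy[:, 0] * ux + xy[:, 1] * uy))

# 4 x 4 planar array with half-wavelength element spacing.
wavelength = 0.1
d = wavelength / 2
xy = np.array([(i * d, j * d) for i in range(4) for j in range(4)])

a = steering_vector(xy, np.deg2rad(30), np.deg2rad(45), wavelength)
w = a / len(a)            # delay-and-sum weights: steer toward (30, 45) deg
print(abs(np.vdot(w, a)))  # unit gain in the steered direction
```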
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
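A simplified sketch in the spirit of a stability evaluation function, using the common criterion that a box is stable if its base center of gravity lies over the floor or over a supporting box; the paper's actual conditions are fuller static mechanical equilibrium constraints (geometry and data below are illustrative):

```python
# A box is (x, y, z, width, depth, height) with (x, y, z) its minimum corner.
def supported(box, placed, tol=1e-9):
    """True if the box rests on the floor or the center of gravity of its
    base lies over the top face of some already-placed box (a common
    simplification of full static mechanical equilibrium)."""
    x, y, z, w, d, h = box
    if z <= tol:                                   # resting on the floor
        return True
    cx, cy = x + w / 2, y + d / 2                  # center of the base
    for px, py, pz, pw, pd, ph in placed:
        if abs(pz + ph - z) <= tol:                # surfaces in contact
            if px <= cx <= px + pw and py <= cy <= py + pd:
                return True
    return False

placed = [(0, 0, 0, 2, 2, 1)]
print(supported((0.5, 0.5, 1, 1, 1, 1), placed))   # True: rests on the box
print(supported((1.8, 1.8, 1, 2, 2, 1), placed))   # False: CG overhangs
```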
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified scheduling theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed-job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
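To illustrate the slot-based task-splitting arrangement, a minimal sketch computing the reserve windows of a split task inside a repeating time slot; the first processor serves the split task at the end of its slot and the second at the beginning, so the two reserves never overlap in time (the slot length and shares are illustrative; the actual S-EKG/NPS-F reserve-sizing rules are more involved):

```python
def reserves(slot_len, share_first, share_second):
    """Reserve windows for a task split between two processors within a
    repeating time slot: the first processor serves the split task at the
    end of its slot, the second at the beginning, so execution never
    overlaps and the task migrates at most once per slot."""
    first = (slot_len - share_first * slot_len, slot_len)   # end of slot
    second = (0.0, share_second * slot_len)                 # start of slot
    return first, second

# A task needing 60% of a processor, split 35% / 25% across two CPUs
# with a 10 ms time slot.
first, second = reserves(10.0, 0.35, 0.25)
print(first, second)   # (6.5, 10.0) on CPU1, then (0.0, 2.5) on CPU2
```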