990 results for evolution algorithm
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance efficiency and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to analyze the algorithms formally, considering the main architectural aspects, and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementing the proposed architecture in a 65 nm technology is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
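As an illustration of the kind of analytical model this abstract describes, the sketch below relates execution cycles of a blocked dense matrix multiplication to core count, local memory and off-chip bandwidth. It is not the paper's derived equation; the symbols n (matrix dimension), p (cores), M (local memory in words), B (off-chip bandwidth in words per cycle) and the 2n³/√M traffic estimate are assumptions of this sketch.

```python
# A minimal, hypothetical cycle-count model for blocked dense matmul.
# Assumptions: square n x n double-precision matmul, p cores each doing one
# fused multiply-add (2 flops) per cycle, M words of local memory per core,
# and off-chip bandwidth B in words per cycle shared by all cores.

import math

def matmul_cycles(n, p, M, B):
    flops = 2 * n**3                       # n^3 multiply-adds = 2*n^3 flops
    compute_cycles = flops / (2 * p)       # one FMA (2 flops) per core per cycle
    words_moved = 2 * n**3 / math.sqrt(M)  # classical blocked-matmul traffic
    memory_cycles = words_moved / B        # time to stream data off-chip
    return max(compute_cycles, memory_cycles)

def efficiency(n, p, M, B):
    compute_bound = 2 * n**3 / (2 * p)     # cycles if purely compute-bound
    return compute_bound / matmul_cycles(n, p, M, B)

if __name__ == "__main__":
    # toy numbers: 64 cores, 32K words of local store, 0.25 words/cycle off-chip
    print(f"cycles: {matmul_cycles(4096, 64, 32 * 1024, 0.25):.3e}")
    print(f"efficiency: {efficiency(4096, 64, 32 * 1024, 0.25):.0%}")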
Abstract:
An adaptive antenna array combines the signals of its elements, under some constraints, to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms are compared in terms of runtime and accuracy, both of which depend on the SNR of the incoming signal.
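For concreteness, here is a hedged sketch of the two algorithm families the abstract compares, reduced to a uniform linear array for brevity (the paper works with a planar array): MUSIC for DOA estimation and conventional delay-and-sum weights for beamforming. The element count, spacing and noise level below are illustrative, not taken from the paper.

```python
import numpy as np

def steering_vector(n_elems, d_over_lambda, theta):
    """Array response for a plane wave impinging from angle theta (rad)."""
    k = 2 * np.pi * d_over_lambda
    return np.exp(1j * k * np.arange(n_elems) * np.sin(theta))

def music_spectrum(R, n_sources, grid, n_elems, d=0.5):
    """DOA estimation: peaks of the MUSIC pseudo-spectrum mark arrivals."""
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : n_elems - n_sources]      # noise subspace
    return np.array([1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
                     for a in (steering_vector(n_elems, d, th) for th in grid)])

def delay_and_sum_weights(n_elems, theta, d=0.5):
    """Beamforming: conventional weights steering the main lobe to theta."""
    return steering_vector(n_elems, d, theta) / n_elems

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, true_doa = 8, np.deg2rad(20.0)
    snaps = steering_vector(n, 0.5, true_doa)[:, None] * np.ones((1, 200))
    snaps = snaps + 0.1 * (rng.standard_normal((n, 200))
                           + 1j * rng.standard_normal((n, 200)))
    R = snaps @ snaps.conj().T / 200          # sample covariance matrix
    grid = np.deg2rad(np.linspace(-90, 90, 361))
    est = grid[np.argmax(music_spectrum(R, 1, grid, n))]
    w = delay_and_sum_weights(n, est)         # then point the array at est
    print(f"estimated DOA: {np.rad2deg(est):.1f} deg")
```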
Abstract:
Application refactorings that imply schema evolution are common in programming practice. Although modern object-oriented databases provide transparent schema evolution mechanisms, these refactorings remain time-consuming tasks for programmers. In this paper we address this problem with a novel approach based on the aspect-oriented programming and orthogonal persistence paradigms, as well as on our meta-model. An overview of our framework is presented. This framework, a prototype based on that approach, provides applications with the persistence and database evolution aspects. It also provides a new pointcut/advice language that enables the modularization of the instance adaptation crosscutting concern of classes that were subject to a schema evolution. We also present an application that relies on our framework. This application was developed without any concern for persistence or database evolution; nevertheless, its data is recovered in each execution, and objects in previous schema versions remain transparently available by means of our framework.
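The pointcut/advice idea for instance adaptation can be loosely illustrated in plain Python, with a registry of adapters standing in for advice bound to a "load an object stored under an older schema version" pointcut. All names and the record format here are hypothetical; the framework in the paper defines its own pointcut/advice language on top of aspect-oriented programming rather than using decorators.

```python
# A loose plain-Python analogue of the instance adaptation concern.

_adapters = {}  # (class name, schema version) -> adaptation function

def instance_adaptation(cls_name, from_version):
    """Register an adapter for instances persisted under an old schema."""
    def register(fn):
        _adapters[(cls_name, from_version)] = fn
        return fn
    return register

def load(record):
    """Simulated persistence layer: upgrade the record step by step until
    it matches the current schema, transparently to the application."""
    while (record["class"], record["version"]) in _adapters:
        record = _adapters[(record["class"], record["version"])](record)
    return record

@instance_adaptation("Person", from_version=1)
def split_name(record):
    # schema v2 split the single "name" field into first and last name
    first, _, last = record["name"].partition(" ")
    return {"class": "Person", "version": 2,
            "first_name": first, "last_name": last}

print(load({"class": "Person", "version": 1, "name": "Ada Lovelace"}))
```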
Abstract:
Generally, the evolution of applications has an impact on their underlying data models, thus becoming a time-consuming problem for programmers and database administrators. In this paper we address this problem with an aspect-oriented approach based on a meta-model for orthogonal persistent programming systems. By applying reflection techniques, our meta-model aims to be simpler than its competitors. Furthermore, it enables multi-version database schemas. We also discuss two case studies in order to demonstrate the advantages of our approach.
Abstract:
We have evaluated the sensitivity of the classical blood subinoculation method, modified through cyclophosphamide treatment of the transferred mice, for the detection of occult parasitaemias in mice chronically infected with Trypanosoma cruzi. Besides its simplicity, the method was shown to be highly sensitive both for "chronic" phase parasites (99% of chronic cases were shown to harbour occult parasitaemias) and for acute phase parasites (T. cruzi could be detected in 53.8% of animals transferred with one Y strain parasite and in 20% of animals transferred with one CL strain parasite). Using acute phase blood forms, the assay proved to be more sensitive than conventional subinoculation when dealing with the CL, but not the Y, strain of the parasite. With the help of this parasite detection tool, we studied, over a one-year period, the evolution of subpatent parasitaemias in a group of mice which had survived, through chemotherapy, a lethal acute phase of T. cruzi infection. The cyclophosphamide transfer assay revealed occult parasitaemias in 100% of the chronic animals; nevertheless, both continuous and discontinuous patterns of positivity were observed.
Abstract:
The extensional regime affecting Iberia during Triassic and Jurassic times changed at the end of the Cretaceous: throughout the Palaeocene, the displacement between the African and European plates was clearly convergent, and part of the future Internal Zone of the Betic Cordillera was affected. To the west, the Atlantic continued to open as a passive margin and, to the north, no significant deformation occurred. During the Eocene, the entire Iberian plate was subjected to compression, which caused major deformations in the Pyrenees and also in the Alpujarride and Nevado-Filabride complexes of the Internal Betics. This situation continued into the Oligocene but, in addition, a new extensional process occurring in the western Mediterranean area, together with the constant eastward drift of Iberia due to the Atlantic opening, compressed the eastern sector of Iberia, giving rise to the structuring of the Iberian Cordillera. It was during the Neogene that the Betic Cordillera acquired its fundamental features, with the westward displacement of the Betic-Rif Internal Zone, expelled by the progressive opening of the Algerian Basin, an opening that extended to the Alboran Sea. From the late Miocene onwards, all of Iberia was affected by a N-S to NNW-SSE compression, combined at many points with a nearly perpendicular extension. Especially in eastern and southern Iberia, a radial extension was superposed on this compression and extension.
Abstract:
The Setúbal and São Vicente canyons are two major modern submarine canyons located in the southwest Iberian margin of Portugal. Although recognised as Pliocene to Quaternary features, their development during the Tertiary has not been fully understood to date. A grid of 2D seismic data has been used to characterise the sedimentary deposits of the flanks adjacent to the submarine canyons, and the relationship between the geological structure of the margin and the canyons' present location has been investigated. The interpretation of the main seismic units allowed the recognition of three generations of ravinements, probably originating after the middle Oligocene. Six units grouped into two distinctive seismic sequences have been identified and correlated with offshore stratigraphic data. Seismic Sequence 2 (SS2), the oldest, overlies Mesozoic and upper Eocene deformed units. Seismic Sequence 1 (SS1) is composed of four different seismic packages separated from SS2 by an erosional surface. The base of the studied sediment ridges is marked by an extensive erosional surface derived from an early/middle Oligocene relative sea-level fall. Deposition in the area adjacent to the present-day canyons resumed in the late Oligocene in the form of transgressive and channel-fill deposits. A new depositional hiatus is recorded onshore during the Burdigalian, coincident with the unconformity separating SS1 and SS2. This can be correlated with the Arrábida unconformity and with the paroxysmal Burdigalian phase of the Betic domain. Presently, the Setúbal and São Vicente submarine canyons locally cut SS1 and SS2, forming channels distinct from those recognised in the seismic data. On the upper shelf, both dissect highly deformed areas subject to significant erosion.
Abstract:
The chemical features of the groundwater in the Lower Tagus Cenozoic deposits are strongly influenced by lithology, by the velocity and direction of the water movement, and by the location of the recharge and discharge zones. The mineralization varies between 80 and 900 mg/l: it is lowest in the recharge zones and in the Pliocene sands, and highest in the Miocene carbonate deposits and along the alluvial valley. Mineralization always reflects the residence time, the temperature and the pressure. The natural process of water mineralization is disturbed in agricultural areas because the saline concentration of the infiltrating water exceeds that of infiltrated rainwater. In the discharge zones, the rise of more mineralized, sometimes thermal, deep waters related to tectonic structures gives rise to anomalies in the mineralization pattern of the aquifer system. The diversity of the hydrochemical facies of the groundwater may be related to several factors whose identification is sometimes difficult.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
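As a rough illustration of a stability evaluation function of the kind that could be embedded in a CLP heuristic, the sketch below uses the common base-support criterion: a box is accepted if it rests on the container floor or if a sufficient fraction of its base is supported at exactly its base level. The paper's first algorithm checks full static mechanical equilibrium conditions (forces and moments), which is stronger than this proxy; the 0.8 support threshold and the box representation are assumptions of the sketch.

```python
def overlap(a0, a1, b0, b1):
    """Length of the overlap between 1-D intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def supported_fraction(box, below):
    """Fraction of box's base area resting on the top faces of boxes below.
    Boxes are dicts with position x, y, z and sizes w (width), d (depth),
    h (height), all axis-aligned inside the container."""
    area = 0.0
    for b in below:
        if abs((b["z"] + b["h"]) - box["z"]) > 1e-9:
            continue                 # b's top face is not at box's base level
        area += (overlap(box["x"], box["x"] + box["w"], b["x"], b["x"] + b["w"])
                 * overlap(box["y"], box["y"] + box["d"], b["y"], b["y"] + b["d"]))
    return area / (box["w"] * box["d"])

def is_statically_stable(box, below, min_support=0.8):
    if box["z"] == 0.0:
        return True                  # resting directly on the container floor
    return supported_fraction(box, below) >= min_support
```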
Abstract:
Climatic reconstructions based on palynological data from Aquitaine outcrops highlight an important degradation phase during the Lower Serravallian. Climatic and environmental changes can be related to sea-level variations (the Bur 5/Lan 1, Lan 2/Ser 1 and Ser 2 cycles). Transgressive phases feature warmer conditions and more open environments, whereas regressive phases are marked by a cooler climate and an expansion of the forest cover. From the Langhian to the Middle Serravallian, a general cooling is recorded, with the disappearance of most megathermic taxa and a transition from a warm and dry climate to warm-temperate and much more humid conditions. These conclusions are consistent with studies of neighbouring areas and place the major degradation phase at around 14 Ma. The palynological data fill a gap in the climatic evolution of southern France, connecting the Lower and Upper Miocene, both of which are well recorded. At the Western European scale, these results document the latitudinal climatic gradient across the Northern Hemisphere while marking a transition between the Mediterranean area and the northeastern Atlantic seaboard.
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
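The branch-and-prune structure can be sketched generically: branch on whether each candidate packet interferes with the one under analysis, and prune any partial scenario whose optimistic bound cannot exceed the worst delay found so far. This is only the search skeleton, not the paper's BP or BPC algorithm; delay_of and upper_bound are hypothetical helpers that a concrete NoC contention model would have to supply.

```python
# A schematic branch-and-prune skeleton for upper-bounding a packet's
# NoC traversal time (illustrative only, not the paper's algorithm).

def wctt_branch_and_prune(base_latency, interferers, delay_of, upper_bound):
    """base_latency: contention-free traversal time of the packet under
    analysis; interferers: candidate interfering packets; delay_of(p, chosen):
    extra delay p adds given the interferers already assumed;
    upper_bound(rest): optimistic bound on what the rest could still add."""
    worst = base_latency

    def explore(i, so_far, chosen):
        nonlocal worst
        if so_far + upper_bound(interferers[i:]) <= worst:
            return                          # prune: cannot beat current worst
        if i == len(interferers):
            worst = max(worst, so_far)
            return
        p = interferers[i]
        explore(i + 1, so_far + delay_of(p, chosen), chosen + [p])  # p interferes
        explore(i + 1, so_far, chosen)                              # p does not

    explore(0, base_latency, [])
    return worst

# Toy usage: each interferer adds a fixed delay; the bound sums the rest.
print(wctt_branch_and_prune(10, [3, 5, 2],
                            delay_of=lambda p, chosen: p,
                            upper_bound=sum))   # -> 20
```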
Abstract:
The prevalence of extended-spectrum β-lactamases (ESBLs) was studied in the north of Portugal among 193 clinical isolates, out of a total of 7529 clinical strains, from residents of a district on the border between Portugal and Spain. In the present study we recovered several ESBL-producing members of the family Enterobacteriaceae, including Escherichia coli (67.9%), Klebsiella pneumoniae (30.6%), Klebsiella oxytoca (0.5%), Enterobacter aerogenes (0.5%), and Citrobacter freundii (0.5%). The β-lactamase genes blaTEM, blaSHV, and blaCTX-M were screened by polymerase chain reaction (PCR) and sequencing. TEM enzymes were the most prevalent type (40.9%), followed by CTX-M (37.3%) and SHV (23.3%). Among our sample of 193 ESBL-producing strains, 99.0% were resistant to the fourth-generation cephalosporin cefepime, and 81.3% of the isolates carried transferable plasmids harboring these genes. Clonal relationships were studied by PCR of the enterobacterial repetitive intragenic consensus (ERIC) sequences. This study reports a high diversity of genetic patterns: ERIC analysis yielded ten clusters for the E. coli isolates and five clusters for the K. pneumoniae strains. In conclusion, the TEM type is still the most prevalent in this country, but CTX-M is growing rapidly.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering.
Abstract:
This paper presents a new parallel implementation of the previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different NVIDIA GPU architectures: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
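A compact numpy sketch of the signal model HYCA exploits (not the authors' GPU code) may help: because pixels are approximately mixtures of a few endmembers, X ≈ E·A, compressive measurements Y = H·X can be inverted by solving for the small abundance matrix A rather than for X directly. The matrix shapes, the random coded aperture H and the assumption that the endmember matrix E is known are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels, endmembers, measures = 200, 10_000, 5, 40

E = rng.random((bands, endmembers))                # endmember signatures (assumed known)
A = rng.dirichlet(np.ones(endmembers), pixels).T   # abundances, columns sum to 1
X = E @ A                                          # flattened hyperspectral cube

H = rng.standard_normal((measures, bands)) / np.sqrt(measures)  # coded aperture
Y = H @ X                                          # compressive measurements

A_hat, *_ = np.linalg.lstsq(H @ E, Y, rcond=None)  # recover the small matrix A
X_hat = E @ A_hat                                  # reconstruct the cube
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```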
Abstract:
Scientific dissertation prepared at the Laboratório Nacional de Engenharia Civil (LNEC) to obtain the degree of Master in Civil Engineering, specialization in Hydraulics, within the scope of the cooperation protocol between ISEL and LNEC.