941 results for Parallel building blocks


Relevance: 30.00%

Abstract:

Spatial and temporal fluctuations in the concentration field from an ensemble of continuous point-source releases in a regular building array are analyzed from data generated by direct numerical simulations. The release is of a passive scalar under conditions of neutral stability. Results are related to the underlying flow structure by contrasting data for an imposed wind direction of 0 deg and 45 deg relative to the buildings. Furthermore, the effects of distance from the source and vicinity to the plume centreline on the spatial and temporal variability are documented. The general picture that emerges is that this particular geometry splits the flow domain into segments (e.g. “streets” and “intersections”) in each of which the air is, to a first approximation, well mixed. Notable exceptions to this general rule include regions close to the source, near the plume edge, and in unobstructed channels when the flow is aligned. In the oblique (45 deg) case the strongly three-dimensional nature of the flow enhances mixing of a scalar within the canopy leading to reduced temporal and spatial concentration fluctuations within the plume core. These fluctuations are in general larger for the parallel flow (0 deg) case, especially so in the long unobstructed channels. Due to the more complex flow structure in the canyon-type streets behind buildings, fluctuations are lower than in the open channels, though still substantially larger than for oblique flow. These results are relevant to the formulation of simple models for dispersion in urban areas and to the quantification of the uncertainties in their predictions.
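
The fluctuation statistics discussed above are commonly summarized by the local fluctuation intensity, i.e. the ratio of the concentration standard deviation to the mean concentration at a point. The definition below is the standard one used in dispersion studies and is given only for reference; it is not quoted from the paper.

```latex
% Standard definition of the concentration fluctuation intensity (for reference only;
% not reproduced from the paper itself).
\[
  i_c(\mathbf{x}) \;=\; \frac{\sigma_c(\mathbf{x})}{\overline{C}(\mathbf{x})},
  \qquad
  \sigma_c^2(\mathbf{x}) \;=\; \overline{\bigl(c(\mathbf{x},t)-\overline{C}(\mathbf{x})\bigr)^{2}},
\]
% where c(x,t) is the instantaneous concentration and C-bar its time mean; larger i_c
% corresponds to the stronger fluctuations reported for the aligned (0 deg) flow.
```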

Relevance: 30.00%

Abstract:

Commonly used in archaeological contexts, micromorphology has not seen a parallel advance in the field of experimental archaeology. Drawing on early work conducted in the 1990s at ethnohistoric sites in the Beagle Channel, we analyze a set of 25 thin sections taken from control features and experimental tests. The control features include animal pathways and environmental contexts (beach samples, forest litter, soils from the vicinity of archaeological sites), while the experimental samples comprise anthropic structures, such as hearths, and valves of Mytilus edulis (the most important component of shell middens in the region) heated from 200 °C to 800 °C. Their micromorphological study constitutes a modern analogue to assist archaeologists studying site formation and ethnographic settings in cold climates, with particular emphasis on shell midden contexts.

Relevance: 30.00%

Abstract:

Recent studies have shown that the optical properties of building exterior surfaces are important for energy use and thermal comfort. While the majority of studies address exterior surfaces, the radiation properties of interior surfaces are less thoroughly investigated. Developments in the coil-coating industry now make it possible to assign different optical properties to the exterior and interior surfaces of steel-clad buildings. The aim of this thesis is to investigate the influence of surface radiation properties, with a focus on the thermal emittance of interior surfaces, the modeling approaches, and their consequences for building energy performance and the indoor thermal environment. The study consists of both numerical and experimental investigations. The experimental investigations include parallel field measurements on three similar test cabins with different interior and exterior surface radiation properties in Borlänge, Sweden, and on two ice rink arenas with normal- and low-emittance ceilings in Luleå, Sweden. The numerical methods include comparative simulations using dynamic heat flux models, Building Energy Simulation (BES), Computational Fluid Dynamics (CFD) and a coupled model for BES and CFD. Several parametric studies and thermal performance analyses were carried out in combination with the different numerical methods. The parallel field measurements on the test cabins include air, surface and radiation temperatures and energy use during passive and active (heating and cooling) measurements. Both the measurements and the comparative simulations indicate an improvement in the indoor thermal environment when the interior surfaces have low emittance. In the ice rink arenas, surface and radiation temperature measurements indicate a considerable reduction in ceiling-to-ice radiation when low-emittance surfaces are used, in agreement with a ceiling-to-ice radiation model based on schematic dynamic heat flux calculations. The measurements in the test cabins indicate that the use of low-emittance surfaces can increase vertical indoor air temperature gradients depending on the time of day and outdoor conditions. This agrees with transient CFD simulations in which the boundary conditions are assigned on the exterior surfaces. Sensitivity analyses were performed under different outdoor conditions and surface thermal radiation properties. The spatially resolved simulations indicate an increase in air and surface temperature gradients when low-emittance coatings are used, which can allow a lower air temperature in the occupied zone during summer. The combined effect of interior and exterior reflective coatings on energy use was investigated using building energy simulation for different climates and internal heat loads. The results indicate possible energy savings through an informed choice of optical properties for the interior and exterior surfaces of the building. Overall, it is concluded that interior reflective coatings can contribute to building energy savings and to improving the indoor thermal environment, and that this can be investigated numerically by choosing appropriate models with respect to the level of detail and computational load. This thesis includes comparative simulations at different levels of detail.
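
For orientation, the ceiling-to-ice radiative exchange described above can be related to the textbook expression for net radiation between two large parallel gray surfaces; this standard formula is given only as background and is not the thesis' own dynamic heat flux model.

```latex
% Net radiative exchange between two large parallel gray surfaces (textbook relation,
% given for background; not the dynamic heat flux model used in the thesis).
\[
  q_{12} \;=\; \frac{\sigma\left(T_{1}^{4}-T_{2}^{4}\right)}
                    {\frac{1}{\varepsilon_{1}}+\frac{1}{\varepsilon_{2}}-1},
\]
% where sigma is the Stefan-Boltzmann constant and epsilon_1, epsilon_2 are the surface
% emittances; lowering the ceiling emittance epsilon_1 directly reduces the ceiling-to-ice flux.
```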

Relevance: 30.00%

Abstract:

A common characteristic of parallel and distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are, however, independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, allowing application designers to concentrate freely on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages target loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication. A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system that generates intermediate C code which is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message-passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify such distributed applications executing on a cluster of computers. The two languages specify orthogonal components of an application: on the one hand the organisation of the processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes. In addition to these languages, a DAL parser, a DAPL parser and a compilation system have been designed and developed in this project. The compilation system takes DAL and DAPL programs and generates machine-code object modules, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project; its purpose is to create, manage and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes operations such as deletion, addition, replacement and movement of processes, as well as detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell). This suite of languages and tools allows distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require only a few dozen lines, compared with the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system successfully generates binary object modules, and the DAS system successfully instantiates and manages several distributed applications on a cluster.
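
The abstract does not reproduce the DAL or DAPL syntax, so the fragment below is only a hypothetical Python sketch of the underlying idea, namely keeping the connectivity specification separate from the process functionality; none of the names in it come from the thesis.

```python
# Hypothetical sketch (not DAL/DAPL syntax): the connectivity of a distributed
# application is declared separately from the functionality of its processes.

# "DAL-like" part: connectivity only -- which processes exist and who sends to whom.
CONNECTIVITY = {
    "producer": ["worker"],
    "worker": ["collector"],
    "collector": [],
}

# "DAPL-like" part: functionality only -- what each process does with a message.
def producer(send):
    for item in range(4):
        send(item)                 # the runtime would route this per CONNECTIVITY

def worker(value):
    return value * value

def collector(results):
    print("collected:", results)

if __name__ == "__main__":
    # Trivial single-process "runtime" standing in for remote process creation and
    # asynchronous message passing; for brevity it hard-wires the producer->worker->collector
    # path instead of interpreting CONNECTIVITY.
    results = []
    producer(lambda v: results.append(worker(v)))
    collector(results)             # prints: collected: [0, 1, 4, 9]
```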

Relevance: 30.00%

Abstract:

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm for massively parallel, distributed-memory supercomputers to enable research into comparative genomics on large datasets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures, including sequences and sorted k-mer lists, on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed-memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
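
As a single-node illustration of the sorted k-mer list data structure mentioned above (the thesis targets distributed memory on BG/P, which this sketch does not attempt to model):

```python
# Minimal, single-node sketch of a sorted k-mer list; the actual implementation is
# distributed across BG/P compute nodes and is not reproduced here.

def sorted_kmer_list(sequence: str, k: int) -> list[tuple[str, int]]:
    """Return (k-mer, position) pairs sorted lexicographically by k-mer.

    Sorting lets matching k-mers between genomes be found by merging two lists
    instead of repeated scans, which is the basis for anchoring alignments.
    """
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    kmers.sort()                      # lexicographic order on the k-mer string
    return kmers

def shared_kmers(a: list[tuple[str, int]], b: list[tuple[str, int]]) -> list[str]:
    """K-mers present in both sorted lists (simple set intersection for brevity)."""
    return sorted({k for k, _ in a} & {k for k, _ in b})

if __name__ == "__main__":
    g1 = sorted_kmer_list("ACGTACGGA", 3)
    g2 = sorted_kmer_list("TTACGTACG", 3)
    print(shared_kmers(g1, g2))       # ['ACG', 'CGT', 'GTA', 'TAC']
```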

Relevance: 30.00%

Abstract:

Natal dispersal is an important life history trait driving variation in individual fitness, and therefore a proper understanding of the factors underlying dispersal behaviour is critical to many fields, including population dynamics, behavioural ecology and conservation biology. However, individual dispersal patterns remain difficult to quantify despite many years of research using direct and indirect methods. Here, we quantify dispersal in a single intensively studied population of the cooperatively breeding chestnut-crowned babbler (Pomatostomus ruficeps) using genetic networks created from the combination of pairwise relatedness data and social networking methods, and compare this to dispersal estimates from re-sighting data. This novel approach not only identifies movements between social groups within our study sites but also provides an estimate of the immigration rate of individuals originating outside the study site. Both genetic and re-sighting data indicated that dispersal was strongly female biased, but the magnitude of dispersal estimates was much greater using genetic data. This suggests that many previous studies relying on mark–recapture data may have significantly underestimated dispersal. An analysis of spatial genetic structure within the sampled population also supports the idea that females are more dispersive: females show no structure beyond the bounds of their own social group, while male genetic structure extends up to 750 m from their social group. Although the genetic network approach we have used is an excellent tool for visualizing the social and genetic microstructure of social animals and identifying dispersers, our results also indicate the importance of applying it in parallel with behavioural and life history data.
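
As a rough sketch of the genetic-network construction described above, assuming hypothetical relatedness values and an arbitrary threshold (neither taken from the study):

```python
# Hypothetical sketch: individuals are nodes, and an edge is drawn when pairwise
# relatedness exceeds a chosen threshold. Values and threshold are invented.

from collections import defaultdict

relatedness = {                       # illustrative pairwise relatedness estimates
    ("A", "B"): 0.48,
    ("A", "C"): 0.02,
    ("B", "C"): 0.26,
    ("C", "D"): 0.51,
}

def genetic_network(pairs: dict, threshold: float = 0.25) -> dict:
    """Adjacency list keeping only pairs above the relatedness threshold."""
    graph = defaultdict(set)
    for (i, j), r in pairs.items():
        if r >= threshold:
            graph[i].add(j)
            graph[j].add(i)
    return graph

if __name__ == "__main__":
    net = genetic_network(relatedness)
    for node, neighbours in sorted(net.items()):
        print(node, "->", sorted(neighbours))
```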

Relevance: 30.00%

Abstract:

The optimal source precoding matrix and relay amplifying matrix have been developed in recent works on multiple-input multiple-output (MIMO) relay communication systems under the assumption that the instantaneous channel state information (CSI) is available. However, in practical relay communication systems, the instantaneous CSI is unknown and therefore has to be estimated at the destination node. In this paper, we develop a novel channel estimation algorithm for two-hop MIMO relay systems using parallel factor (PARAFAC) analysis. The proposed algorithm provides the destination node with full knowledge of all channel matrices involved in the communication. Compared with existing approaches, the proposed algorithm requires fewer training data blocks, yields a smaller channel estimation error, and is applicable to both one-way and two-way MIMO relay systems with single or multiple relay nodes. Numerical examples demonstrate the effectiveness of the PARAFAC-based channel estimation algorithm.
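
For context, the two-hop MIMO relay setting in which such estimators operate is commonly written as below; this is the generic signal model, and the PARAFAC decomposition used by the proposed algorithm is not reproduced here.

```latex
% Generic two-hop MIMO relay signal model (standard formulation, shown only for context).
\[
  \mathbf{y}_r = \mathbf{H}_1 \mathbf{F}\, \mathbf{s} + \mathbf{n}_r,
  \qquad
  \mathbf{y}_d = \mathbf{H}_2 \mathbf{G}\, \mathbf{y}_r + \mathbf{n}_d
              = \mathbf{H}_2 \mathbf{G} \mathbf{H}_1 \mathbf{F}\, \mathbf{s}
                + \mathbf{H}_2 \mathbf{G}\, \mathbf{n}_r + \mathbf{n}_d,
\]
% where F is the source precoding matrix, G the relay amplifying matrix, and the channel
% estimation task is to recover H_1 (source-relay) and H_2 (relay-destination) at the
% destination from known training blocks.
```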

Relevance: 30.00%

Abstract:

Many atypical antipsychotics show antagonism at both serotonergic and dopaminergic neurones and produce fewer extrapyramidal side effects (EPS). Nefazodone blocks postsynaptic 5-HT2A receptors and weakly inhibits serotonin reuptake. This study aimed to elucidate the role of nefazodone in the treatment of antipsychotic-induced EPS. The trial was a double-blind, randomised, placebo-controlled trial of patients requiring antipsychotic treatment with haloperidol 10 mg daily, from which a subgroup of patients who developed EPS was selected for the study. Patients were randomised to add-on therapy with either placebo (n=24) or nefazodone (n=25) 100 mg twice daily. EPS were measured on days 0, 3 and 7 using the Simpson Angus, Barnes akathisia, abnormal involuntary movement and Chouinard scales. Nefazodone significantly reduced EPS as measured by both the Simpson Angus scale and the CGI (p=0.007 and p=0.0247, respectively). Akathisia and tardive dyskinesia did not differ between the two groups (p=0.601 and p=0.507, respectively). These results support a role for 5-HT2 antagonism in the mechanism by which atypical antipsychotics produce lower rates of drug-induced EPS. In addition, a therapeutic role for nefazodone is suggested in the treatment of antipsychotic-induced EPS.

Relevance: 30.00%

Abstract:

Massive, raw concrete structures – the likes of the Telecommunications Building (1972–81) by Janko Konstantinov; the campus of Ss. Cyril and Methodius University (1974) by Marko Mušič; the National Hydraulic Institute (1972) by Krsto Todorovski; and the Bank Complex (1970) by R. Lalovik and O. Papeš – have produced an enduring monumental presence and helped inspire Skopje's title as the “Brutalist capital of the world”. These works followed Kenzo Tange's introduction of Japanese Metabolism to Skopje through his role in the 1965 United Nations sponsored reconstruction competition. The unique position of a Non-Aligned Yugoslavia staged and facilitated architectural and professional exchange during the Cold War. Each trajectory and manifestation illustrates the complex picture of international architectural exchange and local production. Skopje and its numerous Brutalist edifices tell an elucidative story, because they represent a meeting point between Brutalism, Metabolism and its American parallel. This article discusses, in particular, the Skopje Archive Building (1966) and the “Goce Delčev” Student Dormitory (1969) – two buildings designed by the architect Georgi Konstantinovski, realised on his return from a Master's program at Yale University and employment in I. M. Pei's New York office. Their architecture illustrates the simultaneous preoccupation of leading architects at the time with regaining a conceptual ground made explicit through a complete and apprehensible image. From this particular position, the article explores the question of ethics and aesthetics central to Banham's outline of the “New Brutalism”.

Relevance: 30.00%

Abstract:

How do presidents win legislative support under conditions of extreme multipartism? Comparative presidential research has offered two parallel answers, one relying on distributive politics and the other claiming that legislative success is a function of coalition formation. We merge these insights in an integrated approach to executive-legislative relations, also adding contextual factors related to dynamism and bargaining conditions. We find that the two presidential “tools” – pork and coalition goods – are substitutable resources, with pork functioning as a fine-tuning instrument that interacts reciprocally with legislative support. Pork expenditures also depend upon a president’s bargaining leverage and the distribution of legislative seats.

Relevance: 30.00%

Abstract:

Graduate Program in Civil Engineering - FEIS

Relevance: 30.00%

Abstract:

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, delivering higher levels of processing power. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, offering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one in which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of benchmark parallel applications.
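
The abstract does not detail the interfacing strategy itself; purely as a conceptual sketch, a gateway process could forward serialized messages between clusters over plain TCP, independently of the MPI implementation running inside each cluster. All names, ports and message fields below are hypothetical.

```python
# Conceptual sketch only (not the paper's strategy): a per-cluster gateway forwards
# serialized messages to the public gateway of another cluster over TCP, so that
# ranks inside each cluster can keep using their local MPI implementation.
import json
import socket

def send_message(sock: socket.socket, payload: dict) -> None:
    """Serialize a message envelope and send it with a newline delimiter."""
    sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))

def forward(message: dict, gateway_host: str, gateway_port: int) -> None:
    """Forward a message addressed to a process in another cluster via its gateway."""
    with socket.create_connection((gateway_host, gateway_port)) as sock:
        send_message(sock, message)

if __name__ == "__main__":
    # Envelope carrying cluster-level addressing on top of the application payload.
    msg = {"src": ["clusterA", 3], "dst": ["clusterB", 7], "tag": 42, "data": [1.0, 2.0]}
    print(json.dumps(msg))
    # forward(msg, "gateway.clusterb.example.org", 5000)   # hypothetical remote gateway
```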

Relevance: 30.00%

Abstract:

Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on how nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the miniaturization of the readout allows sensors to be organized into arrays, increasing the capability of the platform in terms of the number of acquired data points, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic and electronic signals. The work in this thesis presents a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the trade-off between signal-to-noise ratio and bandwidth. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion-channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
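
Downstream of such a readout array, single-molecule binding events typically appear as transient blockades of the open-pore current. The sketch below is a purely illustrative threshold-based event detector on a sampled trace, not the platform's actual processing chain.

```python
# Illustrative only: detect current-blockade events in a sampled trace by thresholding
# relative to the open-pore level (not the processing chain used by the platform).

def detect_blockades(trace, open_level, blocked_fraction=0.7):
    """Return (start, end) sample indices where the current drops below
    blocked_fraction * open_level, i.e. candidate single-molecule binding events."""
    threshold = blocked_fraction * open_level
    events, start = [], None
    for i, current in enumerate(trace):
        if current < threshold and start is None:
            start = i                      # event begins
        elif current >= threshold and start is not None:
            events.append((start, i))      # event ends
            start = None
    if start is not None:
        events.append((start, len(trace)))
    return events

if __name__ == "__main__":
    # Synthetic trace in pA: open pore around 100 pA with two brief blockades near 30 pA.
    trace = [100, 99, 101, 30, 28, 31, 100, 102, 29, 27, 100]
    print(detect_blockades(trace, open_level=100))   # [(3, 6), (8, 10)]
```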

Relevance: 30.00%

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is present in every area of the natural sciences, medicine and finance. The enormous growth in recorded data volumes, whether from automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in both theory and practice. This thesis is therefore concerned with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks from time series make extensive use of these alignments, which motivates the need for fast implementations. The work is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for segmenting data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be deployed on any parallel hardware that supports fast Fourier transforms. In addition, a SIMT-compatible implementation of the UCR suite's lower-bound cascade is given for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations compute one to two orders of magnitude faster than established methods.

Second, two linear-time approximations for the elastic alignment of subsequences are investigated. On the one hand, a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization are treated. On the other hand, a new local distance measure is introduced, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. An extension of GEM to multi-shape segmentation is also discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated, and memory-efficient representations as well as group-compatible extensions of elastic measures are discussed.
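
The summary above refers to local alignment scores under the z-normalized Euclidean distance; the sketch below shows the naive form of that quantity in plain Python. The thesis computes the same scores with FFT-based and CUDA-parallel schemes, which are not reproduced here.

```python
# Naive O(n*m) sketch of the z-normalized Euclidean distance profile between a query
# and every subsequence of a series (the thesis uses FFT-based and CUDA-parallel schemes).

import math

def znorm(x):
    """Z-normalize a sequence: zero mean, unit standard deviation."""
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x)) or 1.0  # guard constant windows
    return [(v - mu) / sigma for v in x]

def subsequence_distances(series, query):
    """Distance of the z-normalized query to every same-length window of the series."""
    q = znorm(query)
    m = len(query)
    profile = []
    for i in range(len(series) - m + 1):
        w = znorm(series[i:i + m])
        profile.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(q, w))))
    return profile

if __name__ == "__main__":
    series = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0]
    query = [1.0, 2.0, 3.0]
    d = subsequence_distances(series, query)
    print(min(range(len(d)), key=d.__getitem__))   # index of the best-matching window
```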

Relevance: 30.00%

Abstract:

Following the internationalization of contemporary higher education, academic institutions based in non-English-speaking countries are increasingly urged to produce content in English to address international prospective students and personnel, as well as to increase their attractiveness. The demand for English translations in the institutional academic domain is consequently increasing at a rate exceeding the capacity of the translation profession. Resources for assisting non-native authors and translators in the production of appropriate texts in L2 are therefore required in order to help academic institutions and professionals streamline their translation workload. Some of these resources include: (i) parallel corpora to train machine translation systems and multilingual authoring tools; and (ii) translation memories for computer-aided translation tools. The purpose of this study is to create and evaluate reference resources like those mentioned in (i) and (ii) through the automatic sentence alignment of a large set of Italian and English as a Lingua Franca (ELF) institutional academic texts given as equivalent but not necessarily parallel (i.e. translated). In this framework, a set of alignment algorithms and alignment tools is examined in order to identify the most profitable one(s) in terms of accuracy and time- and cost-effectiveness. In order to determine the text pairs to align, a sample is selected according to document length similarity (in characters) and subsequently evaluated in terms of extent of noisiness/parallelism, alignment accuracy and content leverageability. The results of these analyses serve as the basis for the creation of an aligned bilingual corpus of academic course descriptions, which is eventually used to create a translation memory in TMX format.
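
As a small illustration of the document-length-similarity criterion mentioned above, a character-length ratio filter could look as follows; the threshold and data are invented for the example and are not those used in the study.

```python
# Illustrative filter: keep Italian/English document pairs whose character lengths are
# similar enough to be plausible translations (threshold and data are invented here).

def length_similarity(text_a: str, text_b: str) -> float:
    """Ratio of the shorter to the longer character length (1.0 means identical length)."""
    la, lb = len(text_a), len(text_b)
    if max(la, lb) == 0:
        return 0.0
    return min(la, lb) / max(la, lb)

def select_pairs(pairs, min_similarity=0.6):
    """Keep (italian, english) pairs that pass the length-similarity filter."""
    return [(it, en) for it, en in pairs if length_similarity(it, en) >= min_similarity]

if __name__ == "__main__":
    candidates = [
        ("Il corso introduce i fondamenti di algebra lineare.",
         "The course introduces the fundamentals of linear algebra."),
        ("Descrizione breve.",
         "A completely unrelated and much longer description of the course."),
    ]
    print(len(select_pairs(candidates)))   # 1: only the first pair survives the filter
```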