904 results for 291605 Processor Architectures


Relevance:

10.00%

Publisher:

Abstract:

There is increasing interest in integrating Java-based systems, and in particular Jini systems, with the emerging Grid infrastructures. In this paper we explore various ways of integrating the key components of each architecture: their directory and information management services. In the first part of the paper we sketch out the Jini and Grid architectures and their services. We then review the components and services that Jini provides and compare these with those of the Grid. In the second part of the paper we critically explore four ways in which Jini and the Grid could interact; in particular, we look at possible scenarios for providing Grid clients with a seamless interface to a Jini environment, and for using Jini services from a Grid environment. In the final part of the paper we summarise our findings and report on future work being undertaken to integrate Jini and the Grid.
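
One of the integration scenarios touched on here, republishing Jini service registrations through a Grid-style directory service, might be sketched as follows. This is a hypothetical illustration: the class, method names, and the LDAP-like record format are assumptions, not the paper's design.

# Hypothetical sketch: a gateway that re-exports Jini-style service
# registrations as LDAP-like records for a Grid information service.
class JiniGridGateway:
    def __init__(self):
        self.registry = {}  # stand-in for a Jini lookup service

    def register(self, service_id, attributes):
        """A Jini service registers itself with its attribute set."""
        self.registry[service_id] = attributes

    def export_to_grid_directory(self):
        """Flatten registrations into (dn, attrs) records of the kind a
        Grid information service such as MDS might consume."""
        return [("service-id=%s,ou=jini" % sid, attrs)
                for sid, attrs in self.registry.items()]

gw = JiniGridGateway()
gw.register("svc-001", {"type": "printer", "host": "nodeA"})
print(gw.export_to_grid_directory())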

Relevance:

10.00%

Publisher:

Abstract:

Mathematics in Defence 2011. Abstract: We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of order one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
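
As a minimal illustration of exception-free arithmetic, here is a Python sketch of transreal division under the semantics described above: k/0 is positive or negative infinity for non-zero k, and 0/0 is nullity. The encoding of nullity is a placeholder, not a proposal from the paper.

# Total (exception-free) division: never raises and never returns NaN.
INF = float("inf")
PHI = "nullity"  # placeholder encoding of the transreal nullity value

def trans_div(a, b):
    if b != 0:
        return a / b
    if a > 0:
        return INF
    if a < 0:
        return -INF
    return PHI  # 0/0 is nullity

assert trans_div(1, 0) == INF
assert trans_div(-1, 0) == -INF
assert trans_div(0, 0) == PHI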

Relevance:

10.00%

Publisher:

Abstract:

Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor individual assisted livings. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces large amounts of contextual data that require a scalable and efficient approach to elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs and discuss the main challenges that need to be addressed for their development and management. The current state of the art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed.
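
The basic data path described here, on-coordinator real-time processing plus batched upload for server-side analysis, can be sketched as follows. Everything in the sketch is hypothetical: the class, the alert threshold, and the upload stub stand in for a real BAN coordinator and cloud client.

from collections import deque

def upload(batch):
    print("uploading %d samples" % len(batch))  # stand-in for a cloud client

class BanCoordinator:
    def __init__(self, batch_size=50):
        self.batch_size = batch_size
        self.buffer = deque()

    def on_sample(self, node_id, value):
        alert = value > 180  # real-time local check (e.g., heart-rate spike)
        self.buffer.append((node_id, value))
        if len(self.buffer) >= self.batch_size:
            self.flush_to_cloud()
        return alert

    def flush_to_cloud(self):
        batch, self.buffer = list(self.buffer), deque()
        upload(batch)

ban = BanCoordinator(batch_size=3)
for v in (72, 75, 190, 80):
    if ban.on_sample("chest-node", v):
        print("local alert raised")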

Relevance:

10.00%

Publisher:

Abstract:

Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model for a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These predictions are compared with real execution data gathered on AMD Opteron-based systems, including several phases of HECToR, the U.K. academic community HPC resource. The approach has some success in relating source code to achieved performance for the K10 series of Opterons, but it proves inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented, using the shallow water model as an example.
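
A "traditional" analytic prediction of the kind applied here might, for example, bound loop time by operation counts divided by measured machine rates (a roofline-style estimate). The sketch below is an assumption about the general approach, and the rates and per-cell costs are placeholders, not HECToR measurements.

def predict_time(nx, ny, flops_per_cell, bytes_per_cell,
                 peak_flops, sustained_bw):
    """Return the larger of the compute-bound and memory-bound estimates."""
    cells = nx * ny
    t_compute = cells * flops_per_cell / peak_flops   # seconds, FLOP-limited
    t_memory = cells * bytes_per_cell / sustained_bw  # seconds, bandwidth-limited
    return max(t_compute, t_memory)

# e.g. one timestep of a 1024x1024 grid on a single hypothetical core
print(predict_time(1024, 1024, 60, 96, 9.2e9, 5.0e9))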

Relevance:

10.00%

Publisher:

Abstract:

The 2008-2009 financial crisis and related organizational and economic failures have meant that financial organizations are faced with a ‘tsunami’ of new regulatory obligations. This environment presents new managerial challenges, as organizations are forced to engage in complex and costly remediation projects with short deadlines. Drawing on a longitudinal study conducted with nine financial institutions over twelve years, this paper identifies nine IS capabilities which underpin activities for managing regulatory-themed governance, risk and compliance efforts. The research shows that many firms are now focused on meeting regulators’ deadlines at the expense of developing a strategic, connected, enterprise-wide approach to compliance. Consequently, executives are in danger of implementing siloed compliance solutions within business functions. By evaluating the maturity of the IS capabilities which underpin regulatory adherence, managers have an opportunity to develop robust operational architectures and so are better positioned to face the challenges arising from shifting regulatory landscapes.

Relevance:

10.00%

Publisher:

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
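
The final step, interpolating between benchmarked results to predict an unmeasured problem size for a given deployment scenario, might look like the following. The scenario names and timings are illustrative placeholders, not XE6 measurements.

import numpy as np

measured_sizes = np.array([128, 256, 512, 1024])   # grid edge length
measured_times = {                                 # seconds per timestep
    "fully_populated": np.array([0.8e-3, 3.1e-3, 13.0e-3, 55.0e-3]),
    "under_populated": np.array([0.6e-3, 2.3e-3, 9.5e-3, 39.0e-3]),
}

def predict(scenario, size):
    """Interpolate the benchmarked times for one deployment scenario."""
    return np.interp(size, measured_sizes, measured_times[scenario])

print(predict("under_populated", 768))  # a size not benchmarked directly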

Relevance:

10.00%

Publisher:

Abstract:

A virtual system that emulates an ARM-based processor machine has been created to replace a traditional hardware-based system for teaching assembly language. The proposed virtual system integrates, in a single environment, all the development tools necessary to deliver introductory or advanced courses on modern assembly language programming. The virtual system runs a Linux operating system, in either graphical or console mode, on a Windows or Linux host machine. No software licenses or extra hardware are required to use the virtual system, so students are free to carry their own ARM emulator with them on a USB memory stick. Institutions adopting this, or a similar virtual system, can also benefit by reducing capital investment in hardware-based development kits and by enabling distance-learning courses.
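
A first exercise in such an environment is typically to assemble, link and run a trivial program with the guest's native GNU toolchain. The sketch below drives that workflow from Python; it assumes an ARM Linux guest with binutils installed, and the file names are arbitrary, so the paper's actual environment may differ.

import subprocess

SRC = "exit42.s"
with open(SRC, "w") as f:
    f.write(
        ".global _start\n"
        "_start:\n"
        "    mov r0, #42   @ exit status\n"
        "    mov r7, #1    @ exit syscall number on ARM EABI Linux\n"
        "    svc #0\n"
    )

subprocess.run(["as", "-o", "exit42.o", SRC], check=True)  # GNU assembler
subprocess.run(["ld", "-o", "exit42", "exit42.o"], check=True)
print(subprocess.run(["./exit42"]).returncode)  # prints 42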

Relevance:

10.00%

Publisher:

Abstract:

The genome of the soil-dwelling heterotrophic N2-fixing Gram-negative bacterium Azotobacter chroococcum NCIMB 8003 (ATCC 4412) (Ac-8003) has been determined. It consists of 7 circular replicons totalling 5,192,291 bp: a circular chromosome of 4,591,803 bp and six plasmids pAcX50a, b, c, d, e, f of 10,435, 13,852, 62,783, 69,713, 132,724, and 311,724 bp, respectively. The chromosome has a G+C content of 66.27% and the six plasmids have G+C contents of 58.1, 55.3, 56.7, 59.2, 61.9, and 62.6%, respectively. The methylome has also been determined and 5 methylation motifs have been identified. The genome also contains a very high number of transposase/inactivated-transposase genes from at least 12 of the 17 recognised insertion sequence families. The Ac-8003 genome has been compared with that of Azotobacter vinelandii ATCC BAA-1303 (Av-DJ), a derivative of strain O and the only other member of the Azotobacteraceae determined so far, which has a single chromosome of 5,365,318 bp and no plasmids. The chromosomes show significant stretches of synteny throughout, but also reveal a history of many deletion/insertion events. The Ac-8003 genome encodes 4628 predicted protein-encoding genes, of which 568 (12.2%) are plasmid-borne. Of these genes, 3048 (65%) show >85% identity to the 5050 protein-encoding genes identified in Av-DJ, and 99 of these are plasmid-borne. The core biosynthetic and metabolic pathways and the macromolecular architectures and machineries of these organisms appear largely conserved, including genes for CO-dehydrogenase, formate dehydrogenase and a soluble NiFe-hydrogenase. The genetic bases for many of the detailed phenotypic differences reported for these organisms have also been identified, and many other potential phenotypic differences have been uncovered. Properties endowed by the plasmids are described, including an entire aerobic corrin synthesis pathway in pAcX50f and genes for retro-conjugation in pAcX50c. All these findings are related to the potentially different environmental niches from which these organisms were isolated and to emerging theories about how microbes contribute to their communities.

Relevance:

10.00%

Publisher:

Abstract:

A large body of psycholinguistic research has revealed that during sentence interpretation adults coordinate multiple sources of information. In particular, they draw both on linguistic properties of the message and on information from the context to constrain their interpretations. Relatively little, however, is known about how this integrative processor develops during language acquisition and about how children process language. In this study, two on-line picture verification tasks were used to examine how 1st, 2nd and 4th/5th grade monolingual Greek children resolve pronoun ambiguities during sentence interpretation, and how their performance compares to that of adults on the same tasks. Specifically, we manipulated the type of subject pronoun, i.e. null or overt, and examined how this affected participants’ preferences for competing antecedents in the subject or object position. The results revealed both similarities and differences in how adults and the various child groups comprehended ambiguous pronominal forms. In particular, although adults and children alike showed sensitivity to the distribution of overt and null subject pronouns, this did not always lead to convergent interpretation preferences.

Relevance:

10.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) have been an exciting topic in recent years. The services offered by a WSN can be classified into three major categories: monitoring, alerting, and information on demand. WSNs have been used for a variety of applications related to the environment (agriculture, water, and forest fire detection), the military, buildings, health (elderly people and home monitoring), disaster relief, and area or industrial monitoring. In most WSNs, tasks like processing the sensed data, making decisions and generating emergency messages are carried out by a remote server, hence the need for efficient means of transferring data across the network. Because of the range of applications and types of WSN, different kinds of MAC and routing protocols are needed in order to guarantee delivery of data from the source nodes to the server (or sink). In order to minimize energy consumption and increase performance in areas such as reliability of data delivery, extensive research has been conducted and documented in the literature on designing energy-efficient protocols for each individual layer. The most common way to conserve energy in WSNs involves using the MAC layer to put the transceiver and the processor of the sensor node into a low-power sleep state when they are not being used, reducing the energy wasted on collisions, overhearing and idle listening. As a result of this strategy for saving energy, routing protocols need new solutions that take into account the sleep state of some nodes, and that also extend the lifetime of the entire network by distributing energy usage between nodes over time. This means that a combined MAC and routing protocol could significantly improve WSNs, because the interaction between the MAC and network layers lets nodes be active at the same time in order to deal with data transmission. In the research presented in this thesis, a cross-layer protocol based on MAC and routing protocols was designed to improve the capability of WSNs across a range of different applications. Simulation results, based on a range of realistic scenarios, show that these new protocols improve WSNs by reducing their energy consumption as well as enabling them to support mobile nodes where necessary. A number of conference and journal papers have been published to disseminate these results for a range of applications.
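
The scale of the saving from MAC-layer duty cycling can be illustrated with a back-of-the-envelope energy model. The current draws below are typical transceiver-datasheet ballpark figures, not values from the thesis.

def node_energy_joules(hours, duty_cycle,
                       i_active_ma=20.0, i_sleep_ma=0.02, volts=3.0):
    """Average radio energy for a node at the given listen duty cycle."""
    i_avg_ma = duty_cycle * i_active_ma + (1 - duty_cycle) * i_sleep_ma
    return volts * (i_avg_ma / 1000.0) * hours * 3600

print(node_energy_joules(24, 1.0))   # always-on idle listening: ~5184 J/day
print(node_energy_joules(24, 0.01))  # 1% duty cycle: ~57 J/day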

Relevance:

10.00%

Publisher:

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. These images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to determine quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
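
The core of the cross-entropy method is easy to sketch: draw candidate parameter vectors from a sampling distribution, score each by the summed squared difference between model and observed images, refit the distribution to the elite samples, and iterate. In the toy version below each source is a circular Gaussian with four parameters rather than the paper's six-parameter elliptical Gaussian, and all sizes and counts are illustrative.

import numpy as np

rng = np.random.default_rng(0)
Y, X = np.mgrid[0:64, 0:64]

def model_image(params):  # params: (n_sources, 4) rows of (x0, y0, peak, width)
    img = np.zeros(X.shape)
    for x0, y0, peak, w in params:
        img += peak * np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * w**2 + 1e-9))
    return img

observed = model_image(np.array([[20, 30, 1.0, 3.0], [45, 25, 0.6, 2.0]]))

mu = np.array([32.0, 32.0, 0.5, 3.0] * 2)      # initial guess for 2 sources
sigma = np.array([15.0, 15.0, 0.5, 2.0] * 2)
for _ in range(60):
    samples = rng.normal(mu, sigma, size=(200, mu.size))
    scores = [np.sum((model_image(s.reshape(2, 4)) - observed)**2)
              for s in samples]
    elite = samples[np.argsort(scores)[:20]]   # keep the best 10%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print(mu.reshape(2, 4).round(2))  # recovered source parameters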

Relevance:

10.00%

Publisher:

Abstract:

Reusable and evolvable Software Engineering Environments (SEEs) are essential to software production and are increasingly needed. From another perspective, software architectures and reference architectures have played a significant role in determining the success of software systems. In this paper we present a reference architecture for SEEs, named RefASSET, which is based on concepts from the aspect-oriented approach. This architecture is specialized to the software testing domain, and the development of tools for that domain is discussed. This and other case studies indicate that the use of aspects in RefASSET provides a better separation of concerns, resulting in reusable and evolvable SEEs.
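
The aspect-oriented idea the architecture relies on can be illustrated in a few lines: a crosscutting concern is defined once and woven onto tool operations instead of being scattered through them. The decorator below is a generic Python stand-in, not RefASSET's mechanism or notation.

import functools

def logging_aspect(func):
    """A crosscutting logging concern, defined once and woven where needed."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("[aspect] entering %s" % func.__name__)
        result = func(*args, **kwargs)
        print("[aspect] leaving %s" % func.__name__)
        return result
    return wrapper

@logging_aspect
def run_test_suite(name):  # a testing-tool operation, free of logging code
    return "suite %s passed" % name

print(run_test_suite("unit"))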

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform (SIFT) algorithm, applied to the Simultaneous Localization And Mapping (SLAM) problem. The work also proposes specific hardware optimizations considered fundamental to embedding such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-orientated optimizations on performance, area and accuracy.
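
For reference, the detection stage that such hardware accelerates, finding local extrema in a difference-of-Gaussians image, can be sketched on a PC as below. This is a heavily simplified single-octave, single-scale version of SIFT detection (no descriptor, no sub-pixel refinement), with illustrative parameter values.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.56), threshold=0.03):
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dog = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    d = dog[0]  # test extrema at a single scale, for brevity
    keypoints = []
    for y in range(1, d.shape[0] - 1):
        for x in range(1, d.shape[1] - 1):
            patch = d[y-1:y+2, x-1:x+2]
            v = d[y, x]
            if abs(v) > threshold and (v == patch.max() or v == patch.min()):
                keypoints.append((x, y))
    return keypoints

frame = np.random.rand(240, 320)  # stand-in for a 320x240 CMOS frame
print(len(dog_keypoints(frame)))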

Relevance:

10.00%

Publisher:

Abstract:

In 2006 the Route load balancing algorithm was proposed and compared to other techniques aimed at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where the neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, as well as computer capacities and loads, to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify task occupation. Such information is then used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
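
A toy version of the GA layer makes the idea concrete: candidate task-to-node mappings are evolved against a fitness function that penalizes overloading the weakest node. The population size, operators, capacities and fitness below are illustrative stand-ins, not RouteGA's actual parameterization.

import random

random.seed(1)
N_TASKS = 12
CAPACITY = [4.0, 2.0, 1.0, 1.0]  # relative capacity of each node

def fitness(mapping):
    load = [0.0] * len(CAPACITY)
    for node in mapping:
        load[node] += 1.0 / CAPACITY[node]  # slower nodes pay more per task
    return -max(load)                       # minimize the busiest node's load

pop = [[random.randrange(len(CAPACITY)) for _ in range(N_TASKS)]
       for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(30):
        a, b = random.sample(parents, 2)
        cut = random.randrange(N_TASKS)
        child = a[:cut] + b[cut:]            # one-point crossover
        if random.random() < 0.2:            # point mutation
            child[random.randrange(N_TASKS)] = random.randrange(len(CAPACITY))
        children.append(child)
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print(pop[0], fitness(pop[0]))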

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in colors and textures. The method allows aggregate features to be extracted from images and automatically classified based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented, and two classification approaches, based on neural network architectures, are proposed. The classification performance of the proposed approaches is compared to the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
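
The entropy feature itself is simple to state: the Shannon entropy of an image patch's grey-level histogram, low for uniform surfaces and high for strongly textured ones. A minimal sketch follows; the bin count and patch handling are assumptions, not the paper's exact procedure.

import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (bits) of a grey-level patch histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

smooth = np.full((32, 32), 128)               # uniform patch: entropy 0
noisy = np.random.randint(0, 256, (32, 32))   # textured patch: high entropy
print(patch_entropy(smooth), patch_entropy(noisy))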