872 results for High-Performance Computing


Relevance:

90.00%

Publisher:

Abstract:

Membrane computing has emerged as an alternative to traditional computing. Within this field lie the so-called Transition P Systems, which are based on the existence of regions ("membranes") that contain resources and evolution rules; the rules make those resources evolve so as to take each region to a new situation called a configuration, and the succession of configurations constitutes the computation. In this field, the Natural Computing Group of the Universidad Politécnica de Madrid carries out numerous lines of research that have produced many papers and several doctoral theses. The main research lines so far have been the study of the theoretical model on which Transition P Systems are defined, the study of the algorithms used to apply the evolution rules within the regions, the design of new architectures that improve communication among the different membranes (regions) that make up the system, and the implementation of these systems on hardware devices that could define future machines based on this model. Within this last field, that is, with the ultimate goal of building machines that carry out computation with P Systems, this doctoral thesis focuses on the design of two parallel processors that, by applying variants of existing algorithms, increase the level of intra-region parallelism in the rule application phase. The design and creation of both processors contribute novel elements to Transition P System research, in that they bring into hardware concepts that had previously been defined only theoretically. Both processors share the following characteristics: (i) they achieve high performance in the rule application phase, while keeping a moderate flexibility and scalability that depend on the target technology on which they are synthesized; (ii) they offer a high level of intra-region parallelism, since several rules can be applied simultaneously; (iii) they are universal, in that their operation does not depend on the particular rules that compose the P System; and (iv) they exhibit the non-deterministic behaviour inherent to the nature of these systems. The first processor uses the power set of the set of application rules together with the concept of maximal applicability to increase intra-parallelism; the second one additionally uses the concept of applicability domain to determine the set of rules that can be applied at each moment with the available resources. Both processors were designed and tested using Altera electronic design software and are ready to be synthesized on FPGAs.
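As an illustration of the rule application phase these processors accelerate, the following minimal Python sketch applies evolution rules in a single region non-deterministically until no rule remains applicable, releasing the products only once the step is complete. The multiset representation, the rule format and the random tie-breaking are assumptions made for illustration; they do not reproduce the hardware designs described in the thesis.

```python
import random
from collections import Counter

# Hypothetical software analogue of maximal parallel rule application in one
# region of a Transition P System: a rule consumes a multiset of objects and
# produces another multiset; rules are chosen non-deterministically until none
# is applicable with the remaining resources, then the products are added.

def applicable(rule, resources):
    """A rule is applicable if the region holds every object it consumes."""
    return all(resources[obj] >= n for obj, n in rule["consume"].items())

def apply_rules_maximally(resources, rules, rng=random.Random(0)):
    produced = Counter()
    while True:
        candidates = [r for r in rules if applicable(r, resources)]
        if not candidates:
            break
        rule = rng.choice(candidates)           # non-deterministic choice
        for obj, n in rule["consume"].items():  # consume the inputs
            resources[obj] -= n
        produced.update(rule["produce"])        # outputs appear only afterwards
    resources.update(produced)                  # add products once no rule applies
    return resources

region = Counter({"a": 3, "b": 2})
rules = [
    {"consume": {"a": 1, "b": 1}, "produce": {"c": 1}},
    {"consume": {"a": 2}, "produce": {"b": 1}},
]
print(apply_rules_maximally(region, rules))
```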

Relevance:

90.00%

Publisher:

Abstract:

A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analysis, and saves money. However, there may also be concerns about the costs and complexities associated with e-health implementation, and a need to address the energy footprint of the demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the Cloud; (iv) tackles energy optimization as a first-class requirement, taking it into account during the whole development cycle. The novel computing concept and the multi-layer top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.

Relevance:

90.00%

Publisher:

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems: e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing such features in real time for many points in the scene is infeasible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems where such features are typically used. Using the GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained by using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views, which allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
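To give a flavour of the per-keypoint work that benefits from GPU acceleration, the sketch below computes a simplified semi-local surface-patch descriptor in plain NumPy: neighbours of a keypoint are projected into a local reference frame and their heights are averaged on a small grid. This is an illustrative simplification, not the descriptor or the GPU implementation used in the work.

```python
import numpy as np

# Hypothetical, simplified surface-patch descriptor around one keypoint.
# A real system would run this kind of computation on the GPU for many
# keypoints of the scene in parallel.

def patch_descriptor(points, keypoint, normal, radius=0.05, bins=8):
    """Describe the patch around `keypoint` as a bins x bins mean-height map."""
    n = normal / np.linalg.norm(normal)
    t1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-6:            # normal parallel to the x-axis
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)                     # local frame: (t1, t2, n)

    rel = points - keypoint                  # neighbours relative to keypoint
    u, v, h = rel @ t1, rel @ t2, rel @ n    # project into the local frame
    mask = (np.abs(u) < radius) & (np.abs(v) < radius)

    rng_box = [[-radius, radius], [-radius, radius]]
    sums, _, _ = np.histogram2d(u[mask], v[mask], bins=bins, range=rng_box,
                                weights=h[mask])
    counts, _, _ = np.histogram2d(u[mask], v[mask], bins=bins, range=rng_box)
    return (sums / np.maximum(counts, 1)).ravel()   # mean height per cell

cloud = np.random.rand(5000, 3)
d = patch_descriptor(cloud, cloud[0], np.array([0.0, 0.0, 1.0]))
print(d.shape)   # (64,)
```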

Relevance:

90.00%

Publisher:

Abstract:

In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve scene understanding for a video surveillance system. In the same way, the same features extracted at different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations in the n-dimensional space from which the feature vectors have been extracted. The main contribution is implicitly preserving the topology of the original space, because considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT): features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify trajectories extracted from a sequence of images at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and to obtain high classification performance for trajectories, in contrast with approaches that do not consider the location of features. Moreover, a brief example is included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
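One possible reading of the topology constraint is sketched below: neurons are pinned to a fixed grid over the original two-dimensional space and only their feature prototypes are learned, so spatially adjacent locations always map to adjacent neurons. The grid size, neighbourhood kernel and toy data are assumptions for illustration; this is not the nD-SOM-PINT training algorithm itself.

```python
import numpy as np

# Minimal sketch of a self-organizing map whose neurons are fixed to a grid
# over the input 2-D space, so the map topology matches the locations where
# feature vectors were extracted.

rng = np.random.default_rng(0)
GRID = 8                      # 8 x 8 neurons covering the unit square
FEAT = 16                     # dimensionality of the feature vectors

# Neuron positions are pinned to the input-space grid (topology constraint);
# only the associated feature prototypes are learned.
xs, ys = np.meshgrid(np.linspace(0, 1, GRID), np.linspace(0, 1, GRID))
positions = np.stack([xs.ravel(), ys.ravel()], axis=1)        # (64, 2)
prototypes = rng.normal(size=(GRID * GRID, FEAT))             # (64, FEAT)

def train_step(prototypes, location, feature, lr=0.1, sigma=0.15):
    """Pull prototypes of neurons near `location` towards `feature` (in place)."""
    d2 = np.sum((positions - location) ** 2, axis=1)
    neighbourhood = np.exp(-d2 / (2 * sigma ** 2))   # spatial kernel on the grid
    prototypes += lr * neighbourhood[:, None] * (feature - prototypes)

# Toy training loop: features sampled at random image locations.
for _ in range(1000):
    loc = rng.random(2)
    feat = rng.normal(loc.sum(), 0.1, size=FEAT)     # location-dependent features
    train_step(prototypes, loc, feat)

print(prototypes.shape)
```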

Relevance:

90.00%

Publisher:

Abstract:

In today's internet world, web browsers are an integral part of our day-to-day activities, so web browser security is a serious concern for all of us. Browsers can be breached in different ways, and because of their over-privileged access, extensions are responsible for many security issues. Browser vendors try to keep safe extensions in their official extension galleries; however, their security control measures are not always effective and adequate. The distribution of unsafe extensions through different social engineering techniques is also a very common practice. Therefore, before installation, users should thoroughly analyze the security of browser extensions. Extensions are not only available for desktop browsers: many mobile browsers, for example Firefox for Android and UC Browser for Android, also support extensions. Mobile devices have various resource constraints in terms of computational capability, power, network bandwidth, etc. Hence, conventional extension security analysis techniques cannot be used efficiently by end users to examine mobile browser extension security issues. To overcome the inadequacies of the existing approaches, we propose CLOUBEX, a CLOUd-based security analysis framework for both desktop and mobile Browser EXtensions. The framework uses a client-server architecture model in which compute-intensive security analysis tasks are generally executed on a high-speed computing server hosted in a cloud environment. CLOUBEX is also enriched with a number of essential features, such as client-side analysis, requirements-driven analysis, high performance, and dynamic decision making. At present, the Firefox extension ecosystem is most susceptible to different security attacks; hence, the framework is implemented for the security analysis of Firefox desktop and Firefox for Android mobile browser extensions. A static taint analysis is used to identify malicious information flows in the Firefox extensions. CLOUBEX offers three analysis modes, and a dynamic decision-making algorithm selects the best one based on parameters such as the processing speed of the client device and the network connection speed. Using the best analysis mode, performance and power consumption are improved significantly. In the future, this framework can be leveraged for the security analysis of other desktop and mobile browser extensions, too.
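The dynamic decision-making step could, for example, compare estimated analysis times across modes, as in the hypothetical sketch below. The cost model, parameter values and mode names are illustrative assumptions and not CLOUBEX's actual algorithm.

```python
# Hypothetical decision rule: pick where to run the extension analysis by
# estimating total time per mode from client speed, server speed and network
# bandwidth. Thresholds and the cost model are purely illustrative.

def estimate_cost(extension_size_mb, client_speed, server_speed, bandwidth_mbps):
    """Return a rough estimated analysis time (seconds) for each mode."""
    work_units = extension_size_mb * 50                  # assumed effort per MB
    upload = extension_size_mb * 8 / bandwidth_mbps      # transfer time
    return {
        "client": work_units / client_speed,             # analyse locally
        "cloud": upload + work_units / server_speed,     # ship to the server
        "hybrid": upload / 2 + work_units / (client_speed + server_speed),
    }

def choose_mode(extension_size_mb, client_speed, server_speed, bandwidth_mbps):
    costs = estimate_cost(extension_size_mb, client_speed,
                          server_speed, bandwidth_mbps)
    return min(costs, key=costs.get), costs              # cheapest mode wins

mode, costs = choose_mode(extension_size_mb=2.0, client_speed=5.0,
                          server_speed=50.0, bandwidth_mbps=10.0)
print(mode, costs)
```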

Relevance:

90.00%

Publisher:

Abstract:

Historic records of α-dicarbonyls (glyoxal, methylglyoxal), carboxylic acids (C6–C12 dicarboxylic acids, pinic acid, p-hydroxybenzoic acid, phthalic acid, 4-methylphthalic acid), and ions (oxalate, formate, calcium) were determined with annual resolution in an ice core from Grenzgletscher in the southern Swiss Alps, covering the time period from 1942 to 1993. Chemical analysis of the organic compounds was conducted using ultra-high-performance liquid chromatography (UHPLC) coupled to electrospray ionization high-resolution mass spectrometry (ESI-HRMS) for the dicarbonyls and long-chain carboxylic acids, and ion chromatography for the short-chain carboxylates. Long-term records of the carboxylic acids and dicarbonyls, as well as their source apportionment, are reported for western Europe. This is the first study comprising long-term trends of dicarbonyls and long-chain dicarboxylic acids (C6–C12) in Alpine precipitation. Source assignment of the organic species present in the ice core was performed using principal component analysis. Our results suggest biomass burning, anthropogenic emissions, and transport of mineral dust to be the main factors influencing the concentrations of the organic compounds. Ice core records of several highly correlated compounds (e.g., p-hydroxybenzoic acid, pinic acid, pimelic acid, and suberic acid) can be related to the forest fire history in southern Switzerland; p-hydroxybenzoic acid was found to be the best organic fire tracer in the study area, showing the highest correlation with the burned area from fires. Historical records of methylglyoxal, phthalic acid, and the dicarboxylic acids adipic acid, sebacic acid, and dodecanedioic acid are comparable with those of anthropogenic emissions of volatile organic compounds (VOCs). The small organic acids, oxalic acid and formic acid, are both highly correlated with calcium, suggesting that their records are affected by changing mineral dust transport to the drilling site.
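A minimal sketch of PCA-based source apportionment of this kind of data is shown below, using scikit-learn on a synthetic years-by-compounds concentration matrix. The data and the interpretation of components as sources are placeholders; the study's actual pre-treatment and analysis may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative PCA source apportionment on a synthetic ice-core matrix
# (years x compounds). Compounds loading together on a component would be
# interpreted as sharing a source (e.g. biomass burning, mineral dust).

rng = np.random.default_rng(1)
compounds = ["p-hydroxybenzoic", "pinic", "pimelic", "suberic",
             "phthalic", "adipic", "oxalate", "formate", "calcium"]
years = 52                                   # 1942-1993, annual resolution
X = rng.lognormal(mean=0.0, sigma=0.5, size=(years, len(compounds)))

Z = StandardScaler().fit_transform(X)        # standardise each compound record
pca = PCA(n_components=3).fit(Z)

for i, comp in enumerate(pca.components_):
    top = [compounds[j] for j in np.argsort(np.abs(comp))[::-1][:3]]
    print(f"PC{i+1}: {pca.explained_variance_ratio_[i]:.2f} of variance, "
          f"dominant loadings: {top}")
```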

Relevance:

90.00%

Publisher:

Abstract:

The influence of an organically modified clay on the curing behavior of three epoxy systems of different structures and functionalities, all widely used in the aerospace industry, was studied. Diglycidyl ether of bisphenol A (DGEBA), triglycidyl p-aminophenol (TGAP) and tetraglycidyl diaminodiphenylmethane (TGDDM) were mixed with an octadecyl ammonium ion-modified organoclay and cured with diethyltoluene diamine (DETDA). Dynamic mechanical thermal analysis (DMTA), chemorheology and differential scanning calorimetry (DSC) were applied to investigate gelation and vitrification behavior, as well as catalytic effects of the clay on resin cure. While the formation of layered silicate nanocomposites based on the bifunctional DGEBA resin has previously been investigated to some extent, this paper represents the first detailed study of the cure behavior of different high-performance epoxy nanocomposite systems.

Relevance:

90.00%

Publisher:

Abstract:

In recent years, many real-time applications have needed to handle data streams. We consider distributed environments in which remote data sources keep collecting data from the real world or from other data sources and continuously push the data to a central stream processor. In such environments, significant communication is induced by transmitting rapid, high-volume and time-varying data streams, and computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure consisting of two height-balanced trees, and it avoids transmitting duplicate items in data streams, thereby saving a large amount of network resources. In addition, a theoretical analysis of the time spent performing the search and of the amount of memory needed is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
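The idea of suppressing duplicates at the source within a time window can be sketched as follows; a hash map plus a heap stand in for the paper's two height-balanced trees, purely for illustration.

```python
import heapq

# Illustrative source-side filter for windowed distinct queries: an item is
# pushed to the central stream processor only if it has not already been sent
# within the current time window.

class DistinctWindowFilter:
    def __init__(self, window):
        self.window = window       # window length (same units as timestamps)
        self.last_sent = {}        # item -> timestamp of its last transmission
        self.expiry = []           # min-heap of (timestamp, item) for eviction

    def _evict(self, now):
        """Drop bookkeeping for items whose last transmission left the window."""
        while self.expiry and self.expiry[0][0] <= now - self.window:
            ts, item = heapq.heappop(self.expiry)
            if self.last_sent.get(item) == ts:
                del self.last_sent[item]

    def offer(self, item, now):
        """Return True if `item` should be transmitted, False if it is a duplicate."""
        self._evict(now)
        if item in self.last_sent:
            return False           # duplicate within the window: suppress it
        self.last_sent[item] = now
        heapq.heappush(self.expiry, (now, item))
        return True

f = DistinctWindowFilter(window=10)
stream = [(1, "a"), (2, "b"), (3, "a"), (12, "a"), (13, "b")]
sent = [(t, x) for t, x in stream if f.offer(x, t)]
print(sent)   # [(1, 'a'), (2, 'b'), (12, 'a'), (13, 'b')]
```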

Relevance:

90.00%

Publisher:

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer based on spin systems. The first employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system's time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both proposed setups are based on state-of-the-art technological achievements. Extensive numerical experiments show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are really promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
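As a minimal illustration of the master-equation simulations mentioned above, the sketch below integrates the Lindblad equation for a single driven spin qubit with pure dephasing using a simple Euler step. The parameter values, the single-qubit restriction and the integrator are illustrative assumptions, not the simulations performed in the work.

```python
import numpy as np

# Single spin qubit driven by a resonant pulse, with pure dephasing included
# as a Lindblad term; the density matrix is evolved with a simple Euler step.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2 * np.pi * 1.0      # Rabi frequency of the driving pulse (arb. units)
gamma = 0.05                 # pure-dephasing rate
H = 0.5 * omega * sx         # drive Hamiltonian in the rotating frame
L = np.sqrt(gamma) * sz      # dephasing jump operator

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation d(rho)/dt."""
    comm = -1j * (H @ rho - rho @ H)
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
dt, steps = 1e-3, 2000
populations = []
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)            # Euler integration step
    populations.append(rho[1, 1].real)            # probability of |1>

print(f"max |1> population: {max(populations):.3f}")  # below 1 due to dephasing
```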

Relevance:

90.00%

Publisher:

Abstract:

A low-cost interrogation scheme is demonstrated for a refractometer based on an in-line fiber long period grating (LPG) Mach–Zehnder interferometer. Using this interrogation scheme, a minimum detectable change in refractive index of Δn ≈ 1.8×10⁻⁶ is obtained, which is the highest resolution achieved using a fiber LPG device and is comparable to precision techniques used in industry, including high-performance liquid chromatography and ultraviolet spectroscopy.

Relevance:

90.00%

Publisher:

Abstract:

The thesis describes an investigation into methods for the design of flexible high-speed product processing machinery, consisting of independent electromechanically actuated machine functions which operate under software coordination and control. An analysis is made of the elements of traditionally designed cam-actuated, mechanically coupled machinery, so that the operational functions and principal performance limitations of the separate machine elements may be identified. These are then used to define the requirements for independent actuators machinery, with a discussion of how this type of design approach is more suited to modern manufacturing trends. A distributed machine controller topology is developed which is a hybrid of hierarchical and pipeline control. An analysis is made, with the aid of dynamic simulation modelling, which confirms the suitability of the controller for flexible machinery control. The simulations include complex models of multiple independent actuators systems, which enable product flow and failure analyses to be performed. An analysis is made of high performance brushless d.c. servomotors and their suitability for actuating machine motions is assessed. Procedures are developed for the selection of brushless servomotors for intermittent machine motions. An experimental rig is described which has enabled the actuation and control methods developed to be implemented. With reference to this, an evaluation is made of the suitability of the machine design method and a discussion is given of the developments which are necessary for operational independent actuators machinery to be attained.
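One standard check used when sizing a brushless servomotor for an intermittent motion is to compare the peak and RMS torque over the move cycle with the motor's peak and continuous ratings; a generic sketch of that calculation is given below. The motion profile, inertias and friction model are illustrative assumptions, not the selection procedure developed in the thesis.

```python
import numpy as np

# Generic peak/RMS torque check for an intermittent trapezoidal move plus dwell.
# Compare the results with a candidate motor's peak and continuous ratings.

J_load, J_motor = 4e-4, 1e-4      # inertias reflected to the motor shaft (kg m^2)
T_friction = 0.05                 # constant friction torque magnitude (N m)
accel, t_acc, t_const, t_dwell = 800.0, 0.05, 0.10, 0.25   # rad/s^2, s, s, s

dt = 1e-4
t = np.arange(0.0, 2 * t_acc + t_const + t_dwell, dt)

alpha = np.zeros_like(t)                          # angular acceleration profile
alpha[t < t_acc] = accel                          # accelerate
alpha[(t >= t_acc + t_const) & (t < 2 * t_acc + t_const)] = -accel  # decelerate

omega = np.cumsum(alpha) * dt                     # angular velocity (rad/s)
friction = T_friction * np.where(np.abs(omega) > 1e-6, np.sign(omega), 0.0)
torque = (J_load + J_motor) * alpha + friction

T_peak = np.max(np.abs(torque))
T_rms = np.sqrt(np.mean(torque ** 2))             # duty-cycle (RMS) torque

print(f"peak torque {T_peak:.2f} N m  (compare with motor peak rating)")
print(f"RMS torque  {T_rms:.2f} N m  (compare with continuous rating)")
```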

Relevance:

90.00%

Publisher:

Abstract:

The effect of an organically modified clay on the morphology, rheology and mechanical properties of high-density polyethylene (HDPE) and polyamide 6 (PA6) blends (HDPE/PA6 = 75/25 parts) is studied. Virgin and filled blends were prepared by melt compounding the constituents in a twin-screw extruder. The influence of the organoclay on the morphology of the hybrid was investigated in depth by means of wide-angle X-ray diffractometry, transmission and scanning electron microscopies and quantitative extraction experiments. It was found that the organoclay locates exclusively inside the more hydrophilic polyamide phase during melt compounding. The extrusion process promotes the formation of highly elongated and separated organoclay-rich PA6 domains. Despite its low volume fraction, the filled minor phase eventually merges once the extruded pellets are melted again, giving rise to a co-continuous microstructure. Remarkably, such a morphology persists for a long time in the melt state. A possible compatibilizing action of the organoclay was investigated by comparing the morphology of the hybrid blend with that of a blend compatibilized using an ethylene–acrylic acid (EAA) copolymer as a compatibilizer precursor. The former remains phase separated, indicating that the filler does not enhance the interfacial adhesion. The macroscopic properties of the hybrid blend were interpreted in the light of its morphology. The melt-state dynamics of the materials were probed by means of linear viscoelastic measurements. Many rheological features peculiar to polymer-layered silicate nanocomposites based on a single polymer matrix were also detected for the hybrid blend. The results have been interpreted by proposing the existence of two distinct populations of dynamical species: HDPE, which does not interact with the filler, and a slower species, constituted by the organoclay-rich polyamide phase, whose slackened dynamics stabilize the morphology in the melt state. In the solid state, both the reinforcing effect of the filler and the co-continuous microstructure promote the enhancement of the tensile modulus. Our results demonstrate that adding nanoparticles to polymer blends allows tailoring the final properties of the hybrid, potentially leading to high-performance materials which combine the advantages of polymer blends and the merits of polymer nanocomposites.

Relevance:

90.00%

Publisher:

Abstract:

Grounded in Vroom’s motivational framework of performance, we examine the interactive influence of collective human capital (ability) and aggregated service orientation (motivation) on the cross-level relationship between high-performance work systems (HPWS) and individual-level service quality. Results of hierarchical linear modeling (HLM) revealed that HPWS related to collective human capital and aggregated service orientation, which in turn related to individual-level service quality. Furthermore, both HLM and ordinary least squares regression analyses revealed a cross-level interaction effect of collective human capital and aggregated service orientation such that high levels of collective human capital and aggregated service orientation influence individual-level service quality.
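A cross-level interaction of this kind is typically tested with a mixed (hierarchical linear) model; the sketch below fits such a model with statsmodels on synthetic data. Variable names and the data-generating process are placeholders, not the study's measures or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Individual-level service quality predicted by unit-level human capital,
# unit-level service orientation and their product (the cross-level
# interaction), with a random intercept per unit.

rng = np.random.default_rng(0)
units, per_unit = 40, 25

human_capital = rng.normal(size=units)            # unit-level predictor (ability)
service_orient = rng.normal(size=units)           # unit-level predictor (motivation)
unit_effect = rng.normal(scale=0.5, size=units)   # random intercepts

rows = []
for u in range(units):
    for _ in range(per_unit):
        quality = (0.4 * human_capital[u] + 0.3 * service_orient[u]
                   + 0.5 * human_capital[u] * service_orient[u]   # interaction
                   + unit_effect[u] + rng.normal(scale=1.0))
        rows.append({"unit": u, "human_capital": human_capital[u],
                     "service_orient": service_orient[u], "quality": quality})
df = pd.DataFrame(rows)

model = smf.mixedlm("quality ~ human_capital * service_orient",
                    df, groups=df["unit"])
result = model.fit()
print(result.summary())    # the interaction term tests the cross-level moderation
```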

Relevance:

90.00%

Publisher:

Abstract:

This article proposes a frequency-agile antenna whose operating frequency band can be switched. The design is based on a Vivaldi antenna, and high-performance radio-frequency microelectromechanical system (RF-MEMS) switches are used to switch between the 2.7 GHz and 3.9 GHz bands. The low band extends from 2.33 GHz to 3.02 GHz and the high band from 3.29 GHz to 4.58 GHz. The average gains of the antenna in the low and high bands are 10.9 and 12.5 dBi, respectively. This high-gain frequency-reconfigurable antenna could replace several narrowband antennas, reducing cost and space while supporting multiple communication systems and maintaining good performance.

Relevance:

90.00%

Publisher:

Abstract:

A refractive index sensing system based on an in-line fibre long period grating Mach-Zehnder interferometer with a heterodyne interrogation technique has been demonstrated. This sensing system has accuracy comparable to laboratory-based techniques used in industry, such as high-performance liquid chromatography and UV spectroscopy. The advantage of this system is that measurements can be made in situ for applications in continuous process control. Compared to other refractive index sensing schemes using LPGs, this approach has two main advantages: firstly, the system relies on a simple optical interrogation system and therefore has real potential for being low cost; secondly, as far as we are aware, it provides the highest refractive index resolution reported for any fibre LPG device.