994 results for hybrid computing roles


Relevance: 30.00%

Abstract:

Protein–protein interactions play crucial roles in the execution of various biological functions. Accordingly, their comprehensive description would contribute considerably to the functional interpretation of fully sequenced genomes, which are flooded with novel genes of unpredictable functions. We previously developed a system to examine two-hybrid interactions in all possible combinations between the ≈6,000 proteins of the budding yeast Saccharomyces cerevisiae. Here we have completed the comprehensive analysis using this system to identify 4,549 two-hybrid interactions among 3,278 proteins. Unexpectedly, these data do not largely overlap with those obtained by the other project [Uetz, P., et al. (2000) Nature (London) 403, 623–627] and hence have substantially expanded our knowledge of the protein interaction space, or interactome, of the yeast. Cumulative connection of these binary interactions generates a single huge network linking the vast majority of the proteins. Bioinformatics-aided selection of biologically relevant interactions highlights various intriguing subnetworks. They include, for instance, the one that had successfully foreseen the involvement of a novel protein in spindle pole body function, as well as the one that may uncover a hitherto unidentified multiprotein complex potentially participating in the process of vesicular transport. Our data would thus significantly expand and improve the protein interaction map for the exploration of genome functions, eventually leading to a thorough understanding of the cell as a molecular system.
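As an illustration of the network-assembly step this abstract describes, the sketch below (not from the study) merges pairwise two-hybrid interactions into an undirected graph and extracts the largest connected component; the ORF identifiers are illustrative placeholders, not data from the screen.

```python
# Toy sketch: cumulate binary interactions into a graph and find the
# largest connected component. The pairs are placeholder ORF names.
from collections import defaultdict

def largest_component(interactions):
    graph = defaultdict(set)
    for a, b in interactions:
        graph[a].add(b)
        graph[b].add(a)
    seen, best = set(), set()
    for start in graph:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(graph[node] - component)
        seen |= component
        best = max(best, component, key=len)
    return best

pairs = [("YPR119W", "YAL040C"), ("YAL040C", "YMR199W"),
         ("YBR160W", "YPR119W"), ("YGR108W", "YDL155W")]
print(largest_component(pairs))
```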

Relevance: 30.00%

Abstract:

A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented, along with an implementation of the algorithm on multi-core platforms using OpenMP. Performance is evaluated in terms of execution time, comparing the multi-core and GPU implementations as well as the combination of both. A performance analysis with large images is conducted in order to identify the number of pixels to allocate to the CPU and the GPU. The observed times show that both devices should share the workload, with the bulk of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-cores are highly advisable, and they open the door to using such algorithms for real-time processing.
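As a back-of-the-envelope illustration of the CPU/GPU workload split discussed above (the throughput figures and function name are assumptions, not the paper's measurements): if each device's denoising throughput is known, balancing the finish times gives the share of pixels to send to the GPU.

```python
def gpu_share(cpu_throughput, gpu_throughput):
    """Fraction of pixels for the GPU so both devices finish together."""
    return gpu_throughput / (cpu_throughput + gpu_throughput)

cpu_pps, gpu_pps = 40e6, 360e6          # hypothetical pixels/second rates
share = gpu_share(cpu_pps, gpu_pps)     # -> 0.9: most work goes to the GPU
total_pixels = 4096 * 4096
print(f"GPU: {share:.0%} -> {int(total_pixels * share):,} pixels")
```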

Relevance: 30.00%

Abstract:

In this paper, parallel Relaxed and Extrapolated algorithms based on the Power method for accelerating the PageRank computation are presented. Different parallel implementations of the Power method and the proposed variants are analyzed using different data distribution strategies. The reported experiments show the behavior and effectiveness of the designed algorithms for realistic test data using OpenMP, MPI, or a hybrid OpenMP/MPI approach to exploit the benefits of shared memory inside the nodes of current SMP supercomputers.
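For reference, a minimal serial sketch of the underlying iteration; the relaxation weight omega is an assumed stand-in for the relaxed variants, and the paper's contribution lies in parallelizing the matrix-vector product with OpenMP/MPI.

```python
import numpy as np

def pagerank_power(A, d=0.85, omega=1.0, tol=1e-10, max_iter=1000):
    """Power method for PageRank; A is column-stochastic (n x n).

    omega = 1.0 is the plain power method; omega != 1 blends the new
    iterate with the previous one, as in relaxed variants.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    teleport = np.full(n, (1.0 - d) / n)
    for _ in range(max_iter):
        x_new = d * (A @ x) + teleport
        x_new = (1.0 - omega) * x + omega * x_new   # relaxation step
        x_new /= x_new.sum()                        # keep a distribution
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# Tiny three-page example with a column-stochastic link matrix.
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
print(pagerank_power(A))
```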

Relevance: 30.00%

Abstract:

In this paper we use recent census data, supplemented with case study evidence, to investigate the extent to which professional computing occupations in Australia are constructed around the notion of an ‘ideal’ worker. Census data are used to compare computer professionals with other selected professional occupational groups, illustrating different models of accommodating (or not accommodating) workers who do not fit the ideal model. The computer professionals group is shown to be distinctive in combining low but consistent levels of female representation across age groups, average rates of parenthood, and minimal provisions for working-time flexibility. One strategy employed by women in this environment is the selection of relatively routine technical roles over more time-intensive, consultancy-based work.

Relevance: 30.00%

Abstract:

Wireless Mesh Networks (WMNs), based on commodity hardware, present a promising technology for a wide range of applications due to their self-configuring and self-healing capabilities, as well as their low equipment and deployment costs. One of the key challenges that WMN technology faces is limited capacity and scalability due to co-channel interference, which is typical for multi-hop wireless networks. A simple and relatively low-cost approach to address this problem is the use of multiple wireless network interfaces (radios) per node. Operating the radios on distinct orthogonal channels permits effective use of the frequency spectrum, thereby reducing interference and contention. In this paper, we evaluate the performance of the multi-radio Ad-hoc On-demand Distance Vector (AODV) routing protocol with a specific focus on hybrid WMNs. Our simulation results show that under high mobility and traffic load conditions, multi-radio AODV offers superior performance compared to its single-radio counterpart. We believe that multi-radio AODV is a promising candidate for WMNs that need to service a large number of mobile clients with low latency and high bandwidth requirements.
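To make the orthogonal-channel idea concrete, here is a toy sketch (not from the paper) that greedily assigns each node's radios the channels least used by its neighbors; the topology, channel set, and function names are illustrative assumptions.

```python
from collections import Counter

# Hypothetical greedy assignment of orthogonal channels (e.g. 802.11
# channels 1, 6, 11) to each node's radios, avoiding the channels most
# used by neighboring nodes. The topology below is made up.
ORTHOGONAL_CHANNELS = [1, 6, 11]

def assign_channels(neighbors, num_radios=2):
    """neighbors: dict mapping node -> set of neighboring nodes."""
    assignment = {}
    for node in neighbors:
        usage = Counter()
        for nb in neighbors[node]:
            usage.update(assignment.get(nb, []))
        ranked = sorted(ORTHOGONAL_CHANNELS, key=lambda c: usage[c])
        assignment[node] = ranked[:num_radios]   # least-used channels first
    return assignment

topology = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(assign_channels(topology))
```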

Relevance: 30.00%

Abstract:

This thesis presents the formal definition of a novel Mobile Cloud Computing (MCC) extension of the Networked Autonomic Machine (NAM) framework, a general-purpose conceptual tool which describes large-scale distributed autonomic systems. The introduction of autonomic policies in the MCC paradigm has proved to be an effective technique for increasing the robustness and flexibility of MCC systems. In particular, autonomic policies based on continuous resource and connectivity monitoring help automate context-aware decisions for computation offloading. We have also provided NAM with a formalization in terms of a transformational operational semantics, in order to fill the gap between its existing Java implementation, NAM4J, and its conceptual definition.

Moreover, we have extended NAM4J by adding several components for managing large-scale autonomic distributed environments. In particular, the middleware allows for the implementation of peer-to-peer (P2P) networks of NAM nodes, and NAM mobility actions have been implemented to enable the migration of code, execution state and data. Within NAM4J, we have designed and developed a component, denoted as context bus, which is particularly useful in collaborative applications: if replicated on each peer, it instantiates a virtual shared channel allowing nodes to notify and be notified about context events. Regarding the management of autonomic policies, we have provided NAM4J with a rule engine, whose purpose is to allow a system to autonomously determine when offloading is convenient. We have also provided NAM4J with trust and reputation management mechanisms to make the middleware suitable for applications in which such aspects are of great interest. To this purpose, we have designed and implemented a distributed framework, denoted as DARTSense, where no central server is required, as reputation values are stored and updated by participants in a subjective fashion.

We have also investigated the literature on MCC systems. The analysis pointed out that all MCC models focus on mobile devices and consider the Cloud as a system with unlimited resources. To contribute to filling this gap, we defined a modeling and simulation framework for the design and analysis of MCC systems that encompasses both sides, and we implemented a modular and reusable simulator of the model.

We have applied the NAM principles to two different application scenarios. First, we have defined a hybrid P2P/cloud approach where components and protocols are autonomically configured according to specific target goals, such as cost-effectiveness, reliability and availability. Merging the P2P and cloud paradigms brings together the advantages of both: high availability, provided by the Cloud presence, and low cost, obtained by exploiting inexpensive peer resources. As an example, we have shown how the proposed approach can be used to design NAM-based collaborative storage systems built on an autonomic policy that decides how to distribute data chunks among peers and the Cloud, according to cost minimization and data availability goals. As a second application, we have defined an autonomic architecture for decentralized urban participatory sensing (UPS) which bridges sensor networks and mobile systems to improve effectiveness and efficiency. The developed application allows users to retrieve and publish different types of sensed information by using the features provided by NAM4J's context bus. Trust and reputation are managed through the application of the DARTSense mechanisms, and the application includes an autonomic policy that detects areas with few contributors and tries to recruit new providers by migrating the code necessary for sensing through NAM mobility actions.
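A minimal sketch of the kind of offloading rule the abstract attributes to the NAM4J rule engine, assuming a simple cost model based on monitored CPU speeds, bandwidth, and link stability; all names and the cost model itself are illustrative assumptions, not the thesis' actual policy.

```python
from dataclasses import dataclass

@dataclass
class Context:
    local_mips: float      # monitored local CPU speed
    remote_mips: float     # offload target speed
    bandwidth_mbps: float  # current link bandwidth
    link_stable: bool      # connectivity monitor output

def should_offload(task_minstr, payload_mb, ctx):
    """Offload when estimated remote time (incl. transfer) beats local."""
    if not ctx.link_stable:
        return False
    local_time = task_minstr / ctx.local_mips
    transfer_time = 8.0 * payload_mb / ctx.bandwidth_mbps
    remote_time = task_minstr / ctx.remote_mips + transfer_time
    return remote_time < local_time

ctx = Context(local_mips=2_000, remote_mips=20_000,
              bandwidth_mbps=50, link_stable=True)
print(should_offload(task_minstr=50_000, payload_mb=10, ctx=ctx))  # True
```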

Relevance: 30.00%

Abstract:

An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut–Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut–Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
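For reference, a minimal NumPy sketch of the classical Blahut–Arimoto baseline mentioned above (not the paper's new method), for a discrete memoryless channel:

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """P[x, y] = p(y|x), rows sum to 1. Returns (capacity in bits, p(x))."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                         # input distribution
    for _ in range(max_iter):
        q = r[:, None] * P                          # joint p(x, y)
        q /= q.sum(axis=0, keepdims=True) + 1e-300  # posterior p(x|y)
        r_new = np.exp(np.sum(P * np.log(q + 1e-300), axis=1))
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    joint = r[:, None] * P
    py = joint.sum(axis=0)
    ratio = np.where(P > 0, P / py, 1.0)            # avoid log(0)
    capacity = np.sum(joint * np.log2(ratio))
    return capacity, r

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531
P = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(P)[0])
```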

Relevance: 30.00%

Abstract:

The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer by taking advantage of what the Cloud offers, leaving behind expensive datacenter management and difficult grid development. Now at an advanced stage of maturity, today's Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to economies of scale, and customizable services on demand have attracted increased attention from other markets. HPC, despite being a very well-established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large grids. The main problems with this common placement are the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions that allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates both public and private Cloud resources in a simplified way, as well as HPC application scheduling, deployment and management. It uses a well-defined user role strategy based on federated authentication, and a seamless procedure for daily usage that balances low cost and performance.

Relevance: 30.00%

Abstract:

The high-performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated an important amount of work, and consequently, it is possible to find high-performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high-performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal reports important savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
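The abstract does not spell the algorithm out, so here is a hedged, textbook-style serial sketch of dense Gauss-Huard elimination (no pivoting, plain NumPy); the paper's solvers are optimized CPU-GPU implementations of this scheme for the Jetson TK1.

```python
import numpy as np

def gauss_huard_solve(A, b):
    """Solve A x = b by Gauss-Huard elimination on the augmented matrix.

    Row k is first reduced against rows 0..k-1, then normalized, and
    column k is annihilated above the diagonal, so no back substitution
    is needed at the end. Pivoting is omitted for brevity.
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(-1, 1)])
    for k in range(n):
        for i in range(k):                 # eliminate left of diagonal
            M[k, i:] -= M[k, i] * M[i, i:]
        M[k, k:] /= M[k, k]                # normalize row k
        for i in range(k):                 # annihilate column k above
            M[i, k:] -= M[i, k] * M[k, k:]
    return M[:, -1]

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([9.0, 13.0])
print(gauss_huard_solve(A, b))   # -> [1.4, 3.4]
```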

Relevance: 30.00%

Abstract:

We revisit the visibility problem, traditionally known in the Computer Graphics and Vision fields as the process of computing a (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a dry structure (in the sense of data reduction), a triangulation of the type J_1^a, to accelerate the task of searching for visible primitives. The resulting solution is useful for real-time, online, interactive applications such as 3D visualization, where the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on view-frustum, back-face and occlusion models. Results have shown substantial improvement over these traditional approaches when applied separately. This novel approach can be used in devices with no dedicated graphics processors or with low processing power, such as cell phones or embedded displays, or to visualize data over the Internet, as in virtual museum applications.
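As a small illustration of one of the three culling tests combined above, here is a back-face test in NumPy; this is the standard formulation, not the paper's code, and the triangle data and winding convention are assumptions.

```python
import numpy as np

def is_back_facing(v0, v1, v2, eye):
    """A triangle can be skipped when its normal points away from the eye."""
    normal = np.cross(v1 - v0, v2 - v0)      # assumes CCW vertex winding
    return np.dot(normal, eye - v0) <= 0.0

tri = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0])]
print(is_back_facing(*tri, eye=np.array([0.0, 0.0, 5.0])))   # False: drawn
print(is_back_facing(*tri, eye=np.array([0.0, 0.0, -5.0])))  # True: culled
```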

Relevance: 30.00%

Abstract:

The increased longevity of humans and the demand for a better quality of life have led to a continuous search for new implant materials. Scientific development coupled with a growing multidisciplinarity between materials science and the life sciences has given rise to new approaches such as regenerative medicine and tissue engineering. The search for a material with mechanical properties close to those of human bone produced a new family of hybrid materials that take advantage of the synergy between inorganic silica (SiO4) domains, based on sol-gel bioactive glass compositions, and organic polydimethylsiloxane (PDMS, ((CH3)2SiO)n) domains. Several studies have shown that hybrid materials based on the PDMS-SiO2 system constitute a promising group of biomaterials with several potential applications, from bone tissue regeneration to brain tissue recovery, as well as bioactive coatings and drug delivery systems.

The objective of the present work was to prepare hybrid materials for biomedical applications based on the PDMS-SiO2 system and to achieve a better understanding of the relationship among the sol-gel processing conditions, the chemical structures, the microstructure and the macroscopic properties. To that end, different characterization techniques were used: Fourier transform infrared spectrometry, liquid- and solid-state nuclear magnetic resonance, X-ray diffraction, small-angle X-ray scattering, small-angle neutron scattering, surface area analysis by the Brunauer-Emmett-Teller method, scanning electron microscopy and transmission electron microscopy. Surface roughness and wettability were analyzed by 3D optical profilometry and by contact angle measurements, respectively. Bioactivity was evaluated in vitro by immersion of the materials in Kokubo's simulated body fluid and posterior surface analysis by different techniques, as well as supernatant liquid analysis by inductively coupled plasma spectroscopy. Biocompatibility was assessed using MG63 osteoblastic cells.

PDMS-SiO2-CaO materials were first prepared using nitrate as a calcium source. To avoid the presence of nitrate residues in the final product, due to their potential toxicity, a heat-treatment step (above 400 °C) is required. In order to enhance the thermal stability of the materials subjected to high temperatures, titanium was added to the hybrid system, and a material containing calcium, with no traces of nitrate and with the preservation of a significant amount of methyl groups, was successfully obtained. The difficulty in eliminating all nitrates from bulk PDMS-SiO2-CaO samples obtained by sol-gel synthesis and subsequent heat treatment created a new goal: the search for alternative sources of calcium. New calcium sources were evaluated in order to substitute the nitrate, and calcium acetate was chosen due to its good solubility in water. Sol-gel preparation protocols were tested and homogeneous monolithic samples were obtained.

Besides their ability to improve bioactivity, titanium and zirconium influence the structural and microstructural features of the SiO2-TiO2 and SiO2-ZrO2 binary systems, and also of the PDMS-TiO2 and PDMS-ZrO2 systems. Detailed studies under different sol-gel conditions allowed an understanding of the roles of titanium and zirconium as additives in the PDMS-SiO2 system. It was concluded that titanium and zirconium influence the kinetics of the sol-gel process due to their different alkoxide reactivities, leading to hybrid xerogels with dissimilar characteristics and morphologies. Titanium isopropoxide, less reactive than zirconium propoxide, was chosen as the source of titanium, used as an additive to the PDMS-SiO2-CaO system. Two different sol-gel preparation routes were followed, using the same base composition and calcium acetate as the calcium source. Different microstructures with high hydrophobicity were obtained, and both proved to be biocompatible when tested with MG63 osteoblastic cells. Finally, the role of strontium (typically known in bioglasses to promote bone formation and reduce bone resorption) was studied in the PDMS-SiO2-CaO-TiO2 hybrid system. A biocompatible material, tested with MG63 osteoblastic cells, was obtained with the ability to release strontium within the range of values reported as suitable for bone tissue regeneration.

Relevance: 30.00%

Abstract:

This paper is concerned with the hybridization of two graph coloring heuristics (Saturation Degree and Largest Degree) and their application within a hyper-heuristic for exam timetabling problems. Hyper-heuristics can be seen as algorithms which intelligently select appropriate algorithms/heuristics for solving a problem. We developed a Tabu Search based hyper-heuristic to search for heuristic lists (of graph heuristics) for solving problems and investigated the heuristic lists found by employing knowledge discovery techniques. Two hybrid approaches (involving Saturation Degree and Largest Degree), including one which employs Case Based Reasoning, are presented and discussed. Both the Tabu Search based hyper-heuristic and the hybrid approaches are tested on random and real-world exam timetabling problems. Experimental results are comparable with the best state-of-the-art approaches (as measured against established benchmark problems). The results also demonstrate an increased level of generality in our approach.
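A compact sketch of the Saturation Degree heuristic named above (a standard DSATUR-style formulation, not the paper's code), applied to timetabling: exams are nodes, edges join exams sharing students, and colors are timeslots; the conflict data below is illustrative.

```python
def saturation_degree_coloring(adjacency):
    colors = {}
    uncolored = set(adjacency)
    while uncolored:
        # Highest saturation first: most distinct timeslots among
        # already-scheduled neighbors; ties broken by degree.
        exam = max(uncolored, key=lambda v: (
            len({colors[u] for u in adjacency[v] if u in colors}),
            len(adjacency[v])))
        taken = {colors[u] for u in adjacency[exam] if u in colors}
        colors[exam] = next(c for c in range(len(adjacency)) if c not in taken)
        uncolored.remove(exam)
    return colors

conflicts = {"math": {"physics", "chem"}, "physics": {"math"},
             "chem": {"math", "bio"}, "bio": {"chem"}}
print(saturation_degree_coloring(conflicts))
```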

Relevance: 30.00%

Abstract:

Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The nondestructive FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties using FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as the use of numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered.

A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models providing fast approximations of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution of the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, the SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions; in general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results.

For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7% and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: the thickness data obtained from Ground Penetrating Radar testing matched reasonably well with the predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from the FWD tests. The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt flexible pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
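To make the ANN-surrogate-plus-GA idea concrete, here is a hedged toy sketch: a placeholder forward model stands in for the trained ANN surrogate of ILLI-PAVE, and a bare-bones GA searches for layer moduli whose predicted deflection basin matches the measurements. Everything here (cost model, parameters, names) is an illustrative assumption, not SOFTSYS itself.

```python
import numpy as np

def surrogate_deflections(moduli):
    """Placeholder for the ANN surrogate: stiffer layers, smaller basin."""
    sensors = np.arange(1, 7)
    return 100.0 / (moduli @ np.array([0.5, 0.3, 0.2])) * np.exp(-0.2 * sensors)

def backcalculate(measured, pop_size=60, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(50, 5000, size=(pop_size, 3))       # candidate moduli
    def errors(p):
        return np.array([np.linalg.norm(surrogate_deflections(x) - measured)
                         for x in p])
    for _ in range(generations):
        elite = pop[np.argsort(errors(pop))[: pop_size // 2]]   # selection
        kids = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        kids = kids * rng.normal(1.0, 0.05, kids.shape)         # mutation
        pop = np.vstack([elite, kids])
    return pop[np.argmin(errors(pop))]

true_moduli = np.array([3000.0, 400.0, 120.0])
measured = surrogate_deflections(true_moduli)
# With this toy surrogate many moduli combinations fit equally well,
# echoing the non-uniqueness discussed in the abstract.
print(backcalculate(measured))
```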

Relevance: 30.00%

Abstract:

Virtual screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site for the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, a relevant fact that many VS methods do not take into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface in order to find new hotspots where ligands might potentially interact, and which is implemented on last-generation massively parallel GPU hardware, allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods, and of BINDSURF in particular, is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. In order to improve the accuracy of the scoring functions used in BINDSURF, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) methods are trained with databases of known active (drugs) and inactive compounds, and this information is afterwards exploited to improve BINDSURF VS predictions.
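An illustrative scikit-learn sketch of the proposed hybrid rescoring idea: train an NNET and an SVM on actives/inactives and average their predicted activity probabilities. The descriptors, labels, and ensemble rule below are placeholder assumptions, not the BINDSURF pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # placeholder ligand descriptors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder active/inactive labels

nnet = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
svm = SVC(probability=True).fit(X, y)

def hybrid_score(features):
    """Average the two models' probabilities of activity."""
    p_nn = nnet.predict_proba(features)[:, 1]
    p_svm = svm.predict_proba(features)[:, 1]
    return (p_nn + p_svm) / 2.0

print(hybrid_score(X[:5]))
```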

Relevance: 30.00%

Abstract:

Since its identification in the 1990s, the RNA interference (RNAi) pathway has proven extremely useful in elucidating the function of proteins in the context of cells and even whole organisms. In particular, this sequence-specific and powerful loss-of-function approach has greatly simplified the study of the role of host cell factors implicated in the life cycle of viruses. Here, we detail the RNAi method we have developed and used to specifically knock down the expression of ezrin, an actin-binding protein that was identified by yeast two-hybrid screening to interact with the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) spike (S) protein. This method was used to study the role of ezrin, specifically during the entry stage of SARS-CoV infection.