983 results for distributed work


Relevance:

30.00%

Publisher:

Abstract:

The Computational Biophysics Group at the Universitat Pompeu Fabra (GRIB-UPF) hosts two unique computational resources dedicated to the execution of large-scale molecular dynamics (MD) simulations: (a) the ACEMD molecular-dynamics software, used on standard personal computers with graphics processing units (GPUs); and (b) the GPUGRID.net computing network, supported by users distributed worldwide who volunteer GPUs for biomedical research. We leveraged these resources and developed studies, protocols and open-source software to elucidate the energetics and pathways of a number of biomolecular systems, with a special focus on flexible proteins with many degrees of freedom. First, we characterized ion permeation through the bactericidal model protein Gramicidin A, conducting one of the largest studies to date with the steered MD biasing methodology. Next, we addressed an open problem in structural biology, the determination of drug-protein association kinetics; we reconstructed the binding free energy and the association and dissociation rates of a drug-like model system through a spatial decomposition and a Markov-chain analysis. The work was published in the Proceedings of the National Academy of Sciences and became one of the few landmark papers elucidating a ligand-binding pathway. Furthermore, we investigated the unstructured Kinase Inducible Domain (KID), a 28-residue peptide central to signalling and transcriptional response; the kinetics of this challenging system were modelled with a Markovian approach in collaboration with Frank Noé's group at the Freie Universität Berlin. The impact of the funding includes three peer-reviewed publications in high-impact journals; three more papers under review; four MD analysis components released as open-source software; MD protocols; didactic material; and code for the hosting group.
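
The Markov-chain analysis mentioned above can be illustrated with a minimal sketch (not the published protocol): discretize the trajectories into a few states, count transitions at a fixed lag time, row-normalize to obtain a transition matrix, and read kinetic timescales off its eigenvalues. The state definitions, toy transition matrix and lag time below are illustrative assumptions.

```python
import numpy as np

def transition_matrix(dtrajs, n_states, lag):
    """Row-normalized transition-count matrix estimated at a fixed lag time."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:                       # each traj is a 1-D array of state indices
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                     # leave unvisited states as zero rows
    return counts / rows

# Toy 3-state system (0 = unbound, 1 = intermediate, 2 = bound) with assumed kinetics.
T_true = np.array([[0.95, 0.04, 0.01],
                   [0.05, 0.90, 0.05],
                   [0.01, 0.04, 0.95]])
rng = np.random.default_rng(0)

def sample_traj(T, length, start=0):
    states = [start]
    for _ in range(length - 1):
        states.append(rng.choice(len(T), p=T[states[-1]]))
    return np.array(states)

dtrajs = [sample_traj(T_true, 5000) for _ in range(3)]
T_est = transition_matrix(dtrajs, n_states=3, lag=1)
eigvals = np.sort(np.linalg.eigvals(T_est).real)[::-1]
timescales = -1.0 / np.log(eigvals[1:])       # relaxation timescales in units of the lag
print(np.round(T_est, 3))
print("implied timescales (lag units):", np.round(timescales, 1))
```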

Relevance:

30.00%

Publisher:

Abstract:

In a distributed key distribution scheme, a set of servers helps a set of users in a group to securely obtain a common key. Security means that an adversary who corrupts some servers and some users obtains no information about the key of a non-corrupted group. In this work, we formalize the security analysis of one such scheme, which was not considered in the original proposal. We prove that the scheme is secure in the random oracle model, assuming that the Decisional Diffie-Hellman (DDH) problem is hard to solve. We also detail a possible modification of that scheme, and of a previously proposed one, which allows us to prove the security of the schemes without assuming that a specific hash function behaves as a random oracle. As usual, this improvement in the security of the schemes comes at the cost of an efficiency loss.
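
The DDH assumption referred to above can be illustrated with a toy sketch (purely illustrative: the prime is far too small for real use, and the base and variable names are assumptions). The adversary's task is to distinguish triples of the form (g^a, g^b, g^ab) from triples (g^a, g^b, g^c) with an independent random exponent c:

```python
import random

# Toy multiplicative group Z_p* with an insecurely small prime; real schemes use
# groups of cryptographic size (e.g., large prime-order subgroups or elliptic curves).
p = 2**31 - 1          # a Mersenne prime, far too small for real security
g = 7                  # assumed base, for illustration only

def ddh_tuple(real: bool):
    """Return (g^a, g^b, g^z) where z = a*b for a 'real' tuple, random otherwise."""
    a = random.randrange(2, p - 1)
    b = random.randrange(2, p - 1)
    z = (a * b) % (p - 1) if real else random.randrange(2, p - 1)
    return pow(g, a, p), pow(g, b, p), pow(g, z, p)

# The DDH assumption says that, in a suitable group, no efficient algorithm can tell
# these two cases apart noticeably better than random guessing.
print(ddh_tuple(real=True))
print(ddh_tuple(real=False))
```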

Relevance:

30.00%

Publisher:

Abstract:

Objective: Exposure to bioaerosols in the occupational environment of sawmills could be associated with a wide range of health effects, in particular respiratory impairment, allergy and organic dust toxic syndrome. The objective of the study was to assess the frequency of respiratory and general medical symptoms and their relation to bioaerosol exposure. Method: Twelve sawmills in the French-speaking part of Switzerland were investigated, and the relationship between levels of bioaerosols (wood dust, airborne bacteria, airborne fungi and endotoxins), medical symptoms and impaired lung function was explored. A health questionnaire was distributed to 111 sawmill workers. Results: The concentration of airborne fungi exceeded the limit recommended by the Swiss National Insurance (SUVA) in the twelve sawmills. This elevated fungal level significantly influenced the occurrence of bronchial syndrome (defined by cough and expectoration). No other health effects (irritation or respiratory effects) could be associated with the measured exposures. We observed that junior workers showed significantly more irritation syndrome (defined by itching/runny nose, snoring and itching/red eyes) than senior workers. Lung function tests were influenced neither by bioaerosol levels nor by dust exposure levels. Conclusion: The results suggest that occupational exposure to wood dust in Swiss sawmills does not promote a clinically relevant decline in lung function. However, the occurrence of bronchial syndrome is strongly influenced by airborne fungi levels. [Authors]

Relevance:

30.00%

Publisher:

Abstract:

An Actively Heated Fiber Optics (AHFO) method to estimate soil moisture is tested and the analysis technique improved upon. The measurements were performed in a lysimeter uniformly packed with loam soil with variable water content profiles. In the first meter of the soil profile, 30 m of fiber optic cable were installed in a coil of 12 loops. The metal sheath armoring the fiber cable was used as an electrical resistance heater to generate a heat pulse, and the soil response was monitored with a Distributed Temperature Sensing (DTS) system. We studied the cooling following three continuous heat pulses of 120 s at 36 W m^-1 by means of a long-time approximation of radial heat conduction. The soil volumetric water contents were then inferred from the estimated thermal conductivities through a specifically calibrated model relating thermal conductivity and volumetric water content. To use the pre-asymptotic data we employed a time correction that allowed the volumetric water content to be estimated with a precision of 0.01-0.035 m^3 m^-3. A comparison of the AHFO measurements with soil-moisture measurements obtained with calibrated capacitance-based probes gave good agreement for wetter soils [the discrepancy between the two methods was less than 0.04 m^3 m^-3]. In the shallow, drier soils, the AHFO method underestimated the volumetric water content due to the longer time required for the temperature increment to become asymptotic in less thermally conductive media [the discrepancy between the two methods was larger than 0.1 m^3 m^-3]. The present work suggests that future applications of the AHFO method should use longer heat pulses, analyze longer heating and cooling events, and, ideally, measure temperature increments at a higher frequency.
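
A minimal sketch of the analysis chain described above, assuming the standard line-source cooling approximation and a placeholder linear calibration between thermal conductivity and water content (the actual calibration in the study is not reproduced here; all coefficients and the synthetic data are assumptions): fit the late-time cooling response against ln(t/(t - t_heat)) to estimate thermal conductivity, then invert the calibration to obtain volumetric water content.

```python
import numpy as np

Q = 36.0          # heat input per unit length during the pulse (W m^-1), from the abstract
T_HEAT = 120.0    # pulse duration (s), from the abstract

def thermal_conductivity_from_cooling(t, dT, t_heat=T_HEAT, q=Q):
    """Fit the late-time line-source cooling model dT ~ (q / 4*pi*lambda) * ln(t / (t - t_heat))."""
    x = np.log(t / (t - t_heat))          # valid only in the cooling phase, t > t_heat
    slope = np.polyfit(x, dT, 1)[0]       # slope = q / (4*pi*lambda)
    return q / (4.0 * np.pi * slope)

def water_content_from_conductivity(lam, a=0.25, b=3.0):
    """Invert an assumed linear calibration lambda = a + b*theta (placeholder coefficients)."""
    return (lam - a) / b

# Synthetic cooling data for a soil with lambda = 1.2 W m^-1 K^-1 (illustrative only).
t = np.linspace(200.0, 800.0, 60)
dT = Q / (4.0 * np.pi * 1.2) * np.log(t / (t - T_HEAT)) + np.random.default_rng(1).normal(0, 0.01, t.size)

lam = thermal_conductivity_from_cooling(t, dT)
theta = water_content_from_conductivity(lam)
print(f"estimated lambda = {lam:.2f} W m^-1 K^-1, theta = {theta:.3f} m^3 m^-3")
```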

Relevance:

30.00%

Publisher:

Abstract:

The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamper-proof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
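
A hedged toy sketch of the classical trusted-third-party approach recalled above (not the thesis's optimized protocol; all class and item names are illustrative): each process deposits its item with the trusted party, which releases the items only once every expected deposit has arrived, so no partial exchange can occur.

```python
from typing import Dict, Optional

class TrustedThirdParty:
    """Toy illustration: items are released only when all participants have deposited."""
    def __init__(self, participants):
        self.expected = set(participants)
        self.deposits: Dict[str, str] = {}

    def deposit(self, sender: str, item: str) -> None:
        if sender in self.expected:
            self.deposits[sender] = item

    def release(self, receiver: str) -> Optional[Dict[str, str]]:
        # All-or-nothing: before every deposit has arrived, nobody learns anything.
        if self.expected != set(self.deposits):
            return None
        return {s: item for s, item in self.deposits.items() if s != receiver}

ttp = TrustedThirdParty(["alice", "bob"])
ttp.deposit("alice", "contract-signature-A")
print(ttp.release("bob"))      # None: bob has not deposited yet, so nothing is revealed
ttp.deposit("bob", "contract-signature-B")
print(ttp.release("alice"))    # {'bob': 'contract-signature-B'}
```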

Relevance:

30.00%

Publisher:

Abstract:

Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its own problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, communication protocols, partitioning of the problem, distribution of the problem, capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused computing capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation. This requires all or part of the simulation work to be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied; it is shown to perform better than no load balancing, and different approaches to load balancing are discussed.
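
A minimal sketch of the load-balancing idea described above (not Diworse's actual policy; the threshold, class and partition names are all assumptions): when a workstation's owner load rises, its simulation partitions are migrated to the least-loaded remaining workstation.

```python
from dataclasses import dataclass, field
from typing import List

OWNER_LOAD_THRESHOLD = 0.5   # assumed fraction of CPU reclaimed by the owner

@dataclass
class Workstation:
    name: str
    owner_load: float                      # external (owner) load, 0.0 - 1.0
    partitions: List[str] = field(default_factory=list)

def rebalance(stations: List[Workstation]) -> None:
    """Move simulation partitions off busy workstations onto the least-loaded one."""
    for ws in stations:
        if ws.owner_load > OWNER_LOAD_THRESHOLD and ws.partitions:
            target = min((s for s in stations if s is not ws), key=lambda s: s.owner_load)
            print(f"migrating {ws.partitions} from {ws.name} to {target.name}")
            target.partitions.extend(ws.partitions)
            ws.partitions.clear()

stations = [Workstation("ws1", 0.9, ["partition-A"]),
            Workstation("ws2", 0.1, ["partition-B"]),
            Workstation("ws3", 0.2, [])]
rebalance(stations)
```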

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the study has been to examine how national and organizational culture, and the norms and values associated with them, promote or hinder the development of trust in multicultural teams in a global organization. The study also sought to determine how trust develops in distributed multinational teams at WorldCom International. The empirical research method is based on qualitative thematic interviews conducted with WorldCom employees. The study found that shared social norms are not so significant for the emergence of trust, because WorldCom's uniform working practices and the dominant "home culture" of the American parent company establish uniform lines of operation within the teams. The continuous use of computer-mediated communication has furthered the development of employees' so-called social intelligence, since the absence of face-to-face meetings correspondingly develops the ability to sense and interpret the other party's "invisible" cues conveyed in e-mails or telephone conferences.

Relevance:

30.00%

Publisher:

Abstract:

In the present work, the bifurcational behaviour of the solutions of the Rayleigh equation and of the corresponding spatially distributed system is analysed. The conditions for oscillatory and monotonic loss of stability are obtained. In the case of oscillatory loss of stability, an analysis of the linear spectral problem is performed. For the nonlinear problem, recurrent formulas for the general term of the asymptotic approximation of the self-oscillations are found, and the stability of the periodic mode is analysed. The Lyapunov-Schmidt method is used for the asymptotic approximation. The correlation between periodic solutions of the ODE and the PDE is investigated, and the influence of diffusion on the frequency of the self-oscillations is analysed. Several numerical experiments are performed in order to support the theoretical findings.
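
A minimal numerical experiment of the kind mentioned above can be sketched as follows (not the paper's setup; the parameter value and initial condition are assumptions), taking the common form of the Rayleigh equation y'' - mu*(1 - y'^2)*y' + y = 0 and integrating it from a point near the equilibrium to observe the self-oscillation:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # assumed bifurcation parameter value; self-oscillations appear for mu > 0

def rayleigh(t, state, mu=MU):
    """Rayleigh equation y'' - mu*(1 - y'^2)*y' + y = 0 written as a first-order system."""
    y, v = state
    return [v, mu * (1.0 - v * v) * v - y]

# Start near the equilibrium; the trajectory spirals out onto a stable limit cycle.
sol = solve_ivp(rayleigh, (0.0, 50.0), [0.01, 0.0], max_step=0.01)
amplitude = np.abs(sol.y[0][sol.t > 30.0]).max()   # amplitude after transients decay
print(f"approximate limit-cycle amplitude: {amplitude:.2f}")
```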

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the bifurcational behavior of the solutions of the Langford system is analysed. The equilibria of the Langford system are found, and their stability is discussed. The conditions for loss of stability are found, and the periodic solution of the system is approximated. We consider three types of boundary conditions for the spatially distributed Langford system: Neumann conditions, Dirichlet conditions, and Neumann conditions with the additional requirement of zero average. We apply the Lyapunov-Schmidt method to the spatially distributed Langford system to obtain an asymptotic approximation of the periodic mode, and we analyse the influence of diffusion on the behavior of the self-oscillations. We also perform numerical experiments and compare them with the analytical results.

Relevance:

30.00%

Publisher:

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model, and can thus be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have high access rates while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grained parallel processing approach where video is segmented at the group of pictures (GOP) level.
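
A minimal sketch of the computation-versus-storage trade-off described above (not the thesis's actual cost model; the prices and example numbers are assumptions): keep a transcoded video in the repository only while the expected cost of re-transcoding it on demand exceeds the cost of storing it.

```python
STORAGE_PRICE_PER_GB_MONTH = 0.02     # assumed storage price (USD)
COMPUTE_PRICE_PER_HOUR = 0.10         # assumed VM price (USD)

def keep_in_repository(size_gb: float, transcode_hours: float,
                       expected_requests_per_month: float) -> bool:
    """True if storing the transcoded video for the next month is cheaper than
    re-transcoding it for every expected request."""
    storage_cost = size_gb * STORAGE_PRICE_PER_GB_MONTH
    recompute_cost = expected_requests_per_month * transcode_hours * COMPUTE_PRICE_PER_HOUR
    return storage_cost < recompute_cost

# A popular 2 GB video that takes 0.5 h to transcode is worth keeping;
# an unpopular one with ~0.1 expected requests per month is not.
print(keep_in_repository(2.0, 0.5, expected_requests_per_month=20))   # True
print(keep_in_repository(2.0, 0.5, expected_requests_per_month=0.1))  # False
```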

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to present various applications of the distributed conditional computation research program. The hope is that these applications, together with the theory presented here, will lead to a general solution of the artificial intelligence problem, in particular with respect to the need for efficiency. The vision of distributed conditional computation is to speed up the evaluation and training of deep models, which is quite different from the usual objective of improving their generalization and optimization capabilities. The work presented here is closely related to mixture-of-experts models. In Chapter 2, we present a new deep learning algorithm that uses a simple form of reinforcement learning on a decision-tree model built from neural networks. We demonstrate the need for a balancing constraint to keep the distribution of examples across experts uniform and to prevent monopolies. To make the computation efficient, training and evaluation are constrained to be sparse by using a router that samples experts from a multinomial distribution given an example. In Chapter 3, we present a new deep model consisting of a sparse representation divided into expert segments. A neural-network language model is built from the sparse transformations between these segments. The block-sparse operation is implemented for use on graphics cards, and its speed is compared with two dense operations of the same size to demonstrate the real computational gain that can be obtained. A deep model using sparse operations controlled by a router separate from the experts is trained on a dataset of one billion words. A new data clustering algorithm is applied to a set of words to build a hierarchical output layer for a language model, making it much more efficient. The work presented in this thesis is central to the vision of distributed conditional computation put forward by Yoshua Bengio. It attempts to apply research on mixtures of experts to deep models in order to improve both their speed and their optimization capability. We believe that the theory and experiments of this thesis are an important step on the road to distributed conditional computation, because they frame the problem well, especially with regard to the competitiveness of expert systems.
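
A minimal sketch of the sparse routing idea described for Chapter 2 (illustrative only, not the thesis's implementation; all dimensions, weights and the penalty form below are assumptions): a softmax router samples one expert per example from a multinomial distribution, only that expert is applied, and a simple balancing term measures how far expert usage is from uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

def route_and_apply(x, router_w, experts_w):
    """Sample one expert per example from a multinomial given by a softmax router,
    and apply only that expert (the sparsity that makes the computation cheap)."""
    logits = x @ router_w                                   # (batch, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    chosen = np.array([rng.choice(len(p), p=p) for p in probs])
    out = np.stack([x[i] @ experts_w[chosen[i]] for i in range(len(x))])
    # A simple balancing penalty: distance of realized expert usage from uniform.
    usage = np.bincount(chosen, minlength=len(experts_w)) / len(x)
    balance_penalty = np.sum((usage - 1.0 / len(experts_w)) ** 2)
    return out, chosen, balance_penalty

d_in, d_out, n_experts, batch = 8, 4, 4, 32
x = rng.normal(size=(batch, d_in))
router_w = rng.normal(size=(d_in, n_experts)) * 0.1
experts_w = rng.normal(size=(n_experts, d_in, d_out)) * 0.1

out, chosen, penalty = route_and_apply(x, router_w, experts_w)
print(out.shape, np.bincount(chosen, minlength=n_experts), round(penalty, 4))
```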

Relevance:

30.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases, was superior to the other representations.
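
The evaluate-select-recombine cycle described above can be sketched in a few lines. The following toy example is not the thesis's RBGP framework: it evolves arithmetic expression trees against a simple symbolic target instead of rating distributed programs in network simulations, and all rates and sizes are assumptions, but the loop structure (fitness evaluation, selection of the most promising candidates, crossover and mutation) is the same.

```python
import random, operator

random.seed(0)
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(tree):
    # Objective function: mean squared error against the target behavior x**2 + x.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6)) / 11.0

def mutate(tree):
    if random.random() < 0.1:                 # occasionally replace a node with a fresh subtree
        return random_tree(2)
    if isinstance(tree, tuple):
        return (tree[0], mutate(tree[1]), mutate(tree[2]))
    return tree

def crossover(a, b):
    if isinstance(a, tuple) and isinstance(b, tuple) and random.random() < 0.7:
        return (a[0], crossover(a[1], b[1]), crossover(a[2], b[2]))
    return random.choice([a, b])

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)              # select the most promising candidates
    parents = population[:50]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(150)]

best = min(population, key=fitness)
print("best fitness:", round(fitness(best), 3), "best program:", best)
```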