888 results for "Grid computing environment"
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks. Typical P2P applications include video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while a user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically embedded in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximate view of the system or of part of it. This approximate view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the memory available at each process into account by limiting the view it maintains of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends towards the global tree overlay and adapts to some constraints of the underlying system.
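The broadcast layer above selects tree overlays whose reliability is derived from per-link delivery probabilities. As a rough illustration only (not the thesis's actual protocol), the following Python sketch picks maximally reliable paths from an approximated view by running Dijkstra on the negated log of each link's reliability; in the actual solutions, message quotas would additionally constrain which nodes may forward.

```python
# Hypothetical sketch: selecting maximally reliable broadcast paths from an
# approximated view of the system, where each link has an estimated delivery
# probability. Maximizing the product of link probabilities is equivalent to
# minimizing the sum of -log(probability), so standard Dijkstra applies.
# Graph structure and function names are illustrative, not from the thesis.
import heapq, math

def most_reliable_paths(view, root):
    """view: {node: [(neighbor, link_reliability), ...]} with 0 < reliability <= 1.
    Returns (reliability, parent) per node, i.e. a max-reliability tree rooted at `root`."""
    dist = {root: 0.0}          # accumulated -log reliability
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, p in view.get(u, []):
            nd = d - math.log(p)             # multiplying reliabilities = adding -log terms
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return {n: (math.exp(-d), parent[n]) for n, d in dist.items()}
```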
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
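As a hedged illustration of the kind of cost analysis described for replica placement, the sketch below assumes a simple model in which reads are served by the closest replica and writes must reach every replica; the rate vectors, distance matrix and greedy strategy are illustrative and not the thesis's algorithm.

```python
# Hypothetical cost model for replica placement: reads go to the nearest replica,
# writes must reach all replicas. `dist[i][j]` is a hop-count or latency estimate;
# rates come from the observed origin/frequency of requests. Greedy selection is
# one simple strategy, not necessarily the one used in the thesis.
def placement_cost(replicas, read_rate, write_rate, dist):
    cost = 0.0
    for node in range(len(dist)):
        cost += read_rate[node] * min(dist[node][r] for r in replicas)
        cost += write_rate[node] * sum(dist[node][r] for r in replicas)
    return cost

def greedy_placement(k, read_rate, write_rate, dist):
    """Pick k replica locations, adding at each step the node that lowers cost most."""
    replicas = []
    candidates = set(range(len(dist)))
    for _ in range(k):
        best = min(candidates,
                   key=lambda c: placement_cost(replicas + [c], read_rate, write_rate, dist))
        replicas.append(best)
        candidates.remove(best)
    return replicas
```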
Abstract:
The explosive growth of the Internet during the last years has been reflected in the ever-increasing diversity and heterogeneity of user preferences, device types and features, and access networks. The heterogeneity of the context of users who request Web content is usually not taken into account by the servers that deliver it, which means that this content will not always suit their needs. In the particular case of e-learning platforms this issue is especially critical, because it puts at stake the knowledge acquired by their users. In this paper we present a system that aims to provide the dotLRN e-learning platform with the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, the online courses being undertaken by students, as well as their learning environment, are adapted in real time.
Abstract:
This document introduces small and medium-sized enterprises to the world of virtualization and cloud computing. Starting from a presentation of both technologies, it walks through the different phases of a technology project consisting of the installation of a virtualized platform that hosts the basic IT systems of an SME.
Abstract:
Different procedures to obtain atom-condensed Fukui functions are described. It is shown how the resulting values may differ depending on the exact approach to atom-condensed Fukui functions. The condensed Fukui function can be computed using either the fragment-of-molecular-response approach or the response-of-molecular-fragment approach. The two approaches are nonequivalent; only the latter corresponds in general to a population-difference expression. The Mulliken approach does not depend on the approach taken but has some computational drawbacks. The different resulting expressions are tested for a wide set of molecules. In practice one must make seemingly arbitrary choices about how to compute condensed Fukui functions, which calls into question the role of these indicators in conceptual density functional theory.
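For concreteness, the population-difference (response-of-molecular-fragment) expressions mentioned above can be written as simple differences of atomic populations between the (N-1)-, N- and (N+1)-electron systems; the sketch below assumes such populations are available from a separate charge analysis (e.g. Mulliken or Hirshfeld) and is purely illustrative.

```python
# Minimal sketch of the population-difference expressions for condensed Fukui
# functions, assuming atomic populations obtained from separate calculations on
# the (N-1)-, N-, and (N+1)-electron systems. Input data layout is illustrative.
def condensed_fukui(pop_minus, pop_0, pop_plus):
    """pop_minus, pop_0, pop_plus: lists of atomic populations for the
    (N-1)-, N-, and (N+1)-electron systems, same atom ordering."""
    f_plus  = [np1 - n0 for n0, np1 in zip(pop_0, pop_plus)]      # nucleophilic attack
    f_minus = [n0 - nm1 for nm1, n0 in zip(pop_minus, pop_0)]     # electrophilic attack
    f_zero  = [(fp + fm) / 2 for fp, fm in zip(f_plus, f_minus)]  # radical attack
    return f_plus, f_minus, f_zero
```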
Abstract:
INTRODUCTION: Anhedonia is defined as a diminished capacity to experience pleasant emotion and is commonly included among the negative symptoms of schizophrenia. However, while patients report experiencing a lower level of pleasure than controls, they report as much pleasure as controls with online measurements of emotion. OBJECTIVE: The Temporal Experience of Pleasure Scale (TEPS) measures pleasure experienced in the moment and in anticipation of future activities. The TEPS is an 18-item self-report measure of anticipatory (10 items) and consummatory (eight items) pleasure. The goal of this paper is to assess the psychometric characteristics of the French translation of this scale. METHODS: A control sample was composed of 60 women and 22 men, with a mean age of 38.1 years (S.D.: 10.8). Thirty-six were without qualification and 46 held a professional diploma. A sample of 21 patients meeting DSM-IV-TR criteria for schizophrenia was recruited from the community psychiatry service of the department of psychiatry in Lausanne. There were five women and 16 men; mean age was 34.1 years (S.D.: 7.5). Ten had obtained a professional qualification and 11 were without qualification. None worked in competitive employment. Their mean dose of chlorpromazine equivalent was 431 mg (S.D.: 259). All patients were on atypical antipsychotics. The control sample completed the TEPS and the Physical Anhedonia Scale (PAS). The patient sample completed the TEPS and was independently rated on the Calgary Depression Scale and the Scale for the Assessment of Negative Symptoms (SANS). For comparison with controls, patients were matched on age, sex and professional qualification. This required the supplementary recruitment of two control subjects. RESULTS: Results with the control sample indicate that the TEPS presents acceptable internal validity, with Cronbach alphas of 0.84 for the total scale, 0.74 for the anticipatory pleasure scale and 0.79 for the consummatory pleasure scale. The confirmatory factor analysis indicated that the model is well adapted to our data (chi(2)/df=1.333; df=134; p<0.0006; root mean square error of approximation, RMSEA=0.064). External validity measured with the PAS showed R=-0.27 (p<0.05) for the consummatory scale and R=-0.26 for the total score. Comparisons between patients and matched controls indicated that patients scored significantly lower than controls on anticipatory pleasure (t=2.7, df=40, two-tailed p=0.01; Cohen's d=0.83) and on the total score of the TEPS (t=2.8, df=40, two-tailed p=0.01; Cohen's d=0.87). The two samples did not differ on consummatory pleasure. The anticipatory pleasure factor and the total TEPS showed significant negative correlations with SANS anhedonia, respectively R=-0.78 (p<0.01) for the anticipatory factor and R=-0.61 (p<0.01) for the total TEPS. There was also a negative correlation of R=-0.50 (p<0.05) between the anticipatory factor and SANS avolition. These correlations were maintained in partial correlations controlling for depression and chlorpromazine equivalents. CONCLUSION: The results of this validation show that the French version of the TEPS has psychometric characteristics similar to those of the original version. These results highlight the discrepancy between the results of direct and indirect reports of experienced pleasure in patients with schizophrenia. Patients may have difficulties in anticipating the pleasure of future enjoyable activities, but not in experiencing pleasure once engaged in an enjoyable activity. Medication and depression do not seem to modify our results, but this should be better controlled in a longitudinal study. The anticipatory versus consummatory pleasure distinction appears to be useful for the development of new psychosocial interventions tailored to improve desire in patients suffering from schizophrenia. The major limitations of the study are the small size of the patient sample and the under-representation of men in the control sample.
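As an illustration of the internal-consistency statistic reported above, the following sketch computes Cronbach's alpha from raw item scores; the data layout is assumed, not taken from the study.

```python
# Minimal sketch of Cronbach's alpha, assuming `items` is a list of equal-length
# score lists, one per item (e.g. the 10 anticipatory or 8 consummatory TEPS
# items), each indexed by respondent. Illustrative only.
def cronbach_alpha(items):
    k = len(items)                     # number of items
    n = len(items[0])                  # number of respondents
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))
```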
Abstract:
One of the most relevant difficulties faced by first-year undergraduate students is settling into the educational environment of universities. This paper presents a case study that proposes a computer-assisted collaborative experience designed to help students in their transition from high school to university. This is done by facilitating their first contact with the campus and its services, the university community, methodologies and activities. The experience combines individual and collaborative activities, conducted in and out of the classroom, structured following the Jigsaw Collaborative Learning Flow Pattern. A specific environment including portable technologies with network and computer applications has been developed to support and facilitate the orchestration of a flow of learning activities into a single integrated learning setting. The result is a Computer-Supported Collaborative Blended Learning scenario, which has been evaluated with first-year university students of the degrees of Software and Audiovisual Engineering within the subject Introduction to Information and Communications Technologies. The findings reveal that the scenario significantly improves students' interest in their studies and their understanding of the campus and the services it provides. The environment is also an innovative approach that successfully supports the heterogeneous activities conducted by both teachers and students during the scenario. This paper introduces the goals and context of the case study, describes how the technology was employed to conduct the learning scenario, and presents the evaluation methods and the main results of the experience.
Abstract:
The alignment between competences, teaching-learning methodologies and assessment is a key element of the European Higher Education Area. This paper presents the efforts carried out by six Telematics, Computer Science and Electronic Engineering Education teachers towards achieving this alignment in their subjects. In joint work with pedagogues, a set of recommended actions was identified. A selection of these actions was applied and evaluated in the six subjects. The cross-analysis of the results indicates that the actions allow students to better understand the methodologies and assessment planned for the subjects, facilitate (self-)regulation and increase students' involvement in the subjects.
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
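To give a flavour of the SMC approximation invoked above, the sketch below is a generic sequential importance resampling particle filter; the paper's receivers track a random finite set of user parameters and are substantially more elaborate, so the functions and structure here are assumed for illustration only.

```python
# Generic sequential importance resampling (SIR) particle filter, included only
# to illustrate the SMC idea of approximating posteriors that have no closed form.
import random

def particle_filter(observations, transition, likelihood, prior, n_particles=500):
    """transition(x) -> sampled next state; likelihood(y, x) -> p(y | x);
    prior() -> sampled initial state. Returns the particle sets over time."""
    particles = [prior() for _ in range(n_particles)]
    history = []
    for y in observations:
        particles = [transition(x) for x in particles]            # propagate
        weights = [likelihood(y, x) for x in particles]           # weight by the data
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        particles = random.choices(particles, weights=weights, k=n_particles)  # resample
        history.append(list(particles))
    return history
```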
Abstract:
Classical planning has been notably successful in synthesizing finite plans to achieve states where propositional goals hold. In the last few years, classical planning has also been extended to incorporate temporally extended goals, expressed in temporal logics such as LTL, to impose restrictions on the state sequences generated by finite plans. In this work, we take the next step and consider the computation of infinite plans for achieving arbitrary LTL goals. We show that infinite plans can also be obtained efficiently by calling a classical planner once over a classical planning encoding that represents and extends the composition of the planning domain and the Büchi automaton representing the goal. This compilation scheme has been implemented and a number of experiments are reported.
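As a rough illustration of the composition underlying this compilation, the sketch below builds the synchronous product of a domain's transition system with a Büchi automaton for the goal; the paper instead encodes this product as a classical planning problem, and all data structures and function names here are assumed for illustration.

```python
# Minimal sketch of the synchronous product of a planning domain's transition
# system with a Büchi automaton for the LTL goal. Accepting lassos in the
# resulting graph correspond to infinite plans satisfying the goal.
def buchi_product(domain_transitions, init_state, automaton_delta, q0, labels):
    """domain_transitions: {state: [(action, next_state), ...]}
    automaton_delta(q, label_set) -> set of successor automaton states
    labels(state) -> set of propositions true in that state."""
    product = {}
    frontier = [(init_state, q0)]
    seen = {(init_state, q0)}
    while frontier:
        s, q = frontier.pop()
        product[(s, q)] = []
        for action, s2 in domain_transitions.get(s, []):
            for q2 in automaton_delta(q, labels(s2)):
                product[(s, q)].append((action, (s2, q2)))
                if (s2, q2) not in seen:
                    seen.add((s2, q2))
                    frontier.append((s2, q2))
    return product
```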
Abstract:
The objective of this study was to quantify the colony forming units (cfu) on latex procedure gloves at the beginning, middle, and end of the containers, in real (professional) and controlled (researcher) gloving situations, and to evaluate the microbial load of the gloves considering the time of exposure to the environment. This comparative prospective study was conducted at an intensive care unit of a teaching hospital. The microbiological data were collected from the gloves using digital pressure. Microbiological evaluations were performed on 186 pairs of gloves: 93 in the control group and 93 in real gloving situations. In the control group, the average cfu count was 4.7, against 6.2 in the real gloving situation; hence, no statistically significant difference was found (p=.601). In addition, the cfu values of gloves at the beginning, middle and end of the containers did not show any significant differences (p>.05). The most common strain was Staphylococcus spp. The time of exposure to the environment did not increase the cfu count of the latex gloves.
Abstract:
Solid-phase extraction (SPE) in tandem with dispersive liquid-liquid microextraction (DLLME) has been developed for the determination of mononitrotoluenes (MNTs) in several aquatic samples using a gas chromatography-flame ionization detection (GC-FID) system. In the hyphenated SPE-DLLME, MNTs were initially extracted from a large volume of aqueous sample (100 mL) onto 500 mg of octadecyl silane (C(18)) sorbent. After elution of the analytes from the sorbent with acetonitrile, the resulting solution was subjected to the DLLME procedure, so that extra preconcentration factors could be achieved. The parameters influencing the extraction efficiency, such as the breakthrough volume, the type and volume of the elution solvent (disperser solvent) and of the extracting solvent, as well as salt addition, were studied and optimized. The calibration curves were linear in the range of 0.5-500 μg/L and the limit of detection for all analytes was found to be 0.2 μg/L. The relative standard deviations (for 0.75 μg/L of MNTs) without internal standard varied from 2.0 to 6.4% (n=5). The relative recoveries for well, river and sea water samples, spiked at a concentration level of 0.75 μg/L of the analytes, were in the range of 85-118%.
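As a hedged aside on the figures of merit reported above (calibration linearity, limit of detection, relative standard deviation), the sketch below shows one common way such quantities are computed from calibration and replicate data; it follows generic conventions and is not necessarily the authors' exact procedure.

```python
# Generic figures of merit for a method like the one described: a least-squares
# calibration fit, a 3*s/slope detection-limit estimate, and the relative
# standard deviation of replicate measurements. Illustrative conventions only.
import statistics

def calibration(concentrations, responses):
    n = len(concentrations)
    mx, my = sum(concentrations) / n, sum(responses) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concentrations, responses)) / \
            sum((x - mx) ** 2 for x in concentrations)
    intercept = my - slope * mx
    return slope, intercept

def lod_3s(blank_replicates, slope):
    return 3 * statistics.stdev(blank_replicates) / slope   # 3*s/slope convention

def rsd_percent(replicates):
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)
```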
Global mass wasting during the Middle Ordovician: Meteoritic trigger or plate-tectonic environment?
Abstract:
Mass wasting at continental margins on a global scale during the Middle Ordovician has recently been related to a high meteorite influx. Although a high meteorite influx during the Ordovician should not be neglected, we challenge the idea that mass wasting was mainly produced by meteorite impacts over a period of almost 10 Ma. Given strong arguments against the impact-related hypothesis, we propose an alternative explanation based on a re-evaluation of the mass wasting sites, considering their plate-tectonic distribution and the global sea-level curve. A striking and important feature is the distribution of most of the mass wasting sites along continental margins characterised by periods of magmatism, terrane accretion and continental or back-arc rifting, respectively, related to the subduction of oceanic lithosphere. Such processes are commonly connected with seismic activity causing earthquakes, which can trigger downslope movement of sediment and rock. Considering all of this, it seems more likely that most of this mass wasting was triggered by earthquakes related to plate-tectonic processes, which destabilised continental margins, resulting in megabreccias and debris flows. Moreover, the period of mass wasting coincides with sea-level drops during a global sea-level lowstand. In some cases, sea-level drops can release pore-water overpressure, reducing sediment strength and hence promoting instability of sediment at continental margins. Reduced pore-water overpressure can also destabilise gas hydrate-bearing sediment, causing slope failure and thus resulting in submarine mass wasting. Overall, the global mass wasting during the Middle Ordovician does not require a meteoritic trigger. (C) 2010 International Association for Gondwana Research. Published by Elsevier B.V. All rights reserved.