831 results for distributed conditional computation
Abstract:
Access control is a fundamental concern in any system that manages resources, e.g., operating systems, file systems, databases and communications systems. The problem we address is how to specify, enforce, and implement access control in distributed environments. This problem occurs in many applications, such as management of distributed project resources and e-newspaper and pay-TV subscription services. Starting from an access relation between users and resources, we derive a user hierarchy, a resource hierarchy, and a unified hierarchy. The unified hierarchy is then used to specify the access relation in a way that is compact and that allows efficient queries. It is also used in cryptographic schemes that enforce the access relation. We introduce three specific cryptography-based hierarchical schemes, which can effectively enforce and implement access control and are designed for distributed environments because they do not need the presence of a central authority (except perhaps for setup).
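A minimal sketch of the derivation step described above, assuming the access relation is given as a set of (user, resource) pairs (the function and variable names are illustrative, not the paper's notation): users with identical resource sets collapse into one class, and one class sits above another when its resource set strictly contains the other's. The resource and unified hierarchies, and the cryptographic key assignment, are built on top of this ordering in the paper.

    from collections import defaultdict

    def derive_user_hierarchy(access):
        # Group users by the exact set of resources they can access.
        resources_of = defaultdict(set)
        for user, resource in access:
            resources_of[user].add(resource)
        classes = defaultdict(list)              # resource set -> user class
        for user, rs in resources_of.items():
            classes[frozenset(rs)].append(user)
        # Edge (a, b): class a is above class b (strict superset of resources).
        edges = [(a, b) for a in classes for b in classes if a > b]
        return classes, edges

    access = {("alice", "wiki"), ("alice", "repo"), ("bob", "wiki")}
    classes, edges = derive_user_hierarchy(access)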
Abstract:
Wireless LANs are growing rapidly, and security has always been a concern. We have implemented a hybrid system that detects not only active attacks, such as identity theft leading to denial-of-service attacks, but also the use of access-point discovery tools. The system responds in real time by sending an alert to the network administrator.
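As a rough sketch of the kind of real-time rule such a system applies (the threshold, the frame-capture source, and the alert channel are all assumptions here, not details from the paper): count deauthentication frames in a sliding window and alert the administrator when the count suggests an identity-theft/denial-of-service flood.

    import time
    from collections import deque

    WINDOW_SECONDS = 5
    DEAUTH_THRESHOLD = 30          # frames per window; an assumed tuning value

    deauth_times = deque()

    def alert(message):
        # Stand-in for the real notification path (e-mail, SNMP trap, etc.).
        print("ALERT to network administrator:", message)

    def on_deauth_frame(ts=None):
        # Call once per captured deauthentication frame.
        ts = time.time() if ts is None else ts
        deauth_times.append(ts)
        while deauth_times and ts - deauth_times[0] > WINDOW_SECONDS:
            deauth_times.popleft()
        if len(deauth_times) > DEAUTH_THRESHOLD:
            alert("possible deauth flood (spoofed identity / DoS)")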
Abstract:
Centralized and distributed methods are the two connection management schemes in wavelength-convertible optical networks. In earlier work, the centralized scheme was reported to have lower network blocking probability than the distributed one. Hence, much previous work on connection management has compared different algorithms within only the distributed scheme or only the centralized scheme. However, we believe that the network blocking probability of these two schemes depends, to a great extent, on the network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of average connection interarrival time to reservation time. Once that ratio increases beyond a threshold, the two schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the tradeoff between them, compare the two methods under different network traffic patterns via simulation, and draw conclusions from the simulation data.
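A toy Monte Carlo in the spirit of that comparison, reduced to a single link with a fixed number of wavelengths (the paper simulates full networks under both management schemes; this sketch only illustrates how blocking probability can be estimated as the interarrival/reservation-time ratio varies):

    import random

    def blocking_probability(mean_interarrival, reservation_time,
                             wavelengths=8, n_arrivals=200_000, seed=1):
        rng = random.Random(seed)
        release_times = []                 # when each busy wavelength frees up
        t, blocked = 0.0, 0
        for _ in range(n_arrivals):
            t += rng.expovariate(1.0 / mean_interarrival)
            release_times = [r for r in release_times if r > t]
            if len(release_times) >= wavelengths:
                blocked += 1               # all wavelengths reserved
            else:
                release_times.append(t + reservation_time)
        return blocked / n_arrivals

    for ratio in (0.1, 0.5, 1.0, 5.0):     # interarrival time / reservation time
        print(ratio, blocking_probability(ratio, 1.0))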
Abstract:
Past research has demonstrated emergent conditional relations using a go/no-go procedure with pairs of figures displayed side by side on a computer screen. The present study sought to extend applications of this procedure. In Experiment 1, we evaluated whether emergent conditional relations could be demonstrated when two-component stimuli were displayed in figure-ground relationships: abstract figures displayed on backgrounds of different colors. Five normally capable adults participated. During training, each two-component stimulus was presented successively. Responses emitted in the presence of some stimulus pairs (A1B1, A2B2, A3B3, B1C1, B2C2 and B3C3) were reinforced, whereas responses emitted in the presence of other pairs (A1B2, A1B3, A2B1, A2B3, A3B1, A3B2, B1C2, B1C3, B2C1, B2C3, B3C1 and B3C2) were not. During tests, new configurations (AC and CA) were presented, thus emulating structurally the matching-to-sample tests employed in typical equivalence studies. All participants showed emergent relations consistent with stimulus equivalence during testing. In Experiment 2, we systematically replicated the procedures with stimulus compounds consisting of four figures (A1, A2, C1 and C2) and two locations (left, B1, and right, B2). All 6 normally capable adults exhibited emergent stimulus-stimulus relations. Together, these experiments show that the go/no-go procedure is a potentially useful alternative for studying emergent conditional relations when matching-to-sample is procedurally cumbersome or impossible to use.
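The contingency structure is compact enough to restate directly; the schematic below lists the reinforced and unreinforced training pairs from the abstract and the equivalence-consistent predictions for the untrained AC test configurations (it describes the design, not the authors' experimental software):

    # Training: responding to matched compounds is reinforced,
    # responding to mismatched compounds is not.
    REINFORCED = {("A1", "B1"), ("A2", "B2"), ("A3", "B3"),
                  ("B1", "C1"), ("B2", "C2"), ("B3", "C3")}

    def consequence(compound):
        return "reinforce" if compound in REINFORCED else "no consequence"

    # Testing: untrained AC configurations. Stimulus equivalence predicts
    # responding (go) to AiCi and withholding (no-go) otherwise.
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            pair = (f"A{i}", f"C{j}")
            predicted = "go" if i == j else "no-go"
            print(pair, predicted)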
Abstract:
Despite their generality, conventional Volterra filters are inadequate for some applications, due to the huge number of parameters that may be needed for accurate modelling. When a state-space model of the target system is known, this can be assessed by computing its kernels, which also provides valuable information for choosing an adequate alternate Volterra filter structure, if necessary, and is useful for validating parameter estimation procedures. In this letter, we derive expressions for the kernels by using the Carleman bilinearization method, for which an efficient algorithm is given. Simulation results are presented, which confirm the usefulness of the proposed approach.
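For reference, the discrete-time Volterra series whose kernels h_p are being computed has the standard form (generic notation, not necessarily the letter's):

    y(n) = h_0 + \sum_{k_1} h_1(k_1)\, x(n-k_1)
               + \sum_{k_1}\sum_{k_2} h_2(k_1,k_2)\, x(n-k_1)\, x(n-k_2) + \cdots

The number of degree-p kernel coefficients grows as M^p for memory length M, which is why the parameter count quickly becomes prohibitive and why computing the kernels from a known state-space model is informative.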
Abstract:
We examine the impact of Brazil's Bolsa Escola/Familia program on Brazilian children's education outcomes. Bolsa provides cash payments to poor households if their children (ages 6 to 15) are enrolled in school. Using school census data to compare changes in enrollment, dropping out and grade promotion across schools that adopted Bolsa at different times, we estimate that the program has: increased enrollment by about 5.5% (6.5%) in grades 1-4 (grades 5-8); lowered dropout rates by 0.5 (0.4) percentage points in grades 1-4 (grades 5-8); and raised grade promotion rates by 0.9 (0.3) percentage points in grades 1-4 (grades 5-8). About one third of Brazil's children participate in Bolsa, so assuming no spillover effects onto non-participants implies that Bolsa's impacts on participants are three times higher than these estimates. However, simple calculations using enrollment impacts suggest that Bolsa's benefits in terms of increased wages may not exceed its costs.
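The scaling argument in the abstract is simple arithmetic; a worked version using the grades 1-4 enrollment figure looks like this:

    # School-level impacts average over all children, but only ~1/3
    # participate in Bolsa; with no spillovers onto non-participants,
    # the per-participant impact is roughly 3x the school-level estimate.
    participation_share = 1 / 3
    school_level_enrollment_gain = 0.055        # +5.5% in grades 1-4
    per_participant_gain = school_level_enrollment_gain / participation_share
    print(f"{per_participant_gain:.1%}")        # about 16.5% for participants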
Abstract:
In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005) [4]. We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector (X, N) follows the BGG law if N has a geometric distribution and X may be represented (in law) as a sum of N independent and identically distributed gamma variables, where these variables are independent of N. Statistical properties such as moment generating and characteristic functions, moments and the variance-covariance matrix are provided. The marginal and conditional laws are also studied. We show that the BGG distribution is infinitely divisible, just as the BEG model is. Further, we provide alternative representations for the BGG distribution and show that it enjoys a geometric stability property. Maximum likelihood estimation and inference are discussed, and a reparametrization is proposed in order to obtain orthogonality of the parameters. We present an application to a real data set where our model provides a better fit than the BEG model. Our bivariate distribution induces a bivariate Levy process with correlated gamma and negative binomial processes, which extends the bivariate Levy motion proposed by Kozubowski et al. (2008) [6]. The marginals of our Levy motion are a mixture of gamma and negative binomial processes, and we name it the BMixGNB motion. Basic properties such as stochastic self-similarity and the covariance matrix of the process are presented. The bivariate distribution at fixed time of our BMixGNB process is also studied, and some results are derived, including a discussion about maximum likelihood estimation and inference.
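A quick simulation sketch of the BGG construction exactly as stated in the abstract (parameter names are illustrative; N is taken as geometric on {1, 2, ...}):

    import random

    def draw_bgg(p, shape, scale, rng=random):
        # N ~ Geometric(p): number of trials up to and including the first success.
        n = 1
        while rng.random() > p:
            n += 1
        # X | N = n is a sum of n iid Gamma(shape, scale) variables.
        x = sum(rng.gammavariate(shape, scale) for _ in range(n))
        return x, n

    sample = [draw_bgg(p=0.3, shape=2.0, scale=1.5) for _ in range(10_000)]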
Abstract:
Distributed Software Development (DSD) is a development strategy that addresses the globalization-driven needs for increased productivity and reduced cost. However, temporal distance, geographical dispersion and socio-cultural differences introduce new challenges and, especially, new requirements for the communication, coordination and control of projects. Among these demands is the need for a software process that adequately supports distributed software development. This paper presents an integrated approach to software development and testing that takes the peculiarities of distributed teams into account. Its purpose is to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing the ambiguity and difficulty of understanding artifacts and activities. The integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities that affect development and test processes; (ii) defining the elements needed to compose an integrated development-and-test approach that supports distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach in a way that enables it to be communicated and understood effectively.
Abstract:
Background: Warfarin-dosing pharmacogenetic algorithms have shown different performances across ethnicities, and their impact in admixed populations is not fully known. Aims: To evaluate the CYP2C9 and VKORC1 polymorphisms and warfarin-predicted metabolic phenotypes according to both self-declared ethnicity and genetic ancestry in a Brazilian general population plus Amerindian groups. Methods: Two hundred twenty-two Amerindians (Tupinikin and Guarani) were enrolled, along with 1038 individuals from the Brazilian general population who self-declared as White, Intermediate (Brown; Pardo in Portuguese), or Black. Samples from 274 Brazilian subjects from Sao Paulo were analyzed for genetic ancestry using an Affymetrix 6.0 genotyping platform. The CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), and VKORC1 g.-1639G>A (rs9923231) polymorphisms were genotyped in all studied individuals. Results: The allelic frequency of the VKORC1 polymorphism was distributed differently according to self-declared ethnicity: White (50.5%), Intermediate (46.0%), Black (39.3%), Tupinikin (40.1%), and Guarani (37.3%) (p < 0.001). The frequency of intermediate plus poor metabolizers (IM + PM) was higher in White (28.3%) than in Intermediate (22.7%), Black (20.5%), Tupinikin (12.9%), and Guarani (5.3%) subjects (p < 0.001). For the samples with determined ancestry, subjects carrying the GG genotype for VKORC1 had higher African ancestry and lower European ancestry (0.14 ± 0.02 and 0.62 ± 0.02) than subjects carrying AA (0.05 ± 0.01 and 0.73 ± 0.03) (p = 0.009 and 0.03, respectively). Subjects classified as IM + PM had lower African ancestry (0.08 ± 0.01) than extensive metabolizers (0.12 ± 0.01) (p = 0.02). Conclusions: The CYP2C9 and VKORC1 polymorphisms are distributed differently according to self-declared ethnicity and genetic ancestry in the Brazilian general population plus Amerindians. This information is an initial step toward clinical pharmacogenetic implementation, and it could be very useful in strategic planning aimed at individualized therapy and prediction of adverse drug effect profiles in an admixed population.
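The phenotype grouping used in such studies can be sketched as follows, under one common convention in which CYP2C9*2 and *3 are treated as reduced-function alleles (the paper's exact assignment rules may differ):

    REDUCED_FUNCTION = {"*2", "*3"}       # CYP2C9 alleles genotyped in the study

    def predicted_metabolizer(allele1, allele2):
        reduced = sum(a in REDUCED_FUNCTION for a in (allele1, allele2))
        return ("extensive (EM)", "intermediate (IM)", "poor (PM)")[reduced]

    print(predicted_metabolizer("*1", "*1"))   # extensive
    print(predicted_metabolizer("*1", "*3"))   # intermediate
    print(predicted_metabolizer("*2", "*3"))   # poor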
Abstract:
The installation of induction distributed generators should be preceded by a careful study to determine whether the point of common coupling is suitable for transmission of the generated power while keeping acceptable power quality and system stability. To this end, this paper presents a simple analytical formulation that allows a fast and comprehensive evaluation of the maximum power delivered by the induction generator without losing voltage stability. Moreover, this formulation can be used to identify voltage stability issues that limit the generator output power. The entire formulation is developed from the equivalent circuit of the squirrel-cage induction machine. Simulation results validate the method, enabling the approach to serve as a guide that reduces the simulation effort needed to assess the maximum output power and voltage stability of induction generators.
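The kind of screening calculation the formulation enables can be illustrated with the textbook per-phase equivalent circuit of a squirrel-cage machine (the parameter values below are illustrative only, and this sketch is not the paper's analytical formulation):

    import numpy as np

    Rs, Xs = 0.01, 0.10        # stator resistance / leakage reactance (ohm)
    Rr, Xr = 0.01, 0.10        # rotor quantities referred to the stator (ohm)
    Xm = 3.0                   # magnetizing reactance (ohm)
    V = 690 / np.sqrt(3)       # per-phase terminal voltage (V)

    def stator_power(s):
        # Three-phase real power at the terminals for slip s (s < 0: generating).
        z_rotor = Rr / s + 1j * Xr
        z = Rs + 1j * Xs + (1j * Xm * z_rotor) / (1j * Xm + z_rotor)
        i_s = V / z
        return 3 * (V * np.conj(i_s)).real   # negative => power delivered to grid

    for s in (-0.005, -0.01, -0.02, -0.05):
        print(s, stator_power(s))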
Abstract:
This work evaluates the efficiency of economical levels of theory for the prediction of (3)J(HH) spin-spin coupling constants, to be used when robust electronic structure methods are prohibitive. To that purpose, DFT methods such as mPW1PW91, B3LYP and PBEPBE were used to obtain coupling constants for a test set whose coupling constants are well known. Satisfactory results were obtained in most cases, with mPW1PW91/6-31G(d,p)//B3LYP/6-31G(d,p) (i.e., mPW1PW91 coupling constants computed on B3LYP-optimized geometries) leading the set. In a second step, B3LYP was replaced by the semiempirical methods PM6 and RM1 in the geometry optimizations. Coupling constants calculated with these latter structures were at least as good as the ones obtained by pure DFT methods. This is a promising result, because some of the main objectives of computational chemistry - low computational cost and time, allied to high performance and precision - were attained together.
Abstract:
Increasing age is associated with a reduction in overall heart rate variability as well as changes in the complexity of physiologic dynamics. The aim of this study was to verify whether the alterations in autonomic modulation of heart rate caused by aging could be detected by Shannon entropy (SE), conditional entropy (CE) and symbolic analysis (SA). Complexity analysis was carried out in 44 healthy subjects divided into two groups: old (n = 23, 63 ± 3 years) and young (n = 21, 23 ± 2 years). We analyzed SE, CE [complexity index (CI) and normalized CI (NCI)] and SA (0V, 1V, 2LV and 2ULV patterns) over short heart-period series (200 cardiac beats) derived from ECG recordings during 15 min of rest in a supine position. Sequences of three heart periods with no significant variation (0V) and sequences with two significant unlike variations (2ULV) reflect changes in sympathetic and vagal modulation, respectively. The unpaired t test (or Mann-Whitney rank sum test when appropriate) was used in the statistical analysis. With aging, the distribution of patterns (SE) remains similar to that of young subjects. However, regularity differs significantly: the patterns are more repetitive in the old group (a decrease in CI and NCI). The proportions of pattern types also differ: 0V is increased and 2LV and 2ULV are reduced in the old group. These differences indicate a marked change in autonomic regulation. CE and SA are feasible techniques for detecting alterations in the autonomic control of heart rate in the old group.
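The symbolic-analysis step lends itself to a compact sketch. Following the usual recipe for the 0V/1V/2LV/2ULV families (uniform quantization into six levels is assumed here; the study's exact binning may differ):

    import numpy as np

    def symbolic_patterns(rr, n_levels=6):
        # Quantize the heart-period series, then classify 3-beat patterns.
        lo, hi = rr.min(), rr.max()
        sym = np.minimum(((rr - lo) / (hi - lo) * n_levels).astype(int),
                         n_levels - 1)
        counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
        for i in range(len(sym) - 2):
            d1, d2 = sym[i + 1] - sym[i], sym[i + 2] - sym[i + 1]
            if d1 == 0 and d2 == 0:
                counts["0V"] += 1        # no variation
            elif d1 == 0 or d2 == 0:
                counts["1V"] += 1        # exactly one variation
            elif d1 * d2 > 0:
                counts["2LV"] += 1       # two like (same-sign) variations
            else:
                counts["2ULV"] += 1      # two unlike variations
        return counts

    rr = np.random.normal(0.8, 0.05, 200)    # stand-in for a 200-beat series
    print(symbolic_patterns(rr))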
Abstract:
We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a nonequilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results [Gollo et al., PLoS Comput. Biol. 5, e1000402 (2009)] and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
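A toy discrete-time version of this model class (a Kinouchi-Copelli-style excitable tree; the rates, update rule and sizes here are assumptions for illustration, not the paper's exact model) shows the basic input-output measurement: drive every element with a Poisson-like stimulus and read the activity at the root.

    import random

    def simulate_tree(branching=2, layers=4, rate=0.02, p_transmit=0.8,
                      n_states=3, steps=5000, seed=0):
        # States: 0 quiescent, 1 active, 2..n_states-1 refractory.
        rng = random.Random(seed)
        parent, frontier = [None], [0]
        for _ in range(layers):
            nxt = []
            for p in frontier:
                for _ in range(branching):
                    parent.append(p)
                    nxt.append(len(parent) - 1)
            frontier = nxt
        n = len(parent)
        neighbors = [[] for _ in range(n)]
        for i, p in enumerate(parent):
            if p is not None:
                neighbors[i].append(p)
                neighbors[p].append(i)
        state, root_activity = [0] * n, 0
        for _ in range(steps):
            new = list(state)
            for i in range(n):
                if state[i] == 0:
                    excited = rng.random() < rate or any(
                        state[j] == 1 and rng.random() < p_transmit
                        for j in neighbors[i])
                    new[i] = 1 if excited else 0
                else:
                    new[i] = (state[i] + 1) % n_states
            root_activity += new[0] == 1
            state = new
        return root_activity / steps

    # Output rate grows slowly with input rate: the dynamic-range enhancement.
    print(simulate_tree(rate=0.001), simulate_tree(rate=0.1))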
Abstract:
Failure detection is at the core of most fault tolerance strategies, but it often depends on reliable communication. We present new failure-detector algorithms that are appropriate as components of a fault tolerance system deployed under adverse network conditions (such as loosely connected and loosely administered computing grids). The algorithms pack redundancy into heartbeat messages, thereby improving on the robustness of traditional protocols. Results from experimental tests conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
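One way to pack redundancy into heartbeats, sketched under assumptions (the message layout and window size are illustrative, not the paper's protocol): each heartbeat recaps the last K sequence numbers, so losing several consecutive messages does not by itself lose the information that earlier heartbeats were sent.

    import time

    K = 4                      # heartbeats recapped per message (assumed value)

    class HeartbeatSender:
        def __init__(self):
            self.seq, self.recent = 0, []
        def next_message(self):
            self.seq += 1
            self.recent = (self.recent + [self.seq])[-K:]
            return {"seqs": list(self.recent), "sent_at": time.time()}

    class HeartbeatReceiver:
        def __init__(self, timeout=2.0):
            self.highest, self.last_arrival = 0, time.time()
            self.timeout = timeout
        def on_message(self, msg):
            # Any one surviving message accounts for up to K heartbeats.
            self.highest = max(self.highest, *msg["seqs"])
            self.last_arrival = time.time()
        def suspected(self):
            return time.time() - self.last_arrival > self.timeout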
Abstract:
Current scientific applications produce large amounts of data, and the processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on those properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
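A minimal sketch of the prediction-driven optimization loop (the class below is hypothetical; a plain moving-average of access strides stands in for the paper's per-series analysis, classification and model selection):

    from collections import deque

    class AccessPredictor:
        def __init__(self, window=32):
            self.strides = deque(maxlen=window)   # recent access-stride series
            self.last = None
        def observe(self, offset):
            if self.last is not None:
                self.strides.append(offset - self.last)
            self.last = offset
        def predict_next(self):
            # Candidate offset for prefetching / replica-placement decisions.
            if not self.strides:
                return None
            return self.last + sum(self.strides) / len(self.strides)

    p = AccessPredictor()
    for off in (0, 4096, 8192, 12288):            # a sequential scan
        p.observe(off)
    print(p.predict_next())                        # ~16384: prefetch this next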