785 results for Distributed object
Abstract:
Graduate Program in Design - FAAC
Abstract:
"Cornstalk disease" is the name given to the cause or causes of death of cattle allowed to run in fields of standing cornstalks from which the ears have been gathered. It is probable that "many different maladies have been included under this name." In Nebraska, however, there is such a similarity in the symptoms reported by the farmers that it seems probable that the great majority of the losses attributed to cornstalk disease are really due to some common cause. As to the exact nature of this cause nothing is known. However, various theories have been advanced, and methods of prevention or treatment based upon these theories have been described.
Abstract:
Access control is a fundamental concern in any system that manages resources, e.g., operating systems, file systems, databases, and communications systems. The problem we address is how to specify, enforce, and implement access control in distributed environments. This problem occurs in many applications, such as management of distributed project resources, e-newspaper and pay-TV subscription services. Starting from an access relation between users and resources, we derive a user hierarchy, a resource hierarchy, and a unified hierarchy. The unified hierarchy is then used to specify the access relation in a way that is compact and that allows efficient queries. It is also used in cryptographic schemes that enforce the access relation. We introduce three specific cryptography-based hierarchical schemes, which can effectively enforce and implement access control and are designed for distributed environments because they do not need the presence of a central authority (except perhaps for setup).
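The hierarchy derivation described above can be sketched roughly as follows. The user names and resources are invented for illustration, and the transitive reduction and cryptographic key-assignment steps of the actual schemes are omitted.

```python
# Hypothetical sketch: deriving a user hierarchy from an access relation.
# Users with identical resource sets collapse into one class; a class sits
# below another when its resource set is a strict subset of the other's.
access = {
    "alice": {"news", "tv", "archive"},
    "bob":   {"news", "tv"},
    "carol": {"news"},
    "dave":  {"news", "tv"},
}

def derive_user_hierarchy(access):
    # Collapse users that can access exactly the same resources.
    classes = {}
    for user, resources in access.items():
        classes.setdefault(frozenset(resources), []).append(user)
    # An edge (a, b) means class a's rights strictly include class b's.
    edges = [(a, b) for a in classes for b in classes if b < a]
    return classes, edges

classes, edges = derive_user_hierarchy(access)
```

Specifying the relation over these classes instead of individual users is what makes the representation compact: a query "may user u access resource r" reduces to a reachability check in the hierarchy.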
Abstract:
Wireless LANs are growing rapidly, and security has always been a concern. We have implemented a hybrid system that not only detects active attacks, such as identity theft leading to denial-of-service attacks, but also detects the use of access-point discovery tools. The system responds in real time by sending an alert to the network administrator.
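A minimal sketch of the two detection ideas the abstract names, not the paper's actual implementation: discovery tools tend to emit bursts of probe requests, and identity theft (MAC spoofing) often shows up as implausible jumps in 802.11 frame sequence numbers. Thresholds and field names here are invented.

```python
# Illustrative hybrid detector: flags probe-request bursts (discovery tools)
# and sequence-number jumps that suggest a spoofed MAC address.
def analyze_frames(frames, probe_threshold=3, seq_jump=100):
    alerts = []
    probe_counts = {}
    last_seq = {}
    for f in frames:  # each frame: {"src": mac, "type": str, "seq": int}
        if f["type"] == "probe-request":
            probe_counts[f["src"]] = probe_counts.get(f["src"], 0) + 1
            if probe_counts[f["src"]] == probe_threshold:
                alerts.append(("discovery-tool", f["src"]))
        prev = last_seq.get(f["src"])
        if prev is not None and abs(f["seq"] - prev) > seq_jump:
            # A legitimate card increments seq numbers; a large jump hints
            # that a second device is transmitting under the same MAC.
            alerts.append(("possible-spoof", f["src"]))
        last_seq[f["src"]] = f["seq"]
    return alerts
```

In a real deployment the alert tuples would be forwarded to the administrator (e.g., by email or syslog) rather than returned.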
Abstract:
Centralized and distributed methods are the two connection management schemes in wavelength-convertible optical networks. Earlier work reported that the centralized scheme has a lower network blocking probability than the distributed one; hence, much previous work on connection management has compared different algorithms within only the distributed scheme or only the centralized scheme. However, we believe that the network blocking probability of these two connection management schemes depends, to a great extent, on network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of average connection interarrival time to reservation time. Once that ratio increases beyond a threshold, the two connection management schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the tradeoff between them, compare the two methods under different network traffic patterns via simulation, and draw conclusions based on the simulation data.
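The role of the interarrival-to-reservation ratio can be illustrated with a toy single-link simulation (this is not the paper's model; the wavelength count and traffic values are invented): requests arrive with exponential interarrival times and hold a wavelength for a fixed reservation time, and a request is blocked when all wavelengths are busy.

```python
import random

# Toy blocking-probability simulation for one link with W wavelengths.
def blocking_probability(mean_interarrival, reservation, wavelengths=4,
                         n_requests=20000, seed=1):
    rng = random.Random(seed)
    release_times = []          # when each busy wavelength frees up
    now, blocked = 0.0, 0
    for _ in range(n_requests):
        now += rng.expovariate(1.0 / mean_interarrival)
        # Drop wavelengths whose reservations have expired by now.
        release_times = [t for t in release_times if t > now]
        if len(release_times) >= wavelengths:
            blocked += 1        # all wavelengths busy: request blocked
        else:
            release_times.append(now + reservation)
    return blocked / n_requests
```

With a small interarrival/reservation ratio (heavy load) blocking is substantial; as the ratio grows, blocking falls toward zero, which is the regime where the abstract says the two management schemes become indistinguishable.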
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
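The differential-encoding idea can be sketched in a few lines (a simplification, not any specific scheme from the literature; the "SOAP" strings are abbreviated stand-ins): instead of sending a whole message, the sender transmits only an edit script against a reference message both sides already share.

```python
import difflib

# Both endpoints hold the same reference message; only the delta travels.
reference = "<Envelope><Body><getQuote><symbol>AAA</symbol></getQuote></Body></Envelope>"
message   = "<Envelope><Body><getQuote><symbol>XYZ</symbol></getQuote></Body></Envelope>"

def encode_diff(reference, message):
    sm = difflib.SequenceMatcher(a=reference, b=message)
    # Keep only the non-equal opcodes: which span of the reference to
    # replace/delete, and what new text (if any) goes in its place.
    return [(op, i1, i2, message[j1:j2])
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]

def decode_diff(reference, delta):
    out, pos = [], 0
    for op, i1, i2, repl in delta:
        out.append(reference[pos:i1])   # copy the unchanged prefix
        out.append(repl)                # splice in the changed text
        pos = i2
    out.append(reference[pos:])         # copy the unchanged suffix
    return "".join(out)

delta = encode_diff(reference, message)
```

For structurally similar messages the delta payload is a small fraction of the full message, which is exactly the redundancy the surveyed techniques exploit.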
Abstract:
We raise the possibility that the very dense, compact companion of PSR J1719-1438, which has a Jupiter-like mass, is an exotic quark object rather than a light helium or carbon white dwarf. The exotic hypothesis naturally explains some of the observed features, and provides quite strong predictions for this system, to be confirmed or refuted in feasible future studies.
Abstract:
Distributed Software Development (DSD) is a development strategy that meets globalization needs for increased productivity and cost reduction. However, temporal distance, geographical dispersion, and socio-cultural differences have introduced new challenges and, especially, new requirements related to the communication, coordination, and control of projects. Among these demands is the need for a software process that adequately supports distributed software development. This paper presents an integrated approach to software development and testing that considers the peculiarities of distributed teams. The purpose of the approach is to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing the ambiguity and difficulty of understanding artifacts and activities. This integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities related to development and test processes; (ii) defining the elements needed to compose an integrated development-and-test approach that supports distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach appropriately to enable its effective communication and understanding.
Abstract:
Background: Warfarin-dosing pharmacogenetic algorithms have presented different performances across ethnicities, and the impact in admixed populations is not fully known. Aims: To evaluate the CYP2C9 and VKORC1 polymorphisms and warfarin-predicted metabolic phenotypes according to both self-declared ethnicity and genetic ancestry in a Brazilian general population plus Amerindian groups. Methods: Two hundred twenty-two Amerindians (Tupinikin and Guarani) and 1038 individuals from the Brazilian general population, self-declared as White, Intermediate (Brown; Pardo in Portuguese), or Black, were enrolled. Samples of 274 Brazilian subjects from Sao Paulo were analyzed for genetic ancestry using an Affymetrix 6.0 (R) genotyping platform. The CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), and VKORC1 g.-1639G>A (rs9923231) polymorphisms were genotyped in all studied individuals. Results: The allelic frequency for the VKORC1 polymorphism was differently distributed according to self-declared ethnicity: White (50.5%), Intermediate (46.0%), Black (39.3%), Tupinikin (40.1%), and Guarani (37.3%) (p < 0.001), respectively. The frequency of intermediate plus poor metabolizers (IM + PM) was higher in White (28.3%) than in Intermediate (22.7%), Black (20.5%), Tupinikin (12.9%), and Guarani (5.3%) (p < 0.001). For the samples with determined ancestry, subjects carrying the GG genotype for VKORC1 had higher African ancestry and lower European ancestry (0.14 +/- 0.02 and 0.62 +/- 0.02) than subjects carrying AA (0.05 +/- 0.01 and 0.73 +/- 0.03) (p = 0.009 and 0.03, respectively). Subjects classified as IM + PM had lower African ancestry (0.08 +/- 0.01) than extensive metabolizers (0.12 +/- 0.01) (p = 0.02). Conclusions: The CYP2C9 and VKORC1 polymorphisms are differently distributed according to self-declared ethnicity or genetic ancestry in the Brazilian general population plus Amerindians.
This information is an initial step toward clinical pharmacogenetic implementation, and it could be very useful in strategic planning aiming at an individual therapeutic approach and an adverse drug effect profile prediction in an admixed population.
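The IM + PM grouping used above can be sketched with an activity-score-style mapping from CYP2C9 genotype to predicted phenotype. This is a simplified convention, not necessarily the classification used in the study, and cutoffs vary between guidelines: *1 is treated as the normal-function allele, *2 as reduced-function, and *3 as near-null.

```python
# Hedged sketch of a CYP2C9 activity-score classification (simplified;
# real clinical guidelines may use different scores and cutoffs).
ACTIVITY = {"*1": 1.0, "*2": 0.5, "*3": 0.0}

def cyp2c9_phenotype(allele1, allele2):
    score = ACTIVITY[allele1] + ACTIVITY[allele2]
    if score == 2.0:
        return "extensive"      # EM: two normal-function alleles
    if score >= 1.0:
        return "intermediate"   # IM: partially reduced metabolism
    return "poor"               # PM: strongly reduced metabolism
```

Under this convention, the IM + PM frequencies reported per group are simply the fraction of genotypes scoring below 2.0.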
Abstract:
The installation of induction distributed generators should be preceded by a careful study to determine whether the point of common coupling is suitable for transmission of the generated power while maintaining acceptable power quality and system stability. To this end, this paper presents a simple analytical formulation that allows a fast and comprehensive evaluation of the maximum power delivered by the induction generator without loss of voltage stability. Moreover, this formulation can be used to identify voltage stability issues that limit the generator output power. The entire formulation is developed using the equivalent circuit of the squirrel-cage induction machine. Simulation results validate the method, enabling the approach to be used as a guide to reduce the simulation effort necessary to assess the maximum output power and voltage stability of induction generators. (C) 2011 Elsevier Ltd. All rights reserved.
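The standard per-phase equivalent circuit mentioned above lends itself to a quick numerical sketch (not the paper's formulation; all parameter values below are invented, in per-unit): sweeping the slip in the generating region and evaluating the terminal power gives the machine's maximum deliverable power.

```python
# Per-phase equivalent circuit of a squirrel-cage induction machine:
# stator (R1 + jX1) in series with [magnetizing jXm parallel to
# rotor (R2/s + jX2)]. Values are illustrative per-unit figures.
R1, X1 = 0.02, 0.06   # stator resistance and leakage reactance
R2, X2 = 0.03, 0.06   # rotor resistance and leakage reactance (referred)
Xm = 2.5              # magnetizing reactance
V = 1.0               # terminal voltage

def terminal_power(slip):
    z_rotor = complex(R2 / slip, X2)
    z_par = (z_rotor * complex(0, Xm)) / (z_rotor + complex(0, Xm))
    z_total = complex(R1, X1) + z_par
    i1 = V / z_total
    # Real power flowing INTO the machine; negative means generation.
    return (V * i1.conjugate()).real

# Sweep negative slips (generator operation) for the maximum delivered power.
slips = [-s / 1000 for s in range(1, 300)]
p_out = [-terminal_power(s) for s in slips]   # delivered power, positive
p_max = max(p_out)
```

Past the slip at which `p_out` peaks, delivered power falls while current keeps rising, which is the voltage-stability limit the analytical formulation is meant to expose without repeated time-domain simulations.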
Abstract:
Failure detection is at the core of most fault-tolerance strategies, but it often depends on reliable communication. We present new algorithms for failure detectors that are appropriate as components of a fault-tolerance system deployable under adverse network conditions (such as loosely connected and loosely administered computing grids). These detectors pack redundancy into heartbeat messages, thereby improving on the robustness of traditional protocols. Results from experimental tests conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
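One way such redundancy can work, sketched here with invented details rather than the paper's actual protocol: each heartbeat piggybacks the last few sequence numbers, so a single delivered message compensates for several lost ones on a lossy link.

```python
# Each heartbeat repeats the previous K-1 sequence numbers, so the receiver
# can fill gaps left by dropped messages without extra round trips.
K = 3

def make_heartbeat(seq):
    return list(range(max(1, seq - K + 1), seq + 1))

class Receiver:
    def __init__(self):
        self.seen = set()

    def deliver(self, heartbeat):
        self.seen.update(heartbeat)

    def missing(self, upto):
        # Sequence numbers not yet accounted for up to `upto`.
        return [s for s in range(1, upto + 1) if s not in self.seen]

r = Receiver()
# Heartbeats 1 and 2 are lost in transit; only heartbeat 3 arrives.
r.deliver(make_heartbeat(3))
```

Even though two messages were dropped, the receiver has no gaps, so it avoids falsely suspecting the monitored process.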
Abstract:
Current scientific applications have been producing large amounts of data. The processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have been considering techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are, later on, used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
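The classify-then-predict pipeline can be illustrated with a deliberately minimal sketch (the paper's classifiers and models are certainly richer; the threshold and predictors here are invented): estimate the trend of an observed access-volume series, label it, and pick a matching predictor for the next value.

```python
# Classify a series by its least-squares slope, then predict the next value
# with a model chosen to match that classification.
def classify(series):
    n = len(series)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    return ("trending", slope) if abs(slope) > 0.1 else ("stationary", slope)

def predict_next(series):
    kind, slope = classify(series)
    if kind == "trending":
        return series[-1] + slope          # linear extrapolation
    return sum(series) / len(series)       # mean predictor for flat series
```

The prediction would then drive decisions such as prefetching or replicating the data an application is about to access.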
Abstract:
There is currently a strong interest in mirrorless lasing systems(1), in which the electromagnetic feedback is provided either by disorder (multiple scattering in the gain medium) or by order (multiple Bragg reflection). These mechanisms correspond, respectively, to random lasers(2) and photonic crystal lasers(3). The crossover regime between order and disorder, or correlated disorder, has also been investigated with some success(4-6). Here, we report one-dimensional photonic-crystal lasing (that is, distributed feedback lasing(7,8)) with a cold atom cloud that simultaneously provides both gain and feedback. The atoms are trapped in a one-dimensional lattice, producing a density modulation that creates a strong Bragg reflection with a small angle of incidence. Pumping the atoms with auxiliary beams induces four-wave mixing, which provides parametric gain. The combination of both ingredients generates a mirrorless parametric oscillation with a conical output emission, the apex angle of which is tunable with the lattice periodicity.
Abstract:
Synchronous distributed generators are prone to operating islanded after contingencies, which is usually not allowed due to safety and power-quality issues. Thus, there are several anti-islanding techniques; however, most of them present technical limitations, so they are likely to fail in certain situations. Therefore, it is important to quantify and determine whether the scheme under study is adequate. In this context, this paper proposes an index to evaluate the effectiveness of the anti-islanding frequency-based relays commonly used to protect synchronous distributed generators. The method is based on the calculation of a numerical index indicating the fraction of the global analysis period during which the system is unprotected against islanding. Although this index can be precisely calculated from several electromagnetic transient simulations, a practical method is also proposed to calculate it directly from simple analytical formulas or lookup tables. The results have shown that the proposed approach can assist distribution engineers in assessing and setting anti-islanding protection schemes.
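The index idea reduces to a simple ratio, sketched below with invented interval values: the fraction of the analysis period during which an islanding event would go undetected by the frequency-based relay.

```python
# Fraction of the analysis period left unprotected against islanding.
def unprotected_index(unprotected_intervals, total_period):
    # Each interval is (start, end) within the analysis period, in seconds.
    uncovered = sum(end - start for start, end in unprotected_intervals)
    return uncovered / total_period

# E.g., two windows of a daily load profile where local generation nearly
# matches load, so the frequency deviation stays below the relay setting:
index = unprotected_index([(3600, 5400), (72000, 75600)], 86400)
```

An index near zero means the relay settings detect islanding across almost all operating conditions; a large index signals that the scheme needs retuning or complementary protection.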
Abstract:
We investigated the effects of texture gradient and of the position of the test stimulus relative to the horizon on the perception of relative size. Using the staircase method, 50 participants adjusted the size of a bar presented above, below, or on the horizon until it was perceived as the same size as a bar presented in the lower visual field. Stimuli were presented for 100 ms on five background conditions. The perspective gradient contributed more to the overestimation of relative size than the compression gradient. The sizes of objects that intercepted the horizon line were overestimated. The visual system was very effective at extracting information from perspective depth cues, doing so even during very brief exposures.