791 results for "Tolerant computing"
Abstract:
In this article, the authors investigate, from an interdisciplinary perspective, possible ethical implications of the presence of ubiquitous computing systems in human perception/action. The term ubiquitous computing is used to characterize information-processing capacity from computers that are available everywhere and all the time, integrated into everyday objects and activities. The contrast between traditional considerations of ethical issues and the Ecological Philosophy view of the possible consequences of ubiquitous computing in the context of perception/action is the underlying theme of this paper. The focus is on an analysis of how the generalized dissemination of microprocessors in embedded systems, commanded by a ubiquitous computing system, can affect the behaviour of people considered as embodied embedded agents.
Abstract:
Changes in protein content, peroxidase activity, and isozyme profiles in response to soybean aphid feeding were documented at V1 (fully developed leaves at unifoliate node, first trifoliate leaf unrolled) and V3 (fully developed leaf at second trifoliate node, third trifoliate leaf unrolled) stages of soybean aphid-tolerant (KS4202) and -susceptible (SD76R) soybeans. Protein content was similar between infested and control V1 and V3 stage plants for both KS4202 and SD76R at 6, 16, and 22 d after aphid introduction. Enzyme kinetics studies documented that control and aphid-infested KS4202 V1 stage and SD76R V1 and V3 stages had similar levels of peroxidase activity at the three time points evaluated. In contrast, KS4202 aphid-infested plants at the V3 stage had significantly higher peroxidase activity levels than control plants at 6 and 22 d after aphid introduction. The differences in peroxidase activity observed between infested and control V3 stage KS4202 plants at these two time points suggest that peroxidases may be playing multiple roles in the tolerant plant. Native gels stained for peroxidase were able to detect differences in the isozyme profiles of aphid-infested and control plants for both KS4202 and SD76R.
Abstract:
Technologies are developing rapidly, but some of those present in computers, such as their processing capacity, are reaching their physical limits. Quantum computation is expected to offer solutions to these limitations and to the issues that may arise. In the field of information security, encryption is of paramount importance, which motivates the development of quantum methods in place of classical ones, given the computational power offered by quantum computing. In the quantum world, physical states can be interrelated, giving rise to the phenomenon called entanglement. This study presents both a theoretical essay on quantum mechanics, computing, information, cryptography and quantum entropy, and some simulations, implemented in the C language, of the effects of the entropy of entanglement of photons on a data transmission, using the Von Neumann entropy and the Tsallis entropy.
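The two entropy measures named in the abstract have standard closed forms over the eigenvalues of the reduced density matrix. The study implemented its simulations in C; below is a minimal Python sketch of just the two formulas, evaluated on the known Schmidt spectrum {1/2, 1/2} of a maximally entangled two-photon Bell state (the full photon-transmission simulation is not reproduced here).

```python
import math

def von_neumann_entropy(probs):
    """S = -sum(p * log2 p) over the nonzero eigenvalues of the reduced state."""
    return -sum(p * math.log2(p) for p in probs if p > 1e-12)

def tsallis_entropy(probs, q):
    """S_q = (1 - sum(p**q)) / (q - 1); recovers the Von Neumann form as q -> 1."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# For the Bell state |phi+> = (|00> + |11>)/sqrt(2), tracing out one photon
# leaves a maximally mixed qubit with eigenvalues {1/2, 1/2}.
schmidt = [0.5, 0.5]
print(von_neumann_entropy(schmidt))      # 1.0 (one ebit: maximal entanglement)
print(tsallis_entropy(schmidt, q=2.0))   # 0.5
```

A separable state has spectrum {1, 0}, giving zero for both measures, so either quantity can serve as the entanglement signal in a transmission simulation.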
Abstract:
The physics of plasmas encompasses basic problems of the universe and promises diverse applications across a wide range of scientific and engineering domains, linked to many fundamental problems, both settled and still evolving. A substantial part of this domain can be described by reaction-diffusion (R-D) mechanisms involving two or more species, which can further account for the simultaneous non-linear effects of heating, diffusion and other related losses. In laboratory-scale experiments, a suitable combination of these processes is of vital importance and largely decisive for investigating and computing the net behaviour of the plasmas under consideration. Because plasmas are being used in the revolution of information processing, this technical note considers a simple framework to discuss and pave the way for better formalisms and informatics across diverse domains of science and technology. A challenging and fascinating aspect of plasma physics is that it requires a great deal of insight in formulating the relevant design problems, which in turn requires ingenuity and flexibility in choosing a particular set of mathematical (and/or experimental) tools to implement them.
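A generic two-species reaction-diffusion step can make the note's framework concrete. The sketch below is a plain explicit-Euler discretization of u_t = D_u u_xx + f(u, v), v_t = D_v v_xx + g(u, v) on a 1D grid; the particular kinetics f and g are illustrative assumptions, not taken from the note.

```python
import math

def rd_step(u, v, du, dv, f, g, dx, dt):
    """One explicit Euler step of a two-species reaction-diffusion system
    with zero-flux (reflecting) boundaries on a 1D grid."""
    n = len(u)
    lap = lambda w, i: (w[max(i - 1, 0)] - 2 * w[i] + w[min(i + 1, n - 1)]) / dx**2
    u_new = [u[i] + dt * (du * lap(u, i) + f(u[i], v[i])) for i in range(n)]
    v_new = [v[i] + dt * (dv * lap(v, i) + g(u[i], v[i])) for i in range(n)]
    return u_new, v_new

# Illustrative kinetics (assumed): logistic growth of species u, consumed by v.
f = lambda u, v: u * (1 - u) - u * v
g = lambda u, v: u * v - 0.5 * v

u = [1.0 if i == 10 else 0.0 for i in range(21)]   # localized initial pulse
v = [0.1] * 21
for _ in range(100):
    u, v = rd_step(u, v, du=0.1, dv=0.05, f=f, g=g, dx=1.0, dt=0.1)
print(max(u), max(v))
```

The explicit scheme is stable here because dt * D / dx^2 is well below 1/2; the heating and loss terms mentioned in the note would enter as additional source terms in f and g.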
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
The exposure to unethical and unprofessional behavior is thought to play a major role in the declining empathy experienced by medical students during their training. We reflect on the reasons why medical schools are tolerant of unethical behavior of faculty. First, there are barriers to reporting unprofessional behavior within medical schools including fear of retaliation and lack of mechanisms to ensure anonymity. Second, deans and directors do not want to look for unethical behavior in their colleagues. Third, most of us have learned to take disrespectful circumstances in health care institutions for granted. Fourth, the accreditation of medical schools around the world does not usually cover the processes or outcomes associated with fostering ethical behavior in students. Several initiatives promise to change that picture.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned, multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration; however, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in the network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks of every job submitted to the network. Four heuristics (FIFO, ESTF, LJF, RS) are implemented for job scheduling of the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
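The effect of the queue-ordering heuristic can be illustrated without the optical-network dimension. The sketch below is a simplified greedy list scheduler: the heuristic names come from the abstract, but their exact definitions (and the Lambda Grid model, deadlines, and data-transfer costs) are assumptions for illustration; ESTF is approximated here by ordering on the latest feasible start time.

```python
import heapq

def schedule(jobs, n_nodes, order):
    """Greedy list scheduling: order the job queue with a heuristic, place each
    job on the earliest-free node, and accept it only if it can finish by its
    deadline. jobs: list of (submit_order, duration, deadline). Returns the
    number of accepted jobs (the quantity the paper maximizes)."""
    keys = {
        "FIFO": lambda j: j[0],         # first in, first out
        "LJF":  lambda j: -j[1],        # longest job first
        "ESTF": lambda j: j[2] - j[1],  # most urgent (smallest slack) first
    }
    free = [0.0] * n_nodes              # min-heap of node free times
    heapq.heapify(free)
    accepted = 0
    for _, dur, deadline in sorted(jobs, key=keys[order]):
        t = heapq.heappop(free)
        if t + dur <= deadline:         # accept: occupy the node until t + dur
            heapq.heappush(free, t + dur)
            accepted += 1
        else:                           # reject: the node stays free at time t
            heapq.heappush(free, t)
    return accepted

jobs = [(0, 5, 6), (1, 2, 3), (2, 9, 20), (3, 1, 4), (4, 4, 12)]
for h in ("FIFO", "LJF", "ESTF"):
    print(h, schedule(jobs, n_nodes=2, order=h))   # FIFO 4, LJF 3, ESTF 5
```

Even on this toy instance the ordering matters: the slack-based ordering admits all five jobs, FIFO drops one, and longest-job-first starves the short, urgent jobs.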
Abstract:
Establishing a fault-tolerant connection in a network involves the computation of diverse working and protection paths. The Shared Risk Link Group (SRLG) [1] concept is used to model several types of failure conditions, such as link, node, and fiber-conduit failures. In this work we focus on the problem of computing optimal SRLG/link-diverse paths under shared protection. Shared protection improves network resource utilization by allowing the protection paths of multiple connections to share resources. We propose an iterative heuristic for computing SRLG/link-diverse paths and present a method to calculate a quantitative measure that provides a bounded guarantee on the optimality of the diverse paths computed by the heuristic. Experimental results on computing link-diverse paths show that our heuristic is efficient in terms of the number of iterations required (time taken) to compute diverse paths compared to previously proposed heuristics.
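The core subproblem can be sketched with a simple two-step pass: route a working path, then ban every link that shares a risk group with it and search for a protection path. This is only the basic building block of such heuristics; the paper's actual iterative scheme and its optimality bound are not reproduced. The graph and SRLG assignments below are invented for illustration.

```python
import heapq

def shortest_path(adj, src, dst, banned=frozenset()):
    """Dijkstra over {u: {v: cost}}, skipping banned (undirected) links."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u].items():
            if (u, v) not in banned and (v, u) not in banned and v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return None

def srlg_diverse_pair(adj, srlg, src, dst):
    """Two-step sketch: route a working path, then ban all links sharing a
    risk group with it and search for an SRLG-diverse protection path."""
    work = shortest_path(adj, src, dst)
    if work is None:
        return None
    wp = work[1]
    wlinks = set(zip(wp, wp[1:]))
    groups = set()
    for l in wlinks:
        groups |= srlg.get(l, srlg.get(l[::-1], set()))
    banned = {l for l, gs in srlg.items() if gs & groups} | wlinks
    prot = shortest_path(adj, src, dst, frozenset(banned))
    return (wp, prot[1]) if prot else None

adj = {"A": {"B": 1, "C": 2}, "B": {"A": 1, "D": 1},
       "C": {"A": 2, "D": 2}, "D": {"B": 1, "C": 2}}
# Links A-B and B-D share conduit (risk group) 1; the C-side links do not.
srlg = {("A", "B"): {1}, ("B", "D"): {1}, ("A", "C"): {2}, ("C", "D"): {3}}
print(srlg_diverse_pair(adj, srlg, "A", "D"))
```

The known weakness of this two-step approach is the "trap" topology, where the cheapest working path leaves no feasible protection path; that is precisely why the paper iterates over candidate working paths instead of stopping after one.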
Abstract:
One of the important issues in establishing a fault-tolerant connection in a wavelength division multiplexing optical network is computing a pair of disjoint working and protection paths together with a free wavelength along those paths. While most earlier research focused only on computing disjoint paths, in this work we consider computing both the disjoint paths and a free wavelength along them. The concept of a dependent cost structure (DCS) for protection paths, which enhances their resource-sharing ability, was proposed in our earlier work. Here we extend the concept of DCS to wavelength-continuous networks. We formalize the problem of computing disjoint paths with DCS in wavelength-continuous networks and prove that it is NP-complete. We present an iterative heuristic that uses a layered graph model to compute disjoint paths with DCS and identify a free wavelength.
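The layered-graph idea itself is simple to show: replicate the topology once per wavelength and delete links whose wavelength is occupied; any path found inside layer w is automatically wavelength-continuous on w. The sketch below illustrates only this model for a single route (the paper's heuristic additionally computes disjoint pairs under DCS, which is omitted); the topology and busy set are invented.

```python
from collections import deque

def route_with_wavelength(adj, n_wavelengths, busy, src, dst):
    """Layered-graph search: BFS inside each wavelength layer, skipping
    link-wavelengths already in use. busy: set of (u, v, w) triples.
    Returns (wavelength, path) or None if no layer admits a route."""
    for w in range(n_wavelengths):           # one topology copy per wavelength
        q, prev = deque([src]), {src: None}
        while q:
            u = q.popleft()
            if u == dst:                     # reconstruct the path in layer w
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return w, path[::-1]
            for v in adj[u]:
                if v not in prev and (u, v, w) not in busy and (v, u, w) not in busy:
                    prev[v] = u
                    q.append(v)
    return None

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
busy = {("A", "B", 0), ("C", "D", 0)}        # wavelength 0 partly occupied
print(route_with_wavelength(adj, n_wavelengths=2, busy=busy, src="A", dst="D"))
```

Here wavelength 0 is blocked on both of A's usable continuations, so the search falls through to layer 1 and returns a route there, demonstrating how the layered model enforces the continuity constraint for free.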
Abstract:
This paper proposes an evolutionary computing strategy to solve the problem of fault indicator (FI) placement in primary distribution feeders. More specifically, a genetic algorithm (GA) is employed to search for an efficient configuration of FIs, located at the best positions on the main feeder of a real-life distribution system. Thus, the problem is modeled as one of optimization, aimed at improving the distribution reliability indices, while, at the same time, finding the least expensive solution. Based on actual data, the results confirm the efficiency of the GA approach to the FI placement problem.
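A minimal GA of the kind described can be sketched as follows. Everything below is an illustrative assumption: the bitstring encoding (bit i = indicator at feeder position i), tournament selection, and the toy coverage-versus-cost fitness stand in for the paper's actual reliability indices and cost model, which the abstract does not detail.

```python
import random

def ga_place_fis(n_positions, fitness, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm over bitstrings: tournament selection,
    one-point crossover, per-bit mutation. Returns the best individual found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_positions)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_positions)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < 0.02) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness (assumed): an indicator at position i lets crews localize faults
# on sections i and i+1; reward covered sections, penalize installation cost.
def fitness(bits):
    covered = {j for i, b in enumerate(bits) if b for j in (i, i + 1)}
    return 3.0 * len(covered) - 2.0 * sum(bits)

best = ga_place_fis(10, fitness)
print(best, fitness(best))
```

The tension the paper describes lives entirely in the fitness function: coverage pushes toward more indicators while cost pushes toward fewer, so the GA converges on a sparse placement rather than saturating the feeder.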
Abstract:
Reasoning and change over inconsistent knowledge bases (KBs) is of utmost relevance in areas like medicine and law. Argumentation may bring the possibility to cope with both problems. Firstly, by constructing an argumentation framework (AF) from the inconsistent KB, we can decide whether to accept or reject a certain claim through the interplay among arguments and counterarguments. Secondly, by handling the dynamics of the arguments of the AF, we might deal with the dynamics of knowledge of the underlying inconsistent KB. The dynamics of arguments has recently attracted attention, and although some approaches have been proposed, a full axiomatization within the theory of belief revision was still missing. A revision arises when we want the argumentation semantics to accept an argument. Argument Theory Change (ATC) encloses the revision operators that modify the AF by analyzing dialectical trees (arguments as nodes and attacks as edges) under the adopted argumentation semantics. In this article, we present a simple approach to ATC based on propositional KBs. This makes it possible to manage change of inconsistent KBs by relying upon classical belief revision, although, contrary to classical belief revision, consistency restoration of the KB is avoided. Subsequently, a set of rationality postulates adapted to argumentation is given, and finally, the proposed model of change is related to the postulates through the corresponding representation theorem. Though we focus on propositional logic, the results can easily be extended to more expressive formalisms such as first-order logic and description logics, to handle the evolution of ontologies.
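The acceptance step the abstract describes (deciding a claim through the interplay of arguments and counterarguments) can be illustrated with the standard grounded semantics of abstract argumentation. This is a textbook computation, not the article's ATC machinery (which works over dialectical trees and revision operators); the tiny framework below is invented for illustration.

```python
def grounded_extension(args, attacks):
    """Grounded semantics as the least fixed point of the characteristic
    function: start from the empty set and repeatedly keep every argument
    all of whose attackers are counter-attacked by the current set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in ext)
                           for b in attackers[a])}
        if defended == ext:
            return ext
        ext = defended

# Toy framework: a attacks b, b attacks c.  a is unattacked, so a is in;
# a defeats b, which reinstates c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, attacks)))   # ['a', 'c']
```

A revision in the ATC sense would then modify the attack relation (or add arguments) so that a desired argument lands inside the computed extension.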
Abstract:
Sewage sludge has been used to fertilize coffee, increasing the risk of metal contamination in this crop. The aim of this work was to study the effects of Cd, Zn and Ni on adult coffee plants growing under field conditions. Seven-year-old coffee plants growing in the field received one of three doses of Cd, Zn or Ni: 15, 45 and 90 g Cd plant(-1); 35, 105 and 210 g Ni plant(-1); and 100, 300 and 600 g Zn plant(-1), with all three metals in the form of sulphate salts. After three months, we observed good penetration of the three metals into the soil, especially in the first 50 cm, which is the region where most coffee plant roots are concentrated. Leaf concentrations of K, Ca, Mg, S, B, Cu, Fe and Mn were not affected. N levels did not change with the application of Ni or Zn but were reduced with either 45 or 90 g Cd plant(-1). Foliar P concentrations decreased with the addition of 45 and 90 g Cd plant(-1) and 600 g Zn plant(-1). Zn levels in leaves were not affected by the application of Cd or Ni. The highest concentrations of Zn were found in branches (30-230 mg kg(-1)), leaves (7-35 mg kg(-1)) and beans (4-6.5 mg kg(-1)); Ni was found in leaves (4-45 mg kg(-1)), branches (3-18 mg kg(-1)) and beans (1-5 mg kg(-1)); and Cd was found in branches (0-6.2 mg kg(-1)) and beans (0-1.5 mg kg(-1)) but was absent in leaves. The mean yield of two harvests was not affected by Ni, but it decreased at the highest dose of Zn (600 g plant(-1)) and the two higher doses of Cd (45 and 90 g plant(-1)). Plants died when treated with the highest dose of Cd and showed symptoms of toxicity at the highest dose of Zn. Nevertheless, based on the amounts of metal used and the results obtained, we conclude that coffee plants are highly tolerant to the three metals tested. Moreover, even at high doses, there was very little transport to the beans, which are the part consumed by humans. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
The development of cloud computing services is speeding up the rate at which organizations outsource their computational services or sell their idle computational resources. Even though migrating to the cloud remains a tempting trend from a financial perspective, several other aspects must be taken into account by companies before they decide to do so. One of the most important aspects is security: while some cloud computing security issues are inherited from the solutions adopted to create such services, many new security questions particular to these solutions also arise, including those related to how the services are organized and which kinds of services/data can be placed in the cloud. Aiming to give a better understanding of this complex scenario, in this article we identify and classify the main security concerns and solutions in cloud computing, and propose a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology.