864 results for Support systems
Abstract:
In some countries, photovoltaic (PV) technology is at a stage of development at which it can compete with conventional electricity sources in terms of electricity generation costs, i.e., grid parity. A case in point is Germany, where the PV market has reached a mature stage, policy support has been scaled down and the diffusion rate of PV systems has declined. This development raises a fundamental question: what are the motives to adopt PV systems at grid parity? The relevant literature has focused on the impact of policy support, of adopters and, more recently, of local solar companies; less attention has been paid to the motivators for adoption at grid parity. This paper presents an in-depth analysis of the diffusion of PV systems, explaining the impact of policy measures, adopters and system suppliers. Anchored in an extensive and exploratory case study in Germany, we provide a context-specific explanation of the motivations to adopt PV systems at grid parity.
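Grid parity, as used above, is commonly understood as the point where the levelized cost of electricity (LCOE) from PV falls to or below the retail electricity price. The sketch below illustrates that comparison; the function and all numbers (system cost, yield, tariff, discount rate) are hypothetical illustrations, not figures from the paper:

```python
# Illustrative grid-parity check: PV is at grid parity when its levelized
# cost of electricity (LCOE) is at or below the retail electricity price.
# All figures are assumed for illustration only.

def lcoe(capex, opex_per_year, annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity (currency per kWh): discounted
    lifetime costs divided by discounted lifetime energy output."""
    costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical 5 kWp rooftop system under German-like conditions.
pv_lcoe = lcoe(capex=7000.0, opex_per_year=100.0,
               annual_kwh=4750.0, lifetime_years=25, discount_rate=0.03)
retail_price = 0.30  # EUR/kWh, assumed retail tariff

print(f"LCOE: {pv_lcoe:.3f} EUR/kWh")
print("At or beyond grid parity" if pv_lcoe <= retail_price
      else "Not yet at grid parity")
```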
Abstract:
Funding: A Health Systems Research Initiative Development Grant from the UK Department for International Development (DFID), Economic and Social Research Council (ESRC), Medical Research Council (MRC), and the Wellcome Trust (MR/N005597/1) funds the research presented in this paper. Support for the Agincourt HDSS including verbal autopsies was provided by The Wellcome Trust, UK (grants 058893/Z/99/A; 069683/Z/02/Z; 085477/Z/08/Z; 085477/B/08/Z), and the University of the Witwatersrand and Medical Research Council, South Africa.
Abstract:
Inteins are protein-splicing elements, most of which contain conserved sequence blocks that define a family of homing endonucleases. Like group I introns that encode such endonucleases, inteins are mobile genetic elements. Recent crystallography and computer modeling studies suggest that inteins consist of two structural domains that correspond to the endonuclease and the protein-splicing elements. To determine whether the bipartite structure of inteins is mirrored by the functional independence of the protein-splicing domain, the entire endonuclease component was deleted from the Mycobacterium tuberculosis recA intein. Guided by computer modeling studies, and taking advantage of genetic systems designed to monitor intein function, the 440-aa Mtu recA intein was reduced to a functional mini-intein of 137 aa. The accuracy of splicing of several mini-inteins was verified. This work not only substantiates structure predictions for intein function but also supports the hypothesis that, like group I introns, mobile inteins arose by an endonuclease gene invading a sequence encoding a small, functional splicing element.
Abstract:
To “control” a system is to make it behave (hopefully) according to our “wishes,” in a way compatible with safety and ethics, at the least possible cost. The systems considered here are distributed—i.e., governed (modeled) by partial differential equations (PDEs) of evolution. Our “wish” is to drive the system in a given time, by an adequate choice of the controls, from a given initial state to a given final state, the target. If this can be achieved (respectively, if we can reach any “neighborhood” of the target), the system, with the controls at our disposal, is exactly (respectively, approximately) controllable. A very general (and fuzzy) idea is that the more “unstable” (chaotic, turbulent) a system is, the “simpler,” or the “cheaper,” it is to achieve exact or approximate controllability. When the PDEs are the Navier–Stokes equations, this idea leads to conjectures, which are presented and explained. Recent results, reported in this expository paper, essentially prove the conjectures in two space dimensions. In three space dimensions, a large number of new questions arise; some new results, such as generic controllability and cases where the cost of control decreases as instability increases, support (without proving) the conjectures. Short comments are made on models arising in climatology, thermoelasticity, non-Newtonian fluids, and molecular chemistry. The introduction of the paper and the first part of each section are not technical. Many open questions are mentioned in the text.
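The notions of exact and approximate controllability invoked above can be stated for an abstract evolution system in their standard form (generic notation, not tied to the paper's specific Navier–Stokes setting):

```latex
% Let y(t; v) denote the state at time t under control v, with initial
% state y^0 and target state y^T, over the time horizon [0, T].
% Exact controllability: some admissible control reaches the target exactly,
\exists\, v :\quad y(T; v) = y^T .
% Approximate controllability: every neighborhood of the target is reachable,
\forall\, \varepsilon > 0,\ \exists\, v_\varepsilon :\quad
\bigl\| y(T; v_\varepsilon) - y^T \bigr\| \leq \varepsilon .
```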
Abstract:
There is extensive evidence that the amygdala is involved in affectively influenced memory. The central hypothesis guiding the research reviewed in this paper is that emotional arousal activates the amygdala and that such activation results in the modulation of memory storage occurring in other brain regions. Several lines of evidence support this view. First, the effects of stress-related hormones (epinephrine and glucocorticoids) are mediated by influences involving the amygdala. In rats, lesions of the amygdala and the stria terminalis block the effects of posttraining administration of epinephrine and glucocorticoids on memory. Furthermore, memory is enhanced by posttraining intra-amygdala infusions of drugs that activate β-adrenergic and glucocorticoid receptors. Additionally, infusion of β-adrenergic blockers into the amygdala blocks the memory-modulating effects of epinephrine and glucocorticoids, as well as those of drugs affecting opiate and GABAergic systems. Second, an intact amygdala is not required for expression of retention. Inactivation of the amygdala prior to retention testing (by posttraining lesions or drug infusions) does not block retention performance. Third, findings of studies using human subjects are consistent with those of animal experiments. β-Blockers and amygdala lesions attenuate the effects of emotional arousal on memory. Additionally, 3-week recall of emotional material is highly correlated with positron-emission tomography activation (cerebral glucose metabolism) of the right amygdala during encoding. These findings provide strong evidence supporting the hypothesis that the amygdala is involved in modulating long-term memory storage.
Abstract:
The experiments reported here were designed to test the hypothesis that the two-electron quinone reductase DT-diaphorase [NAD(P)H:(quinone-acceptor) oxidoreductase, EC 1.6.99.2] functions to maintain membrane-bound coenzyme Q (CoQ) in its reduced antioxidant state, thereby providing protection from free radical damage. DT-diaphorase was isolated and purified from rat liver cytosol, and its ability to reduce several CoQ homologs incorporated into large unilamellar vesicles was demonstrated. Addition of NADH and DT-diaphorase to either large unilamellar or multilamellar vesicles containing homologs of CoQ, including CoQ9 and CoQ10, resulted in the essentially complete reduction of the CoQ. The ability of DT-diaphorase to maintain the reduced state of CoQ and protect membrane components from free radical damage as lipid peroxidation was tested by incorporating either reduced CoQ9 or CoQ10 and the lipophilic azoinitiator 2,2'-azobis(2,4-dimethylvaleronitrile) into multilamellar vesicles in the presence of NADH and DT-diaphorase. The presence of DT-diaphorase prevented the oxidation of reduced CoQ and inhibited lipid peroxidation. The interaction between DT-diaphorase and CoQ was also demonstrated in an isolated rat liver hepatocyte system. Incubation with adriamycin resulted in mitochondrial membrane damage as measured by membrane potential and the release of hydrogen peroxide. Incorporation of CoQ10 provided protection from adriamycin-induced mitochondrial membrane damage. The incorporation of dicoumarol, a potent inhibitor of DT-diaphorase, interfered with the protection provided by CoQ. The results of these experiments provide support for the hypothesis that DT-diaphorase functions as an antioxidant in both artificial membrane and natural membrane systems by acting as a two-electron CoQ reductase that forms and maintains the antioxidant form of CoQ. The suggestion is offered that DT-diaphorase was selected during evolution to perform this role and that its conversion of xenobiotics and other synthetic molecules is secondary and coincidental.
Abstract:
The deployment of systems for human-to-machine communication by voice requires overcoming a variety of obstacles that affect the speech-processing technologies. Problems encountered in the field might include variation in speaking style, acoustic noise, ambiguity of language, or confusion on the part of the speaker. The diversity of these practical problems encountered in the "real world" leads to the perceived gap between laboratory and "real-world" performance. To answer the question "What applications can speech technology support today?" the concept of the "degree of difficulty" of an application is introduced. The degree of difficulty depends not only on the demands placed on the speech recognition and speech synthesis technologies but also on the expectations of the user of the system. Experience has shown that deployment of effective speech communication systems requires an iterative process. This paper discusses general deployment principles, which are illustrated by several examples of human-machine communication systems.
Abstract:
The very purpose of a recruiting software program is to help the management of organizations, primarily the HR department, to keep track of job applications. An applicant tracking system can reduce an organization's overall recruitment cost, increase productivity, and raise the level of satisfaction due to faster and better completion of transactions and services. This project analyzes four software providers to discover an applicant tracking system that best suits an organization's recruiting needs. The capstone also highlights the great success an organization can achieve by significantly improving the delivery of its recruiting services to employees, managers and applicants. The adoption of a well-managed applicant tracking system can support this goal.
Abstract:
Phase equilibrium data regression is an unavoidable task necessary to obtain appropriate parameter values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on different factors, such as the quality of the experimental data, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions of our research on the capabilities and limitations of the existing GE (excess Gibbs energy) models and on strategies that can be included in the correlation algorithms to improve convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified, and some examples of the application of a modified NRTL equation are discussed. Furthermore, a regression algorithm has been developed that allows the advisable simultaneous regression of all the condensed phase equilibrium regions present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous understanding of the behavior of the mixtures. The paper aims to show all this work as a whole in order to reveal the efforts that must still be devoted to overcoming the difficulties that remain in the phase equilibrium data regression problem.
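The NRTL model mentioned above has a well-known closed form for a binary mixture. The sketch below implements that standard form in Python; the interaction parameters τ12, τ21 and the non-randomness factor α are illustrative values, not regressed data, and the paper's modified NRTL equation is not reproduced here:

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary mixture from the
    standard NRTL local-composition model, with G_ij = exp(-alpha * tau_ij)."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical parameters for a positively deviating binary, for illustration.
g1, g2 = nrtl_gamma(x1=0.3, tau12=1.5, tau21=0.8)
print(g1, g2)
```

Regression, as discussed in the abstract, would adjust τ12 and τ21 (and possibly α) to minimize the deviation between such model predictions and the experimental equilibrium data.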
Abstract:
In this work, the usefulness of qualitatively studying and drawing three-dimensional temperature–composition diagrams for ternary systems is pointed out as a way to understand and interpret the particular behavior of the liquid–vapour equilibrium of non-ideal ternary systems. Several examples are used to highlight the interest and the possibilities of this tool, which should be a valuable support not only for lecturers, but also for researchers interested in experimental equilibrium data determination.
Abstract:
Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Subconsciously, we rely increasingly on network management systems, which allow us to maintain, manage, configure, scale, adapt, modify, edit, protect, and enhance the main distributed systems. Their role is secondary, unknown and transparent to the users: they provide the support necessary to maintain the distributed systems whose services we use every day. If network management is not considered during the development stage of a distributed system, there can be serious consequences, or even the total failure of the development. It is therefore necessary to consider the management of the systems within the design of distributed systems and to systematise their design in order to minimise the impact of network management on distributed systems projects. In this paper, we present a framework that allows network management systems to be designed systematically. To accomplish this goal, formal modelling tools are used to sequentially model different proposed views of the same problem. These views cover all the aspects involved in the system and are based on process definitions that identify responsibilities and define the agents involved, in order to propose a deployment on a distributed architecture that is both feasible and appropriate.
Abstract:
This introduction provides an overview of the state-of-the-art technology in Applications of Natural Language to Information Systems. Specifically, we analyze the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as a main data source for business systems becomes a key requirement. We also discuss the reasons why Human Language Technologies have shifted their focus onto new areas of interest directly linked to the development of technology for the treatment and understanding of Web 2.0; these new technologies are expected to become the interfaces for the information systems to come. Moreover, we review current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, especially highlighting their contribution to the advancement of such technologies.
Abstract:
The requirements for edge protection systems on most sloped work surfaces (class C, according to the EN 13374-2013 code) in construction works are studied in this paper. The maximum deceleration suffered by a falling body and the maximum deflection of the protection system were analyzed through finite-element models and confirmed through full-scale experiments. The aim of this work is to determine which value of the system deflection entails a safe deceleration for the human body. This value is compared with the requirements given by the current version of EN 13374-2013. An additional series of experiments was carried out to determine the acceleration linked to the minimum deflection required by the code (200 mm) during the retention process. According to the results obtained, a modification of this value is recommended. Additionally, a simple design formula for this fall protection system is proposed as a quick tool for the initial steps of design.
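The trade-off studied above, between the deflection of the protection system and the loading on the arrested body, can be illustrated with a simple energy balance that assumes a constant retarding force. This is a back-of-the-envelope sketch, not the paper's finite-element model, and all numbers are illustrative:

```python
def mean_retarding_force(fall_height_m, deflection_m):
    """Mean retarding force, in multiples of body weight, on a body that
    falls fall_height_m and is brought to rest over deflection_m.
    Constant-force energy balance: (F - m*g) * d = m*g * h,
    hence F / (m*g) = (h + d) / d."""
    return (fall_height_m + deflection_m) / deflection_m

# Example: a 1 m fall arrested over the 200 mm minimum deflection in the code.
print(mean_retarding_force(1.0, 0.2))  # about 6 times body weight
```

The same balance shows why allowing a larger deflection lowers the mean force: doubling the deflection to 400 mm roughly halves the load for the same fall height.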
Abstract:
We address discrete-continuous dynamic optimization problems using a disjunctive multistage modeling framework with implicit discontinuities, which increases the problem complexity since the number of continuous phases and discrete events is not known a priori. After fixing an alternative sequence of modes, we convert the infinite-dimensional continuous mixed-logic dynamic (MLDO) problem into a finite-dimensional discretized generalized disjunctive programming (GDP) problem by orthogonal collocation on finite elements. We use the Logic-based Outer Approximation algorithm to fully exploit the structure of the GDP representation of the problem. This modeling framework is illustrated with an optimization problem with implicit discontinuities (the diver problem).
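The discretization step mentioned above, orthogonal collocation on finite elements, can be sketched in its standard form (generic notation, not the paper's exact formulation):

```latex
% Within finite element i (length h_i), the state is interpolated by
% Lagrange polynomials \ell_j through K collocation points:
z(t) = \sum_{j=0}^{K} \ell_j(\tau)\, z_{i,j},
\qquad t = t_{i-1} + h_i \tau,\ \ \tau \in [0, 1].
% The dynamics \dot z = f(z, u) are enforced at each collocation point \tau_k:
\sum_{j=0}^{K} z_{i,j}\, \frac{d\ell_j}{d\tau}(\tau_k)
  = h_i\, f(z_{i,k}, u_{i,k}), \qquad k = 1, \dots, K,
% with continuity of the state across element boundaries:
z_{i+1,0} = z_{i,K}.
```

The collocation equations replace the differential equations with algebraic constraints, which is what turns the infinite-dimensional problem into a finite-dimensional one amenable to GDP solvers.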