874 results for Cost-to-Serve
Abstract:
Aims: To investigate the species-specific prevalence of vhhP2 among Vibrio harveyi isolates and the applicability of vhhP2 in the specific detection of V. harveyi from crude samples of animal and environmental origins. Methods and Results: A gene (vhhP2) encoding an outer membrane protein of unknown function was identified from a pathogenic V. harveyi isolate. vhhP2 is present in 24 V. harveyi strains isolated from different geographical locations but is absent in 24 strains representing 17 different non-V. harveyi species, including V. parahaemolyticus and V. alginolyticus. A simple polymerase chain reaction (PCR) method for the identification of V. harveyi was developed based on the conserved sequence of vhhP2. This method was demonstrated to be applicable to the quick detection of V. harveyi from crude animal specimens and environmental samples. The specificity of this method was tested by applying it to the examination of two strains of V. campbellii, the species most closely related to V. harveyi. One of the V. campbellii strains was falsely identified as V. harveyi. Conclusions: vhhP2 is ubiquitously present in the V. harveyi species and is absent in most non-V. harveyi species; this feature enables vhhP2 to serve as a genetic marker for the rapid identification of V. harveyi. However, this method cannot distinguish some V. campbellii strains from V. harveyi. Significance and Impact of the Study: The significance of our study is the identification of a novel gene of V. harveyi and the development of a simple method for the relatively accurate detection of V. harveyi from animal specimens and environmental samples.
Abstract:
Aims: Genes uniquely expressed in vivo may contribute to the overall pathogenicity of an organism and are likely to serve as potential targets for the development of new vaccines. This study aims to screen the genes expressed in vivo after Vibrio anguillarum infection by in vivo-induced antigen technology (IVIAT). Methods and Results: Convalescent-phase sera were obtained from turbot (Scophthalmus maximus) that survived infection by the virulent V. anguillarum strain M3. The pooled sera were thoroughly adsorbed with M3 cells and Escherichia coli BL21 (DE3) cells. A genomic expression library of M3 was constructed and screened for the identification of immunogenic proteins by colony immunoblot analysis with the adsorbed sera. After three rounds of screening, 19 putative in vivo-induced (ivi) genes were obtained. These ivi genes were catalogued into four functional groups: regulation/signalling, metabolism, biological process and hypothetical proteins. Three ivi genes were insertion-mutated, and the growth and 50% lethal dose (LD50) of these mutants were evaluated. Conclusions: The identification of ivi genes in V. anguillarum M3 sheds light on the bacterial pathogenesis and provides novel targets for the development of new vaccines and diagnostic reagents. Significance and Impact of the Study: To the best of our knowledge, this is the first report describing in vivo-expressed genes of V. anguillarum using IVIAT. The ivi genes screened in this study could be new virulence factors and targets for vaccine development, which may also have implications for the development of diagnostic reagents.
Abstract:
An extensive literature survey of over 17 journals was carried out on Chinese sponges and their natural products in the period from 1980 to 2001. This review is thus intended to provide the first thorough overview of research on marine sponges from China's ocean territories. Information is provided about the rather limited taxonomic study of Chinese marine sponges, with an analysis of their distribution and diversity. Research findings on the natural products from Chinese sponges and their bioactivity screening are summarized. The weaknesses, gaps and problems in past R&D programmes on Chinese sponges are identified, which point to future opportunities in exploiting these huge untapped sponge resources. The report is expected to serve as an entry point for understanding Chinese sponges and for furthering R&D on their bioactive compounds for new drug development. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Fuel cell vehicles (FCVs) offer the potential of ultra-low emissions combined with high efficiency. Proton exchange membrane (PEM) fuel cells being developed for vehicles require hydrogen as a fuel. Because of the various pathways of hydrogen generation, both onboard and off-board, the question of which fuel option is the most competitive for fuel cell vehicles is of great current interest. In this paper, a life-cycle assessment (LCA) model was developed to conduct a comprehensive study of the energy, environmental, and economic (3E) impacts of FCVs from well to wheel (WTW). In view of China's particular energy structure and the timeframe, 10 vehicle/fuel systems are chosen for study. The results show that methanol is the most suitable fuel to serve as the ideal hydrogen source for fuel cell vehicles in the timeframe and geographic regions of this study. On the other hand, gasoline and pure hydrogen can also play a role in short-term and regional applications, especially for local demonstrations of FCV fleets. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
We have investigated the growth of silver clusters on three differently prepared highly oriented pyrolytic graphite (HOPG) surfaces: normally cleaved, thermally oxidized, and Ar+ ion sputtered. Scanning tunneling microscopy (STM) observations reveal that uniformly sized and spaced Ag clusters form only on the sputtered surface. Ar+ sputtering introduces relatively uniform surface defects compared with the other methods. These defects are found to serve as preferential sites for Ag cluster nucleation, which leads to the formation of uniform clusters. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Islamic financing instruments can be categorised into profit and loss/risk sharing and non-participatory instruments. Although profit and loss sharing instruments such as musharakah are widely accepted as the ideal form of Islamic financing, prior studies suggest that alternative instruments such as murabahah are preferred by Islamic banks. Nevertheless, prior studies did not explore factors that influence the use of Islamic financing among non-financial firms. Our study fills this gap and contributes new knowledge in several ways. First, we find no evidence of widespread use of Islamic financing instruments across non-financial firms. This is because the instruments are mostly used by less profitable firms with higher leverage (i.e., risky firms). Second, we find that profit and loss sharing instruments are hardly used, whilst the use of murabahah is dominant. Consistent with the prediction of moral-hazard-risk avoidance theory, further analysis suggests that users with a lower asset base (to serve as collateral) are associated with murabahah financing. Third, we present a critical discourse on the contentious nature of murabahah as practised. The economic significance and ethical issues associated with murabahah as practised should trigger serious efforts to steer Islamic corporate financing towards risk-sharing more than the controversial rent-seeking practice.
Abstract:
Estetyka w archeologii. Antropomorfizacje w pradziejach i starożytności [Aesthetics in Archaeology: Anthropomorphizations in Prehistory and Antiquity], eds. E. Bugaj, A. P. Kowalski, Poznań: Wydawnictwo Poznańskie.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Pharmaceutical Sciences
Abstract:
Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Humanitarian Action, Cooperation and Development
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability.
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims to reduce communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate.
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
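The Resource Registry described above can be pictured as a simple publish-and-query service: sites register resource descriptors, and a scheduler asks for resources meeting a task's minimum requirements. The sketch below is purely illustrative; the class name, attribute names, and matching rule (numeric attributes compared with >=, others with equality) are assumptions, not the project's actual RMI design.

```python
class ResourceRegistry:
    """Hypothetical sketch of a resource registry: sites publish
    resource descriptors; schedulers query for matching resources."""

    def __init__(self):
        self._resources = {}

    def register(self, name, **attrs):
        # e.g. register("siteA/cpu0", cpus=8, mem_gb=32)
        self._resources[name] = attrs

    def unregister(self, name):
        self._resources.pop(name, None)

    def find(self, **requirements):
        # Return names of resources whose attributes meet or exceed
        # every stated requirement (numeric: >=, otherwise: equality).
        matches = []
        for name, attrs in self._resources.items():
            ok = True
            for key, needed in requirements.items():
                have = attrs.get(key)
                if have is None:
                    ok = False
                elif isinstance(needed, (int, float)) and not isinstance(needed, bool):
                    ok = have >= needed
                else:
                    ok = have == needed
                if not ok:
                    break
            if ok:
                matches.append(name)
        return matches
```

A scheduler would then call `find(cpus=4, mem_gb=16)` to shortlist candidate sites before applying real-time and reliability constraints.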
Abstract:
To serve asynchronous requests using multicast, two categories of techniques, stream merging and periodic broadcasting, have been proposed. For sequential streaming access, where requests are uninterrupted from the beginning to the end of an object, these techniques are highly scalable: the required server bandwidth for stream merging grows logarithmically with the request arrival rate, and the required server bandwidth for periodic broadcasting grows logarithmically with the inverse of the start-up delay. However, sequential access cannot model the partial requests and client interactivity observed in various streaming access workloads. This paper analytically and experimentally studies the scalability of multicast delivery under a non-sequential access model where requests start at random points in the object. We show that the required server bandwidth for any protocol providing immediate service grows at least as the square root of the request arrival rate, and the required server bandwidth for any protocol providing delayed service grows linearly with the inverse of the start-up delay. We also investigate the impact of limited client receiving bandwidth on scalability. We optimize practical protocols that provide immediate service to non-sequential requests. The protocols utilize limited client receiving bandwidth, and they are near-optimal in that the required server bandwidth is very close to its lower bound.
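The contrast between the two growth rates can be made concrete numerically. The functions below only mirror the scaling behaviour stated in the abstract (logarithmic in the arrival rate for stream merging under sequential access, square-root for the non-sequential immediate-service lower bound); the exact functional forms and constants are illustrative assumptions, not the paper's derived bounds.

```python
import math

def seq_merge_bandwidth(arrival_rate, duration):
    # Illustrative logarithmic scaling for stream merging under
    # sequential access: grows as log of the expected number of
    # requests per object playback (arrival_rate * duration).
    return math.log(1 + arrival_rate * duration)

def nonseq_bandwidth_lower_bound(arrival_rate, duration):
    # Illustrative square-root scaling for the lower bound under
    # non-sequential access with immediate service.
    return math.sqrt(2 * arrival_rate * duration)

# At low request rates the two are comparable; as the rate grows,
# the square-root curve pulls far ahead of the logarithmic one.
for lam in (0.1, 1.0, 10.0, 100.0):
    seq = seq_merge_bandwidth(lam, 1.0)
    nonseq = nonseq_bandwidth_lower_bound(lam, 1.0)
    print(f"rate={lam:6.1f}  sequential~{seq:5.2f}  non-sequential>={nonseq:6.2f}")
```

The gap between the curves is the scalability penalty the paper attributes to random start points: no immediate-service protocol can recover the logarithmic behaviour.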
Abstract:
To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (Delay-Cost-Constrained Routing), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function, together with an additional constraint imposed on the path cost, to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by applying a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
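The core idea of combining cost and delay into a single path weight can be sketched with ordinary Dijkstra runs over a weighted sum cost + λ·delay, sweeping a few multipliers λ and keeping the cheapest delay-feasible path. This is a simplified illustration of the general approach, not the authors' DCCR or SSR+DCCR (which use a k-shortest-path search with an adaptive weight function); all names and the fixed multiplier sweep are assumptions.

```python
import heapq

def dijkstra(graph, src, dst, weight):
    # graph: node -> list of (neighbor, cost, delay) edges.
    # `weight` maps an edge's (cost, delay) to a scalar edge weight.
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c, dl in graph.get(u, []):
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_metrics(graph, path):
    # Sum total cost and total delay along a path.
    cost = delay = 0.0
    for u, v in zip(path, path[1:]):
        for w, c, dl in graph[u]:
            if w == v:
                cost += c
                delay += dl
                break
    return cost, delay

def constrained_route(graph, src, dst, max_delay, lambdas=(0.0, 0.5, 1.0, 2.0, 5.0)):
    # Sweep multipliers; return (cost, path) of the cheapest
    # delay-feasible path found, or None.
    best = None
    for lam in lambdas:
        path = dijkstra(graph, src, dst, lambda c, dl: c + lam * dl)
        if path is None:
            continue
        cost, delay = path_metrics(graph, path)
        if delay <= max_delay and (best is None or cost < best[0]):
            best = (cost, path)
    return best
```

On a toy graph where the cheapest path violates the delay bound, a large enough λ steers the search onto the low-delay alternative; DCCR's adaptive weight function automates and sharpens this trade-off instead of sweeping fixed multipliers.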
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a much different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine system design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
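The cyclic competitive process described at the largest scale has a simple computational reading: a parallel analog plan assigns each queued action an activation level encoding its relative priority; on each cycle the most active item wins the competition, is performed, and is then suppressed, so the sequence unfolds in priority order. The toy sketch below illustrates only this selection dynamic; it is not the authors' neural model, and the action names are invented.

```python
def competitive_queuing(activations):
    """Toy competitive-queuing loop: repeatedly pick the most
    active item, output it, then suppress it (self-inhibition),
    until the parallel plan is exhausted."""
    plan = dict(activations)        # action -> analog activation level
    sequence = []
    while plan:
        winner = max(plan, key=plan.get)  # competitive choice
        sequence.append(winner)           # perform the winning act
        del plan[winner]                  # suppress after performance
    return sequence

# A plan whose analog levels encode the order reach > grasp > lift:
print(competitive_queuing({"lift": 0.2, "reach": 0.9, "grasp": 0.5}))
# → ['reach', 'grasp', 'lift']
```

The point of the analog representation is that serial order is never stored explicitly: reordering the sequence only requires changing the relative activation levels in the parallel plan.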