803 results for preference-based measures
Abstract:
Vaccines against Neisseria meningitidis group C are based on its alpha-2,9-linked polysialic acid capsular polysaccharide. This polysialic acid is expressed on the surface of N. meningitidis and, in the absence of specific antibody, serves to evade host defense mechanisms. The polysialyltransferase (PST) that forms the group C polysialic acid (NmC PST) is located in the cytoplasmic membrane. Until recently, detailed characterization of bacterial polysialyltransferases has been hampered by the lack of soluble enzyme preparations. We have constructed chimeras of the group C polysialyltransferase that catalyzes the formation of alpha-2,9-polysialic acid as a soluble enzyme. We used site-directed mutagenesis to determine the region of the enzyme necessary for synthesis of the alpha-2,9 linkage. A chimera of the NmB and NmC PSTs containing only amino acids 1 to 107 of the NmB polysialyltransferase catalyzed the synthesis of alpha-2,8-polysialic acid. The NmC polysialyltransferase requires an exogenous acceptor for catalytic activity. While it requires a minimum of a disialylated oligosaccharide to catalyze transfer, it can form high-molecular-weight alpha-2,9-polysialic acid in a nonprocessive fashion when initiated with an alpha-2,8-polysialic acid acceptor. De novo synthesis in vivo requires an endogenous acceptor. We attempted to reconstitute de novo activity of the soluble group C polysialyltransferase with membrane components. We found that an acapsular mutant with a defect in the polysialyltransferase produces outer membrane vesicles containing an acceptor for the alpha-2,9-polysialyltransferase. This acceptor is an amphipathic molecule and can be elongated to produce polysialic acid that is reactive with group C-specific antibody.
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index," which characterize resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy so that system utilization is maximized and the workload distribution among the computing resources is balanced.
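The abstract does not give the formulas for the two indices, so the sketch below is only a plausible reading: the "load index" taken as mean resource utilization over an observation window, and the "balance index" as a Jain-style fairness measure of per-resource busy times. Both definitions are assumptions for illustration, not the paper's.

```python
# Hypothetical sketch: "load index" as mean utilization and "balance index"
# as Jain's fairness index over per-resource busy times. The paper's exact
# definitions are not given in the abstract.

def load_index(busy, horizon):
    """Fraction of available capacity actually used, averaged over resources."""
    return sum(b / horizon for b in busy) / len(busy)

def balance_index(busy):
    """Jain's fairness index: 1.0 when all resources carry equal load."""
    total = sum(busy)
    if total == 0:
        return 1.0
    return total ** 2 / (len(busy) * sum(b * b for b in busy))

# Example: three processors observed over a 100-tick window.
busy_times = [80.0, 55.0, 20.0]
print(load_index(busy_times, 100.0))   # ~0.52: overall utilization
print(balance_index(busy_times))       # ~0.82: workload is uneven
```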
Abstract:
A new approach based on occupation measures is introduced for studying stochastic differential games. For two-person zero-sum games, the existence of values and optimal strategies for both players is established for various payoff criteria. For N-person games, the existence of equilibria in Markov strategies is established for various cases.
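For orientation, a standard form of the discounted occupation measure from this literature is reproduced below; the abstract itself does not fix notation, so this definition is a reference point rather than the authors' exact setup.

```latex
% Discounted occupation measure, in a form standard in this literature
% (notation assumed, not taken from the abstract): for discount rate
% \alpha > 0, state process X_t and control u_t, the measure \mu assigns
% to a set A of state-action pairs
\[
  \mu(A) \;=\; \alpha\, \mathbb{E}\!\left[ \int_0^{\infty} e^{-\alpha t}\,
  \mathbf{1}_A\!\left(X_t,\, u_t\right) dt \right].
\]
% The discounted payoff with running reward r is then the linear functional
% \int r \, d\mu, which recasts the dynamic game as a static problem over
% a space of measures.
```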
Abstract:
We are concerned with the situation in which a wireless sensor network is deployed in a region for the purpose of detecting an event occurring at a random time and at a random location. The sensor nodes periodically sample their environment (e.g., for acoustic energy), process the observations (in our case, using a CUSUM-based algorithm) and send a local decision (which is binary in nature) to the fusion centre. The fusion centre collects these local decisions and uses a fusion rule to process the sensors' local decisions and infer the state of nature, i.e., whether an event has occurred or not. Our main contribution is in analyzing two local detection rules in combination with a simple fusion rule. The local detection algorithms are based on the nonparametric CUSUM procedure from sequential statistics. We also propose two ways to operate the local detectors after an alarm. These alternatives, when combined in various ways, yield several approaches. Our contribution is to provide analytical techniques to calculate false alarm measures, by the use of which the local detector thresholds can be set. Simulation results are provided to evaluate the accuracy of our analysis. As an illustration, we provide a design example. We also use simulations to compare the detection delays incurred by these algorithms.
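A minimal sketch of a nonparametric CUSUM local detector of the kind described above follows; the drift parameter and threshold are illustrative assumptions, not values from the paper.

```python
# Nonparametric CUSUM local detector: accumulate positive drift-corrected
# excursions and raise a binary alarm when a threshold is crossed.

def cusum_detector(samples, drift, threshold):
    """Yield a binary local decision (1 = alarm) for each new sample.

    S_k = max(0, S_{k-1} + x_k - drift); alarm when S_k exceeds threshold.
    """
    s = 0.0
    for x in samples:
        s = max(0.0, s + x - drift)
        yield 1 if s > threshold else 0

# Example: low-energy noise, then an event raising the mean acoustic energy.
readings = [0.2, 0.1, 0.3, 0.2, 1.5, 1.4, 1.6, 1.5]
decisions = list(cusum_detector(readings, drift=0.5, threshold=2.0))
print(decisions)  # stays 0 under noise, switches to 1 once the change accumulates
```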
Abstract:
Advertisements (ads) are the main revenue earner for television (TV) broadcasters. As TV reaches a large audience, it acts as the best medium for advertising products and services. With the emergence of digital TV, it is important for broadcasters to provide an intelligent service according to various dimensions such as program features, ad features, viewers' interest and sponsors' preference. We present an automatic ad recommendation algorithm that selects a set of ads by considering these dimensions and semantically matches them with programs. Features of the ad video are captured in terms of annotations, which are grouped into a number of predefined semantic categories using a categorization technique. A fuzzy categorical data clustering technique is then applied to the categorized data to select the ads best suited to a particular program. Since the same ad can be recommended for more than one program depending upon multiple parameters, fuzzy clustering is well suited to ad recommendation. The relative fuzzy score, called the "degree of membership", calculated for each ad indicates the membership of that ad in different program clusters. A subjective evaluation of the algorithm by 10 different people yielded a high success score.
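The "degree of membership" idea can be illustrated with the standard fuzzy c-means membership formula; note the paper clusters categorical data, whereas this sketch uses numeric feature vectors and an assumed fuzzifier m = 2 purely for illustration.

```python
# Illustrative fuzzy memberships: each ad gets a degree of membership in
# every program cluster rather than a hard assignment. Feature encoding
# and m=2 are assumptions, not the paper's exact scheme.

def memberships(ad, centers, m=2.0):
    """Fuzzy c-means style memberships of one ad to each cluster center."""
    d = [sum((a - c) ** 2 for a, c in zip(ad, ctr)) ** 0.5 for ctr in centers]
    if any(di == 0.0 for di in d):  # ad coincides with a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[j] / d[k]) ** p for k in range(len(d)))
            for j in range(len(d))]

# Example: an ad's category-score vector against two program-cluster centers.
ad_vec = [0.8, 0.1, 0.4]               # e.g. scores for sports/news/family
centers = [[0.9, 0.1, 0.3], [0.2, 0.7, 0.5]]
print(memberships(ad_vec, centers))    # high membership in the first cluster
```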
Abstract:
Crystal structures of polymorphs and solvatomorphs of the potential anxiolytic drug fenobam exhibit an exclusive preference for one of the two possible tautomeric structures. A novel methodology based on nonlinear optical response has been successfully employed to detect the presence of a polymorphic impurity in a mixture of polymorphs.
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. From an original image it is difficult and computationally expensive to extract roads due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks constitute a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated based on quality measures. The 1 m resolution IKONOS data have been used for the experiment.
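A hedged sketch of the preprocess-then-segment pipeline using off-the-shelf tools (scipy's median filter and scikit-learn's Mean Shift) is given below; the paper's actual implementation, features and parameter values are not specified in the abstract, so everything here is an assumption.

```python
# Sketch only: median-filter preprocessing followed by Mean Shift clustering
# of pixel intensities. Window size and bandwidth are illustrative.

import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import MeanShift

def extract_candidate_regions(gray_image, bandwidth=0.1):
    """Median-filter the image, then cluster pixel intensities with Mean Shift."""
    smoothed = median_filter(gray_image, size=3)   # suppress small noise segments
    pixels = smoothed.reshape(-1, 1).astype(float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(pixels)
    return labels.reshape(gray_image.shape)        # per-pixel region labels

# Example on a tiny synthetic "image" with a bright road-like stripe.
img = np.zeros((6, 6)); img[2:4, :] = 1.0
print(extract_candidate_regions(img))
```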
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δs of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance with ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
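The scheme is easy to simulate: each node maps its metric through a decreasing timer function, and selection succeeds when the best timer expires at least Δ before the second-best. The inverse-linear mapping in this sketch is a common ad hoc baseline, not the paper's optimal (discrete-structured) mapping.

```python
# Monte Carlo estimate of the success probability of timer-based selection
# under an assumed inverse-linear timer mapping (not the optimal mapping).

import random

def selection_succeeds(metrics, t_max, delta):
    """True if the best node's packet is not collided with within delta."""
    timers = sorted(t_max * (1.0 - m) for m in metrics)  # higher metric -> earlier
    return len(timers) == 1 or timers[1] - timers[0] > delta

trials = 100_000
wins = sum(selection_succeeds([random.random() for _ in range(5)],
                              t_max=10.0, delta=0.5)
           for _ in range(trials))
print(wins / trials)   # average success probability for 5 nodes
```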
Abstract:
Two multicriterion decision-making methods, namely 'compromise programming' and the 'technique for order preference by similarity to an ideal solution' (TOPSIS), are employed to prioritise 22 micro-catchments (A1 to A22) of the Kherthal catchment, Rajasthan, India, and a comparative analysis is performed using the compound parameter approach. Seven criteria - drainage density, bifurcation ratio, stream frequency, form factor, elongation ratio, circulatory ratio and texture ratio - are chosen for the evaluation. The entropy method is employed to estimate the weights, or relative importance, of the criteria, which ultimately affect the ranking pattern or prioritisation of the micro-catchments. Spearman rank correlation coefficients are estimated to measure the extent to which the ranks obtained are correlated. Based on the average ranking approach supported by sensitivity analysis, micro-catchments A6, A10 and A3 are preferred (owing to their low rank values) for further improvements with suitable conservation and management practices, and the other micro-catchments can be processed accordingly at a later phase on a priority basis. It is concluded that the present approach can be explored for other similar situations with appropriate modifications.
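The entropy-weighting and TOPSIS steps are textbook procedures, sketched below on a tiny made-up decision matrix (the 22-catchment, 7-criterion data are not in the abstract); the sketch also assumes all criteria are benefit-type, which need not hold for the actual study.

```python
# Entropy weights + TOPSIS on a (alternatives x criteria) matrix of
# positive, benefit-type criteria. Data are illustrative only.

import numpy as np

def entropy_weights(X):
    """Shannon-entropy criterion weights; less uniform columns weigh more."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E
    return d / d.sum()

def topsis_scores(X, w):
    """Closeness to the ideal solution; higher score = better rank."""
    V = w * X / np.sqrt((X ** 2).sum(axis=0))      # weighted normalized matrix
    pos, neg = V.max(axis=0), V.min(axis=0)        # ideal and anti-ideal points
    d_pos = np.sqrt(((V - pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - neg) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

X = np.array([[2.1, 3.0, 0.4], [1.8, 4.2, 0.7], [2.6, 2.5, 0.5]])
w = entropy_weights(X)
print(topsis_scores(X, w))   # rank alternatives by descending score
```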
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given the individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model built from real-world data obtained through a survey of human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large-scale social networks. We conclude that our approach is superior to random polling when aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
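As a toy illustration of aggregating rankings elicited from a subset of nodes, the sketch below applies the Borda rule; the abstract does not list the specific rules studied, so Borda is only one plausible example of a practically relevant aggregation rule.

```python
# Borda aggregation of best-first preference lists elicited from a few
# sampled nodes; the choice of rule is an assumption for illustration.

from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate rankings (best-first lists) into one societal ranking."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] += n - 1 - pos   # n-1 points for first place, etc.
    return sorted(scores, key=scores.get, reverse=True)

# Preferences from three sampled "critical" nodes over alternatives a, b, c.
sampled = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(borda_aggregate(sampled))  # ['a', 'b', 'c']
```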
Abstract:
A supply chain ecosystem consists of the elements of the supply chain and the entities that influence the goods, information and financial flows through the supply chain. These influences come through government regulations; human, financial and natural resources; logistics infrastructure and management; etc., and thus affect supply chain performance. Similarly, all the ecosystem elements also contribute to risk. The aim of this paper is to identify both performance-based and risk-based decision criteria that are important and critical to the supply chain. A two-step approach using fuzzy AHP and the fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) is proposed for multi-criteria decision-making and illustrated using a numerical example. The first step performs the selection without considering risks; in the next step, suppliers are ranked according to their risk profiles. The two ranks are then consolidated into one. The method is also extended to multi-tier supplier selection. In short, this paper presents a method for the design of a resilient supply chain.
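The final consolidation step can be pictured as merging the two rankings; the abstract does not give the paper's exact consolidation rule, so the rank-position averaging below is purely an assumed stand-in.

```python
# Assumed consolidation rule: average the positions of each supplier in the
# performance-based and risk-based rankings (the paper's rule may differ).

def consolidate(perf_rank, risk_rank):
    """Combine two best-first supplier lists by average rank position."""
    avg = {s: (perf_rank.index(s) + risk_rank.index(s)) / 2.0
           for s in perf_rank}
    return sorted(avg, key=avg.get)

perf = ["S2", "S1", "S3"]   # ranking without risk considerations
risk = ["S1", "S2", "S3"]   # ranking by risk profile (least risky first)
print(consolidate(perf, risk))  # ['S2', 'S1', 'S3']; S2/S1 tie kept in listing order
```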
Abstract:
Realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies like Xen, VMware and KVM use host-specific CPU usage monitoring, which is coarse grained. In this paper, we present a performance monitoring tool for KVM-based virtual machines, which measures the CPU overhead incurred by the hypervisor on behalf of the virtual machine, along with the CPU usage of the virtual machine itself. This fine-grained resource usage information, provided by the above tool, can be used in diverse situations such as resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of this tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
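Because a KVM guest runs as a qemu process on the Linux host, the kind of accounting the tool performs can be approximated from /proc, where both total CPU time and guest-mode CPU time are exposed; the sketch below is an assumption about one way to do this, not the paper's implementation (field positions follow proc(5); the pid is hypothetical).

```python
# Estimate hypervisor overhead for a KVM guest as (total CPU time of the
# qemu process) minus (time spent running guest code, i.e., guest_time).

def cpu_times(pid):
    """Return (total_ticks, guest_ticks) for a qemu process from /proc."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # comm may contain spaces; parse fields after the closing parenthesis.
    fields = data[data.rindex(")") + 2:].split()
    utime, stime = int(fields[11]), int(fields[12])   # fields 14, 15 in proc(5)
    guest = int(fields[40])                           # field 43: guest_time
    return utime + stime, guest

total, guest = cpu_times(4242)       # hypothetical qemu-kvm pid
print("hypervisor overhead ticks:", total - guest)
```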
Abstract:
Research has been undertaken to ascertain the predictability of non-stationary time series using wavelet- and Empirical Mode Decomposition (EMD)-based time series models. Methods have been developed in the past to decompose a time series into components, and forecasting these components, combined with the random component, can yield predictions. Following this idea, wavelet and EMD analyses have been incorporated separately; each decomposes a time series into independent orthogonal components with both time and frequency localization. The component series are fitted with specific auto-regressive models to obtain forecasts, which are later combined to obtain the actual predictions. Four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall are considered for the study. Predictability is checked for six- and twelve-month-ahead forecasts across both methodologies. Based on performance measures, it is observed that the wavelet-based method has better prediction capability than the EMD-based method, despite some limitations of time series methods and of the manner in which the decomposition takes place. Finally, the study concludes that the wavelet-based time series algorithm can be used to model events such as droughts with reasonable accuracy. Also, some modifications that could extend the scope of applicability to other areas in the field of hydrology are discussed.
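A minimal sketch of the wavelet-plus-AR scheme is given below using PyWavelets and statsmodels; the paper's exact wavelet, decomposition level, AR orders and preprocessing are not stated in the abstract, so the choices here (db4, level 2, 4 lags) are assumptions.

```python
# Decompose the series into additive wavelet components, forecast each
# component with an AR model, and sum the component forecasts.

import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

def wavelet_ar_forecast(series, horizon, wavelet="db4", level=2, lags=4):
    """Wavelet-based forecast: per-component AR models, summed forecasts."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    forecast = np.zeros(horizon)
    for i in range(len(coeffs)):
        # Reconstruct one component at full length by zeroing the others.
        parts = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        component = pywt.waverec(parts, wavelet)[: len(series)]
        model = AutoReg(component, lags=lags).fit()
        forecast += model.predict(start=len(series), end=len(series) + horizon - 1)
    return forecast

# Example: a noisy seasonal "monthly volume" series, 12-month-ahead forecast.
t = np.arange(240)
flow = 10 + 3 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.5, 240)
print(wavelet_ar_forecast(flow, horizon=12))
```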
Abstract:
Feeding 9-10 billion people by 2050 and preventing dangerous climate change are two of the greatest challenges facing humanity. Both challenges must be met while reducing the impact of land management on ecosystem services that deliver vital goods and services and support human health and well-being. Few studies to date have considered the interactions between these challenges. In this study we briefly outline the challenges, review the supply- and demand-side climate mitigation potential available in the Agriculture, Forestry and Other Land Use (AFOLU) sector and options for delivering food security. We briefly outline some of the synergies and trade-offs afforded by mitigation practices, before presenting an assessment of the mitigation potential possible in the AFOLU sector under possible future scenarios in which demand-side measures codeliver to aid food security. We conclude that while supply-side mitigation measures, such as changes in land management, might either enhance or negatively impact food security, demand-side mitigation measures, such as reduced waste or reduced demand for livestock products, should benefit both food security and greenhouse gas (GHG) mitigation. Demand-side measures offer a greater potential (1.5-15.6 Gt CO2-eq. yr^-1) for meeting both challenges than do supply-side measures (1.5-4.3 Gt CO2-eq. yr^-1 at carbon prices between 20 and 100 US$ per t CO2-eq.), but given the enormity of the challenges, all options need to be considered. Supply-side measures should be implemented immediately, focussing on those that allow the production of more agricultural product per unit of input. For demand-side measures, given the difficulties in their implementation and the lag in their effectiveness, policy should be introduced quickly and should aim to codeliver to other policy agendas, such as improving environmental quality or dietary health. These problems facing humanity in the 21st century are extremely challenging, and policy that addresses multiple objectives is required now more than ever.
Abstract:
The increasing number of available protein structures requires efficient tools for multiple structure comparison. Indeed, multiple structural alignments are essential for the analysis of the function, evolution and architecture of protein structures. For this purpose, we propose a new web server called multiple Protein Block Alignment (mulPBA). This server implements a method based on a structural alphabet to describe the backbone conformation of a protein chain in terms of dihedral angles. This 'sequence-like' representation enables the use of powerful sequence alignment methods for primary structure comparison, followed by an iterative refinement of the structural superposition. This approach yields alignments superior to most rigid-body alignment methods and highly comparable with flexible structure comparison approaches. We implement this method in a web server designed to perform multiple structure superimpositions on a set of structures given by the user. Outputs are given both as a sequence alignment and as superposed 3D structures, visualized directly via static images generated by PyMol or through a Jmol applet allowing dynamic interaction. Multiple global quality measures are given. Relatedness between structures is indicated by a distance dendrogram. Superimposed structures in PDB format can also be downloaded, and the results are obtained quickly. The mulPBA server can be accessed at www.dsimb.inserm.fr/dsimb_tools/mulpba/.