873 results for "One Over Many Argument"
Abstract:
Soil detachment is the process by which soil particles are dislodged from the soil mass by erosive forces. Rill detachment of soil particles decreases as the sediment concentration of the rill flow increases, a relationship incorporated in several existing erosion models (e.g., WEPP). Simulated rill erosion experiments were carried out on a typical silt loam of the Loess Plateau under five slope gradients and three flow rates. Regression analysis of the experimental results yielded functional relationships describing how the rill-flow detachment rate on gentle and steep slopes of the Loess Plateau varies with sediment concentration and rill length. These results provide a useful reference for deeper study of the dynamics of rill erosion and for the prediction and forecasting of erosion processes.
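The sediment-feedback relationship this abstract refers to is commonly written in WEPP-style models as Dr = Dc(1 − G/Tc): the detachment rate falls linearly as sediment load approaches transport capacity. A minimal sketch (variable names are illustrative, not from the paper):

```python
def rill_detachment(dc, g, tc):
    # WEPP-style rill detachment rate: detachment capacity dc is scaled
    # down linearly as sediment load g approaches transport capacity tc.
    return dc * (1.0 - g / tc)
```

At zero sediment load the flow detaches at full capacity; at transport capacity, detachment stops.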
Abstract:
In this paper, we apply the preconditioned conjugate gradient method to the solution of positive-definite Toeplitz systems; in particular, we introduce a new class of co-circulant preconditioners Pn constructed by an embedding method. We also discuss the properties of these new preconditioners and prove that many earlier preconditioners can be regarded as special cases of Pn. The introduction of co-circulant preconditioners largely overcomes the singularity caused by circulant preconditioners. We discuss co-circulant series and functions, and compare ordinary circularity with co-circularity, showing that the latter can be regarded as an extended form of the former; correspondingly, many methods and theorems for ordinary circularity can be extended. Furthermore, we present a co-circulant decomposition method. Using this method, any co-circulant signal can be divided into a sum of sub-signals; among these sub-signals there are many subseries whose period is exactly 1, and these are in fact the frequency elements of the original co-circulant signal. In this way, we can establish the relationship between a signal and its frequency elements: the frequency elements in the frequency domain are signals with period 1 in the spatial domain. We also show that the co-circulant structure is already present in traditional Fourier theory. By using different criteria for constructing preconditioners, we can obtain many different preconditioned systems. From the preconditioned systems Pn[
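The general idea of circulant-preconditioned conjugate gradients for Toeplitz systems can be sketched as follows. This uses an ordinary Strang circulant preconditioner, not the paper's co-circulant construction: the circulant is inverted cheaply with the FFT at every CG iteration.

```python
import numpy as np

def strang_circulant(t):
    # First column of Strang's circulant approximation to the symmetric
    # Toeplitz matrix whose first column is t.
    n = len(t)
    return np.array([t[k] if k <= n // 2 else t[n - k] for k in range(n)])

def circulant_solve(c, r):
    # Solve C z = r for circulant C (first column c) in O(n log n) via FFT.
    return np.real(np.fft.ifft(np.fft.fft(r) / np.fft.fft(c)))

def pcg_toeplitz(t, b, tol=1e-10, maxiter=500):
    # Conjugate gradients on T x = b, preconditioned by the circulant C:
    # each iteration applies C^{-1} cheaply through circulant_solve.
    n = len(b)
    T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
    c = strang_circulant(t)
    x = np.zeros(n)
    r = b - T @ x
    z = circulant_solve(c, r)
    p = z.copy()
    for _ in range(maxiter):
        Tp = T @ p
        alpha = (r @ z) / (p @ Tp)
        x = x + alpha * p
        r_new = r - alpha * Tp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = circulant_solve(c, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```

With a well-conditioned preconditioner, CG converges in far fewer iterations than the unpreconditioned method; the dense Toeplitz matrix is built explicitly here only for clarity.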
Abstract:
Wydział Nauk Politycznych i Dziennikarstwa (Faculty of Political Science and Journalism)
Abstract:
A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of several forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping or feedback function in evaluating the likelihood of each hypothesis. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneous learning of the specialized domains along with the mapping functions, and approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or hands, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
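The one-to-many inference idea can be illustrated with a toy example. Everything here (the two "specialized maps", the "renderer") is invented for illustration, not the SMA's actual functions: each specialized forward map proposes a pose hypothesis for the same silhouette feature, and a feedback (rendering) function scores the hypotheses by re-generating the feature.

```python
import math

# Two toy specialized forward maps, each covering one domain of pose
# space: both explain a silhouette half-width w, with opposite signs.
def map_front(w):
    return math.acos(min(1.0, w))

def map_back(w):
    return -math.acos(min(1.0, w))

def render(theta):
    # Toy feedback function: "render" a pose back to the silhouette
    # feature it would produce.
    return math.cos(theta)

def infer(w):
    # Generate one hypothesis per specialized map, then rank hypotheses
    # by how well the rendered feature matches the observation.
    hyps = [f(w) for f in (map_front, map_back)]
    return sorted(hyps, key=lambda th: abs(render(th) - w))
```

Because the renderer cannot distinguish the two poses (cosine is even), both hypotheses score equally well: a genuine one-to-many ambiguity that a single forward map could not represent.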
Abstract:
With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternately, in this paper, we advocate the use of self-adaptation in the VMs themselves based on feedback about resource usage and availability. Consequently, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources, so that they are both efficiently and fairly allocated to competing FVMs. Such properties are ensured using one of many provably convergent control rules, such as AIMD. By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources or about the requirements of FVMs competing for these resources. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of less than 500 lines of code changes to UML. We present an analytic, control-theoretic model of FVM adaptation, which establishes convergence and fairness properties. These properties are also backed up with experimental results using our prototype FVM implementation.
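The convergence-to-fairness property of AIMD is easy to see in a small simulation (the capacity and demand values below are hypothetical, not from the paper): each VM adds a constant to its demand while the shared resource is underutilized and halves it on congestion, which preserves the gap between demands during increase and halves it at every decrease.

```python
def aimd_step(demands, capacity, alpha=1.0, beta=0.5):
    # One round of the AIMD control rule each friendly VM applies locally:
    # additive increase while the shared resource is underutilized,
    # multiplicative decrease on congestion feedback.
    if sum(demands) > capacity:
        return [d * beta for d in demands]
    return [d + alpha for d in demands]

def simulate(initial, capacity, rounds=500):
    # Iterate the rule; the gap between demands halves at every
    # congestion event, so demands converge toward the fair share.
    demands = list(initial)
    for _ in range(rounds):
        demands = aimd_step(demands, capacity)
    return demands
```

Starting from a very unfair allocation, the two demands end up nearly equal while total demand oscillates just below capacity.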
Abstract:
Consider a network of processors (sites) in which each site x has a finite set N(x) of neighbors. There is a transition function f that, for each site x, computes the next state ξ(x) from the states in N(x). But these transitions (updates) are applied in arbitrary order, one or many at a time. If the state of site x at time t is η(x, t), then define the sequence ζ(x, 0), ζ(x, 1), ... by taking the sequence η(x, 0), η(x, 1), ... and deleting each repetition, i.e., each element equal to the preceding one. The function f is said to have invariant histories if the sequence ζ(x, i) (while it lasts, in case it is finite) depends only on the initial configuration, not on the order of updates. This paper shows that though the invariant-history property is typically undecidable, there is a useful simple sufficient condition, called commutativity: for any configuration and any pair x, y of neighbors, if updating would change both ξ(x) and ξ(y), then the result of updating first x and then y is the same as the result of doing so in the reverse order. This fact is derivable from known results on the confluence of term-rewriting systems, but the self-contained proof given here may still be of interest.
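A toy commutative rule makes the invariant-history property concrete (this example is mine, not the paper's): an unmarked site becomes marked once any neighbor is marked. Updates never unmark, and a site's enabledness can only be created, never destroyed, so the per-site history is independent of update order.

```python
def enabled(state, nbrs, x):
    # Transition function f: an unmarked site (state 0) becomes marked
    # (state 1) once at least one neighbor is marked.
    return state[x] == 0 and any(state[y] == 1 for y in nbrs[x])

def run(state0, nbrs, order):
    # Apply single-site updates in the given order, recording each site's
    # sequence of distinct states (the zeta sequence of the paper).
    state = dict(state0)
    zeta = {x: [state[x]] for x in state}
    for x in order:
        if enabled(state, nbrs, x):
            state[x] = 1
            zeta[x].append(1)
    return state, zeta
```

Running two very different update schedules on a path graph with one initially marked endpoint yields identical final configurations and identical zeta sequences, as the invariant-history property predicts.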
Abstract:
This paper proposes a novel protocol which uses the Internet Domain Name System (DNS) to partition web clients into disjoint sets, each of which is associated with a single DNS server. We define an L-DNS cluster to be a grouping of web clients that use the same local DNS server to resolve Internet host names. We identify such clusters in real time using data obtained from a web server in conjunction with that server's authoritative DNS, both instrumented with an implementation of our clustering algorithm. Using these clusters, we perform measurements from four distinct Internet locations. Our results show that L-DNS clustering enables a better estimation of the proximity of a web client to a web server than previously proposed techniques. Thus, in a Content Distribution Network, a DNS-based scheme that redirects a request from a web client to one of many servers based on the client's name server coordinates (e.g., hops/latency/loss rates between the client and servers) would perform better with our algorithm.
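The clustering step itself is a simple grouping: every client that resolves names through the same local DNS server lands in one L-DNS cluster, and a DNS-based redirector then answers that name server with the closest replica. A minimal sketch (the IPs, server names, and latency numbers are invented for illustration):

```python
from collections import defaultdict

def ldns_clusters(resolutions):
    # Group web clients into L-DNS clusters: all clients observed
    # resolving through the same local DNS server form one cluster.
    clusters = defaultdict(set)
    for client_ip, ldns_ip in resolutions:
        clusters[ldns_ip].add(client_ip)
    return dict(clusters)

def redirect(ldns_ip, latency):
    # DNS-based redirection: answer a cluster's name server with the
    # replica that has the lowest measured latency to that L-DNS server.
    return min(latency[ldns_ip], key=latency[ldns_ip].get)
```

The point of the paper is that proximity measured to the L-DNS server is a good proxy for proximity to every client in its cluster, which is exactly what this redirection scheme assumes.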
Abstract:
In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and as a consequence deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first one strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
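The difference between the two policies can be shown on a toy overlay. This sketch is mine, not the paper's algorithm: it scores a candidate neighbor set by the widest one- or two-hop path to each destination (bottleneck = minimum edge bandwidth) and brute-forces all k-subsets, which the paper's small-scale setting makes feasible.

```python
from itertools import combinations

def width_to(src, dest, nbr_set, bw):
    # Bandwidth attainable from src to dest using one or two overlay hops
    # through the chosen neighbor set (widest path, bottleneck = min edge).
    best = 0.0
    for n in nbr_set:
        if n == dest:
            best = max(best, bw[src][n])
        else:
            best = max(best, min(bw[src][n], bw[n][dest]))
    return best

def select_peers(src, nodes, bw, k, policy):
    # Brute-force the best k-subset of candidate neighbors under either
    # policy: "max-min" maximizes bandwidth to the slowest destination,
    # "max-sum" maximizes the aggregate rate over all destinations.
    dests = [d for d in nodes if d != src]
    def score(nbr_set):
        widths = [width_to(src, d, nbr_set, bw) for d in dests]
        return min(widths) if policy == "max-min" else sum(widths)
    return max(combinations(dests, k), key=score)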
Abstract:
Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
Abstract:
Cancer is a global problem. Despite the significant advances made in recent years, a definitively effective therapeutic has yet to be developed. Oncolytic virology has fallen back into favour for the treatment of cancer with several viruses and viral vectors currently under investigation including vesicular stomatitis virus (VSV), adenovirus vectors and herpes simplex virus (HSV) vectors. Reovirus has an advantage over many viral vectors in that its wild-type form is non-pathogenic and will selectively infect transformed cells, particularly those mutated in the Ras pathway. These advantages make Reovirus an ideal candidate as a safe and non-toxic therapeutic. The aim of the first part of this study was to determine the effect, if any, of Reovirus on cell lines derived from cancers of the gastrointestinal tract. These cancers, particularly those of the oesophagus and stomach, have extremely poor prognoses and little improvement has been seen in survival of these patients in recent years. Reovirus as a single therapy showed promising results in cell lines of oesophageal, gastric and colorectal origin. Further study of partially resistant cell lines using a combination of Reovirus and conventional therapies, either chemotherapy or radiation, showed that a multi-modal approach to therapy is possible with Reovirus and no antagonism between Reovirus and other treatments was observed. The second part of this study focused on investigating a novel use of Reovirus in an in vivo setting. Cancer vaccination, or the use of vaccines in cancer therapy, is gaining momentum and success has been seen both in a prophylactic approach and a therapeutic approach. A cell-based Reovirus vaccine was used in both these approaches with encouraging success. When used as a prophylactic vaccine, tumour development was subsequently inhibited even upon exposure to a tumorigenic dose of cells.
The use of the cell-based Reovirus vaccine as a therapeutic for established tumours showed significant delay in tumour growth and a prolongation of survival in all models. This study has proven that Reovirus is an effective therapeutic in a range of cancers and the successful use of a cell-based Reovirus vaccine leads the way for new advancements in cancer immunotherapy.
Abstract:
Oxidative stress has become widely viewed as an underlying condition in a number of diseases, such as ischemia-reperfusion disorders, central nervous system disorders, cardiovascular conditions, cancer, and diabetes. Thus, natural and synthetic antioxidants have been actively sought. Superoxide dismutase is a first line of defense against oxidative stress under physiological and pathological conditions. Therefore, the development of therapeutics aimed at mimicking superoxide dismutase was a natural maneuver. Metalloporphyrins, as well as Mn cyclic polyamines, Mn salen derivatives and nitroxides were all originally developed as SOD mimics. The same thermodynamic and electrostatic properties that make them potent SOD mimics may allow them to reduce other reactive species such as peroxynitrite, peroxynitrite-derived CO(3)(*-), peroxyl radical, and less efficiently H(2)O(2). By doing so SOD mimics can decrease both primary and secondary oxidative events, the latter arising from the inhibition of cellular transcriptional activity. To better judge the therapeutic potential and the advantage of one over the other type of compound, comparative studies of different classes of drugs in the same cellular and/or animal models are needed. We here provide a comprehensive overview of the chemical properties and some in vivo effects observed with various classes of compounds with a special emphasis on porphyrin-based compounds.
Abstract:
Scholarly publishing, and scholarly communication more generally, are based on patterns established over many decades and even centuries. Some of these patterns are clearly valuable and intimately related to core values of the academy, but others were based on the exigencies of the past, and new opportunities have brought into question whether it makes sense to persist in supporting old models. New technologies and new publishing models raise the question of how we should fund and operate scholarly publishing and scholarly communication in the future, moving away from a scarcity model based on the exchange of physical goods that restricts access to scholarly literature unless a market-based exchange takes place. This essay describes emerging models that attempt to shift scholarly communication to a more open-access and mission-based approach and that try to retain control of scholarship by academics and the institutions and scholarly societies that support them. It explores changing practices for funding scholarly journals and changing services provided by academic libraries, changes instituted with the end goal of providing more access to more readers, stimulating new scholarship, and removing inefficiencies from a system ready for change. © 2014 by the American Anthropological Association.
Abstract:
BACKGROUND: In the domain of academia, the scholarship of research may include, but is not limited to, peer-reviewed publications, presentations, or grant submissions. Programmatic research productivity is one of many measures of academic program reputation and ranking. Another measure or tool for quantifying learning success among physical therapist education programs in the USA is a 100 % three-year pass rate of graduates on the standardized National Physical Therapy Examination (NPTE). In this study, we endeavored to determine if there was an association between research productivity, measured through artifacts, and 100 % three-year pass rates on the NPTE. METHODS: This observational study involved using pre-approved database exploration representing all accredited programs in the USA that graduated physical therapists during 2009, 2010 and 2011. Descriptive variables captured included raw research productivity artifacts such as peer-reviewed publications and books, number of professional presentations, number of scholarly submissions, total grant dollars, and numbers of grants submitted. Descriptive statistics and comparisons (using chi-square and t-tests) among program characteristics and research artifacts were calculated. Univariate logistic regression analyses, with appropriate control variables, were used to determine associations between research artifacts and 100 % pass rates. RESULTS: Number of scholarly artifacts submitted, faculty with grants, and grant proposals submitted were significantly higher in programs with 100 % three-year pass rates. However, after controlling for program characteristics such as grade point average, diversity percentage of cohort, public/private institution, and number of faculty, there were no significant associations between scholarly artifacts and 100 % three-year pass rates. CONCLUSIONS: Factors outside of research artifacts are likely better predictors for passing the NPTE.