899 results for binary to multi-class classifiers
Abstract:
This lab follows the 'System Design' lectures (http://www.edshare.soton.ac.uk/6280/, http://www.edshare.soton.ac.uk/9653/ and http://www.edshare.soton.ac.uk/9713/). Students use Visual Paradigm for UML to build class models through project examples: an Aircraft Manufacturing Company, a Library, and a Plant Nursery.
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the use of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and analysed to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to determine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be ignored for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method used in bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and reduces storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for evaluating CLRj is also presented. We conclude that, by combining the ECA method with cut-off mechanisms, ECA can always be used in real-time CAC environments as a single-level scheme.
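To make the class-based convolution concrete, here is a minimal, illustrative Python sketch (not part of the original work): it assumes simple on/off sources, builds a per-class rate distribution (the binomial special case of the multinomial step described above), multi-convolves the class distributions into a global state distribution, and reads off the probability of congestion for a given link capacity. All names and parameter values are hypothetical.

    import numpy as np
    from scipy.stats import binom

    def class_state_distribution(n_sources, p_on, peak_rate):
        """Aggregate-rate distribution for one class of identical on/off sources."""
        k = np.arange(n_sources + 1)
        return k * peak_rate, binom.pmf(k, n_sources, p_on)

    def convolve_classes(classes):
        """Multi-convolve the per-class partial states into a global rate distribution."""
        dist = {0.0: 1.0}
        for n_sources, p_on, peak_rate in classes:
            rates, probs = class_state_distribution(n_sources, p_on, peak_rate)
            new_dist = {}
            for r0, p0 in dist.items():
                for r1, p1 in zip(rates, probs):
                    new_dist[r0 + r1] = new_dist.get(r0 + r1, 0.0) + p0 * p1
            dist = new_dist
        return dist

    def congestion_probability(dist, capacity):
        """Probability that the aggregate offered rate exceeds the link capacity."""
        return sum(p for rate, p in dist.items() if rate > capacity)

    # Hypothetical example: two traffic classes sharing a 150 Mbit/s buffer-less link.
    classes = [(20, 0.3, 2.0), (10, 0.5, 10.0)]   # (sources, activity factor, peak Mbit/s)
    print("P(congestion) =", congestion_probability(convolve_classes(classes), 150.0))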
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and, of course, share media artefacts. Along with this trend, violations of personal data privacy and copyright have increased, with illegal file sharing being rampant across many online communities, particularly for certain music genres and amongst younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, current mechanisms are based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online environment for both legal and illegal distribution of media products.
Abstract:
The ultrastructure of a new microsporidian species, Microgemmia vivaresi n. sp., causing liver cell xenoma formation in sea scorpions, Taurulus bubalis, is described. Stages of merogony, sporogony, and sporogenesis are mixed in the central cytoplasm of developing xenomas. All stages have unpaired nuclei. Uninucleate and multinucleate meronts lie within vacuoles formed from host endoplasmic reticulum and divide by binary or multiple fission. Sporonts, no longer in vacuoles, deposit plaques of surface coat on the plasma membrane that cause the surface to pucker. Division into sporoblast mother cells occurs at the puckered stage; plaques join up on these cells to complete the surface coat. A final binary fission gives rise to sporoblasts. A dense globule, thought to be involved in polar tube synthesis, is gradually dispersed during spore maturation. Spores are broadly ovoid, have a large posterior vacuole, and measure 3.6 µm x 2.1 µm (fresh). The polar tube has a short, wide anterior section that constricts abruptly, then runs posteriad to coil about eight times around the posterior vacuole, which has granular contents. The polaroplast has up to 40 membranes arranged in pairs, mostly attached to the wide region of the polar tube and directed posteriorly around a cytoplasm of coarsely granular appearance. The species is placed alongside the type species Microgemmia hepaticus Ralphs and Matthews 1986 within the family Tetramicridae, which is transferred from the class Dihaplophasea to the class Haplophasea, as there is no evidence for the occurrence of a diplokaryotic phase.
Abstract:
Background: MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results: A large dataset comprising MHC-peptide structural complexes was created by remodelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion: The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
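As a rough illustration of the regression step described in the conclusion, the sketch below (not from the paper) fits an ordinary partial least squares model, standing in for G/PLS, to relate per-position interaction-energy terms to a binding measure such as log BL50; the GFA feature selection is not shown and the placeholder data are purely synthetic.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_complexes, n_positions = 60, 9
    # Columns: van der Waals, electrostatic and total non-bonded terms per peptide position.
    X = rng.normal(size=(n_complexes, n_positions * 3))
    # Placeholder stand-in for experimentally determined log(BL50) values.
    y = 0.3 * X[:, 0] - 0.2 * X[:, 5] + rng.normal(scale=0.5, size=n_complexes)

    pls = PLSRegression(n_components=3).fit(X, y)
    print("R^2 on the fitted data:", round(pls.score(X, y), 3))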
Abstract:
The UK construction industry is in the process of trying to adopt a new culture based on the large-scale take-up of innovative practices. Through the Demonstration Project process, many organizations are implementing changed practices and learning from the experiences of others. This is probably the largest experiment in innovation in any industry in recent times. The long-term success will be measured by how effectively the new practices are embedded in the organization. As yet there is no recognized approach to measuring the receptivity of the organization to the innovation process as an indication of the likelihood of long-term development. The development of an appropriate approach is described here. Existing approaches to measuring the take-up of innovation were reviewed and, where appropriate, used as the basis for developing a questionnaire. The questionnaire could be applied in multi-organizational construction project situations: its output could characterise an individual organization's innovative practices via an innovation scorecard or a project team's approach, or it could be used to survey a wide cross-section of the industry.
Abstract:
A fast Knowledge-based Evolution Strategy (KES) for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm (EPDA) and NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximation Pareto sets for the larger instances tested. It is shown that the fronts calculated by KES are superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast due to its low complexity.
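For readers unfamiliar with how such fronts are compared, the short sketch below (illustrative only, with hypothetical cost vectors) extracts the non-dominated set from a collection of bi-objective spanning-tree costs, which is the basic operation behind reporting the Pareto fronts found by KES, EPDA and NSGA-II.

    def pareto_front(points):
        """Return the points not dominated by any other point (both objectives minimised)."""
        front = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                front.append(p)
        return front

    # Hypothetical (weight, delay) costs of candidate spanning trees.
    candidates = [(10, 7), (8, 9), (12, 5), (9, 8), (11, 6), (12, 9)]
    print(pareto_front(candidates))   # (12, 9) is dominated and dropped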
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for rigorously studying the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows the response of the system to be computed in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of the Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be defined as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable, for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the response of globally averaged surface temperature to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
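Since the Lorenz 96 model is central to the analysis, a minimal integration sketch may help; the following Python code (not from the paper, with arbitrarily chosen resolution, forcing and time step) integrates the standard Lorenz 96 equations with a fourth-order Runge-Kutta scheme and estimates the time-averaged energy, one of the unperturbed parameters mentioned above.

    import numpy as np

    def lorenz96_tendency(x, forcing):
        """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with periodic indexing."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def rk4_step(x, dt, forcing):
        k1 = lorenz96_tendency(x, forcing)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_tendency(x + dt * k3, forcing)
        return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

    n_sites, forcing, dt, n_steps = 40, 8.0, 0.01, 20000
    x = forcing + 0.01 * np.random.default_rng(0).standard_normal(n_sites)
    energies = []
    for step in range(n_steps):
        x = rk4_step(x, dt, forcing)
        if step > n_steps // 2:                 # discard the spin-up transient
            energies.append(0.5 * np.sum(x ** 2))
    print("time-averaged energy:", np.mean(energies))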
Abstract:
Changes in the cultures and spaces of death during the Victorian era reveal the shifting conceptualisations and mobilisations of class in this period. Using the example of Brookwood Necropolis, established in 1852 in response to the contemporary burial reform debate, the paper explores tensions within the sanitary reform movement, 1853–1903. Whilst reformist ideology grounded the cemetery's practices in a discourse of inclusion, one of the consequences of reform was to reinforce class distinctions. Combined with commercial imperatives and the modern impulse towards the separation of living and dead, this aspect of reform enacted a counter-discourse of alienation. The presence of these conflicting strands in the spaces and practices of the Necropolis, and their changes over the period, reflects wider urban trends.
Abstract:
How effective are multi-stakeholder scenario-building processes at bringing diverse actors together and creating a policy-making tool to support sustainable development and promote food security in the developing world under climate change? The effectiveness of a participatory scenario development process highlights the importance of ‘boundary work’ that links actors and organizations involved in generating knowledge on the one hand, and practitioners and policymakers who take actions based on that knowledge on the other. This study reports on the application of criteria for effective boundary work to a multi-stakeholder scenarios process in East Africa that brought together a range of regional agriculture and food systems actors. This analysis has enabled us to evaluate the extent to which these scenarios were seen by the different actors as credible, legitimate and salient, and thus more likely to be useful. The analysis has shown gaps and opportunities for improvement against these criteria, such as the quantification of scenarios, attention to translating and communicating the results through various channels, and new approaches to enable a more inclusive and diverse group of participants. We conclude that applying boundary work criteria to multi-stakeholder scenario processes can do much to increase the likelihood of developing more appropriate sustainable development and food security policies.
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
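As a purely illustrative sketch of the evaluation set-up (not one of the submitted algorithms), the code below trains a generic three-class classifier on random placeholder features of the same shape as the challenge data (30 training scans, 354 test scans) and reports accuracy and one-vs-rest multi-class AUC; real entries used features such as voxel-based morphometry, volume, cortical thickness, shape and intensity.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, roc_auc_score

    rng = np.random.default_rng(0)
    # Placeholder features and labels (0 = AD, 1 = MCI, 2 = control); purely synthetic.
    X_train, y_train = rng.normal(size=(30, 10)), rng.integers(0, 3, size=30)
    X_test, y_test = rng.normal(size=(354, 10)), rng.integers(0, 3, size=354)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    print("multi-class AUC (one-vs-rest):",
          roc_auc_score(y_test, clf.predict_proba(X_test), multi_class="ovr"))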
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palm of the hand to the object. In this chapter we will begin by considering the different types of grasp and then consider the physics of rigid objects that will be needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it also applies to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid-body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
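To illustrate the kind of rigid-body bookkeeping involved (this is not the chapter's algorithm), the sketch below sums the forces applied at several hypothetical finger contact points into a net force and torque about the centre of mass and advances the object's linear and angular velocity by one time step, with a diagonal inertia tensor and friction omitted.

    import numpy as np

    def net_wrench(contact_points, contact_forces, centre_of_mass):
        """Sum contact forces into a net force and torque (wrench) about the centre of mass."""
        force = np.sum(contact_forces, axis=0)
        torque = np.sum([np.cross(p - centre_of_mass, f)
                         for p, f in zip(contact_points, contact_forces)], axis=0)
        return force, torque

    mass = 0.5                                              # kg
    inertia = np.array([1e-3, 1e-3, 1e-3])                  # diagonal inertia, kg m^2
    com = np.zeros(3)
    points = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0]])   # two-finger pinch grasp
    forces = np.array([[-1.0, 0.0, 2.0], [1.0, 0.0, 2.0]])     # opposing squeeze plus lift

    force, torque = net_wrench(points, forces, com)
    dt, velocity, omega = 0.001, np.zeros(3), np.zeros(3)
    velocity += dt * (force / mass + np.array([0.0, 0.0, -9.81]))   # include gravity
    omega += dt * (torque / inertia)
    print("net force:", force, "net torque:", torque, "velocity after one step:", velocity)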
An operationally simple Sonogashira reaction for an undergraduate organic chemistry laboratory class
Abstract:
An operationally simple, reliable, and cheap Sonogashira reaction suitable for an undergraduate laboratory class that can be completed within a day-long (8 h) laboratory session has been developed. Cross-coupling is carried out between 2-methyl-3-butyn-2-ol and various aryl iodides using catalytic amounts of bis-(triphenylphosphine)palladium(II) dichloride, with copper(I) iodide as a cocatalyst, in triethylamine at room temperature, so a range of products can be prepared within a single group and results compared. The coupling itself is usually complete within 1.5 h and is easily monitored by TLC, leaving up to 6 h for purification and characterization. Purification is by “mini flash column chromatography” through a plug of silica encased in the barrel of a plastic syringe, so the procedure is amenable to large class sizes.
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
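The effect of sampling frequency on a percentile-based standard is easy to reproduce in outline; the sketch below (illustrative only, using a synthetic hourly temperature record rather than data from the four study rivers) subsamples an hourly series to roughly weekly and monthly strategies and compares the resulting 98th-percentile estimates with the value from the full record.

    import numpy as np
    import pandas as pd

    hours = pd.date_range("2020-01-01", periods=24 * 365, freq="h")
    doy = hours.dayofyear.to_numpy()
    hod = hours.hour.to_numpy()
    rng = np.random.default_rng(0)
    # Synthetic water temperature with seasonal and diel cycles plus noise (degrees C).
    temp = 12 + 6 * np.sin(2 * np.pi * doy / 365) + 1.5 * np.sin(2 * np.pi * hod / 24) \
           + rng.normal(0, 0.5, len(hours))
    series = pd.Series(temp, index=hours)

    weekly = series[(series.index.dayofweek == 2) & (series.index.hour == 10)]  # Wednesdays, 10:00
    monthly = weekly[weekly.index.day <= 7]                                     # first Wednesday each month
    print("98th percentile, full hourly record:", round(series.quantile(0.98), 2))
    print("98th percentile, weekly sampling:   ", round(weekly.quantile(0.98), 2))
    print("98th percentile, monthly sampling:  ", round(monthly.quantile(0.98), 2))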
Abstract:
In contrast to the many studies on the venoms of scorpions, spiders, snakes and cone snails, up to now there has been no report of the proteomic analysis of sea anemone venoms. In this work we report for the first time the peptide mass fingerprint and some novel peptides in the neurotoxic fraction (Fr III) of the venom of the sea anemone Bunodosoma cangicum. Fr III is neurotoxic to crabs and was purified by RP-HPLC on a C-18 column, yielding 41 fractions. By checking their molecular masses by ESI-Q-Tof and MALDI-Tof MS, we found 81 components ranging from about 250 amu to approximately 6000 amu. Some of the peptidic molecules were partially sequenced by the automated Edman technique. Three of them are peptides of about 4500 amu belonging to the class of the BcIV, BDS-I, BDS-II, APETx1, APETx2 and Am-II toxins. Another three peptides represent a novel group of toxins (approximately 3200 amu). A further three molecules (approximately 4900 amu) belong to the group of type 1 sodium channel neurotoxins. When assayed on crab leg nerve compound action potentials, one of the BcIV- and APETx-like peptides exhibited an action similar to that of the type 1 sodium channel toxins in this preparation, suggesting the same target in this assay. On the other hand, one of the novel peptides, of 3176 amu, displayed an action consistent with potassium channel blockage in this experiment. In summary, proteomic analysis and mass fingerprinting of fractions from sea anemone venoms by MS are valuable tools, allowing us to rapidly predict the occurrence of different groups of toxins and facilitating the search for and characterization of novel molecules without the need for full characterization of individual components by broader assays and bioassay-guided purifications. It also shows that sea anemones employ dozens of components for prey capture and defense.