Abstract:
Motivation: In any macromolecular polyprotic system (for example protein, DNA or RNA), the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve, corresponding to the solution pH at which the net overall surface charge, and thus the electrophoretic mobility, of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and the proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken a benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance depends strongly on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to incorporate new features to improve prediction accuracy.
Contact: yperez@ebi.ac.uk
Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR.
Supplementary information: Supplementary data are available at Bioinformatics online.
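As a point of reference, the iterative pI calculators benchmarked above all follow the same scheme: sum per-residue Henderson-Hasselbalch charge contributions and search for the pH where the net charge crosses zero. The sketch below is a minimal, generic version of that scheme; the pKa values are one common textbook-style basis set chosen for illustration, and, as the abstract notes, results are highly sensitive to which basis set is used.

```python
# Minimal sketch of iterative pI calculation (not the pIR package itself).
# pKa basis set (illustrative): N-terminus, C-terminus, ionizable side chains.
PKA_NTERM, PKA_CTERM = 8.6, 3.6
PKA_POS = {"K": 10.8, "R": 12.5, "H": 6.5}            # charged when protonated
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}   # charged when deprotonated

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch)."""
    pos = 1.0 / (1.0 + 10 ** (ph - PKA_NTERM))
    neg = -1.0 / (1.0 + 10 ** (PKA_CTERM - ph))
    for aa in seq:
        if aa in PKA_POS:
            pos += 1.0 / (1.0 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            neg -= 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - ph))
    return pos + neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisection on pH: net charge decreases monotonically with pH."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid        # charge still positive: pI lies at higher pH
        else:
            hi = mid
    return round((lo + hi) / 2.0, 2)
```

Swapping the pKa dictionaries for a different basis set changes the predicted pI, which is exactly the sensitivity the benchmark measures.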
Abstract:
Access control (AC) limits access to the resources of a system to authorized entities only. Given that information systems today are increasingly interconnected, AC is extremely important. Implementing an AC service is a complicated task, yet the requirements for an AC service vary widely. Accordingly, the design of an AC service should be flexible and extensible in order to save development effort and time. Unfortunately, with conventional object-oriented techniques, when an extension has not been anticipated at design time, the modification incurred by the extension is often invasive. Invasive changes destroy design modularity, further deteriorate design extensibility and, even worse, reduce product reliability. A concern is crosscutting if it spans multiple object-oriented classes, and invasive changes were found to be due to the crosscutting nature of most unplanned extensions. To overcome this problem, an aspect-oriented design approach for AC services was proposed, as aspect-oriented techniques can effectively encapsulate crosscutting concerns. The proposed approach was applied to develop an AC framework supporting the role-based access control (RBAC) model. In the framework, the core role-based access control mechanism is given in an object-oriented design, while each extension is captured as an aspect. The resulting framework is well modularized, flexible and, most importantly, supports noninvasive adaptation. In addition, a process to formalize the aspect-oriented design was described, with the purpose of providing high assurance for AC services. Object-Z was used to specify the static structure, and Predicate/Transition nets were used to model the dynamic behavior. Object-Z was extended to facilitate specification in an aspect-oriented style. The process of formal modeling helps designers enhance their understanding of the design and hence detect problems. Furthermore, the specification can be mathematically verified, which provides confidence that the design is correct. An example illustrated that the model is ready for formal analysis.
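The core idea of separating a crosscutting AC concern from business logic can be sketched in a few lines. This is not the dissertation's aspect-oriented framework; it merely illustrates the principle using a Python decorator in the role of an aspect, so the business function stays oblivious to access control and the check is woven in noninvasively. The role and permission names are hypothetical.

```python
# Illustrative sketch: RBAC enforced as a decorator ("aspect"), keeping the
# crosscutting AC concern out of the core business logic.
from functools import wraps

# Core RBAC mechanism: users are assigned roles, roles carry permissions.
ROLE_PERMISSIONS = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def requires(permission):
    """Aspect-like advice: enforce a permission before the wrapped call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            granted = set()
            for role in USER_ROLES.get(user, set()):
                granted |= ROLE_PERMISSIONS.get(role, set())
            if permission not in granted:
                raise PermissionError(f"{user} lacks '{permission}' permission")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete")          # the business function below knows nothing of AC
def delete_record(user, record_id):
    return f"record {record_id} deleted by {user}"
```

Adding a new AC extension (say, time-of-day restrictions) would mean writing another decorator rather than editing `delete_record`, which is the noninvasive adaptation the abstract argues for.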
Abstract:
This paper proposes conceptual designs of multi-degree-of-freedom (DOF) compliant parallel manipulators (CPMs), including 3-DOF translational CPMs and 6-DOF CPMs, using a building-block-based pseudo-rigid-body-model (PRBM) approach. The proposed multi-DOF CPMs are composed of wire-beam-based compliant mechanisms (WBBCMs) as distributed-compliance compliant building blocks (CBBs). Firstly, a comprehensive literature review of design approaches for compliant mechanisms is conducted, and a building-block-based PRBM is then presented, which replaces the traditional kinematic sub-chain with an appropriate multi-DOF CBB. In order to obtain decoupled 3-DOF translational CPMs (XYZ CPMs), two classes of kinematically decoupled 3-PPPR (P: prismatic joint, R: revolute joint) translational parallel mechanisms (TPMs) and 3-PPPRR TPMs are identified based on the type synthesis of rigid-body parallel mechanisms, and WBBCMs as the associated CBBs are further designed. By replacing the traditional actuated P joint and the traditional passive PPR/PPRR sub-chain in each leg of the 3-DOF TPM with the counterpart CBBs (i.e. WBBCMs), a number of decoupled XYZ CPMs are obtained through appropriate arrangements. In order to obtain decoupled 6-DOF CPMs, an orthogonally arranged decoupled 6-PSS (S: spherical joint) parallel mechanism is first identified, and two example 6-DOF CPMs are then proposed using the building-block-based PRBM method. Among these designs, two types of monolithic XYZ CPM design with extended life are presented.
Abstract:
Government communication is an important management tool during a public health crisis, but understanding its impact is difficult. Strategies may be adjusted in reaction to developments on the ground and it is challenging to evaluate the impact of communication separately from other crisis management activities. Agent-based modeling is a well-established research tool in social science to respond to similar challenges. However, there have been few such models in public health. We use the example of the TELL ME agent-based model to consider ways in which a non-predictive policy model can assist policy makers. This model concerns individuals’ protective behaviors in response to an epidemic, and the communication that influences such behavior. Drawing on findings from stakeholder workshops and the results of the model itself, we suggest such a model can be useful: (i) as a teaching tool, (ii) to test theory, and (iii) to inform data collection. We also plot a path for development of similar models that could assist with communication planning for epidemics.
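To make the agent-based approach concrete, the toy model below simulates the kind of dynamic the abstract describes: agents adopt protective behavior with a probability that rises with local epidemic prevalence and with exposure to a communication campaign. This is an illustrative sketch, not the TELL ME model; all parameter values are hypothetical.

```python
# Toy agent-based sketch: protective-behavior adoption driven by prevalence
# and communication reach. Parameters are illustrative only.
import random

def step(agents, prevalence, comms_reach, seed=None):
    """One tick: each agent may adopt protective behavior; returns adoption rate."""
    rng = random.Random(seed)
    for agent in agents:
        exposed = rng.random() < comms_reach      # saw the campaign this tick
        p_adopt = 0.05 + 0.4 * prevalence + (0.3 if exposed else 0.0)
        if rng.random() < p_adopt:
            agent["protective"] = True
    return sum(a["protective"] for a in agents) / len(agents)
```

Even a non-predictive sketch like this supports the uses the abstract lists: it can teach the shape of the dynamics, expose the theoretical assumptions (here, an additive adoption probability), and show which quantities (prevalence, campaign reach) would need to be measured.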
Abstract:
Progress in cognitive neuroscience relies on methodological developments to increase the specificity of knowledge obtained regarding brain function. For example, in functional neuroimaging the current trend is to study the type of information carried by brain regions rather than simply compare activation levels induced by task manipulations. In this context, the conventional uses of noninvasive transcranial brain stimulation (NTBS) in the study of cognitive functions may appear coarse and old-fashioned. However, through their multitude of parameters, and by coupling with behavioral manipulations, NTBS protocols can reach the specificity of imaging techniques. Here we review the different paradigms that have aimed to accomplish this in both basic science and clinical settings, following the general philosophy of information-based approaches.
Abstract:
From a new perspective, this paper clarifies the internal and external factors affecting carbon assets, building on an analysis of the concept's connotation. Taking enterprise business activities as the source of carbon assets, it uses an automotive group as an example and establishes a topological network of the group's passenger-car business activities. The paper provides a method for identifying carbon assets from the relationships among business activities, explains the formation mechanism of different assets by refining this network, and offers a reference for re-identifying enterprise carbon assets from the perspective of development.
Abstract:
Introduction: For a long time, language learning research focusing on young learners was a neglected field. Most empirical studies within the broad area of second/foreign language acquisition were instead carried out among adults in tertiary education, and it was not until the 1990s that the scope of research broadened to also include young learners, then loosely defined as children in primary and/or secondary education (see, for example, Hasselgreen & Drew, 2012; McKay, 2006; Nikolov, 2009a). In fact, agreement on how to define 'young learners' was not properly discussed until 2013, when Gail Ellis (2013) provided some useful clarifications regarding how to label learners within the broad age span that encompasses both primary and secondary school. In short, based on a literature overview, she concludes that the term young learners is most often used for children between the ages of five and eleven/twelve, which in most countries is equivalent to learners in primary school. Thus, since young learners did not attract much scholarly attention until fairly recently, research volumes on the topic have been scarce. However, with a rapidly growing interest in examining how small children learn foreign languages, there has been a sudden increase in the number of books targeting young language learners. A first major contribution was Nikolov's (2009b) Early learning of modern foreign languages, which accounts for 16 studies of young language learners from different countries. Another important contribution is the edited book reviewed here, which specifically targets studies about various aspects of second/foreign language learning among young (mainly Norwegian) learners.
Bearing in mind that Norway and Sweden are very similar countries in terms of schooling, language background and demographics (to give just three examples of similarities between these two nations), it is particularly relevant for Swedish scholars within the fields of education and second language acquisition to become familiar with research findings from the neighboring country. In this review, the editors and the outline of the book are first described; brief summaries of each chapter are then provided, before the text closes with an evaluation of the volume.
Abstract:
Purpose – Graffiti, both ancient and contemporary, can be argued to be significant and therefore worthy of protection. Attaching value is, however, subjective, with no single method being used to evaluate these items. The purpose of this paper is to help those attempting to evaluate the merit of graffiti by determining its "cultural significance", a widely adopted concept for attaching value to the historic built environment. The current Scottish system used to assess cultural significance is the Scottish Historic Environment Policy (SHEP), which shares many common features with determinants of cultural significance in other countries. The SHEP document, like other systems, could however be criticised for being insufficiently sensitive to enable the evaluation of historic graffiti, due in part to the subjective nature of determining aesthetic value. Design/methodology/approach – A review of the literature is followed by consideration of case studies taken from a variety of historical and geographical contexts. Most examples of graffiti included in this paper were selected for their relatively high profile, previous academic study, and breadth of geographic spread; this selection should enable a relatively comprehensive, rational assessment. That said, one example has been included to reflect commonly occurring graffiti typical of the built environment as a whole. Findings – The determination of aesthetic value is particularly problematic for the evaluator, and the use of additional art-based mechanisms such as "significant form", "self-expression" and "meaning" may aid this process. Regrettably, these determinants are themselves subjective, increasing the complexity of evaluation. Almost all graffiti could be said to have artistic merit using the aforementioned determinants; whether it is "good" art is an altogether different question. Traditionally, "good" art and graffiti would have been evaluated by experts. Today, graffiti should be evaluated, and value attached, by broader society, community groups and experts alike. Originality/value – This research will assist those responsible for historic building conservation in evaluating whether graffiti is worthy of conservation.
Abstract:
With the world of professional sports shifting towards better sport analytics, the demand for vision-based performance analysis has grown considerably in recent years. In addition, the nature of many sports does not allow the use of sensors or other wearable markers attached to players for monitoring their performance during competitions. This creates a potential application for systematic observations, such as tracking information on the players, to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a bigger ongoing project between sport scientists and computer scientists, also involving industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using the example of team sports such as rugby, football or hockey. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to deal with a central difficulty in visual tracking: handling target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors, such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which reflect these challenges and provide robust and accurate frameworks to estimate the target state in a complex tracking scenario such as a sports scene, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty that needs to be addressed: data association. Here, the aim is to locate all targets of interest, infer their trajectories, and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this particular problem, especially in sports scenes, where players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between players. The presented approach is evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is used as the basic input for further higher-level performance analysis, such as tactics and team formations, which can help coaches design a better training plan. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours such as players' interactions. The proposed framework relies on two distinct levels of performance analysis: low-level analysis, such as identifying player positions on the field, and high-level analysis, where the aim is to estimate the density of player locations or detect possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
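The data-association problem mentioned above can be illustrated in miniature: given the predicted positions of existing tracks and a set of new detections, decide which detection belongs to which track. The sketch below uses greedy nearest-neighbour matching with a gating distance; real trackers, including the frameworks in this thesis, add appearance and motion models precisely because players of the same team look alike, but the underlying assignment problem is the same. The gate value is an assumption.

```python
# Minimal sketch of the data-association step in multi-target tracking.
import math

def associate(tracks, detections, gate=50.0):
    """Greedy nearest-neighbour assignment of detections to track positions.

    tracks, detections: lists of (x, y) positions.
    Returns a dict {track_index: detection_index}.
    """
    # All candidate pairs, cheapest (closest) first.
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    assigned, used_tracks, used_dets = {}, set(), set()
    for dist, ti, di in pairs:
        if dist > gate:
            break                      # remaining pairs are even farther away
        if ti not in used_tracks and di not in used_dets:
            assigned[ti] = di
            used_tracks.add(ti)
            used_dets.add(di)
    return assigned
```

Greedy matching can fail when two players cross paths closely; that matching ambiguity is exactly what the thesis's more elaborate association framework is designed to resolve.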
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session; highly secure authentication methods must therefore be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and for the analysis of network traffic.
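The n-gram idea at the heart of this approach can be sketched briefly: build per-user bigram counts from logged action sequences, then score new sessions by their average log-probability under that model, flagging low-scoring sessions as deviations from "normal behavior". This is an illustrative sketch in the spirit of the abstract, not the Intruder Detector implementation; the action names, smoothing constant, and vocabulary size are assumptions.

```python
# Minimal sketch: bigram model over user actions for continuous authentication.
import math
from collections import defaultdict

def train_bigrams(sessions):
    """Count bigrams over user action sequences (lists of action names)."""
    counts, context = defaultdict(int), defaultdict(int)
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def score(model, seq, vocab_size, alpha=1.0):
    """Average log-likelihood of a session under add-alpha smoothed bigrams."""
    counts, context = model
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * vocab_size)
        total += math.log(p)
    return total / max(1, len(seq) - 1)

# Hypothetical training data: one user's habitual session pattern.
model = train_bigrams([["login", "inbox", "read", "logout"]] * 20)
```

A session matching the user's habitual pattern scores well above an unusual one, and a threshold on that score gives the binary accept/flag decision.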
Abstract:
There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. In this paper we consider how CBR can be used as a flexible query engine to improve the usability of numerical models; in particular, it can help to solve inverse, mixed and constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor. We describe a model of the problem of particle degradation in such a conveyor, and the problems faced by design engineers. The solution of these problems requires a system that allows iterative sharing of control between the user, the CBR system and the numerical model. This multi-initiative interaction is illustrated for the pneumatic conveyor by means of Unified Modeling Language (UML) collaboration and sequence diagrams. We show approaches to the solution of these problems via a CBR tool.
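The inverse-problem use of CBR mentioned above can be sketched simply: a forward numerical model maps design parameters to an outcome, a case base stores previously solved (parameter, outcome) pairs, and an inverse query retrieves the nearest case and refines its parameters against the forward model. The forward model below is a hypothetical stand-in, not the paper's particle degradation model; the quadratic relationship and all values are assumptions.

```python
# Sketch: case base as a query engine over a numerical model (inverse problem).
def forward(velocity):
    """Hypothetical forward model: degradation grows with conveying velocity."""
    return 0.002 * velocity ** 2

# Case base of previously solved (input, output) pairs.
cases = [(v, forward(v)) for v in (5, 10, 15, 20, 25, 30)]

def inverse_query(target_degradation, steps=40):
    """Retrieve the nearest case, then refine by bisection near that case."""
    v, _ = min(cases, key=lambda c: abs(c[1] - target_degradation))
    lo, hi = max(0.0, v - 5), v + 5       # search window around retrieved case
    for _ in range(steps):
        mid = (lo + hi) / 2
        if forward(mid) < target_degradation:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Retrieval narrows the search to a plausible neighbourhood, so the numerical refinement needs only a local search; this is the division of labour between the CBR system and the numerical model that the paper's multi-initiative interaction formalizes.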
Abstract:
In this paper, we present a case-based reasoning (CBR) approach to solving educational timetabling problems. Following the basic idea behind CBR, the solutions of previously solved problems are employed to aid in finding solutions for new problems. A list of feature-value pairs is insufficient to represent all the necessary information; we show that attribute graphs can represent more information and thus help to retrieve re-usable cases whose structures are similar to the new problems. The case base is organised as a decision tree that stores the attribute graphs of solved problems hierarchically. An example is given to illustrate the retrieval, re-use and adaptation of structured cases. The results of our experiments show the effectiveness of the retrieval and adaptation in the proposed method.
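The attribute-graph representation can be illustrated with a toy example: a timetabling problem is a graph whose nodes are events and whose labelled edges are constraints between them, and retrieval prefers the stored case sharing the most structure with the new problem. This sketch uses plain edge overlap for similarity; the paper's decision-tree index and adaptation step are not reproduced, and the event and constraint labels are illustrative.

```python
# Sketch: structure-based case retrieval over attribute graphs.
def edge_set(graph):
    """Normalise a graph given as (node_a, node_b, constraint_label) triples."""
    return {(min(a, b), max(a, b), label) for a, b, label in graph}

def retrieve(case_base, new_problem):
    """Return the stored case whose attribute graph overlaps most with the query."""
    target = edge_set(new_problem)
    return max(case_base, key=lambda case: len(edge_set(case["graph"]) & target))

case_base = [
    {"name": "semester1",
     "graph": [("math", "physics", "same_students"),
               ("math", "lab", "same_room")]},
    {"name": "semester2",
     "graph": [("art", "music", "same_teacher")]},
]
new_problem = [("math", "physics", "same_students"),
               ("physics", "lab", "same_students")]
```

A flat feature-value list would lose exactly the information this comparison uses: which events are tied together, and by which kind of constraint.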
Abstract:
This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and a better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or to develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where the 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has previously been studied to describe differences between the naive and immunised states and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been largely descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test hypotheses about the mode of T cell search for DCs. A two-state mode of movement, in which T cells are classified as either interacting with a DC or freely migrating, is supported over a model in which T cells home in on DCs at a distance through, for example, the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblast reticular cell (FRC) network. I find support for the hypothesis that movement is constrained to the FRC network over an alternative 'random walk with persistence time' model, in which cells would move randomly with a short-term persistence driven by a hypothetical T cell-intrinsic 'clock'. I also present unexpected results on the FRC network geometry. Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three data chapters, novel findings are made, and the methods developed have the potential for further use on important problems in the field. In chapter 5, I present a summary and synthesis of results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
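The 'random walk with persistence time' null model tested in chapter 4 can be sketched directly: a cell keeps its heading for an exponentially distributed persistence time, then picks a new random direction. The simulation below is a generic 2D illustration of that model class; the speed, time step, and persistence values are illustrative, not fitted to the imaging data.

```python
# Sketch: 2D random walk with exponentially distributed persistence time.
import math
import random

def persistent_walk(n_steps, speed=10.0, dt=0.5, persistence=2.0, seed=0):
    """Simulate a persistent random walk; returns the visited (x, y) points."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(0, 2 * math.pi)
    t_switch = rng.expovariate(1.0 / persistence)   # next re-orientation time
    path, t = [(x, y)], 0.0
    for _ in range(n_steps):
        t += dt
        if t >= t_switch:                           # persistence time elapsed
            heading = rng.uniform(0, 2 * math.pi)
            t_switch = t + rng.expovariate(1.0 / persistence)
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        path.append((x, y))
    return path
```

Comparing trajectories generated by such a model against tracks constrained to a measured network geometry is the kind of hypothesis test the chapter performs.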
Abstract:
In this paper we use concepts from graph theory and cellular biology, represented as ontologies, to carry out semantic mining tasks on signaling pathway networks. Specifically, the paper describes the semantic enrichment of signaling pathway networks. A cell signaling network describes basic cellular activities and their interactions. The main contribution of this paper, in the signaling pathway research area, is a new technique to analyze and understand how changes in these networks may affect the transmission and flow of information, changes which can produce diseases such as cancer and diabetes. Our approach is based on three concepts from graph theory (modularity, clustering and centrality) frequently used in social network analysis. The approach consists of two phases: the first uses the graph theory concepts to determine the cellular groups in the network, which we call communities; the second uses ontologies for the semantic enrichment of these cellular communities. The measures from graph theory allow us to determine the set of cells that are close (for example, in a disease) and the main cells in each community. We analyze our approach in two cases: TGF-β and Alzheimer's disease.
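Two of the graph measures named above can be sketched in a few lines: degree centrality (identifying the main cells in a community) and the local clustering coefficient (how tightly knit a cell's neighbourhood is). The toy undirected graph is illustrative, not a real pathway, and modularity-based community detection itself (e.g. a Louvain-style algorithm) would normally come from a graph library rather than be written by hand.

```python
# Sketch: degree centrality and local clustering on a toy undirected graph.
from itertools import combinations

def degree_centrality(adj):
    """Fraction of the other nodes that each node touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def clustering(adj, v):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

# Toy network: a triangle (A, B, C) plus a pendant node D.
adj = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
```

In pathway terms, a node with high centrality and low clustering (like C here) is a candidate "main cell" bridging communities, which is the kind of structural role the semantic enrichment phase then annotates with ontology terms.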