902 results for Sequential encounters
Abstract:
The aim of this work is the implementation of a low-temperature reforming (LT reforming) unit downstream of the Haloclean pyrolyser in order to enhance the heating value of the pyrolysis gas. Attaining a synthesis-gas quality for further use was outside the focus of this work. Temperatures between 400 °C and 500 °C were applied. A commercial nickel-based pre-reforming catalyst from Südchemie was chosen for LT reforming. Wheat straw was used as the biogenic feedstock. Pyrolysis of wheat straw at 450 °C by means of Haloclean pyrolysis yields 28% char, 50% condensate and 22% gas. The condensate separates into a water phase and an organic phase. The organic phase is liquid but contains viscous compounds. These compounds could undergo ageing and lead to solid tars, which can cause post-processing problems. The implementation of a catalytic reformer is therefore of interest not only from an energetic point of view, but generally for tar conversion purposes after pyrolysis applications. By using a fixed-bed reforming unit at 450–490 °C and space velocities of about 3000 l/h, the pyrolysis gas volume flow could be increased by about 58%. This corresponds to a decrease of the condensate yields through catalysis of up to 17%; the char yield remains unchanged, since the pyrolysis conditions are the same. The heating value of the pyrolysis gas could be increased by a factor of 1.64. Hydrogen concentrations of up to 14% could be realised.
Abstract:
An essential stage in endocytic coated vesicle recycling is the dissociation of clathrin from the vesicle coat by the molecular chaperone, 70-kDa heat-shock cognate protein (Hsc70), and the J-domain-containing protein, auxilin, in an ATP-dependent process. We present a detailed mechanistic analysis of clathrin disassembly catalyzed by Hsc70 and auxilin, using loss of perpendicular light scattering to monitor the process. We report that a single auxilin per clathrin triskelion is required for the maximal rate of disassembly, that ATP is hydrolyzed at the same rate that disassembly occurs, and that three ATP molecules are hydrolyzed per clathrin triskelion released. Stopped-flow measurements revealed a lag phase in which the scattering intensity increased owing to association of Hsc70 with clathrin cages, followed by serial rounds of ATP hydrolysis prior to triskelion removal. Global fit of the stopped-flow data to several physically plausible mechanisms showed the best fit to a model in which sequential hydrolysis of three separate ATP molecules is required for the eventual release of a triskelion from the clathrin-auxilin cage.
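The favoured sequential mechanism can be sketched numerically: if a triskelion is released only after three sequential hydrolysis steps, the release-time distribution is Erlang, which reproduces the observed lag phase in the scattering signal. The sketch below is a minimal illustration only; the rate constant is hypothetical, not a fitted value from the study.

```python
import math

def fraction_released(t, k, n=3):
    """Probability that all n sequential ATP-hydrolysis steps (each with
    rate constant k) have completed by time t: the Erlang(n, k) CDF."""
    return 1.0 - math.exp(-k * t) * sum((k * t) ** i / math.factorial(i)
                                        for i in range(n))

def scattering_signal(t, k, n=3):
    """Normalised light-scattering intensity: fraction of cages intact."""
    return 1.0 - fraction_released(t, k, n)

# A sequential mechanism produces a lag phase: at early times almost no
# triskelia have completed all three steps, whereas a single-step
# (exponential) model releases at its maximal rate from t = 0.
k = 1.0  # hypothetical rate constant (s^-1), not from the paper
sequential_early = fraction_released(0.1, k)   # near zero: the lag
single_step_early = 1.0 - math.exp(-k * 0.1)   # markedly larger: no lag
```

Comparing `sequential_early` with `single_step_early` shows why a sequential model, unlike a one-step model, fits an initial lag.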
Abstract:
The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care using a modified form of naïve Bayesian sequential analysis. No such investigation has been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms and ocular and medical history) and clinical signs in relation to each diagnosis. For this reason, only positive likelihood ratios were used for this modified form of Bayesian analysis, which was carried out with Laplacian correction and Chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnosis made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of Chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence, followed by presentation variation and comorbidity. Analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy, compared to the preliminary analysis, was attributed to the fact that some diagnoses lacked strong diagnostic signs; the accuracy increased by 1% when only recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation.
Future work will explore the use of likelihood ratios based on positive and negative test findings prior to considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
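The positive-likelihood-ratio ranking described above can be illustrated with a minimal sketch: each candidate diagnosis starts at its prior odds (from prevalence) and is multiplied by the LR+ of every positive finding entered, then the list is sorted. The prevalences and likelihood ratios below are hypothetical, not values from the study, and the Laplacian correction and Chi-square filtering steps are omitted.

```python
def rank_diagnoses(prevalence, positive_lr, findings):
    """Rank candidate diagnoses by posterior odds under the naive-Bayes
    assumption: prior odds (from prevalence) multiplied by the positive
    likelihood ratio (LR+) of each positive finding entered."""
    scores = {}
    for dx, prev in prevalence.items():
        odds = prev / (1.0 - prev)                    # prior odds
        for finding in findings:
            odds *= positive_lr[dx].get(finding, 1.0) # LR+ per finding
        scores[dx] = odds
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical prevalences and LR+ values, for illustration only:
prevalence = {"dry eye": 0.20, "glaucoma suspect": 0.02}
positive_lr = {
    "dry eye":          {"burning": 4.0, "raised IOP": 0.5},
    "glaucoma suspect": {"burning": 0.8, "raised IOP": 12.0},
}
# A strong sign can move a low-prevalence diagnosis to the top of the
# list; accuracy in the study was scored by whether the clinician's
# diagnosis appeared in this top position.
ranking = rank_diagnoses(prevalence, positive_lr, ["raised IOP"])
```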
Abstract:
When designing a practical swarm robotics system, self-organized task allocation is key to making best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm is established. It is shown empirically that an algorithm which meets these conditions, by causing emergent cooperation between robots, can achieve consistently high performance under a wide range of settings without the need for communication. © 2013 IEEE.
Abstract:
∗ Supported by the Serbian Scientific Foundation, grant No 04M01
Abstract:
Sequential pattern mining is an important subject in data mining with broad applications in many different areas. However, previous sequential mining algorithms have mostly aimed to calculate the number of occurrences (the support) without regard to the degree of importance of different data items. In this paper, we propose to explore the search space of subsequences with normalized weights. We are interested not only in the number of occurrences of the sequences (their supports), but also in the importance of the sequences (their weights). When generating subsequence candidates, we use both the support and the weight of the candidates while maintaining the downward closure property of these patterns, which allows the candidate-generation process to be accelerated.
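A minimal sketch of the idea: an Apriori-style level-wise miner in which candidates are pruned using support times the maximum item weight, an antimonotone upper bound, so the downward closure property is preserved even though weighted support itself is not antimonotone. The function names, toy database, and the particular weighted-support definition (support × average item weight) are illustrative assumptions, not the authors' algorithm.

```python
def is_subseq(pat, seq):
    """True if pat occurs in seq as a (not necessarily contiguous) subsequence."""
    it = iter(seq)
    return all(any(x == y for y in it) for x in pat)

def weighted_sequential_patterns(db, weights, min_wsup):
    """Level-wise mining of weighted sequential patterns.  A pattern's
    weighted support is support * average item weight; pruning uses
    support * max item weight, an antimonotone upper bound, so downward
    closure still holds and the search space can be cut safely."""
    max_w = max(weights.values())
    items = sorted({x for s in db for x in s})
    level = [(x,) for x in items]
    result = {}
    while level:
        next_level = []
        for pat in level:
            sup = sum(is_subseq(pat, s) for s in db)
            if sup * max_w < min_wsup:      # safe prune via the upper bound
                continue
            avg_w = sum(weights[x] for x in pat) / len(pat)
            if sup * avg_w >= min_wsup:     # the actual weighted-support test
                result[pat] = sup * avg_w
            next_level += [pat + (x,) for x in items]
        level = next_level
    return result

db = [("a", "b", "c"), ("a", "c"), ("b", "c")]
weights = {"a": 1.0, "b": 0.5, "c": 1.0}
patterns = weighted_sequential_patterns(db, weights, min_wsup=2.0)
# ('b',) is kept alive by the upper bound (so its extensions are still
# explored) but is not reported: its weighted support is only 2 * 0.5 = 1.0.
```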
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation which is chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
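The projected, sequential idea can be sketched as follows: a reduced-rank process is equivalent to a Bayesian linear model in a fixed low-rank basis, and each observation then admits a closed-form rank-1 posterior update, so no high-dimensional integrals are needed. This is a minimal illustration with an assumed RBF basis and Gaussian noise, not the gptk implementation (which additionally supports generic observation operators and non-Gaussian errors).

```python
import numpy as np

def features(x, centres, ell=0.5):
    """Fixed low-rank basis: RBF features centred on a few inducing points."""
    return np.exp(-0.5 * ((x - centres) / ell) ** 2)

def sequential_update(mu, S, phi, y, noise_var):
    """Exact rank-1 Bayesian update of the weight posterior N(mu, S)
    after observing y = phi . w + noise, with noise ~ N(0, noise_var)."""
    s = phi @ S @ phi + noise_var        # scalar predictive variance of y
    gain = S @ phi / s                   # Kalman-style gain vector
    mu = mu + gain * (y - phi @ mu)
    S = S - np.outer(gain, phi @ S)
    return mu, S

centres = np.linspace(0.0, 1.0, 5)       # rank-5 representation
mu, S = np.zeros(5), np.eye(5)           # prior over basis weights
for x, y in [(0.1, 0.2), (0.5, 1.0), (0.9, 0.1)]:   # one datum at a time
    mu, S = sequential_update(mu, S, features(x, centres), y, noise_var=0.01)

def predict(x):
    """Posterior predictive mean and variance of a new observation at x."""
    phi = features(x, centres)
    return phi @ mu, phi @ S @ phi + 0.01
```

Because each update is exact for the linear model, the sequential result matches the batch posterior while only ever touching rank-sized matrices.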
Abstract:
In this paper we discuss how an innovative audio-visual project was adopted to foster active, rather than declarative, learning in critical International Relations (IR). First, we explore the aesthetic turn in IR, to contrast it with forms of representation that have dominated IR scholarship. Second, we describe how students were asked to record short audio or video projects to explore their own insights through aesthetic and non-written formats. Third, we explain how these projects are understood to be deeply embedded in social science methodologies. We cite our inspiration from applying a personal sociological imagination as a way to counterbalance a ‘marketised’ slant in higher education, in a global economy where students are often encouraged to consume, rather than produce, knowledge. Finally, we draw conclusions in terms of deeper forms of student engagement leading to new ways of thinking and presenting, new skills, and new connections between theory and practice.
Abstract:
This article explores powerful, constraining representations of encounters between digital technologies and the bodies of students and teachers, using corpus-based Critical Discourse Analysis (CDA). It discusses examples from a corpus of UK Higher Education (HE) policy documents, and considers how confronting such documents may strengthen arguments from educators against narrow representations of an automatically enhanced learning. Examples reveal that a promise of enhanced ‘student experience’ through information and communication technologies internalizes the ideological constructs of technology and policy makers, to reinforce a primary logic of exchange value. The identified dominant discursive patterns are closely linked to the Californian ideology. Exposing these texts provides a form of ‘linguistic resistance’ for educators to disrupt powerful processes that serve the interests of a neoliberal social imaginary. To mine this current crisis of education, the authors introduce productive links between a Networked Learning approach and a posthumanist perspective. The Networked Learning approach emphasises conscious choices between political alternatives, which in turn could help us reconsider the ways we write about digital technologies in policy. Then, based on the works of Haraway, Hayles, and Wark, a posthumanist perspective places human digital learning encounters at the juncture of non-humans and politics. Connections between the Networked Learning approach and the posthumanist perspective are necessary in order to replace a discourse of (mis)representations with a more performative view towards the digital human body, which then becomes situated at the centre of teaching and learning. In practice, however, establishing these connections is much more complex than resorting to the typically straightforward common-sense discourse encountered in Critical Discourse Analysis, and this may yet limit practical applications of this research in policy making.
Abstract:
This paper addresses a problem with an argument in Kranich, Perea, and Peters (2005) supporting their definition of the Weak Sequential Core and their characterization result. We also provide the remedy, a modification of the definition, to rescue the characterization.
Abstract:
This paper uses self-efficacy to predict the success of women in introductory physics. We show how sequential logistic regression demonstrates the predictive ability of self-efficacy, and reveals variations with type of physics course. Also discussed are the sources of self-efficacy that have the largest impact on predictive ability.
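Sequential (block-wise) logistic regression enters predictor blocks in order and attributes the improvement in model fit at each step to the newly entered block, via a likelihood-ratio chi-square. The sketch below uses simulated data with assumed effect sizes; it is not the study's data, variables, or fitted model.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression by Newton-Raphson; returns the coefficients
    (intercept first) and the maximised log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])     # add intercept column
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        b = b + np.linalg.solve(X.T * (p * (1 - p)) @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ b))
    return b, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated cohort: success depends strongly on self-efficacy and weakly
# on a background variable (effect sizes are assumptions for illustration).
rng = np.random.default_rng(0)
n = 400
efficacy = rng.normal(size=n)
background = rng.normal(size=n)
eta = 1.5 * efficacy + 0.2 * background
success = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

# Sequential entry of predictor blocks; each step's improvement in fit
# is a likelihood-ratio chi-square with df = number of added predictors.
_, ll0 = fit_logit(np.empty((n, 0)), success)                 # intercept only
_, ll1 = fit_logit(efficacy[:, None], success)                # + self-efficacy
_, ll2 = fit_logit(np.column_stack([efficacy, background]), success)
chi2_efficacy = 2 * (ll1 - ll0)     # predictive contribution of self-efficacy
chi2_background = 2 * (ll2 - ll1)   # incremental contribution of block 2
```

Comparing the two chi-square statistics shows how the procedure isolates the predictive ability of self-efficacy from that of later blocks.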
Abstract:
Attempts to improve the level of customer service delivered have resulted in an increased use of technology in the customer service environment. Customer-contact employees are expected to use computers to help them provide better service encounters for customers. This research study, done in a business-to-business environment, explored the effects of customer-contact employees' computer self-efficacy and positive mood on in-role customer service, extra-role customer service and organizational citizenship. It also examined the relationship of customer service to customer satisfaction and customer delight. Research questions were analyzed using descriptive statistics, frequency distributions, correlation analysis, and regression analysis. Results indicated that computer self-efficacy had a greater impact on extra-role customer service than on in-role customer service. Positive mood had a positive moderating influence on extra-role customer service but not on in-role customer service. There was a significant relationship between in-role customer service and customer satisfaction but not between extra-role customer service and customer satisfaction. There was no significant relationship between in-role customer service and customer delight, nor between extra-role customer service and customer delight. There was a statistically greater positive relationship between joy experienced by clients and customer delight than between pleasant surprise and customer delight. This study demonstrated the importance of facilitating customer-contact employees' positive mood on the job in order to improve the level of extra-role customer service delivered. It also showed that increasing the level of customer service does not necessarily lead to higher levels of customer satisfaction.