41 results for User Influence, Micro-blogging platform, Action-based Network, Dynamic Model
Abstract:
Despite being nominated as a key potential interaction technique for supporting today's mobile technology user, the widespread commercialisation of speech-based input is currently being impeded by unacceptable recognition error rates. Developing effective speech-based solutions for use in mobile contexts, given the varying extent of background noise, is challenging. The research presented in this paper is part of an ongoing investigation into how best to incorporate speech-based input within mobile data collection applications. Specifically, this paper reports on a comparison of three different commercially available microphones in terms of their efficacy to facilitate mobile, speech-based data entry. We describe, in detail, our novel evaluation design as well as the results we obtained.
Abstract:
The field of Semantic Web Services (SWS) has been recognized as one of the most promising areas of emergent research within the Semantic Web initiative, exhibiting an extensive commercial potential and attracting significant attention from both industry and the research community. Currently, there exist several different frameworks and languages for formally describing a Web Service: Web Ontology Language for Services (OWL-S), Web Service Modelling Ontology (WSMO) and Semantic Annotations for the Web Services Description Language (SAWSDL) are the most important approaches. To the inexperienced user, choosing the appropriate platform for a specific SWS application may prove to be challenging, given a lack of clear separation between the ideas promoted by the associated research communities. In this paper, we systematically compare OWL-S, WSMO and SAWSDL from various standpoints, namely, that of the service requester and provider as well as the broker-based view. The comparison is meant to help users to better understand the strengths and limitations of these different approaches to formalizing SWS, and to choose the most suitable solution for a given application. Copyright © 2015 John Wiley & Sons, Ltd.
Abstract:
BACKGROUND: Tobacco industry interference has been identified as the greatest obstacle to the implementation of evidence-based measures to reduce tobacco use. Understanding and addressing industry interference in public health policy-making is therefore crucial. Existing conceptualisations of corporate political activity (CPA) are embedded in a business perspective and do not attend to CPA's social and public health costs; most have not drawn on the unique resource represented by internal tobacco industry documents. Building on this literature, including systematic reviews, we develop a critically informed conceptual model of tobacco industry political activity. METHODS AND FINDINGS: We thematically analysed published papers included in two systematic reviews examining tobacco industry influence on taxation and marketing of tobacco; we included 45 of 46 papers in the former category and 20 of 48 papers in the latter (n = 65). We used a grounded theory approach to build taxonomies of "discursive" (argument-based) and "instrumental" (action-based) industry strategies and from these devised the Policy Dystopia Model, which shows that the industry, working through different constituencies, constructs a metanarrative to argue that proposed policies will lead to a dysfunctional future of policy failure and widely dispersed adverse social and economic consequences. Simultaneously, it uses diverse, interlocking insider and outsider instrumental strategies to disseminate this narrative and enhance its persuasiveness in order to secure its preferred policy outcomes. Limitations are that many papers were historical (some dating back to the 1970s) and focused on high-income regions. CONCLUSIONS: The model provides an evidence-based, accessible way of understanding diverse corporate political strategies. It should enable public health actors and officials to preempt these strategies and develop realistic assessments of the industry's claims.
Abstract:
The retrieval of wind fields from scatterometer observations has traditionally been separated into two phases: local wind vector retrieval and ambiguity removal. Operationally, a forward model relating wind vector to backscatter is inverted, typically using look-up tables, to retrieve up to four local wind vector solutions. A heuristic procedure, using numerical weather prediction forecast wind vectors and, often, some neighbourhood comparison, is then used to select the correct solution. In this paper we develop a Bayesian method for wind field retrieval, and show how a direct local inverse model, relating backscatter to wind vector, improves the wind vector retrieval accuracy. We compare these results with the operational U.K. Meteorological Office retrievals, our own CMOD4 retrievals and a neural network based local forward model retrieval. We suggest that the neural network based inverse model, which is extremely fast to use, improves upon current forward models when used in a variational data assimilation scheme.
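To make the two-stage retrieval and the Bayesian alternative concrete, the toy sketch below (not the paper's code; the forward model, noise level and NWP prior are invented placeholders) scores a set of ambiguous local wind vector solutions by combining a Gaussian observation likelihood with a Gaussian forecast prior, and keeps the maximum-posterior solution.

import numpy as np

def forward_model(wind_uv):
    """Toy stand-in for a CMOD-style forward model sigma0 = f(u, v)."""
    u, v = wind_uv
    speed = np.hypot(u, v)
    direction = np.arctan2(v, u)
    # three fictitious beams with different azimuthal sensitivity
    return np.array([0.1 * speed * (1 + 0.3 * np.cos(direction - a))
                     for a in (0.0, 0.8, 1.6)])

def log_posterior(wind_uv, sigma0_obs, prior_mean, prior_cov, noise_var=1e-4):
    resid = sigma0_obs - forward_model(wind_uv)
    log_lik = -0.5 * np.sum(resid ** 2) / noise_var      # Gaussian observation noise
    diff = np.asarray(wind_uv) - prior_mean
    log_prior = -0.5 * diff @ np.linalg.solve(prior_cov, diff)  # NWP forecast prior
    return log_lik + log_prior

rng = np.random.default_rng(0)
# candidate (ambiguous) local solutions, e.g. from inverting the forward model
candidates = [np.array([8.0, 2.0]), np.array([-7.5, -2.5]),
              np.array([2.0, 8.0]), np.array([-2.0, -7.8])]
sigma0_obs = forward_model(np.array([8.0, 2.0])) + 0.01 * rng.normal(size=3)
prior_mean = np.array([7.0, 3.0])          # NWP forecast wind
prior_cov = np.diag([4.0, 4.0])

best = max(candidates, key=lambda w: log_posterior(w, sigma0_obs, prior_mean, prior_cov))
print("selected wind vector:", best)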
Abstract:
Current methods for retrieving near-surface winds from scatterometer observations over the ocean surface require a forward sensor model which maps the wind vector to the measured backscatter. This paper develops a hybrid neural network forward model, which retains the physical understanding embodied in CMOD4, but incorporates greater flexibility, allowing a better fit to the observations. By introducing a separate model for the mid-beam and using a common model for the fore- and aft-beams, we show a significant improvement in local wind vector retrieval. The hybrid model also fits the scatterometer observations more closely. The model is trained in a Bayesian framework, accounting for the noise on the wind vector inputs. We show that adding more high wind speed observations in the training set improves wind vector retrieval at high wind speeds without compromising performance at medium or low wind speeds.
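As an illustration only, the sketch below shows one plausible shape for such a hybrid forward model: a crude hand-written physical baseline (standing in for CMOD4, which is not reproduced here) plus small, untrained neural-network corrections, with one network shared by the fore and aft beams and a separate one for the mid beam. All functions, coefficients and sizes are assumptions for demonstration; in the paper the corrections are trained in a Bayesian framework that also accounts for input noise.

import numpy as np

def physical_baseline(speed, rel_dir):
    """Crude CMOD-like baseline: backscatter grows with wind speed and varies
    harmonically with the wind direction relative to the beam azimuth."""
    return 0.05 * speed ** 0.8 * (1.0 + 0.4 * np.cos(rel_dir) + 0.2 * np.cos(2 * rel_dir))

class TinyMLP:
    """One-hidden-layer network standing in for the learned correction."""
    def __init__(self, rng, hidden=8):
        self.W1 = rng.normal(scale=0.1, size=(2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, speed, rel_dir):
        x = np.array([speed, rel_dir])
        h = np.tanh(x @ self.W1 + self.b1)
        return float(h @ self.W2 + self.b2)

rng = np.random.default_rng(0)
fore_aft_net = TinyMLP(rng)   # common correction for the fore and aft beams
mid_net = TinyMLP(rng)        # separate correction for the mid beam

def hybrid_sigma0(speed, rel_dir, beam):
    net = mid_net if beam == "mid" else fore_aft_net
    return physical_baseline(speed, rel_dir) + net(speed, rel_dir)

for beam in ("fore", "mid", "aft"):
    print(beam, hybrid_sigma0(speed=10.0, rel_dir=0.5, beam=beam))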
Abstract:
Risk and knowledge are two concepts and components of business management which have so far been studied almost independently. This is especially true where risk management (RM) is conceived mainly in financial terms, as, for example, in the financial institutions sector. Financial institutions are affected by internal and external changes with the consequent accommodation to new business models, new regulations and new global competition that includes new big players. These changes induce financial institutions to develop different methodologies for managing risk, such as the enterprise risk management (ERM) approach, in order to adopt a holistic view of risk management and, consequently, to deal with different types of risk, levels of risk appetite, and policies in risk management. However, the methodologies for analysing risk do not explicitly include knowledge management (KM). This research examines the potential relationships between KM and two RM concepts: perceived quality of risk control and perceived value of ERM. To fulfill the objective of identifying how KM concepts can have a positive influence on some RM concepts, a literature review of KM and its processes and RM and its processes was performed. From this literature review, eight hypotheses were analysed using a classification into people, process and technology variables. The data for this research was gathered from a survey applied to risk management employees in financial institutions, and 121 answers were analysed. The analysis of the data was based on multivariate techniques, more specifically stepwise regression analysis. The results showed that the perceived quality of risk control is significantly associated with the variables: perceived quality of risk knowledge sharing, perceived quality of communication among people, web channel functionality, and risk management information system functionality. However, the relationships of the KM variables to the perceived value of ERM are not identified because of the low performance of the models describing these relationships. The analysis reveals important insights into the potential KM support to RM, such as: the better the adoption of KM people and technology actions, the better the perceived quality of risk control. Equally, the results suggest that the quality of risk control and the benefits of ERM follow different patterns, given that there is no correlation between the two concepts and that the KM variables influence each concept differently. The ERM scenario is different from that of risk control because ERM, as an answer to RM failures and adaptation to new regulation in financial institutions, has led organizations to adopt new processes, technologies, and governance models. Thus, the search for factors influencing the perceived value of ERM implementation needs additional analysis because what is improved in RM processes individually is not having the same effect on the perceived value of ERM. Based on these model results and the literature review, the basis of the ERKMAS (Enterprise Risk Knowledge Management System) is presented.
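For readers unfamiliar with the technique, the following sketch shows a forward stepwise regression of the kind named in the abstract, run on synthetic data; the variable names are hypothetical placeholders echoing the constructs mentioned above, not the study's actual survey items or results.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 121  # sample size reported in the abstract
candidates = ["risk_knowledge_sharing", "communication_quality",
              "web_channel_functionality", "rmis_functionality",
              "training_hours"]
X = pd.DataFrame(rng.normal(size=(n, len(candidates))), columns=candidates)
# synthetic outcome standing in for perceived quality of risk control
y = (0.5 * X["risk_knowledge_sharing"] + 0.3 * X["communication_quality"]
     + 0.2 * X["rmis_functionality"] + rng.normal(scale=0.5, size=n))

def forward_stepwise(X, y, alpha=0.05):
    """Add, one at a time, the candidate with the lowest p-value below alpha."""
    selected = []
    remaining = list(X.columns)
    while remaining:
        pvals = {}
        for col in remaining:
            design = sm.add_constant(X[selected + [col]])
            pvals[col] = sm.OLS(y, design).fit().pvalues[col]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

print("retained predictors:", forward_stepwise(X, y))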
Abstract:
Increasingly the body of knowledge derived from strategy theory has been criticized because it is not actionable in practice, particularly under the conditions of a knowledge economy. Since strategic management is an applied discipline this is a serious criticism. However, we argue that the theory-practice question is too simple. Accordingly, this paper expands this question by outlining first the theoretical criteria under which strategy theory is not actionable, and then outlining an alternative perspective on strategy knowledge in action, based upon a practice epistemology. The paper is in three sections. The first section explains two contextual conditions which impact upon strategy theory within a knowledge economy: environmental velocity and knowledge intensity. The impact of these contextual conditions upon the application of four different streams of strategy theory is examined. The second section suggests that the theoretical validity of these contextual conditions breaks down when we consider the knowledge artifacts, such as strategy tools and frameworks, which arise from strategy research. The third section proposes a practice epistemology for analyzing strategy knowledge in action that stands in contrast to more traditional arguments about actionable knowledge. From a practice perspective, strategy knowledge is argued to be actionable as part of the everyday activities of strategizing. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted world-wide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means for supporting a continuum of transport rates and is efficient due to possible statistical sharing of network resources by multiple users. In order to fully exploit the potential statistical gain, while at the same time providing diverse service and traffic mixes, an efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity to the success and viability of future B-ISDN. Congestion and flow control is difficult in the broadband environment due to high link speeds, wide area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures mainly based on preventive measures for a private ATM-based network are proposed and their performance evaluated. The various traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at call level and cell level. They are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides the acceptance or denial of a connection request and allocates bandwidth to the new connection according to three schemes: peak bit rate, statistical rate and average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. The allocation of an average bit rate to data traffic, at the expense of delay, obviously improves the network bandwidth utilisation.
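To illustrate how the three bandwidth-allocation schemes can differ, the sketch below compares peak-rate, average-rate and a statistical allocation for homogeneous on-off sources, with the statistical rate obtained from a bufferless fluid-flow view (the smallest capacity whose instantaneous overflow probability stays below a target). The traffic figures and the acceptance threshold are illustrative assumptions, not the thesis' parameters.

from math import comb

def overflow_prob(n_sources, p_on, peak_rate, capacity):
    """P(aggregate instantaneous rate > capacity) for n_sources on-off sources."""
    k_max = int(capacity // peak_rate)          # sources that fit simultaneously
    return sum(comb(n_sources, k) * p_on ** k * (1 - p_on) ** (n_sources - k)
               for k in range(k_max + 1, n_sources + 1))

def statistical_allocation(n_sources, p_on, peak_rate, target=1e-6):
    capacity = n_sources * p_on * peak_rate      # start from the mean aggregate rate
    while overflow_prob(n_sources, p_on, peak_rate, capacity) > target:
        capacity += peak_rate / 10.0
    return capacity

n, peak, mean = 50, 10.0, 2.0                    # Mbit/s, illustrative numbers
p_on = mean / peak                               # activity factor of each source

print("peak-rate allocation:   ", n * peak)
print("average-rate allocation:", n * mean)
print("statistical allocation: ", round(statistical_allocation(n, p_on, peak), 1))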
Abstract:
Current methods for retrieving near-surface winds from scatterometer observations over the ocean surface require a forward sensor model which maps the wind vector to the measured backscatter. This paper develops a hybrid neural network forward model, which retains the physical understanding embodied in CMOD4, but incorporates greater flexibility, allowing a better fit to the observations. By introducing a separate model for the midbeam and using a common model for the fore and aft beams, we show a significant improvement in local wind vector retrieval. The hybrid model also fits the scatterometer observations more closely. The model is trained in a Bayesian framework, accounting for the noise on the wind vector inputs. We show that adding more high wind speed observations in the training set improves wind vector retrieval at high wind speeds without compromising performance at medium or low wind speeds. Copyright 2001 by the American Geophysical Union.
Abstract:
The thesis addresses the relative importance of factors affecting working-class school-leavers' post-compulsory education transitions into post-sixteen education, training, employment and unemployment. It focuses on school-leavers choosing to enter the labour market, whether successfully or not, and the influences affecting this choice. Methodologically, the longitudinal approach followed young people from before they left school to a period of months after. Discrepancies between young people's intended and actual destinations emphasised the diverse influences on post-sixteen transitions. These influences were investigated through a dynamic multi-method approach, drawing from quantitative and qualitative methodologies providing depth and insight while locating the research within a structural framework, allowing a comparison with local and national trends. Two crucial factors, school and gender, affected young people's intended and actual post-sixteen directions. School policy, including treatment of disaffected pupils and recruitment to a large, on-site sixth form, influenced the number of pupils opting to continue their education. Girls were more likely to continue education after the end of compulsory schooling and gave different reasons to boys for doing so. Family and peer groups were influential, helping young people develop a 'horizon for action' incorporating habitus and subjective preferences that specified acceptable post-sixteen directions. These influences operated within the context of the local labour market. Perception of the latter, rather than actual conditions, informed post-sixteen decisions; however, labour market reality influenced the success of the school-leavers' endeavours. The research found that the economics-based rational choice model of decision-making did not apply to many working-class school-leavers. The cohort made pragmatically rational decisions dependent on their 'horizon for action', based on partial, occasionally inaccurate information. Policy recommendations consider the careers service and the structure of school sixth forms as aiding successful transitions from compulsory education into education, employment or training. The maintenance allowance may be ineffectual in tackling its objective of social inclusion.
Abstract:
In the processing industries particulate materials are often in the form of powders which themselves are agglomerations of much smaller sized particles. During powder processing operations agglomerate degradation occurs primarily as a result of collisions between agglomerates and between agglomerates and the process equipment. Due to the small size of the agglomerates and the very short duration of the collisions it is currently not possible to obtain sufficiently detailed quantitative information from real experiments to provide a sound theoretically based strategy for designing particles to prevent or guarantee breakage. However, with the aid of computer simulated experiments, the micro-examination of these short duration dynamic events is made possible. This thesis presents the results of computer simulated experiments on a 2D monodisperse agglomerate in which the algorithms used to model the particle-particle interactions have been derived from contact mechanics theories and, necessarily, incorporate contact adhesion. A detailed description of the theoretical background is included in the thesis. The results of the agglomerate impact simulations show three types of behaviour depending on whether the initial impact velocity is high, moderate or low. It is demonstrated that high velocity impacts produce extensive plastic deformation which leads to subsequent shattering of the agglomerate. At moderate impact velocities semi-brittle fracture is observed and there is a threshold velocity below which the agglomerate bounces off the wall with little or no visible damage. The micromechanical processes controlling these different types of behaviour are discussed and illustrated by computer graphics. Further work is reported to demonstrate the effect of impact velocity and bond strength on the damage produced. Empirical relationships between impact velocity, bond strength and damage are presented and their relevance to attrition and comminution is discussed. The particle size distribution curves resulting from the agglomerate impacts are also provided. Computer simulated diametrical compression tests on the same agglomerate have also been carried out. Simulations were performed for different platen velocities and different bond strengths. The results show that high platen velocities produce extensive plastic deformation and crushing. Low platen velocities produce semi-brittle failure in which cracks propagate from the platens inwards towards the centre of the agglomerate. The results are compared with the results of the agglomerate impact tests in terms of work input, applied velocity and damage produced.
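As a schematic illustration of the velocity threshold described above (and not the thesis' contact-mechanics algorithm), the sketch below time-steps the relative motion of two bonded particles using a damped linear spring with a constant adhesive pull-off force; all material parameters are invented, but the qualitative outcome, rebound without damage at low impact velocity and bond rupture at high velocity, mirrors the behaviour reported.

def bond_survives(v_impact, f_adh=1.0, k=2.0e4, c=1.0, m=1.0e-3,
                  adh_range=1.0e-4, dt=1.0e-6, t_max=0.02):
    """Return True if the two-particle bond survives an impact at v_impact.
    delta > 0 is overlap; delta < -adh_range counts as bond rupture."""
    delta, v = 0.0, v_impact                              # v > 0 closes the gap
    for _ in range(int(t_max / dt)):
        elastic = -k * delta if delta > 0.0 else 0.0      # repulsive contact spring
        damping = -c * v if delta > 0.0 else 0.0          # dissipation during contact
        adhesion = f_adh                                  # bond pulls the pair together
        v += (elastic + damping + adhesion) / m * dt
        delta += v * dt
        if delta < -adh_range:
            return False
    return True

for v in (0.2, 0.5, 1.0, 2.0):
    print(f"impact at {v} m/s ->", "bound" if bond_survives(v) else "broken")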
Abstract:
This thesis explores how the world-wide-web can be used to support English language teachers doing further studies at a distance. The future of education worldwide is moving towards a requirement that we, as teacher educators, use the latest web technology not as a gambit, but as a viable tool to improve learning. By examining the literature on knowledge, teacher education and web training, a model of teacher knowledge development is developed, along with statements of advice for web developers based upon the model. Next, the applicability and viability of both the model and statements of advice are examined by developing a teacher support site (http://www.philseflsupport.com) according to these principles. The data collected from one focus group of users from sixteen different countries, all studying on the same distance Masters programme, is then analysed in depth. The outcomes from the research are threefold: A functioning website that is averaging around 15,000 hits a month provides a professional contribution. An expanded model of teacher knowledge development that is based upon five theoretical principles that reflect the ever-expanding cyclical nature of teacher learning provides an academic contribution. The third is a series of six statements of advice for developers of teacher support sites. These statements are grounded in the theoretical principles behind the model of teacher knowledge development and incorporate nine keys to effective web facilitation. Taken together, they provide a forward-looking contribution to the praxis of web-supported teacher education, and thus to the potential dissemination of the research presented here. The research has succeeded in reducing the proliferation of terminology in teacher knowledge into a succinct model of teacher knowledge development. The model may now be used to further our understanding of how teachers learn and develop as other research builds upon the individual study here. NB: Appendix 4 is available only for consultation at Aston University Library with prior arrangement.
Abstract:
This thesis presents an analysis of the stability of complex distribution networks. We present a stability analysis against cascading failures. We propose a spin [binary] model, based on concepts of statistical mechanics. We test macroscopic properties of distribution networks with respect to various topological structures and distributions of microparameters. The equilibrium properties of the systems are obtained in a statistical mechanics framework by application of the replica method. We demonstrate the validity of our approach by comparing it with Monte Carlo simulations. We analyse the network properties in terms of phase diagrams and find both qualitative and quantitative dependence of the network properties on the network structure and macroparameters. The structure of the phase diagrams points to the existence of a phase transition and the presence of stable and metastable states in the system. We also present an analysis of robustness against overloading in the distribution networks. We propose a model that describes a distribution process in a network. The model incorporates the currents between any connected hubs in the network, local constraints in the form of Kirchhoff's law and a global optimization criterion. The flow of currents in the system is driven by the consumption. We study two principal types of model: infinite and finite link capacity. The key properties are the distributions of currents in the system. We again use a statistical mechanics framework to describe the currents in the system in terms of macroscopic parameters. In order to obtain observable properties we apply the replica method. We are able to assess the criticality of the level of demand with respect to the available resources and the architecture of the network. Furthermore, the parts of the system where critical currents may emerge can be identified. This, in turn, provides us with a characteristic description of the spread of the overloading in the systems.
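A minimal Monte Carlo sketch of cascading failure on a network is given below for orientation; the dynamics (a node fails once the fraction of failed neighbours exceeds its tolerance) and all parameters are assumptions chosen for brevity, not the replica-based spin model analysed in the thesis.

import random

def random_graph(n, p, rng):
    """Erdos-Renyi adjacency lists."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def cascade_size(adj, tolerance, seed_fraction, rng):
    n = len(adj)
    failed = [rng.random() < seed_fraction for _ in range(n)]
    changed = True
    while changed:                       # propagate failures to a fixed point
        changed = False
        for i in range(n):
            if failed[i] or not adj[i]:
                continue
            frac = sum(failed[j] for j in adj[i]) / len(adj[i])
            if frac > tolerance:
                failed[i] = True
                changed = True
    return sum(failed) / n

rng = random.Random(0)
adj = random_graph(n=200, p=0.03, rng=rng)
for tol in (0.2, 0.4, 0.6):
    sizes = [cascade_size(adj, tol, seed_fraction=0.05, rng=rng) for _ in range(50)]
    print(f"tolerance {tol}: mean cascade size {sum(sizes) / len(sizes):.2f}")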
Abstract:
Studies using transcranial magnetic stimulation have demonstrated that action observation can modulate the activity of the corticospinal system. This has been attributed to the activity of an 'action observation network', whereby premotor cortex activity influences corticospinal excitability. Neuroimaging studies have demonstrated that the context in which participants observe actions (i.e. whether they simply attend to an action, or observe it with the intention to imitate) modulates action observation network activity. The study presented here examined whether the context in which actions were observed revealed similar modulatory effects on corticospinal excitability. Eight human participants observed a baseline stimulus (a fixation cross), observed actions in order to attend to them, or observed the same actions with the intention to imitate them. Whereas motor evoked potentials elicited from the first dorsal interosseus muscle of the hand were facilitated by attending to actions, observing the same actions in an imitative capacity led to no facilitation effect. Furthermore, no motor facilitation effects occurred in a control muscle. Electromyographic data collected when participants physically imitated the observed actions revealed that the activity of the first dorsal interosseus muscle increased significantly during action execution compared with rest. These data suggest that an inhibitory mechanism acts on the corticospinal system to prevent the immediate overt imitation of observed actions. These data provide novel insight into the properties of the human action observation network, demonstrating for the first time that observing actions with the intention to imitate them can modulate the effects of action observation on corticospinal excitability.
Abstract:
Oliver’s 1997 four-stage loyalty model proposes that loyalty consists of belief, affect, intention, and action. Although this loyalty model has recently been subject to empirical examination, the issue of moderator variables has been largely neglected. This article fills that void by analyzing the moderating effects of selected personal and situational characteristics, using a sample of 888 customers of a large do-it-yourself retailer. The results of multi-group causal analysis suggest that these moderators exert an influence on the development of the different stages of the loyalty sequence. Specifically, age, income, education and expertise, price orientation, critical incident recovery, and loyalty card membership are found to be important moderators of the links in the four-stage loyalty model. Limitations of the study are outlined, and implications for both research and managerial practice are discussed.
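As a simplified stand-in for the multi-group causal analysis used in the article, the sketch below tests a single moderator with an interaction term on synthetic data; the variable names and effect sizes are hypothetical and only illustrate how a moderating effect on one link of the loyalty sequence can be detected.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 888                                  # sample size reported in the abstract
df = pd.DataFrame({
    "intention": rng.normal(size=n),
    "loyalty_card": rng.integers(0, 2, size=n),   # hypothetical moderator
})
# in this toy data the intention -> action link is stronger for card holders
df["action"] = (0.4 * df["intention"]
                + 0.3 * df["intention"] * df["loyalty_card"]
                + rng.normal(scale=0.8, size=n))

model = smf.ols("action ~ intention * loyalty_card", data=df).fit()
print(model.params[["intention", "intention:loyalty_card"]])
print("interaction p-value:", model.pvalues["intention:loyalty_card"])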