27 results for Concept-based Terminology
Abstract:
The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy and industry relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models ‘prosumer’ agents (i.e., producers and/or consumers of energy) and ‘aggregator’ agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand-flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile. Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with transport and heat sectors, novel technology adoption and diffusion work, evolution of new smart grid business models, and complex power grid engineering and market interactions.
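To make the aggregator idea above concrete, the following is a minimal sketch of a demand-flattening loop between hypothetical Prosumer and Aggregator agents; all class names, parameters and the proportional controller are illustrative assumptions, not the actual CASCADE implementation.

```python
import random

class Prosumer:
    """A household that shifts part of its demand in response to an aggregator signal."""
    def __init__(self, base_demand, flexibility):
        self.base_demand = base_demand  # kW baseline consumption
        self.flexibility = flexibility  # fraction of demand that can be shifted

    def demand(self, signal):
        # A positive signal asks the household to reduce its demand.
        return self.base_demand * (1 - self.flexibility * signal)

class Aggregator:
    """Mediating agent that tries to flatten aggregate demand across its prosumers."""
    def __init__(self, prosumers):
        self.prosumers = prosumers
        self.signal = 0.0

    def step(self, target):
        total = sum(p.demand(self.signal) for p in self.prosumers)
        # Simple proportional controller: nudge the signal towards the target level.
        self.signal += 0.1 * (total - target) / max(target, 1e-9)
        self.signal = max(-1.0, min(1.0, self.signal))
        return total

prosumers = [Prosumer(random.uniform(0.5, 2.0), random.uniform(0.1, 0.4)) for _ in range(100)]
agg = Aggregator(prosumers)
for t in range(20):
    print(f"t={t:2d} aggregate demand = {agg.step(target=100.0):.1f} kW")
```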
Abstract:
The present study aims to contribute to an understanding of the complexity of lobbying activities within the accounting standard-setting process in the UK. The paper reports detailed content analysis of submission letters to four related exposure drafts. These preceded two accounting standards that set out the concept of control used to determine the scope of consolidation in the UK, except for reporting under international standards. Regulation on the concept of control provides rich patterns of lobbying behaviour due to its controversial nature and its significance to financial reporting. Our examination is conducted by dividing lobbyists into two categories, corporate and non-corporate, which are hypothesised (and demonstrated) to lobby differently. In order to test the significance of these differences we apply ANOVA techniques and univariate regression analysis. Corporate respondents are found to devote more attention to issues of specific applicability of the concept of control, whereas non-corporate respondents tend to devote more attention to issues of general applicability of this concept. A strong association between the issues raised by corporate respondents and their line of business is revealed. Both categories of lobbyists are found to advance conceptually-based arguments more often than economic consequences-based or combined arguments. However, when economic consequences-based arguments are used, they come exclusively from the corporate category of respondents.
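As a hedged illustration of the ANOVA technique mentioned above, the sketch below runs a one-way ANOVA on hypothetical attention scores (e.g. counts of comments devoted to specific-applicability issues) for corporate and non-corporate respondents; the data are invented for demonstration only.

```python
from scipy import stats

# Hypothetical attention scores for the two lobbyist categories.
corporate = [12, 15, 14, 18, 11, 16, 13]
non_corporate = [7, 9, 6, 10, 8, 7]

f_stat, p_value = stats.f_oneway(corporate, non_corporate)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# With exactly two groups, one-way ANOVA is equivalent to a two-sample t-test (F = t^2).
```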
Abstract:
This contribution proposes a novel probability density function (PDF) estimation based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. According to the estimated PDF, synthetic instances are then generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties as the original positive-class data. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier’s structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
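The following is a minimal sketch of the core PDFOS idea, sampling synthetic positive-class instances from a Gaussian Parzen-window density estimate; the bandwidth rule and all parameters are common defaults assumed here, not the paper's exact smoothing-parameter selection.

```python
import numpy as np

def pdfos_oversample(X_pos, n_new, h=None, rng=None):
    """Generate synthetic minority-class samples from a Parzen-window (Gaussian
    kernel) estimate of the positive-class PDF.

    Sampling from a Gaussian-kernel density estimate is equivalent to picking a
    training point at random and adding kernel-bandwidth Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X_pos.shape
    if h is None:
        # Silverman-style rule of thumb; assumed default, not the paper's method.
        h = (4 / (d + 2)) ** (1 / (d + 4)) * n ** (-1 / (d + 4))
    cov = h ** 2 * np.cov(X_pos, rowvar=False)
    idx = rng.integers(0, n, size=n_new)
    return X_pos[idx] + rng.multivariate_normal(np.zeros(d), cov, size=n_new)

# Example: re-balance a 200-vs-20 two-class set by generating 180 synthetic positives.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
X_syn = pdfos_oversample(X_pos, n_new=180, rng=rng)
print(X_syn.shape)  # (180, 2)
```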
Abstract:
Purpose – The aim of this paper is to investigate how values from within the Abrahamic religions could be adopted to improve liberal market economies’ (LMEs’) corporate governance business practices. Design/methodology/approach – The concept of spiritual capitalism is explained from an Islamic perspective by adopting three universal Abrahamic values to critically analyse LMEs and offer an ethical alternative to current capitalism concerns. Findings – It is found that LMEs can be improved by considering all stakeholders, putting ethics before economics, and introducing shared risk/reward plus lower debt. Originality/value – The paper compares LMEs, co-ordinated market economies (CMEs) and Islamic countries’ economies (ICEs) within an ethical framework for LMEs.
Abstract:
Home-based online business ventures are an increasingly pervasive yet under-researched phenomenon. The experiences and mindset of entrepreneurs setting up and running such enterprises require better understanding. Using data from a qualitative study of 23 online home-based business entrepreneurs, we propose the augmented concept of ‘mental mobility’ to encapsulate how they approach their business activities. Drawing on Howard P. Becker's early theorising of mobility, together with Victor Turner's later notion of liminality, we conceptualise mental mobility as the process through which individuals navigate the liminal spaces between the physical and digital spheres of work and the overlapping home/workplace, enabling them to manipulate and partially reconcile the spatial, temporal and emotional tensions that are present in such work environments. Our research also holds important applications for alternative employment contexts and broader social orderings because of the increasingly pervasive and disruptive influence of technology on experiences of remunerated work.
Abstract:
ESA’s first multi-satellite mission Cluster is unique in its concept of four satellites orbiting in controlled formations. This will give an unprecedented opportunity to study the structure and dynamics of the magnetosphere. In this paper we discuss ways in which ground-based remote-sensing observations of the ionosphere can be used to support the multipoint in-situ satellite measurements. There are a very large number of potentially useful configurations between the satellites and any one ground-based observatory; however, the number of ideal occurrences of any one configuration is low. Many of the ground-based instruments cannot operate continuously, and Cluster will take data only for a part of each orbit, depending on how much high-resolution (‘burst-mode’) data are acquired. In addition, there are a great many instrument modes and the formation, size and shape of the cluster of the four satellites to consider. These circumstances create a clear and pressing need for careful planning to ensure that the scientific return from Cluster is maximised by additional coordinated ground-based observations. For this reason, ESA established a working group to coordinate the observations on the ground with Cluster. We give a number of examples of how the combined spacecraft and ground-based observations can address outstanding questions in magnetospheric physics. An online computer tool has been prepared to allow for the planning of conjunctions and advantageous constellations between the Cluster spacecraft and individual or combined ground-based systems. During the mission, a ground-based database containing index and summary data will help to identify interesting datasets and allow the selection of intervals for coordinated studies. We illustrate the philosophy of our approach using a few important examples of the many possible configurations between the satellites and the ground-based instruments.
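As a hedged illustration of the kind of conjunction planning described above (not ESA's actual online tool), the sketch below flags times at which a hypothetical satellite footprint track passes within a chosen distance of a ground-based observatory; the track, station coordinates and threshold are all illustrative assumptions.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

def conjunctions(track, station, max_km=500.0):
    """Return the times at which the satellite footprint lies within max_km
    of the ground-based observatory."""
    return [t for t, (lat, lon) in track
            if great_circle_km(lat, lon, *station) <= max_km]

# Hypothetical footprint track as (time_s, (lat, lon)) samples, checked against
# a station near Tromso (approximate coordinates).
track = [(0, (60.0, 10.0)), (60, (66.0, 17.0)), (120, (72.0, 25.0))]
print(conjunctions(track, station=(69.6, 19.2)))  # -> [60, 120]
```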
Abstract:
Advances in hardware and software technologies make it possible to capture streaming data. The area of Data Stream Mining (DSM) is concerned with the analysis of these vast amounts of data as they are generated in real-time. Data stream classification is one of the most important DSM techniques, allowing previously unseen data instances to be classified. Unlike traditional classifiers for static data, data stream classifiers need to adapt to concept changes (concept drift) in the stream in real-time in order to reflect the most recent concept in the data as accurately as possible. A recent addition to the data stream classifier toolbox is eRules, which induces and updates a set of expressive rules that can easily be interpreted by humans. However, like most rule-based data stream classifiers, eRules exhibits poor computational performance when confronted with continuous attributes. In this work, we propose an approach to deal with continuous data effectively and accurately in rule-based classifiers by using the Gaussian distribution as a heuristic for building rule terms on continuous attributes. Using eRules as an example, we show that incorporating our method for continuous attributes indeed speeds up the real-time rule induction process while maintaining a similar level of accuracy to the original eRules classifier. We term this new version of eRules, combined with our approach, G-eRules.
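The following is a minimal sketch, under stated assumptions, of the Gaussian heuristic for building a rule term on a continuous attribute; the function, the one-standard-deviation interval and the data are illustrative and do not reproduce the eRules induction procedure itself.

```python
import math

def gaussian_rule_term(values, k=1.0):
    """Build a rule term 'lo <= attribute <= hi' from the values a continuous
    attribute takes for one target class, assuming they are roughly Gaussian.

    Taking the interval mean +/- k standard deviations covers the bulk of the
    class without scanning every candidate split point, which is the speed-up idea."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / max(n - 1, 1)
    sd = math.sqrt(var)
    return (mean - k * sd, mean + k * sd)

# Hypothetical attribute values observed for the target class in the current window.
petal_lengths = [1.4, 1.3, 1.5, 1.4, 1.6, 1.2, 1.5]
lo, hi = gaussian_rule_term(petal_lengths)
print(f"IF {lo:.2f} <= petal_length <= {hi:.2f} THEN class = setosa")
```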
Abstract:
An important application of Big Data Analytics is the real-time analysis of streaming data. Streaming data imposes unique challenges on data mining algorithms, such as concept drift, the need to analyse the data on the fly because data streams are unbounded, and the need for scalable algorithms to cope with potentially high data throughput. Real-time classification algorithms that are fast and adaptive to concept drift do exist; however, most approaches are not naturally parallel and are thus limited in their scalability. This paper presents work on the Micro-Cluster Nearest Neighbour (MC-NN) classifier. MC-NN is based on an adaptive statistical summary of the data held in Micro-Clusters. MC-NN is very fast and adaptive to concept drift whilst maintaining the parallel properties of the base KNN classifier. MC-NN is also competitive with existing data stream classifiers in terms of accuracy and speed.
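The sketch below illustrates a simplified micro-cluster nearest-neighbour classifier in the spirit of MC-NN, keeping a per-cluster linear sum, count and error counter; the absorb/penalise policy shown is a deliberate simplification of the published algorithm, not a faithful reimplementation.

```python
import numpy as np

class MicroCluster:
    """Statistical summary of a region of the stream: linear sum, count, label."""
    def __init__(self, x, label):
        self.ls = np.array(x, dtype=float)  # linear sum of absorbed points
        self.n = 1
        self.label = label
        self.errors = 0

    @property
    def centroid(self):
        return self.ls / self.n

    def absorb(self, x):
        self.ls += x
        self.n += 1

class MCNN:
    """Simplified micro-cluster nearest-neighbour stream classifier."""
    def __init__(self, max_errors=3):
        self.clusters = []
        self.max_errors = max_errors

    def predict(self, x):
        if not self.clusters:
            return None
        nearest = min(self.clusters, key=lambda c: np.linalg.norm(c.centroid - x))
        return nearest.label

    def learn(self, x, y):
        x = np.asarray(x, dtype=float)
        if not any(c.label == y for c in self.clusters):
            self.clusters.append(MicroCluster(x, y))
            return
        nearest = min(self.clusters, key=lambda c: np.linalg.norm(c.centroid - x))
        if nearest.label == y:
            nearest.absorb(x)
        else:
            # Misclassification: penalise the offending cluster and, once it has
            # erred too often, drop it so the model can track concept drift.
            nearest.errors += 1
            if nearest.errors > self.max_errors:
                self.clusters.remove(nearest)
            self.clusters.append(MicroCluster(x, y))

clf = MCNN()
stream = [([0.1, 0.2], "a"), ([0.2, 0.1], "a"), ([0.9, 0.8], "b"), ([1.0, 0.9], "b")]
for x, y in stream:
    print(clf.predict(np.asarray(x)), "->", y)  # prediction, then true label
    clf.learn(x, y)
```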
Abstract:
Objectives: In this study a prototype of a new health forecasting alert system is developed, aligned to the approach used in the Met Office’s (MO) National Severe Weather Warning Service (NSWWS). The aim is to improve the information available to responders in the health and social care system by linking temperatures more directly to risks of mortality, and by developing a system more coherent with other weather alerts. The prototype is compared to the current system in the Cold Weather and Heatwave plans via a case-study approach to verify its potential advantages and shortcomings. Method: The prototype health forecasting alert system introduces an “impact vs likelihood matrix” for the health impacts of hot and cold temperatures, similar to those used operationally for other weather hazards as part of the NSWWS. The impact axis of this matrix is based on existing epidemiological evidence, which shows an increasing relative risk of death at extremes of outdoor temperature beyond a threshold that can be identified epidemiologically. The likelihood axis is based on a probability measure associated with the temperature forecast. The new method is tested on two case studies (one during summer 2013, one during winter 2013) and compared to the performance of the current alert system. Conclusions: The prototype shows some clear improvements over the current alert system. It allows a much greater degree of flexibility, provides more detailed regional information about the health risks associated with periods of extreme temperatures, and is more coherent with other weather alerts, which may make it easier for front-line responders to use. It will require validation and engagement with stakeholders before it can be considered for use.
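As a hedged sketch of the impact-vs-likelihood idea, the code below maps a relative risk of death and a forecast probability onto alert colours in the NSWWS style; all thresholds, band names and matrix entries are hypothetical, not the operational values.

```python
# Hypothetical bands: impact from the epidemiological relative risk of death at
# the forecast temperature, likelihood from the temperature forecast probability.
IMPACT_LEVELS = [(1.00, "very low"), (1.05, "low"), (1.15, "medium"), (1.30, "high")]
LIKELIHOOD_LEVELS = [(0.0, "unlikely"), (0.3, "possible"), (0.6, "likely"), (0.9, "very likely")]

ALERT_MATRIX = {
    # (impact band, likelihood band) -> alert colour; entries are illustrative.
    ("high", "very likely"): "red",
    ("high", "likely"): "amber",
    ("medium", "very likely"): "amber",
}

def band(value, levels):
    """Return the highest band whose threshold the value meets."""
    name = levels[0][1]
    for threshold, label in levels:
        if value >= threshold:
            name = label
    return name

def alert(relative_risk, probability):
    impact = band(relative_risk, IMPACT_LEVELS)
    likelihood = band(probability, LIKELIHOOD_LEVELS)
    return ALERT_MATRIX.get((impact, likelihood),
                            "yellow" if impact != "very low" else "none")

print(alert(relative_risk=1.35, probability=0.95))  # red
print(alert(relative_risk=1.10, probability=0.40))  # yellow
```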
Abstract:
The frontal pole corresponds to Brodmann area (BA) 10, the largest single architectonic area in the human frontal lobe. Generally, BA10 is thought to contain two or three subregions that subserve broad functions such as multitasking, social cognition, attention, and episodic memory. However, there is a substantial debate about the functional and structural heterogeneity of this large frontal region. Previous connectivity-based parcellation studies have identified two or three subregions in the human frontal pole. Here, we used diffusion tensor imaging to assess structural connectivity of BA10 in 35 healthy subjects and delineated subregions based on this connectivity. This allowed us to determine the correspondence of structurally based subregions with the scheme previously defined functionally. Three subregions could be defined in each subject. However, these three subregions were not spatially consistent between subjects. Therefore, we accepted a solution with two subregions that encompassed the lateral and medial frontal pole. We then examined resting-state functional connectivity of the two subregions and found significant differences between their connectivities. The medial cluster was connected to nodes of the default-mode network, which is implicated in internally focused, self-related thought, and social cognition. The lateral cluster was connected to nodes of the executive control network, associated with directed attention and working memory. These findings support the concept that there are two major anatomical subregions of the frontal pole related to differences in functional connectivity.
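A minimal sketch of connectivity-based parcellation, assuming synthetic data: each seed voxel's vector of connection strengths to target regions is clustered with k-means into two groups, mirroring the two-cluster medial/lateral solution the study accepted; the profiles below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per BA10 seed voxel, one column per target region,
# entries being tractography-derived connection strengths.
rng = np.random.default_rng(42)
medial = rng.normal(loc=[0.8, 0.1, 0.6], scale=0.1, size=(50, 3))   # default-mode-like profile
lateral = rng.normal(loc=[0.2, 0.9, 0.3], scale=0.1, size=(50, 3))  # executive-control-like profile
profiles = np.vstack([medial, lateral])

# Two-cluster solution over the connectivity profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))  # roughly 50 voxels assigned to each subregion
```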
Abstract:
Dietary management of the human gut microbiota towards a more beneficial composition is one approach that may improve host health. To date, a large number of human intervention studies have demonstrated that dietary consumption of certain food products can result in significant changes in the composition of the gut microbiota, i.e. the prebiotic concept. The prebiotic effect is thus established as a dietary approach to increase beneficial gut bacteria, and it has been associated with modulation of health biomarkers and modulation of the immune system. Promitor™ Soluble Corn Fibre (SCF) is a well-known maize-derived source of dietary fibre with potential selective fermentation properties. Our aim was to determine the optimum prebiotic dose of SCF in healthy adult subjects with respect to tolerance, desired changes to the microbiota and fermentation. A double-blind, randomised, parallel study was completed in which volunteers (n = 8 per treatment group) consumed 8, 14 or 21 g of SCF (delivering 6, 12 and 18 g of fibre, respectively) over 14 days. Over the range of doses studied, SCF was well tolerated. Numbers of bifidobacteria were significantly higher for the 6 g fibre/day dose compared to the 12 g and 18 g fibre delivered/day doses (mean 9.25 and 9.73 Log10 cells/g fresh faeces in the pre-treatment and treatment periods respectively). Such a numerical change of 0.5 Log10 bifidobacteria/g fresh faeces is consistent with the changes observed for inulin-type fructans, which are recognised prebiotics. A possible prebiotic effect of SCF was therefore demonstrated by its stimulation of bifidobacteria numbers in the overall gut microbiota during a short-term intervention.
Abstract:
Trust and reputation are important factors that influence the success of both traditional transactions in physical social networks and modern e-commerce in virtual Internet environments. It is difficult to define and quantify the concept of trust because trust has both subjective and objective characteristics at the same time. A well-reported issue with reputation management systems in business-to-consumer (BtoC) e-commerce is the “all good reputation” problem. To address this problem, a new computational model of reputation is proposed in this paper. The ratings of each customer are treated as basic trust score events. The time series of massive numbers of ratings is then aggregated into each seller’s local temporal trust score using the Beta distribution. A logical model of trust and reputation is established based on an analysis of the dynamical relationship between trust and reputation. For single goods with repeat transactions, an iterative mathematical model of trust and reputation is established with a closed-loop feedback mechanism. Numerical experiments on repeated transactions recorded over a period of 24 months are performed. The experimental results show that the proposed method can guide both theoretical research into trust and reputation and the practical design of reputation systems in BtoC e-commerce.
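The following is a minimal sketch of Beta-distribution trust scoring with temporal aggregation, in the spirit of the model described; the exponential decay weighting and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
def beta_trust(positives, negatives):
    """Expected trust score under a Beta(positives + 1, negatives + 1) posterior,
    the standard Beta-reputation estimate from binary rating events."""
    return (positives + 1) / (positives + negatives + 2)

def temporal_trust(monthly_ratings, decay=0.9):
    """Aggregate a time series of (positives, negatives) counts per month into a
    local temporal trust score, discounting older months so reputation can evolve."""
    score, weight = 0.0, 0.0
    for age, (pos, neg) in enumerate(reversed(monthly_ratings)):
        w = decay ** age          # most recent month has age 0 and full weight
        score += w * beta_trust(pos, neg)
        weight += w
    return score / weight

# Hypothetical 6 months of ratings for one seller: (positive, negative) counts.
history = [(40, 2), (35, 5), (50, 1), (20, 10), (45, 3), (48, 2)]
print(f"temporal trust = {temporal_trust(history):.3f}")
```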