14 results for tree structured business data

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

In multicriteria decision problems many values must be assigned, such as the importance of the different criteria and the values of the alternatives with respect to subjective criteria. Since these assignments are approximate, it is very important to analyze the sensitivity of results when small modifications of the assignments are made. When solving a multicriteria decision problem, it is desirable to choose a decision function that leads to a solution as stable as possible. We propose here a method based on genetic programming that produces better decision functions than the commonly used ones. The theoretical expectations are validated by case studies. © 2003 Elsevier B.V. All rights reserved.
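The stability notion above can be illustrated with a minimal Python sketch. All scores, weights and the perturbation radius below are invented for illustration; the stability score is the kind of objective a genetic programming search over decision functions could optimise:

```python
import random

# Hypothetical scores of 3 alternatives on 3 criteria (rows: alternatives).
scores = [[0.8, 0.4, 0.6],
          [0.7, 0.9, 0.3],
          [0.5, 0.6, 0.8]]
weights = [0.5, 0.3, 0.2]  # approximate criterion importances

def best_alternative(w):
    # Weighted-sum decision function: pick the alternative maximising sum_j w_j * s_ij.
    totals = [sum(wj * sj for wj, sj in zip(w, row)) for row in scores]
    return totals.index(max(totals))

def stability(w, eps=0.05, trials=1000, seed=0):
    # Fraction of small random weight perturbations that leave the winner unchanged.
    rng = random.Random(seed)
    base = best_alternative(w)
    same = sum(
        best_alternative([max(0.0, wj + rng.uniform(-eps, eps)) for wj in w]) == base
        for _ in range(trials)
    )
    return same / trials

print(stability(weights))
```

A stability close to 1.0 indicates a decision function whose recommendation is robust to the approximate nature of the assignments.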

Relevance:

100.00%

Publisher:

Abstract:

We are concerned with the problem of image segmentation in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as this is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model where the bottom-level nodes are pixels, and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data, using (a) maximum likelihood and the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
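The upward pass of inference in a tree-structured belief network can be sketched as follows. This toy example (one parent node, four children, two classes, with made-up prior and CPT values, not the actual Bouman and Shapiro model) combines the children's local evidence into a posterior over the parent's class:

```python
import numpy as np

# Illustrative two-level quadtree fragment: one parent with 4 child pixels,
# 2 classes. The prior, CPT and evidence values are assumptions.
prior = np.array([0.5, 0.5])          # P(parent class)
cpt = np.array([[0.9, 0.1],           # P(child class | parent class)
                [0.2, 0.8]])
# Local evidence per child: P(observation | child class).
evidence = np.array([[0.7, 0.3],
                     [0.6, 0.4],
                     [0.8, 0.2],
                     [0.4, 0.6]])

# Upward (lambda) messages: each child sends sum_c P(obs|c) * P(c|parent).
messages = evidence @ cpt.T           # shape (4, n_parent_classes)
lam = messages.prod(axis=0)           # children are conditionally independent
posterior = prior * lam
posterior /= posterior.sum()
print(posterior)
```

Repeating this pass level by level is what makes inference linear in the number of nodes.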

Relevance:

100.00%

Publisher:

Abstract:

Government agencies use information technology extensively to collect business data for regulatory purposes. Data communication standards form part of the infrastructure with which businesses must conform to survive. We examine the development of, and emerging competition between, two open business reporting data standards adopted by government bodies in France: EDIFACT (the incumbent) and XBRL (the challenger). The research explores whether an incumbent may be displaced in a setting in which the contention is unresolved. We apply Latour’s (1992) translation map to trace the enrolments and detours in the battle. We find that regulators play an important role as allies in the development of the standards. The antecedent networks in which the standards are located embed strong beliefs that become barriers to collaboration and fuel the battle. One of the key differentiating attitudes is whether speed is more important than legitimacy. The failure of collaboration encourages competition. The newness of XBRL’s technology, arriving just as regulators needed to respond to an economic crisis, together with its adoption by French regulators not already using EDIFACT, created an opportunity for the challenger to make significant network gains over the longer term. Actor-network theory (ANT) also highlights the importance of the preservation of key components of EDIFACT in ebXML.

Relevance:

100.00%

Publisher:

Abstract:

Background: The NHS Health Check was designed by the UK Department of Health to address the increased prevalence of cardiovascular disease by identifying risk levels and facilitating behaviour change. It comprises biomedical testing, personalised advice and lifestyle support. The objective of the study was to explore Health Care Professionals' (HCPs') and patients' experiences of delivering and receiving the NHS Health Check in an inner-city region of England. Methods: Patients and HCPs in primary care were interviewed using semi-structured schedules. Data were analysed using thematic analysis. Results: Four themes were identified. The first, 'Health Check as a test of roadworthiness for people': the roadworthiness metaphor resonated with some patients but signified a passive stance toward illness. In the second theme, 'Health Check as revelatory', some patients described the check as useful; HCPs found visual aids demonstrating levels of salt/fat/sugar in everyday foods and a 'traffic light' tape measure helpful in communicating such 'revelations' to patients. 'Being SMART and following the protocol' revealed that few HCPs used SMART goals and few patients spoke of them; HCPs require training to understand their rationale compared with traditional advice-giving. 'The need for further follow-up' revealed disparity in follow-ups, and patients were not systematically monitored over time. Conclusions: HCPs' training needs to include the use and evidence of the effectiveness of SMART goals in changing health behaviours. The significance of fidelity to protocol needs to be communicated to HCPs and commissioners to ensure consistency. Monitoring and measurement of follow-up, e.g. tracking of referrals, need to be resourced to provide evidence of the success of the NHS Health Check in terms of healthier lifestyles and reduced CVD risk.

Relevance:

100.00%

Publisher:

Abstract:

Background: Food allergy (FA) is a unique chronic condition as sufferers are generally well unless they accidentally ingest an allergen, whereupon symptoms can be life threatening. A good understanding of the condition is essential for successful self-management; however, little is known about children and young teenagers' understanding. This study aimed to explore understanding of FA in children with and without FA, and whether understanding changes as children get older. Method: Participants aged 6–14 years (53 with FA; 89 without), recruited from local schools and allergy clinics, took part in semi-structured interviews; data were analysed using thematic analysis. Results: Three themes were identified from the data across the different age groups and allergy statuses: food allergy as a sickness, food allergy as an illness and food allergy as intolerance to food. Children aged 6–8 years described FA as a sickness; you were not allowed the food because it makes you poorly. Children aged 9–11 years also talked about FA as something that makes you poorly, but many also described it as an illness and understood that symptoms were caused by food. Children aged 12–14 years described it as an intolerance, or said that FA was your body's response to a particular food. These age-related differences were seen in children with and without FA. Conclusion: Although sophistication of knowledge of FA increases with age, it is still a little-understood condition among children and young teenagers. Clear, age-related information about food allergy and how it should be managed is needed for those with and without allergy, to avoid misunderstanding and aid awareness and better self-management of the condition.

Relevance:

40.00%

Publisher:

Abstract:

Despite its importance in the development of competitive advantage, attempts to unify diverse classifications of business-to-business relational exchange (B2B RE) have been largely unsuccessful. We used 18 semi-structured, in-depth interviews with managers from a range of industries to explore the B2B RE construct. Analysis of the data revealed that B2B RE comprises five key dimensions. These are communication, understanding, commitment, trust and power symmetry. The research identifies the importance of personal interaction in business relationships and provides additional insights into the importance of affective commitment. In addition we uncover a number of negative consequences of affective commitment, which have been previously unexplored. This research contributes to the domain of B2B research by synthesising and advancing knowledge in this area to provide a new conceptual framework of B2B RE and directions for future research.

Relevance:

40.00%

Publisher:

Abstract:

Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are 2nd-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
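The co-occurrence idea can be sketched in a few lines, under assumed data and an assumed distance threshold: count unordered mark pairs for nearby point pairs (co-occurrences at order 2), then take the Shannon entropy of the resulting multinomial distribution:

```python
import math
from collections import Counter
from itertools import combinations

# Illustrative marked point pattern: (x, y, mark). Coordinates and marks
# are invented; they stand in for events of two types in a study region.
points = [(0.10, 0.20, 'A'), (0.15, 0.25, 'A'), (0.80, 0.90, 'B'),
          (0.82, 0.88, 'B'), (0.50, 0.50, 'A'), (0.52, 0.48, 'B')]

def cooccurrence_entropy(pts, radius=0.1):
    # Count unordered mark pairs over point pairs closer than `radius`,
    # then compute the Shannon entropy of the pair-type distribution.
    counts = Counter()
    for (x1, y1, m1), (x2, y2, m2) in combinations(pts, 2):
        if math.hypot(x1 - x2, y1 - y2) <= radius:
            counts[tuple(sorted((m1, m2)))] += 1
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

print(cooccurrence_entropy(points))
```

High entropy means the mark types mix evenly at that scale; low entropy signals that one pair type (e.g. same-mark clusters) dominates. Higher orders would count triples, quadruples and so on instead of pairs.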

Relevance:

40.00%

Publisher:

Abstract:

Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and subsequent manipulation of data often results in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate for the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
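The partition-then-classify idea can be sketched as follows. The records, the "expert rule" and the per-partition threshold classifier below are all illustrative stand-ins, not the paper's actual clinical criteria or models:

```python
# Illustrative records: (age, symptom_score, severity label). Data are made up;
# the expert rule splitting on age mirrors the idea of partitioning by a
# secondary knowledge source into coherent sets.
records = [(3, 7, 'severe'), (4, 6, 'severe'), (5, 2, 'mild'),
           (10, 3, 'mild'), (11, 8, 'severe'), (12, 2, 'mild')]

def expert_partition(record):
    # Secondary knowledge source (hypothetical): preschool and school-age
    # patients present differently, so model them separately.
    return 'preschool' if record[0] < 6 else 'school_age'

def train(data):
    # One deliberately simple classifier per partition: predict 'severe'
    # when the symptom score exceeds that partition's mean score.
    groups = {}
    for rec in data:
        groups.setdefault(expert_partition(rec), []).append(rec)
    return {name: sum(r[1] for r in rows) / len(rows)
            for name, rows in groups.items()}

def predict(models, record):
    return 'severe' if record[1] > models[expert_partition(record)] else 'mild'

models = train(records)
print(predict(models, (4, 8)))  # a preschool patient with a high score
```

Each partition's classifier can then be tuned, or replaced by a different model class, to suit that set's characteristics.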

Relevance:

40.00%

Publisher:

Abstract:

The proliferation of data throughout the strategic, tactical and operational areas of many organisations has created a need for decision makers to be presented with structured information appropriate for their allocated tasks. However, despite this abundance of data, managers at all levels of the organisation commonly encounter a condition of ‘information overload’ that results in a paucity of the correct information. Specifically, this thesis focuses upon the tactical domain within the organisation and the information needs of the management who reside at this level. In doing so, it argues that the link between decision making at the tactical level and low-level transaction processing data should be a common object model within a framework based upon knowledge leveraged from co-ordination theory. To achieve this, the Co-ordinated Business Object Model (CBOM) was created. The CBOM is a two-tier framework: the first tier models data using four interacting object models, namely processes, activities, resources and actors; the second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Through a graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. To validate the CBOM, three verification tests were completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information used to support tactical-level decision making.
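The first tier's four interacting object models might be sketched as follows; the class and attribute names are illustrative assumptions, since the abstract does not give the actual CBOM schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of the four interacting object models (processes,
# activities, resources, actors). Attribute names are hypothetical.
@dataclass
class Actor:
    name: str

@dataclass
class Resource:
    name: str
    capacity: int

@dataclass
class Activity:
    name: str
    performed_by: Actor
    uses: list  # of Resource

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

    def actors(self):
        # A second-tier style query: which actors this process depends on.
        return sorted({a.performed_by.name for a in self.activities})

clerk = Actor('invoice clerk')
printer = Resource('printer', capacity=1)
p = Process('invoicing', [Activity('print invoice', clerk, [printer])])
print(p.actors())
```

Queries like `actors()` illustrate how the second tier could derive decision-support information from data captured in the first-tier objects.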

Relevance:

40.00%

Publisher:

Abstract:

The aim of the research is to develop an e-business selection framework for small and medium enterprises (SMEs) by integrating established techniques in planning. The research is case based, comprising four case studies carried out in the printing industry for the purpose of evaluating the framework. Two of the companies are from Singapore, while the other two are from Guangzhou, China and Jinan, China respectively. To determine the need for an e-business selection framework for SMEs, extensive literature reviews were carried out in the areas of e-business, business planning frameworks, SMEs and the printing industry. An e-business selection framework is then proposed by integrating the three established techniques of the Balanced Scorecard (BSC), Value Chain Analysis (VCA) and Quality Function Deployment (QFD). The newly developed selection framework is pilot tested using a published case study before actual evaluation is carried out in the four case study companies. The case study methodology was chosen because of its ability to integrate the diverse data collection techniques required to generate the BSC, VCA and QFD for the selection framework. The findings of the case studies revealed that the three techniques of BSC, VCA and QFD can be integrated seamlessly to complement each other’s strengths in e-business planning. The eight-step methodology of the selection framework can provide SMEs with a step-by-step approach to e-business through structured planning. The project has also provided better understanding and deeper insights into SMEs in the printing industry.

Relevance:

40.00%

Publisher:

Abstract:

The original contribution of this work is threefold. Firstly, this thesis develops a critical perspective on current evaluation practice of business support, with a focus on the timing of evaluation. The general time frame applied for business support policy evaluation is limited to one to two, seldom three, years post intervention. This is despite calls for long-term impact studies by various authors, concerned about time lags before effects are fully realised. This desire for long-term evaluation opposes the requirements of policy-makers and funders, who seek quick results. Also, current ‘best practice’ frameworks do not refer to timing or its implications, and data availability affects the ability to undertake long-term evaluation. Secondly, this thesis provides methodological value for follow-up and similar studies by using data linking of scheme-beneficiary data with official performance datasets. Thus data availability problems are avoided through the use of secondary data. Thirdly, this thesis builds the evidence through the application of a longitudinal impact study of small business support in England, covering seven years of post-intervention data. This illustrates the variability of results for different evaluation periods, and the value of using multiple years of data for a robust understanding of support impact. For survival, the impact of assistance is found to be immediate, but limited. Concerning growth, significant impact centres on a two to three year period post intervention for the linear selection and quantile regression models – positive for employment and turnover, negative for productivity. Attribution of impact may present a problem for subsequent periods. The results clearly support the argument for the use of longitudinal data and analysis, and a greater appreciation by evaluators of the time factor. This analysis recommends a time frame of four to five years post intervention for soft business support evaluation.
