48 results for ebXML (Electronic Business using eXtensible Markup Language)
in Aston University Research Archive
Abstract:
Despite the proliferation of e-business adoption by organisations and the world-wide growth of the e-business phenomenon, there is a paucity of empirical studies that examine the adoption of e-business in the Middle East. The aim of our study is to provide insights into the salient e-business adoption issues by focusing on Saudi Arabian businesses. We developed a conceptual model for electronic business (e-business) adoption incorporating ten factors based on the technology-organization-environment framework. Survey data from 550 businesses were used to test the model and hypotheses. We conducted confirmatory factor analysis to assess the reliability and validity of the constructs. The findings of the study suggest that firm technology competence, size, top management support, technology orientation, consumer readiness, trading partner readiness and regulatory support are important antecedents of e-business adoption and utilisation. In addition, the study finds that competitive pressure and organisational customer and competitor orientation are not predictors of e-business adoption and utilisation. The implications of the findings are discussed and suggestions for future inquiry are presented.
Abstract:
Our understanding of the nature of competitive advantage has not been helped by a tendency for theorists to adopt a unitary position, suggesting, for example, that advantage is industry based or resource based. In examining the nature of competitive advantage in an electronic business (e-business) environment this paper adopts a contingency perspective. Several intriguing questions emerge. Do 'new economy' companies have different resource profiles to 'old economy' companies? Are the patterns of resource development and accumulation different? Are attained advantages less sustainable for e-businesses? These are the kinds of themes examined in this paper. The literature on competitive advantage is reviewed, as are the challenges posed by the recent changes in the business environment. Two broad sets of firms are identified as emerging out of the e-business shake-up, and the resource profiles of these firms are discussed. Several research propositions are advanced and the implications for research and practice are discussed.
Abstract:
Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
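A minimal sketch of the kind of encoding such a schema family enables, written with Python's standard xml.etree.ElementTree: a kriging summary and a set of conditional-simulation realisations described under a common structure. The namespace and element names are invented for illustration and are not the project's published schemata.

```python
# Illustrative only: two different uncertainty types under a common XML structure.
# The namespace and element names are placeholders, not the actual schemata.
import xml.etree.ElementTree as ET

NS = "http://example.org/uncertainty"          # placeholder namespace
ET.register_namespace("un", NS)

def kriging_result(mean: float, variance: float) -> ET.Element:
    """Describe a prediction by its summary statistics."""
    elem = ET.Element(f"{{{NS}}}Statistics")
    ET.SubElement(elem, f"{{{NS}}}mean").text = str(mean)
    ET.SubElement(elem, f"{{{NS}}}variance").text = str(variance)
    return elem

def simulation_result(realisations: list) -> ET.Element:
    """Describe a prediction by realisations from a conditional simulation."""
    elem = ET.Element(f"{{{NS}}}Realisations", count=str(len(realisations)))
    ET.SubElement(elem, f"{{{NS}}}values").text = " ".join(map(str, realisations))
    return elem

for result in (kriging_result(12.7, 3.4), simulation_result([11.9, 13.2, 12.5])):
    print(ET.tostring(result, encoding="unicode"))
```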
Abstract:
Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces, and do not lend themselves to the integration of models from diverse disciplines. Web based systems, or systems based on web services, such as Taverna and Kepler, allow composition of models based on standard web service technologies. At the same time the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processing, including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as a default data exchange standard, and widely-used encodings such as JavaScript Object Notation can often only be used when incorporated with Extensible Markup Language. Similarly, the Web Processing Service standard has not been successfully engaged with the well-supported technologies of Simple Object Access Protocol and Web Services Description Language. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus realising the vision of a useful 'model web'.
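As an illustration of the pure SOAP approach, the sketch below posts a hand-built SOAP 1.1 envelope to a processing operation over plain HTTP using Python's standard library. The endpoint URL, operation name and payload namespace are placeholders, not a real service or the interface proposed in the paper.

```python
# Sketch of a raw SOAP 1.1 call to a hypothetical interpolation operation.
# Endpoint, operation name and payload namespace are fictional placeholders.
import urllib.request

ENDPOINT = "http://example.org/processing"     # hypothetical service endpoint
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Interpolate xmlns="http://example.org/model-web">
      <observations>obs.xml</observations>
    </Interpolate>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "Interpolate"},
)
with urllib.request.urlopen(request) as response:   # will fail: the endpoint is fictional
    print(response.read().decode("utf-8"))
```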
Abstract:
Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules are often changed and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses a feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
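A minimal sketch of a parameterised offset rule of this kind, using the give-way line example above: the rule is held as data (minimum and maximum offsets) and checked against a point-to-segment distance. The rule structure and geometry helpers are simplified stand-ins, not the GML 3.1 feature-property model described in the paper.

```python
# Minimal sketch of a parameterised offset rule (give-way line 1100-3000 mm from the
# crossing edge). The rule structure and geometry helpers are simplified inventions.
from dataclasses import dataclass
import math

@dataclass
class OffsetRule:
    min_offset_mm: float
    max_offset_mm: float

    def check(self, distance_mm: float) -> bool:
        return self.min_offset_mm <= distance_mm <= self.max_offset_mm

def point_to_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all 2-D tuples, millimetres)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# A give-way line endpoint checked against the crossing-pattern edge segment.
rule = OffsetRule(min_offset_mm=1100, max_offset_mm=3000)
distance = point_to_segment_distance((0, 2000), (-5000, 0), (5000, 0))
print(rule.check(distance))   # True: 2000 mm lies within the permitted band
```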
Abstract:
The study examines the concept of cultural determinism in relation to the business interview, analysing differences in language use between English, French and West German native speakers. The approach is multi- and inter-disciplinary, combining linguistic and business research methodologies. An analytical model based on pragmatic and speech act theory is developed to analyse language use in telephone market research interviews. The model aims to evaluate behavioural differences between English, French and West German respondents in the interview situation. The empirical research is based on a telephone survey of industrial managers, conducted in the three countries in the national language of each country. The telephone interviews are transcribed and compared across languages to discover how managers from each country use different language functions to reply to questions and requests. These differences are assessed in terms of specific cultural parameters: politeness, self-assuredness and fullness of response. Empirical and descriptive studies of national character are compared with the survey results, providing the basis for an evaluation of the relationship between management culture and national culture on a contrastive and comparative cross-cultural basis. The project conclusions focus on the implications of the findings both for business interviewing and for language teaching.
Abstract:
Procedural knowledge is the knowledge required to perform certain tasks, and it forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine-interpretable formats from instructions has become an increasingly popular research topic due to its potential applications in process automation, yet it remains insufficiently addressed. This paper presents an approach and an implemented system to assist users in automatically acquiring procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, and apply natural language processing techniques to automatically extract structured procedures from instructions using this representation. The method is evaluated in three domains to demonstrate the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.
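A small sketch of what a structured, machine-interpretable procedure might look like once extracted, with a toy parsing step standing in for the natural language processing stage. The field names and parsing logic are illustrative assumptions, not the semantic representation defined in the paper.

```python
# Sketch of a machine-readable procedure structure such a system might emit.
# Field names and the toy extraction step are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                                        # main verb, e.g. "remove"
    target: str                                        # object acted on
    qualifiers: list = field(default_factory=list)     # tools, locations, conditions

@dataclass
class Procedure:
    title: str
    steps: list

instructions = [
    "Unplug the device from the mains.",
    "Remove the battery cover with a small screwdriver.",
]

def naive_parse(sentence: str) -> Step:
    """Toy stand-in for the NLP extraction stage: first verb plus the remainder."""
    verb, _, rest = sentence.rstrip(".").partition(" ")
    return Step(action=verb.lower(), target=rest)

procedure = Procedure(title="Replace the battery", steps=[naive_parse(s) for s in instructions])
print(procedure)
```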
Abstract:
The international economic and business environment continues to develop at a rapid rate. Increasing interactions between economies, particularly between Europe and Asia, have raised many important issues regarding transport infrastructure, logistics and broader supply chain management. The potential exists to further stimulate trade provided that these issues are addressed in a logical and systematic manner. However, if this potential is to be realised in practice there is a need to re-evaluate current supply chain configurations. A mismatch currently exists between the technological capability and the supply chain or logistical reality. This mismatch has sharpened the focus on the need for robust approaches to supply chain re-engineering. Traditional approaches to business re-engineering have been based on manufacturing systems engineering and business process management. A recognition that all companies exist as part of bigger supply chains has fundamentally changed the focus of re-engineering. Inefficiencies anywhere in a supply chain result in the chain as a whole being unable to reach its true competitive potential. This reality, combined with the potentially radical impact on business and supply chain architectures of the technologies associated with electronic business, requires organisations to adopt innovative approaches to supply chain analysis and re-design. This paper introduces a systems approach to supply chain re-engineering which is aimed at addressing the challenges that the evolving business environment brings with it. The approach, which is based on work with a variety of both conventional and electronic supply chains, comprises underpinning principles, a methodology, guidelines on good working practice, and a suite of tools and techniques. The adoption of approaches such as that outlined in this paper helps to ensure that robust supply chains are designed and implemented in practice. This facilitates an integrated approach, with involvement of all key stakeholders throughout the design process.
Abstract:
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
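As an illustration of the three broad forms of uncertainty description mentioned above (statistics, probability distributions and realisations), the sketch below quantifies the positional uncertainty of a hypothetical point using Python's standard library. The coordinate values, standard deviations and variable names are assumptions for illustration and do not follow the UncertML encoding itself.

```python
# Illustrative only: three ways of quantifying uncertainty about a position.
# Coordinates and standard deviations are assumed; axes are treated as uncorrelated.
import random
import statistics

easting, northing = 432017.0, 289441.0     # hypothetical point coordinates
sigma_e, sigma_n = 2.5, 3.0                # assumed positional standard deviations (m)

# 1. Distribution: a Gaussian per axis, described by mean and standard deviation.
distribution = {"easting": (easting, sigma_e), "northing": (northing, sigma_n)}

# 2. Realisations: samples drawn from that distribution.
realisations = [(random.gauss(easting, sigma_e), random.gauss(northing, sigma_n))
                for _ in range(1000)]

# 3. Statistics: summaries recomputed from the realisations.
mean_e = statistics.fmean(r[0] for r in realisations)
stdev_e = statistics.stdev(r[0] for r in realisations)
print(distribution["easting"], (round(mean_e, 2), round(stdev_e, 2)))
```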
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an open source solution. The system couples the 52°North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
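For readers unfamiliar with the interpolation step, the sketch below is a small, self-contained ordinary kriging example in Python with numpy, returning a prediction and its kriging variance. It is only an analogue: INTAMAP's back-end is implemented in R with automatic model selection, whereas the variogram parameters here are fixed assumptions.

```python
# Small analogue of the interpolation step: ordinary kriging with a fixed
# exponential variogram. Variogram parameters are assumed, not fitted.
import numpy as np

def variogram(h, nugget=0.1, sill=1.0, rng=500.0):
    """Exponential semivariogram gamma(h)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, target):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)        # gamma(0) = 0 despite the nugget
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)               # kriging weights plus Lagrange multiplier
    prediction = w[:n] @ z
    variance = w @ b                        # kriging variance at the target location
    return prediction, variance

points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
values = np.array([1.2, 2.3, 0.9, 1.8])
print(ordinary_kriging(points, values, np.array([50.0, 50.0])))
```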
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
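A brief sketch of the soft-typed idea described above: a generic element whose meaning comes from a dictionary definition referenced by URI, built with Python's xml.etree.ElementTree. The namespace, attribute names and dictionary URL are placeholders rather than the published UncertML schema or its GML dictionaries.

```python
# Sketch of a soft-typed element whose semantics come from a dictionary URI.
# Namespace, attribute names and the dictionary URL are fictional placeholders.
import xml.etree.ElementTree as ET

UN = "http://example.org/uncertml"             # placeholder namespace
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("un", UN)
ET.register_namespace("xlink", XLINK)

def soft_typed_statistic(definition_uri: str, value: float) -> str:
    """Generic Statistic element; its meaning is carried by the referenced definition."""
    stat = ET.Element(f"{{{UN}}}Statistic",
                      {f"{{{XLINK}}}href": definition_uri})   # semantics via dictionary URI
    ET.SubElement(stat, f"{{{UN}}}value").text = str(value)
    return ET.tostring(stat, encoding="unicode")

print(soft_typed_statistic("http://example.org/dictionaries/statistics#variance", 3.4))
```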
Abstract:
Despite the proliferation of e-business adoption by organisations and the world-wide growth of the e-business phenomenon, there is a paucity of empirical studies that examine the adoption of e-business in the Middle East. The aim of our study is to provide insights into the salient e-business adoption issues by focusing on Saudi Arabian businesses. We developed a conceptual model for electronic business (e-business) adoption incorporating nine factors. Survey data from 550 businesses were used to test the model and hypotheses. The findings of the study suggest that a firm's technological readiness, top management support, technology orientation, consumer readiness, trading partner readiness and regulatory support are important facilitators of e-business adoption. In addition, the study finds that competitive pressure and organisational customer and competitor orientation are not predictors of e-business adoption. The implications of the findings are discussed and suggestions for future inquiry are presented.
Abstract:
This chapter reports on a framework that has been successfully used to analyze the e-business capabilities of an organization with a view to developing its e-capability maturity levels. This should be the first stage of any systems development project. The framework has been used widely within start-up companies and well-established companies, both large and small, and it has been deployed in the service and manufacturing sectors. It has been applied by practitioners and consultants to help improve e-business capability levels, and by academics for teaching and research purposes at graduate and undergraduate levels. This chapter will provide an account of the unique e-business planning and analysis framework (E-PAF) and demonstrate how it works via an abridged version of a case study (selected from hundreds that have been produced). This will include a brief account of the three techniques that are integrated to form the analysis framework: quality function deployment (QFD) (Akao, 1972), the balanced scorecard (BSC) (Kaplan & Norton, 1992), and value chain analysis (VCA) (Porter, 1985). The case study extract is based on an online community and dating agency service identified as VirtualCom, which has been produced through a consulting assignment with the founding directors of that company and has not been published previously. It has been chosen because it gives a concise, comprehensive example from an industry that is relatively easy to relate to.