621 results for Dimensionality
Abstract:
Strategic planning, and more specifically the impact of strategic planning on organisational performance, has been the subject of significant academic interest since the early 1970s. However, despite the significant amount of previous work examining the relationship between strategic planning and organisational performance, a comprehensive literature review identified a number of areas where contributions to the domain of study could be made. In overview, the main areas for further study identified from the literature review were a) a further examination of both the dimensionality and conceptualisation of strategic planning and organisational performance and b) a further, multivariate, examination of the relationship between strategic planning and performance, to capture the newly identified dimensionality. In addition to the previously identified strategic planning and organisational performance constructs, a comprehensive literature-based assessment was undertaken and five main areas were identified for further examination: a) organisational, b) comprehensive strategic choice, c) the quality of strategic options generated, d) political behaviour and e) implementation success. From this, a conceptual model incorporating a set of hypotheses to be tested was formulated. To test the specified conceptual model and the stated hypotheses, data gathering was undertaken. The quantitative phase of the research involved a mail survey of senior managers in medium to large UK-based organisations, from which a total of 366 fully usable responses were received. Following rigorous individual construct validity and reliability testing, the complete conceptual model was tested using latent variable path analysis. The results for the individual hypotheses and for the complete conceptual model were most encouraging. The findings, theoretical and managerial implications, limitations and directions for future research are discussed.
Abstract:
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic, and prices therefore follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can attempt to confirm the hypothesis. Unfortunately, these methods are not well suited to data characterised in part by both paradigms. This thesis investigates both approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach includes an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches could be combined to achieve a more realistic model for financial time series.
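The deterministic side of this programme typically starts from a delay embedding of the return series and an estimate of its correlation dimension. Below is a minimal, illustrative sketch in that spirit, using a Grassberger-Procaccia-style correlation sum on a simulated random walk; all function names and parameter values are assumptions, not taken from the thesis:

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau=1):
    """Stack delay vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(emb, r):
    """Fraction of point pairs closer than radius r (Grassberger-Procaccia)."""
    return np.mean(pdist(emb) < r)

# Toy data: returns of a simulated random walk (the stochastic null case).
rng = np.random.default_rng(0)
returns = np.diff(np.cumsum(rng.normal(size=2000)))

for dim in (2, 4, 6):
    emb = delay_embed(returns, dim)
    rs = np.logspace(-1, 0.5, 8)
    cs = np.array([correlation_sum(emb, r) for r in rs])
    # The slope of log C(r) vs log r estimates the correlation dimension;
    # for pure noise it keeps growing with the embedding dimension.
    slope = np.polyfit(np.log(rs), np.log(cs + 1e-12), 1)[0]
    print(f"embedding dim {dim}: correlation dimension ~ {slope:.2f}")
```

A dimension estimate that saturates as the embedding dimension grows would hint at low-dimensional determinism; unbounded growth is consistent with the random-walk view.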
Abstract:
A preliminary study by Freeman et al (1996b) has suggested that when complex patterns of motion elicit impressions of 2-dimensionality, odd-item-out detection improves, given that targets can be differentiated on the basis of surface properties. Their results can be accounted for if it is supposed that observers are permitted efficient access to 3-D surface descriptions while access to 2-D motion descriptions is restricted. To test the hypothesis, a standard search technique was employed, in which targets could be discriminated on the basis of slant sign. In one experiment, slant impressions were induced through the summing of deformation and translation components. In a second, they were induced through the summing of shear and translation components. Neither showed any evidence of efficient access. A third experiment explored the possibility that access to these representations may have been hindered by a lack of grouping between the stimuli. Attempts to improve grouping failed to produce convincing evidence in support of this. An alternative explanation is that complex patterns of motion are simply not processed simultaneously. Psychophysical and physiological studies have, however, suggested that multiple mechanisms selective for complex motion do exist. Using a subthreshold summation technique, I found evidence supporting the notion that complex motions are processed in parallel. Furthermore, in a spatial summation experiment, coherence thresholds were measured for displays containing different numbers of complex motion patches. Consistent with the idea that complex motion processing proceeds in parallel, increases in the number of motion patches were seen to decrease thresholds, both for expansion and rotation. Moreover, the rates of decrease were higher than those typically expected from probability summation, implying that mechanisms are available which can pool signals from spatially distinct complex motion flows.
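The probability-summation benchmark invoked in the final experiment has a standard quantitative form: under Quick pooling across N independent detectors, thresholds are expected to fall roughly as N^(-1/beta), where beta is the slope of the psychometric function. A hedged illustration follows; the beta value and thresholds are hypothetical, not the thesis's data:

```python
# Probability-summation benchmark (Quick pooling): with N independent
# detectors and psychometric slope beta, threshold T_N = T_1 * N**(-1/beta).
def prob_summation_threshold(t1, n_patches, beta=3.5):
    return t1 * n_patches ** (-1.0 / beta)

t1 = 0.40  # hypothetical coherence threshold for one motion patch
for n in (1, 2, 4):
    print(f"{n} patches: predicted threshold {prob_summation_threshold(t1, n):.3f}")
# Observed thresholds falling faster than this prediction would imply
# genuine pooling of signals across spatially distinct complex-motion flows.
```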
Abstract:
Electronic commerce (e-commerce) has become an increasingly important initiative among organisations. The factors affecting adoption decisions have been well-documented, but there is a paucity of empirical studies that examine the adoption of e-commerce in developing economies in the Arab world. The aim of this study is to provide insights into the salient e-commerce adoption issues by focusing on Saudi Arabian businesses. Based on the Technology-Organisational-Environmental framework, an integrated research model was developed that explains the relative influence of 19 known determinants. A measurement scale was developed from prior empirical studies and revised based on feedback from the pilot study. Non-interactive adoption, interactive adoption and stabilisation of e-commerce adoption were empirically investigated using survey data collected from Saudi manufacturing and service companies. Multiple discriminant function analysis (MDFA) was used to analyse the data and test the research hypotheses. The analysis demonstrates that (1) regarding the non-interactive adoption of e-commerce, IT readiness, management team support, learning orientation, strategic orientation, pressure from business partners, the regulatory and legal environment, technology consultants' participation and economic downturn are the most important factors; (2) when e-commerce interactive adoption is investigated, IT readiness, management team support, regulatory environment and technology consultants' participation emerge as the strongest drivers; (3) pressure from customers may not have much effect on the non-interactive adoption of e-commerce by companies, but does significantly influence the stabilisation of e-commerce use by firms; and (4) Saudi Arabia has a strong ICT infrastructure for supporting e-commerce practices. Taken together, these findings on the multi-dimensionality of e-commerce adoption show that non-interactive adoption, interactive adoption and stabilisation of e-commerce are not only different measures of e-commerce adoption, but also have different determinants. Findings from this study may be valuable for both policy and practice, as they offer a substantial understanding of the factors that enhance the widespread use of B2B e-commerce. Also, the integrated model provides a more comprehensive explanation of e-commerce adoption in organisations and could serve as a foundation for future research on information systems.
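As a rough illustration of the analysis route, sklearn's LinearDiscriminantAnalysis can stand in for MDFA: it finds the linear combinations of determinants that best separate the adoption-stage groups. The data, group coding and sizes below are hypothetical placeholders, not the study's survey data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical data: 300 firms, 19 determinants, 3 adoption stages
# (0 = non-interactive, 1 = interactive, 2 = stabilisation).
X = rng.normal(size=(300, 19))
stage = rng.integers(0, 3, size=300)

# Discriminant function analysis: linear combinations of the determinants
# that best separate the adoption-stage groups.
mdfa = LinearDiscriminantAnalysis(n_components=2).fit(X, stage)

# Loadings indicate each determinant's relative contribution to group
# separation; the variance ratio shows how much each function explains.
print(mdfa.scalings_.shape)           # (19, 2): loading per determinant/function
print(mdfa.explained_variance_ratio_)
```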
Abstract:
Business-to-business (B2B) electronic commerce (e-commerce) has become an increasingly important initiative among organisations. The factors affecting adoption decisions have been well-documented, but there is a paucity of empirical studies that examine the adoption of e-commerce in developing economies in the Arab world. The aim of our study is to provide insights into the salient e-commerce adoption issues by focusing on Saudi Arabian businesses. We developed a conceptual model for B2B e-commerce adoption incorporating six factors. Survey data from 450 businesses were used to test the model and hypotheses. The analysis demonstrates that (1) when preliminary e-commerce adoption is investigated, organizational IT readiness, management support and regulatory environment emerge as the strongest factors, (2) pressure from customers may not have much effect on the preliminary adoption of e-commerce by companies, but does significantly influence the utilisation of e-commerce by firms, and (3) Saudi Arabia has a strong ICT infrastructure for supporting e-commerce practices. Taken together, these findings on the multi-dimensionality of e-commerce adoption show that preliminary adoption and utilisation of e-commerce are not only different measures of e-commerce adoption, but also have different determinants. The implications of the findings are discussed and suggestions for future inquiry are presented.
Abstract:
Here, we report on the first application of high-pressure XPS (HP-XPS) to the surface-catalyzed selective oxidation of a hydrocarbon over palladium, wherein the reactivity of metal and oxide surfaces in directing the oxidative dehydrogenation of crotyl alcohol (CrOH) to crotonaldehyde (CrHCO) is evaluated. Crotonaldehyde formation is disfavored over Pd(111) under all reaction conditions, with only crotyl alcohol decomposition observed. In contrast, 2D Pd5O4 and 3D PdO overlayers are able to selectively oxidize crotyl alcohol (1 mTorr) to crotonaldehyde in the presence of co-fed oxygen (140 mTorr) at temperatures as low as 40 °C. However, 2D Pd5O4 ultrathin films are unstable toward reduction by the alcohol at ambient temperature, whereas the 3D PdO oxide is able to sustain catalytic crotonaldehyde production even up to 150 °C. Co-fed oxygen is essential to stabilize palladium surface oxides toward in situ reduction by crotyl alcohol, with stability increasing with oxide film dimensionality.
Abstract:
This paper introduces a new technique for optimizing the trading strategy of brokers that autonomously trade in retail and wholesale markets. Simultaneous optimization of retail and wholesale strategies has been considered by existing studies as intractable. Therefore, each of these strategies is optimized separately and their interdependence is generally ignored, with resulting broker agents not aiming for a globally optimal retail and wholesale strategy. In this paper, we propose a novel formalization, based on a semi-Markov decision process (SMDP), which globally and simultaneously optimizes retail and wholesale strategies. The SMDP is solved using hierarchical reinforcement learning (HRL) in multi-agent environments. To address the curse of dimensionality, which arises when applying SMDP and HRL to complex decision problems, we propose an efficient knowledge transfer approach. This enables the reuse of learned trading skills in order to speed up the learning in new markets, at the same time as making the broker transportable across market environments. The proposed SMDP-broker has been thoroughly evaluated in two well-established multi-agent simulation environments within the Trading Agent Competition (TAC) community. Analysis of controlled experiments shows that this broker can outperform the top TAC-brokers. Moreover, our broker is able to perform well in a wide range of environments by re-using knowledge acquired in previously experienced settings.
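For readers unfamiliar with the SMDP machinery, the core of such an approach is a Q-learning update whose continuation value is discounted by the option's duration, with transfer achievable by warm-starting the value table from a previously learned market. The sketch below is an illustrative reduction under these assumptions, not the paper's algorithm; all names are invented:

```python
from collections import defaultdict

GAMMA, ALPHA = 0.95, 0.1

def smdp_q_update(Q, s, o, r, tau, s2, options):
    # SMDP Q-learning: option o ran for tau steps from state s, earning
    # (already time-discounted) reward r and terminating in s2; the
    # continuation value is discounted by gamma**tau, not plain gamma.
    target = r + GAMMA ** tau * max(Q[(s2, o2)] for o2 in options)
    Q[(s, o)] += ALPHA * (target - Q[(s, o)])

def transfer(Q_source):
    # Knowledge transfer sketched as warm-starting a new market's
    # Q-table from values learned in a previously experienced market.
    return defaultdict(float, Q_source)

options = ["retail_pricing", "wholesale_bidding"]
Q_old = defaultdict(float, {(("market_A", "peak"), "retail_pricing"): 1.2})
Q_new = transfer(Q_old)
smdp_q_update(Q_new, ("market_A", "peak"), "wholesale_bidding",
              r=0.8, tau=3, s2=("market_A", "offpeak"), options=options)
print(Q_new[(("market_A", "peak"), "wholesale_bidding")])
```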
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design and transplantation rejection, among others. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods such as generative topographic mapping (GTM) become computationally intractable. We propose variants of these methods, in which log-transformations are used at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both a synthetic dataset and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives a better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem not much addressed in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using appropriate noise models for each type of data, in order to visualise mixed-type data in a single plot. We call this model generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training the visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
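The log-transformation device mentioned for the EM-based training is, in spirit, the log-sum-exp stabilisation of the E-step responsibilities; below is a minimal sketch for a GTM-like mixture, illustrative rather than the thesis's exact formulation:

```python
import numpy as np

def log_responsibilities(log_lik):
    """E-step in log space: log_lik is (K, N) log p(x_n | component k).

    Subtracting the per-column max before exponentiating (log-sum-exp)
    avoids the underflow that makes naive GTM training intractable in
    high-dimensional data spaces.
    """
    m = log_lik.max(axis=0, keepdims=True)
    log_norm = m + np.log(np.exp(log_lik - m).sum(axis=0, keepdims=True))
    return log_lik - log_norm  # log R_{kn}; columns sum to 1 after exp

# Toy check: 5 components, 3 points, log-likelihoods so negative they
# would underflow to zero if exponentiated directly.
log_lik = -1e3 + np.random.default_rng(2).normal(size=(5, 3))
R = np.exp(log_responsibilities(log_lik))
print(R.sum(axis=0))  # -> [1. 1. 1.]
```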
Abstract:
Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems arising from “the curse of dimensionality”. Three different eigenvector-based feature extraction approaches are discussed, and three different kinds of applications with respect to classification tasks are considered. A summary of the results obtained concerning the accuracy of classification schemes is presented, with a conclusion about the search for the most appropriate feature extraction method. The problem of how to discover the knowledge needed to integrate the feature extraction and classification processes is stated. A decision support system to aid in the integration of the feature extraction and classification processes is proposed. The goals and requirements set for the decision support system and its basic structure are defined. The means of knowledge acquisition needed to build up the proposed system are considered.
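A minimal sketch of one such eigenvector-based pipeline, with plain PCA standing in for the specific extraction approaches compared in the paper, chained to a simple classifier and scored by cross-validation:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Eigenvector-based feature extraction (PCA here; the paper also discusses
# class-conditional variants) chained with a classifier, so accuracy can be
# compared across numbers of extracted features.
X, y = load_digits(return_X_y=True)
for k in (5, 10, 20):
    pipe = make_pipeline(PCA(n_components=k), GaussianNB())
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{k} extracted features: accuracy {acc:.3f}")
```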
Abstract:
Computer simulators of real-world processes are often computationally expensive and require many inputs. The problem of computational expense can be handled using emulation technology; however, highly multidimensional input spaces may require more simulator runs to train and validate the emulator. We aim to reduce the dimensionality of the problem by screening the simulator's inputs for nonlinear effects on the output, rather than distinguishing between negligible and active effects. Our proposed method builds upon the elementary effects (EE) method for screening and uses a threshold value to separate the inputs with linear and nonlinear effects. The technique is simple to implement and acts in a sequential way to keep the number of simulator runs to a minimum while identifying the inputs that have nonlinear effects. The algorithm is applied to a set of simulated examples and a rabies disease simulator, where we observe run savings ranging between 28% and 63% compared with the batch EE method. Supplementary materials for this article are available online.
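The underlying idea can be sketched compactly: an input whose elementary effects are roughly constant across base points acts linearly, while one whose effects vary is nonlinear. The toy simulator, step size and spread statistic below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def elementary_effect(f, x, i, delta=0.1):
    """One-at-a-time finite difference for input i at base point x."""
    x2 = x.copy()
    x2[i] += delta
    return (f(x2) - f(x)) / delta

# Toy simulator: linear in x0, nonlinear in x1, inert in x2.
f = lambda x: 3 * x[0] + np.sin(4 * x[1])

rng = np.random.default_rng(3)
bases = rng.uniform(0, 1, size=(20, 3))
for i in range(3):
    ees = [elementary_effect(f, x, i) for x in bases]
    # Variation of the EEs across base points signals nonlinearity;
    # a threshold on the spread separates linear from nonlinear inputs.
    print(f"input {i}: mean EE {np.mean(ees):+.2f}, sd {np.std(ees):.2f}")
```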
Abstract:
We present a test for identifying clusters in high-dimensional data based on the k-means algorithm when the null hypothesis is spherical normal. We show that projection techniques used for evaluating the validity of clusters may be misleading for such data. In particular, we demonstrate that increasingly well-separated clusters are identified as the dimensionality increases, when no such clusters exist. Furthermore, in the case of true bimodality, increasing the dimensionality makes identifying the correct clusters more difficult. In addition to the original conservative test, we propose a practical test with the same asymptotic behavior that performs well for a moderate number of points and moderate dimensionality. ACM Computing Classification System (1998): I.5.3.
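The phenomenon is easy to reproduce: run k-means on pure spherical-normal noise and the fitted centres drift further apart as the dimension grows. A small simulation in that spirit (sample sizes and dimensions are arbitrary choices, not the paper's setup):

```python
import numpy as np
from sklearn.cluster import KMeans

# Under the spherical-normal null there are no clusters, yet k-means
# "finds" increasingly well-separated ones as dimensionality grows.
rng = np.random.default_rng(4)
for d in (2, 10, 100, 500):
    X = rng.normal(size=(300, d))
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    gap = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
    print(f"dim {d}: distance between fitted centres = {gap:.2f}")
```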
Abstract:
2000 Mathematics Subject Classification: 62P10, 92D10, 92D30, 94A17, 62L10.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large-p, small-n data sets, for instance, require a different set of tools from the large-n, small-p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity, in the sense of Ockham’s razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
Abstract:
This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ = n/p is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
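A minimal sketch of the regularization-with-tuning workflow on a synthetic HDLSS problem (κ = n/p << 1); the data, penalty and grid below are illustrative stand-ins, not the study's microarray setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

# HDLSS setting: n = 60 samples, p = 2000 features, so kappa = n/p << 1.
rng = np.random.default_rng(5)
X = rng.normal(size=(60, 2000))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=60) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Regularized classifier with penalty strength tuned by internal
# cross-validation, the kind of tuning such studies extend and compare.
clf = LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```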
Abstract:
Several brand identity frameworks have been published in the B2C and B2B brand marketing literature. A reliable, valid and parsimonious service brand identity scale that empirically establishes the construct's dimensionality in a B2B market has yet to be developed. This paper reports the findings of a study conducted amongst 421 senior executives working in the UK IT service sector to develop and validate a B2B Service Brand Identity Scale. Following established scale development procedures, support is provided for a B2B Service Brand Identity Scale comprising five dimensions: employee and client focus, visual identity, brand personality, consistent communications and human resource initiatives. Concluding remarks discuss theoretical and managerial implications, with limitations and directions for future research. © 2011 Elsevier Inc.
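As a small illustration of one routine step in such scale-development procedures, the sketch below computes Cronbach's alpha, a standard internal-consistency check applied per candidate dimension; the item data and dimension name are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix for one candidate dimension."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-item "visual identity" dimension, 421 respondents.
rng = np.random.default_rng(6)
latent = rng.normal(size=(421, 1))
items = latent + 0.8 * rng.normal(size=(421, 5))  # items share one factor
print(round(cronbach_alpha(items), 2))  # values >= 0.7 signal reliability
```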