27 results for genetic algorithm-kernel partial least squares
at Universidad Politécnica de Madrid
Abstract:
There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One method of approaching the modeling of complex processes is to obtain a global model. It should be able to capture the basic or general behavior of the system, by means of a linear or quadratic regression, and then superimpose a local model on it that can capture the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied for tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using a least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
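As a rough illustration of the two-step scheme described above (a hedged sketch, not the authors' implementation): a global linear model is fitted by least squares, and a fuzzy k-nearest-neighbours smoother is superimposed on its residuals. The feature dimensions, k, and the fuzziness exponent m are illustrative assumptions.

```python
import numpy as np

def fit_global(X, y):
    # Global model: ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_global(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def local_fuzzy_knn(X_train, residuals, X_query, k=5, m=2.0):
    # Local model: fuzzy k-NN smoothing of the global model's residuals.
    # Each neighbour's residual is weighted by 1/d^(2/(m-1)), the usual
    # fuzzy k-NN membership weight.
    preds = np.empty(len(X_query))
    for i, x in enumerate(X_query):
        d = np.linalg.norm(X_train - x, axis=1) + 1e-12
        idx = np.argsort(d)[:k]
        w = 1.0 / d[idx] ** (2.0 / (m - 1.0))
        preds[i] = np.sum(w * residuals[idx]) / np.sum(w)
    return preds

# Toy data standing in for cutting-signal features vs. tool wear.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = 2.0 + X @ np.array([1.5, -0.7, 0.3]) + 0.5 * np.sin(6 * X[:, 0])

coef = fit_global(X, y)
residuals = y - predict_global(coef, X)          # localized nonlinearities
X_new = rng.uniform(0, 1, size=(5, 3))
y_hat = predict_global(coef, X_new) + local_fuzzy_knn(X, residuals, X_new)
print(y_hat)
```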
Abstract:
The data acquired by remote sensing systems allow thematic maps of the Earth's surface to be obtained by classifying the registered images, which implies identifying and categorizing every pixel into a land cover class. Traditionally, methods based on statistical parameters have been widely used, although they show some disadvantages, and several authors indicate that methods based on artificial intelligence may be a good alternative. Fuzzy classifiers, which are based on fuzzy logic, incorporate additional information into the classification process through rule-based systems. In this work, we propose the use of a genetic algorithm (GA) to select the optimal and minimal set of fuzzy rules to classify remotely sensed images. The input information for the GA was obtained from the training space determined by two uncorrelated spectral bands (2D scatter diagrams), irregularly divided by five linguistic terms defined in each band. The proposed methodology was applied to Landsat-TM images, and the results show that this set of rules provides a higher accuracy level in the classification process.
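The rule-selection step can be pictured as a binary-chromosome GA in which each bit switches one candidate fuzzy rule on or off, and the fitness rewards accuracy while penalising rule count. This sketch uses random stand-in firing strengths rather than real Landsat-TM data; the population size, mutation rate and penalty weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rules, n_samples, n_classes = 25, 300, 4

# Stand-in for rule firing strengths: rule_strength[r, s, c] is how strongly
# candidate fuzzy rule r assigns sample s to class c (derived, in the real
# application, from the 2D scatter of two spectral bands).
rule_strength = rng.random((n_rules, n_samples, n_classes))
labels = rng.integers(0, n_classes, n_samples)

def fitness(mask, penalty=0.01):
    # Accuracy of the selected rule subset minus a per-rule penalty,
    # so the GA favours small rule bases.
    if not mask.any():
        return 0.0
    votes = rule_strength[mask].sum(axis=0)        # (n_samples, n_classes)
    acc = np.mean(votes.argmax(axis=1) == labels)
    return acc - penalty * mask.sum()

def evolve(pop_size=40, generations=60, p_mut=0.02):
    pop = rng.random((pop_size, n_rules)) < 0.5    # binary chromosomes
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        parents = pop[[max(rng.integers(0, pop_size, 2),
                           key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        children = parents.copy()
        # One-point crossover between consecutive parents.
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_rules)
            children[i, cut:], children[i + 1, cut:] = \
                parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Bit-flip mutation.
        children ^= rng.random(children.shape) < p_mut
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best = evolve()
print("rules kept:", int(best.sum()), "fitness:", round(fitness(best), 3))
```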
Abstract:
An aerodynamic optimization of the train's aerodynamic characteristics in terms of front wind action sensitivity is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so that the design space includes a wide range of geometries as possible optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before finding the optimum. Here, the use of metamodels to replace the Navier-Stokes solver is proposed. Among all the possibilities, Response Surface Models and Artificial Neural Networks (ANNs) are considered. The best prediction and generalization results are obtained with ANNs, which are therefore applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
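A hedged sketch of the surrogate-assisted loop: an MLP regressor (standing in for the paper's ANN) is trained on a cheap analytic function that plays the role of the CFD solver, and a simple real-coded GA then searches the surrogate. All function names and settings are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def cfd_stand_in(x):
    # Cheap analytic stand-in for the expensive CFD evaluation
    # (e.g. side force under crosswind vs. a few nose-shape parameters).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * np.cos(2 * x[:, 2])

# 1. Build the surrogate from a modest number of "expensive" samples.
X_train = rng.uniform(-1, 1, size=(200, 3))
y_train = cfd_stand_in(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# 2. Run a simple real-coded GA against the cheap surrogate.
pop = rng.uniform(-1, 1, size=(60, 3))
for _ in range(100):
    f = surrogate.predict(pop)                   # cheap evaluations
    elite = pop[np.argsort(f)[:20]]              # minimise predicted force
    # Blend crossover plus Gaussian mutation around the elite.
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0, 0.05, children.shape)
    pop = np.clip(np.vstack([elite, children]), -1, 1)

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum:", best, "true value:", cfd_stand_in(best[None])[0])
```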
Abstract:
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain).
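The fine-tuning idea can be sketched as a GA searching a segmentation parameter so that the binarized image best matches a template. Here a single threshold on a synthetic grey-level image stands in for the paper's full RGB segmentation pipeline; the fitness is the pixel-wise similarity used above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: a grey-level "field image" and a binary template
# marking residue pixels (the real work uses RGB photographs and
# manually traced templates).
image = rng.random((64, 64))
template = image > 0.55                      # pretend ground truth

def similarity(params):
    # Fitness: fraction of pixels where the thresholded image agrees
    # with the template. params = (threshold,) here; the paper tunes
    # several segmentation parameters at once.
    seg = image > params[0]
    return np.mean(seg == template)

# Real-coded GA over the single threshold parameter.
pop = rng.uniform(0, 1, size=(30, 1))
for _ in range(40):
    f = np.array([similarity(p) for p in pop])
    elite = pop[np.argsort(f)[-10:]]
    children = elite[rng.integers(0, 10, size=(20,))] \
        + rng.normal(0, 0.02, size=(20, 1))
    pop = np.clip(np.vstack([elite, children]), 0, 1)

best = pop[np.argmax([similarity(p) for p in pop])]
print(f"tuned threshold: {best[0]:.3f}, similarity: {similarity(best):.3f}")
```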
Abstract:
This article proposes a method for calibrating discontinuity sets in rock masses. We present a novel approach for the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of its application to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to the GA operations of selection, crossover and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs significantly depend on the type of objective function considered, and that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as the intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
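A minimal sketch of the binary encoding/decoding step: two network parameters are mapped to 16-bit genes over hypothetical physical ranges. The actual parameters, ranges and bit lengths used in the paper may differ.

```python
import numpy as np

# Binary encoding of two discontinuity-network parameters (hypothetical
# ranges): Poisson intensity in [0.5, 5.0] and mean discontinuity size
# in [1.0, 20.0] m, 16 bits each.
BITS = 16
RANGES = [(0.5, 5.0), (1.0, 20.0)]

def decode(chromosome):
    # Split the bit string into genes and map each to its physical range.
    values = []
    for i, (lo, hi) in enumerate(RANGES):
        gene = chromosome[i * BITS:(i + 1) * BITS]
        integer = int("".join(map(str, gene)), 2)
        values.append(lo + (hi - lo) * integer / (2 ** BITS - 1))
    return values

def encode(values):
    bits = []
    for v, (lo, hi) in zip(values, RANGES):
        integer = round((v - lo) / (hi - lo) * (2 ** BITS - 1))
        bits.extend(int(b) for b in format(integer, f"0{BITS}b"))
    return bits

chrom = encode([2.3, 7.5])
print(decode(chrom))   # -> values close to [2.3, 7.5]
```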
Abstract:
An aerodynamic optimization of the ICE 2 high-speed train nose in terms of front wind action sensitivity is carried out in this paper. The nose is parametrically defined by Bézier curves, and a three-dimensional representation of the nose is obtained using thirty-one design variables. This implies a more complete parametrization, allowing the representation of a realistic model. A genetic algorithm (GA) is used to perform this study, which involves a large number of evaluations before the optimum is found. Hence, the use of metamodels or surrogate models to replace the Navier-Stokes solver and speed up the optimization process is proposed. Adaptive sampling is considered to optimize surrogate-model fitting and minimize computational cost when dealing with a very large number of design parameters. The paper demonstrates the feasibility of using a GA in combination with metamodels for real high-speed train geometry optimization.
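A hedged sketch of the Bézier parametrization itself: the profile is evaluated from its control points with the Bernstein-polynomial form, so the control-point coordinates become the design variables. The 2-D profile and its coordinates below are illustrative; the paper uses a full 3-D nose with thirty-one variables.

```python
import numpy as np
from math import comb

def bezier(control_points, n=200):
    # Evaluate a Bézier curve from its control points using the
    # Bernstein form; moving the control points reshapes the profile
    # smoothly, which is what makes this parametrization convenient
    # for shape optimization.
    P = np.asarray(control_points, dtype=float)
    d = len(P) - 1
    t = np.linspace(0.0, 1.0, n)[:, None]
    return sum(comb(d, i) * t**i * (1 - t)**(d - i) * P[i]
               for i in range(d + 1))

# Hypothetical 2-D longitudinal nose profile: tip at the origin, roof
# at x = 5 m, z = 3 m; the interior points are the design variables.
profile = bezier([(0.0, 0.0), (1.5, 0.2), (3.0, 2.4), (5.0, 3.0)])
print(profile[:3])
```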
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations, in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although they are still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for the thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.
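The Monte Carlo propagation step can be sketched as sampling correlated yields from a multivariate normal distribution built from a covariance matrix and pushing each sample through a response. The yields, correlations and heat coefficients below are toy stand-ins, not JEFF data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: three fission-product yields with 5 % relative
# uncertainties and a covariance matrix that correlates them (the real
# matrices come from the Bayesian/GLS update of the JEFF data).
mean_yields = np.array([0.060, 0.030, 0.015])
std = 0.05 * mean_yields
corr = np.array([[1.0, 0.6, -0.3],
                 [0.6, 1.0, 0.2],
                 [-0.3, 0.2, 1.0]])
cov = np.outer(std, std) * corr

# Hypothetical per-nuclide decay-heat contribution per unit yield.
heat_coeff = np.array([1.2, 3.4, 0.8])

samples = rng.multivariate_normal(mean_yields, cov, size=100_000)
decay_heat = samples @ heat_coeff

print(f"mean decay heat : {decay_heat.mean():.4f}")
print(f"rel. uncertainty: {decay_heat.std() / decay_heat.mean():.4%}")
# Repeating with corr = np.eye(3) shows how the correlations alone
# change the spread of the decay-heat statistics.
```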
Abstract:
We analyse a class of estimators of the generalized diffusion coefficient for fractional Brownian motion B_t of known Hurst index H, based on weighted functionals of the single-time squared displacement. We show that for a certain choice of the weight function these functionals possess an ergodic property and thus provide the true, ensemble-averaged, generalized diffusion coefficient to any necessary precision from single-trajectory data, but at the expense of a progressively higher experimental resolution. Convergence is fastest around H ≈ 0.30, a value in the subdiffusive regime.
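A hedged numeric sketch: an exact fBm trajectory is drawn via a Cholesky factorization of its covariance, and a time-weighted functional of the squared displacement recovers D from that single trajectory. The uniform weight used here is one simple unbiased choice, not the paper's optimal family of weight functions.

```python
import numpy as np

rng = np.random.default_rng(5)

def fbm(n, H, D=1.0, T=1.0):
    # Exact fBm sample via Cholesky of its covariance:
    # cov(B_s, B_t) = D * (s^2H + t^2H - |t - s|^2H).
    t = np.linspace(T / n, T, n)          # start at T/n to avoid t = 0
    s, u = np.meshgrid(t, t)
    cov = D * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return t, L @ rng.standard_normal(n)

H, D_true = 0.30, 2.0
t, B = fbm(400, H, D_true)

# Single-trajectory estimator: a uniformly weighted time average of
# B_t^2 / (2 t^2H), each instant of which has expectation D. A single
# trajectory gives a scattered estimate; the paper's weight functions
# are designed to control exactly this scatter.
D_hat = np.mean(B**2 / (2 * t**(2 * H)))
print(f"true D = {D_true}, single-trajectory estimate = {D_hat:.3f}")
```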
Abstract:
Heuristic methods are popular tools to find critical slip surfaces in slope stability analyses. A new genetic algorithm (GA) is proposed in this work that has a standard structure but a novel encoding and generation of individuals, with custom-designed operators for mutation and crossover that produce kinematically feasible slip surfaces with a high probability. In addition, new indices to assess the efficiency of operators in their search for the minimum factor of safety (FS) are proposed. The proposed GA is applied to traditional benchmark examples from the literature, as well as to a new practical example. Results show that the proposed GA is reliable, flexible and robust: it provides good minimum FS estimates that are not very sensitive to the number of nodes and that are very similar across different replications.
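One way to picture the kinematic-feasibility constraint is to encode a trial surface by its segment inclinations and require them to be non-decreasing from entry to exit. The sketch below is an illustrative construction under that assumption, not the paper's actual encoding or operators.

```python
import numpy as np

rng = np.random.default_rng(6)

def random_slip_surface(x_entry, x_exit, n_nodes=10):
    # Encode a trial slip surface as the inclinations of successive
    # segments from entry to exit. Sampling the angles in non-decreasing
    # order yields a concave surface, a common kinematic requirement.
    angles = np.sort(rng.uniform(np.deg2rad(-10), np.deg2rad(60),
                                 n_nodes - 1))
    dx = (x_exit - x_entry) / (n_nodes - 1)
    x = x_entry + dx * np.arange(n_nodes)
    y = np.concatenate([[0.0], np.cumsum(dx * np.tan(angles))])
    return np.column_stack([x, y])

def is_kinematically_feasible(surface):
    # Segment inclinations must be non-decreasing along the surface.
    seg = np.diff(surface, axis=0)
    angles = np.arctan2(seg[:, 1], seg[:, 0])
    return bool(np.all(np.diff(angles) >= -1e-9))

surface = random_slip_surface(0.0, 20.0)
print(is_kinematically_feasible(surface))   # True by construction
```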
Abstract:
The present research focuses on the application of hyperspectral images to the supervision of quality deterioration in ready-to-use leafy spinach (Spinacia oleracea) during storage. Two sets of samples of packed leafy spinach were considered: (a) a first set of samples stored at 20 °C (E-20) in order to accelerate the degradation process, measured on the day of reception in the laboratory and after 2 days of storage; and (b) a second set of samples kept at 10 °C (E-10), measured throughout storage, beginning on the day of reception and repeating the acquisition of images 3, 6 and 9 days later. Twenty leaves per test were analyzed. Hyperspectral images were acquired with a push-broom CCD camera equipped with a VNIR spectrograph (400–1000 nm). The calibration set of spectra was extracted from the E-20 samples and contained three classes of degradation: class A (optimal quality), class B, and class C (maximum deterioration). Reference average spectra were defined for each class. Three models of decreasing complexity, computed on the calibration set, were compared according to their ability to segregate leaves at different quality stages (fresh, with incipient and non-visible symptoms of degradation, and degraded): the spectral angle mapper distance (SAM), partial least squares discriminant analysis (PLS-DA) models, and a non-linear index (Leafy Vegetable Evolution, LEVE) combining five wavelengths included among those previously selected by the CovSel procedure. In sets E-10 and E-20, artificial images of the membership degree, according to the distance of each pixel to the reference classes, were computed by assigning each pixel to the closest reference class. The three methods were able to show the degradation of the leaves with storage time.
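A minimal sketch of the simplest of the three models, the spectral angle mapper: each pixel spectrum is assigned to the reference class with the smallest spectral angle. The spectra here are random stand-ins for real VNIR data.

```python
import numpy as np

def sam(spectrum, reference):
    # Spectral angle mapper: the angle between a pixel spectrum and a
    # class reference spectrum; insensitive to overall brightness.
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(7)
wavelengths = np.linspace(400, 1000, 120)           # VNIR range, nm
refs = {"A": rng.random(120), "B": rng.random(120), "C": rng.random(120)}
pixel = refs["B"] * 1.3 + rng.normal(0, 0.02, 120)  # brighter class-B pixel

# Assign the pixel to the closest reference class, as in the
# membership-degree images described above.
label = min(refs, key=lambda c: sam(pixel, refs[c]))
print("assigned class:", label)
```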
Abstract:
So far, the majority of reports on on-line measurement have considered soil properties with direct spectral responses in near infrared spectroscopy (NIRS). This work reports on the results of on-line measurement of soil properties with indirect spectral responses, e.g. pH, cation exchange capacity (CEC), exchangeable calcium (Caex) and exchangeable magnesium (Mgex), in one field in Bedfordshire in the UK. The on-line sensor consisted of a subsoiler coupled with an AgroSpec mobile, fibre-type, visible and near infrared (vis-NIR) spectrophotometer (tec5 Technology for Spectroscopy, Germany) with a measurement range of 305–2200 nm, used to acquire soil spectra in diffuse reflectance mode. General calibration models for the studied soil properties were developed with partial least squares regression (PLSR) with leave-one-out cross-validation, using spectra measured under non-mobile laboratory conditions for 160 soil samples collected from different fields on four farms in Europe, namely in the Czech Republic, Denmark, the Netherlands and the UK. A group of 25 samples independent of the calibration set was used as the validation set. Higher accuracy was obtained for laboratory scanning than for on-line scanning of the 25 independent samples. The prediction accuracy for the laboratory and on-line measurements was classified as excellent/very good for pH (RPD = 2.69 and 2.14 and r² = 0.86 and 0.78, respectively), and moderately good for CEC (RPD = 1.77 and 1.61 and r² = 0.68 and 0.62, respectively) and Mgex (RPD = 1.72 and 1.49 and r² = 0.66 and 0.67, respectively). For Caex, very good accuracy was calculated for the laboratory method (RPD = 2.19 and r² = 0.86), compared to the poor accuracy reported for the on-line method (RPD = 1.30 and r² = 0.61). The ability to collect a large number of data points per field (about 12,800 points per 21 ha) and to analyse several soil properties without direct spectral response in the NIR range simultaneously, at relatively high operational speed and with appreciable accuracy, supports the recommendation of the on-line measurement system for site-specific fertilisation.
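A hedged sketch of the calibration recipe: PLSR with leave-one-out cross-validation on synthetic spectra, followed by the RPD and r² statistics reported above. The number of latent variables and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(8)

# Synthetic stand-ins for 160 laboratory vis-NIR spectra and a soil
# property (e.g. pH) with an indirect spectral response.
X = rng.random((160, 300))
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.3, 160)

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

rmsep = np.sqrt(np.mean((y - y_cv) ** 2))
rpd = y.std(ddof=1) / rmsep            # ratio of performance to deviation
r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"RPD = {rpd:.2f}, r2 = {r2:.2f}")
```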
Abstract:
Favoured by an increase in Internet penetration rates across the globe, electronic commerce has experienced rapid growth over the last few years. Nevertheless, adoption of electronic commerce has differed from one country to another. On the one hand, countries leading e-commerce adoption have a large percentage of Internet users as well as of online purchasers; on the other hand, other markets, despite having a low percentage of Internet users, show a high percentage of online buyers. Halfway between those two ends of the spectrum are countries such as Spain which, despite having moderately high Internet penetration rates and socio-economic characteristics similar to those of some of the leading countries, have failed to turn Internet users into active online buyers. Several theoretical approaches have been taken in an attempt to identify the factors that influence the use of electronic commerce systems by customers. One of the better-known frameworks for characterizing adoption factors is acceptance modelling theory, derived from research on information systems adoption in organizational environments. These models are based on individual perceptions of which factors determine purchase intention, as a means to explain users' actual purchasing behaviour. Even though research on electronic commerce adoption models has increased in volume and scope over the last few years, the majority of studies validate their hypotheses using a single sample of consumers from which they draw general conclusions. Nevertheless, since the birth of marketing, and more specifically from the second half of the 19th century, differences in consumer behaviour owing to demographic, sociological and psychological characteristics have also been taken into account. Such differences generally translate into different needs that can only be satisfied when sellers adapt their offer to their target market. Electronic commerce has a number of features that make it different from traditional commerce, the best example being the lack of physical contact between customers and products, and between customers and vendors. In addition, differences that depend on the type of product may also play an important role in electronic commerce. Given all of the above, the present research addresses the study of the main factors influencing purchase intention and actual purchase behaviour in electronic commerce by Spanish end-consumers, taking into consideration both the customer segment to which they belong and the type of product being purchased.
In order to achieve this goal, this Thesis is structured in eight chapters: four theoretical sections, three empirical blocks and a final section summarizing the conclusions derived from the research. The chapters are arranged as follows: introduction, current state of electronic commerce, technology adoption models, electronic commerce segmentation, preliminary design of the empirical work, research design, data analysis and results, and conclusions. The introductory chapter offers a detailed justification of the relevance of this study in the context of e-commerce adoption research; it also sets out the objectives, methodology and research stages. The second chapter further expands and complements the introductory chapter, focusing on two elements: the concept of electronic commerce and its evolution from a general point of view, and the evolution of electronic commerce in Spain together with its main adoption indicators. This section is intended to allow the reader to understand the research context, and also to serve as a basis for justifying the relevance and representativeness of the sample used in this study. Chapters three (technology acceptance models) and four (segmentation in electronic commerce) set the theoretical foundations for the study. Chapter three presents a thorough literature review of technology adoption modelling, focusing on previous studies on electronic commerce acceptance. As a result of the literature review, the research framework is built upon a model based on UTAUT (Unified Theory of Acceptance and Use of Technology) and its evolution, UTAUT2, including two specific electronic commerce adoption factors: perceived risk and perceived trust. Chapter four deals with the client and product segmentation methodologies used in the literature. From the literature review, a wide range of classification variables was studied, and a shortlist of nine classification variables was selected for inclusion in the research. The criteria for variable selection were their adequacy to electronic commerce characteristics as well as to the sample characteristics. The nine variables are classified in three groups: socio-demographic (gender, age, education level, income, family size and relationship status), behavioural (experience in electronic commerce and frequency of purchase) and psychographic (online purchase motivations) variables. The second half of chapter four is devoted to a review of the product classification criteria used in electronic commerce. The review led to the identification of a final set of fifteen groups of variables whose combination offered a total of thirty-four possible outputs. However, due to the lack of empirical evidence in the context of electronic commerce, further investigation of the validity of this set of product classifications was deemed necessary. For this reason, chapter five proposes an empirical study to test the different product classification variables with 207 samples. A selection of product classifications, including only those variables that are objective, able to identify distinct groups, and not dependent on the consumer's point of view, led to a final classification of products consisting of two groups of variables for the final empirical study. The combination of these two groups gave rise to four types of products: digital and non-digital goods, and digital and non-digital services. Chapter six characterizes the research (social, exploratory research) and presents the final research model and research hypotheses.
The exploratory nature of the research becomes patent in instances where no prior empirical evidence on the influence of certain segmentation variables was found. Chapter six also includes the description of the measurement instrument used in the research, consisting of a total of 125 questions and the measurement scales associated with each of them, as well as the description of the sample used for model validation (817 Spanish residents). Chapter seven is the core of the empirical analysis performed to validate the research model, and it is divided into two parts: a description of the statistical techniques used for data analysis, and the actual data analysis and results. The first part is structured in three blocks. Partial Least Squares (PLS) method: a multivariate statistical analysis method with predictive capability, used to determine the structural relationships of the proposed models. Multi-group analysis: a set of techniques that allow the outcomes of the PLS analysis to be compared between two or more groups derived from the use of one or more segmentation variables; more specifically, five comparison methods were used, which additionally gives the opportunity to assess the efficiency of each method. Determination of a priori undefined segments: in some cases, such as customer motivations, no predefined classification criteria existed for a segmentation variable, so statistical classification techniques were required. For this study, two main classification techniques were used sequentially: principal component factor analysis (to reduce the large number of classification variables) and cluster analysis (combining a hierarchical technique, which estimates the optimal number of clusters, with a partitioning technique that is more efficient but requires the number of clusters to be known a priori). The application of these statistical methods to the models derived from the inclusion of the various segmentation criteria, for both clients and products, led to the analysis of 128 different electronic commerce adoption models and 65 multi-group comparisons. Finally, chapter eight summarizes the conclusions of the research, divided into four parts: first, an assessment of the degree of achievement of the different research objectives is offered; then, the methodological, theoretical and practical implications of the research are drawn; this is followed by a discussion of the results of the empirical study, organized by the segmentation criteria used in the research and combining confirmatory and exploratory findings; fourth and last, the main limitations of the research, both empirical and theoretical, as well as future avenues of research, are detailed.
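The a-priori-undefined segmentation step can be sketched as PCA followed by clustering. In this illustration, a silhouette criterion stands in for the thesis's dendrogram-based choice of the number of clusters, after which the more efficient partitioning step is run; the data are random stand-ins for Likert-scale motivation items.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(9)

# Stand-in for the respondents' purchase-motivation items (Likert scales).
answers = rng.integers(1, 8, size=(817, 20)).astype(float)

# 1. Principal components to reduce the many classification variables.
scores = PCA(n_components=4).fit_transform(answers)

# 2. Hierarchical step to suggest the number of segments; here a few
#    candidate cuts are compared by silhouette rather than by reading
#    the dendrogram.
k_best = max(range(2, 7),
             key=lambda k: silhouette_score(
                 scores,
                 AgglomerativeClustering(n_clusters=k).fit_predict(scores)))

# 3. Efficient partitioning step once the number of clusters is known.
segments = KMeans(n_clusters=k_best, n_init=10,
                  random_state=0).fit_predict(scores)
print("segments found:", k_best, np.bincount(segments))
```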
Abstract:
For decades, many organizations worldwide have been enduring heavy losses due to partial and total failures of their investments in information systems (IS), posing serious challenges to all management levels and IS practitioners. Alarming statistics in this regard and decades of practice in the IS area lead the author to place an emphasis on the internal end users (EU) who are appointed to represent their peers (EUR) in IS development projects (ISDP), considering them highly influential factors in the problem. The focus is on certain EUR factors deemed critical to the success of ISDP, with dimensions that have so far been analysed only in isolation, or incompletely, in the empirical research on the subject. No studies were found in Latin America or elsewhere addressing the phenomenon of IS success/failure from the perspective adopted in this thesis. Hence, this empirical research has assessed to what degree such factors can influence the outcomes of an ISDP and their feasible impact on EU satisfaction, the latter being accepted by several authors as the main measure of IS success. This study was performed in Latin America in the four major industrial enterprises that vertically integrate the aluminium sector of Venezuela, which were subjected to a macro ISDP to install the ERP-type package SAP/R3. Experienced professionals such as senior executives, IS developers, IS project leaders and end-user project leaders were surveyed or interviewed. A triangulation methodological approach allowed a quantitative analysis to be combined with an interpretive, hermeneutic/dialectic qualitative analysis, yielding convergent and complementary results: a statistical analysis using Partial Least Squares (PLS) was carried out, followed by the hermeneutic/dialectic analysis. The results confirmed a major finding: in problematic cases, paradoxically, the origins of the IS rejection reasons argued by the EU were, to a high degree, traceable to the EUR or to the EU themselves. The results also confirmed the prevalence of cognitive, behavioural and political factors over technological ones in these organizations, as well as the high risk of taking for granted the presence and quality of the factors demanded of the EUR and of the other factors studied. The statistical validation of the proposed model revealed the construct EUR knowledge as the main latent variable, with the indicator variables composing this construct exerting the greatest influence on IS quality and success. Contrary to the findings of other studies, knowledge of information technology (IT) proved to be the least relevant. The payroll and human resources administration IS were the most problematic, as is usually the case in large organizations, given their complexity. The main conclusions confirm the EU's decisive role in ISDP success and their relationship with the growing problem, which merits more research and, from organizations, more attention and preparation.
Neglecting human and social factors in organizations, as well as their effective planning and management in preparation for ISDP, poses serious risks. Despite the limitations of this work, the analysed problem tends to influence ISDP in a wide range of organizations, regardless of their size or type of IS; it is therefore believed that the results, conclusions and recommendations of this research have a high degree of generalizability. A list of key indicators is provided for preventive purposes. Finally, the factors evaluated can be used to extend the well-known model of DeLone and McLean (2003), by connecting them as latent variables of its independent variables, information quality and IS quality.
Abstract:
The first strict processing performed with the Bernese scientific software, following the strictest internationally recommended processing standards, yielded a high-accuracy point field based on the integration and standardization of the data of a GPS network located in Costa Rica. This processing covered a total of 119 weeks of daily data, i.e. about 2.3 years, from January 2009 to April 2011, for a total of 30 GPS stations, of which 22 are located within the national territory of Costa Rica and 8 are international stations belonging to the network of the Geocentric System for the Americas (SIRGAS). The so-called semi-free solutions generated, week by week, a GPS network with high internal accuracy defined by the vectors between stations and the final coordinates of the satellite constellation. The weekly evaluation given by the repeatability of the solutions yielded average errors of 1.7 mm, 1.4 mm and 5.1 mm in the [n e u] components, confirming the high consistency of these solutions. Although the semi-free solutions have high internal accuracy, they cannot be used for kinematic analysis, because they lack a reference frame. In Latin America, the densification of the International Terrestrial Reference Frame (ITRF) is represented by the SIRGAS network of continuously operating GNSS stations, known as SIRGAS-CON. By means of the weekly final coordinates of the 8 stations used as ties, each of the 119 solutions was referred to the SIRGAS frame. Introducing the SIRGAS reference frame into the semi-free solutions deforms them; these deformations result from the kinematics of the plates on which the tie stations are located. After the weekly tie to the SIRGAS coordinates, the velocity vectors of all the stations were estimated, including those of the tie stations, whose velocities are known with high accuracy. To determine the velocities of the Costa Rican stations, a routine based on a least-squares adjustment was programmed in the MatLab environment. The values obtained in this project, compared with the official values, showed average differences of the order of 0.06 cm/yr, -0.08 cm/yr and -0.10 cm/yr for the [X Y Z] coordinates, respectively.
It was thus possible to determine the geocentric coordinates [X Y Z]^T and their temporal variations [vX vY vZ]^T for the set of 22 GPS stations of Costa Rica, within the IGS05 datum, reference epoch 2010.5. Although high accuracy was achieved for the geocentric coordinate vectors of the 22 stations, for some stations the computed velocities were not representative because of the relatively short time span (less than one year) of the data files. Under this premise, the eight stations located in the south of the country were excluded, which meant estimating the local velocity field with only twenty national stations plus three stations in Panama and one in Nicaragua. The algorithm used was Least Squares Collocation, which allows values to be estimated or interpolated from effectively known data; it was programmed as a routine in the MatLab environment. The resulting field was estimated with a resolution of 30' x 30' and is highly uniform, with an average resultant velocity of 2.58 cm/yr in a direction of 40.8° towards the north-east. This field was validated against the data of the VEMOS2009 model, recommended by SIRGAS. The average velocity differences for the stations used as input for the computation of the field were of the order of +0.63 cm/yr and +0.22 cm/yr for the velocity values in latitude and longitude, which indicates a good determination of the velocity values and of the empirical covariance function needed to apply the collocation method. Furthermore, the grid used as the basis for the interpolation gave differences of about -0.62 cm/yr and -0.12 cm/yr for latitude and longitude. Additionally, the results of this work were used as input for a first approximation of the boundary of the so-called Panama Block within the national territory of Costa Rica. The computation of the Euler pole components, by means of a MatLab routine applied to different combinations of points, contributed little to the physical definition of this boundary; the strategy merely confirmed the difference in direction of all the velocity vectors and did not reveal the location of this zone within Costa Rica in any greater detail.
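A minimal sketch of least squares collocation on a single velocity component: a Gaussian covariance model, whose parameters would in practice be fitted to the data as the empirical covariance function mentioned above, interpolates station values onto a regular grid. The coordinates, velocities and covariance parameters are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(10)

def gauss_cov(d, c0=1.0, dc=1.5):
    # Gaussian covariance model C(d) = c0 * exp(-(d/dc)^2); c0 and dc
    # would be fitted to the observed velocities in a real application.
    return c0 * np.exp(-(d / dc) ** 2)

def collocate(obs_xy, obs_val, grid_xy, noise=0.05):
    # Least squares collocation: prediction = C_pg (C_gg + N)^-1 l,
    # with covariances built from inter-point distances.
    d_obs = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    d_cross = np.linalg.norm(grid_xy[:, None] - obs_xy[None, :], axis=-1)
    C_gg = gauss_cov(d_obs) + noise**2 * np.eye(len(obs_xy))
    C_pg = gauss_cov(d_cross)
    return C_pg @ np.linalg.solve(C_gg, obs_val)

# Stand-in station coordinates (degrees) and north-velocity values.
stations = rng.uniform(0, 4, size=(24, 2))
v_north = np.sin(stations[:, 0]) + 0.1 * rng.standard_normal(24)

# 0.5 x 0.5 degree grid (the thesis used a 30' x 30' resolution).
gx, gy = np.meshgrid(np.arange(0, 4, 0.5), np.arange(0, 4, 0.5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
v_grid = collocate(stations, v_north, grid)
print(v_grid.reshape(gx.shape).round(2))
```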
Abstract:
The building sector has experienced a significant decline in recent years in Spain and Europe as a result of the financial crisis that began in 2007. This drop accompanies a low penetration of information and communication technologies in inter-organizational oriented business processes. The market decrease is causing a slowdown in the building sector, where only flexible small and medium enterprises (SMEs) survive thanks to specialization and innovation in services, which allow them to face new market demands. Inter-organizational information systems (IOISs) support innovation in services, and are thus a strategic tool for SMEs to obtain competitive advantage. Because of the inherent complexity of IOIS adoption, this research extends Kurnia and Johnston's (2000) theoretical model of IOIS adoption with an empirical model of IOIS characterization. The resultant model identifies the factors influencing IOIS adoption in SMEs in the building sector, to promote further service innovation for competitive and collaborative advantages. An empirical longitudinal study over six consecutive years using data from Spanish SMEs in the building sector validates the model, using the partial least squares technique and analyzing temporal stability. The main findings of this research are the four ways an IOIS might contribute to service innovation in the building sector. Namely: a) improving client interfaces and the link between service providers and end users; b) defining a specific market where SMEs can develop new service concepts; c) enhancing the service delivery system in traditional customer-supplier relationships; and d) introducing information and communication technologies and tools to improve information management.