Abstract:
Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but that they depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the player on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being covered safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality-robust standard errors was chosen to estimate parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (β(self-efficacy) = 0.133, p < .001, OR = 1.142; β(group-efficacy I) = 0.128, p < .001, OR = 1.137; β(offensive convenience) = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (β(group-efficacy I) = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (β(self-efficacy) = 0.135, p < .01; β(offensive convenience) = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players embedded in a team, this psychological landscape is constituted not only by probabilities referring to the potential and consequences of individual behavior, but also by those of the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informational basis is extended to aspects referring to the group level.
References: Lewin, K. (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). New York: Harper & Brothers.
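For readers unfamiliar with the link between the reported regression weights and odds ratios, the relation below is the standard one for a binomial model with a logit link; the worked number simply re-expresses the self-efficacy coefficient quoted in the abstract and is not taken from the paper itself.

\[
\Pr(\text{zone chosen}) = \frac{e^{\eta}}{1+e^{\eta}}, \qquad
\eta = \beta_0 + \beta_{\text{self-eff}}\,x_{\text{self-eff}} + \beta_{\text{group-eff I}}\,x_{\text{group-eff I}} + \beta_{\text{off.conv.}}\,x_{\text{off.conv.}},
\]
\[
\mathrm{OR} = e^{\beta}, \qquad e^{0.133} \approx 1.142,
\]

i.e., each one-point increase on the self-efficacy scale multiplies the odds of a zone being chosen by roughly 1.14, holding the other ratings constant.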
Abstract:
The relationship between change in myocardial infarction (MI) mortality rate (ICD codes 410, 411) and change in use of percutaneous transluminal coronary angioplasty (PTCA), adjusted for change in MI hospitalization rates and for change in use of aortocoronary bypass surgery (ACBS), was examined from 1985 through 1990 at private hospitals in the biethnic community of Nueces County, Texas, site of the Corpus Christi Heart Project, a major coronary heart disease (CHD) surveillance program. Age-adjusted rates (per 100,000 persons) were calculated for each of these CHD events for the population aged 25 through 74 years and for each of the four major sex-ethnic groups: Mexican-American and Non-Hispanic White women and men. Over this six-year period, there were 541 MI deaths, 2358 MI hospitalizations, 816 PTCA hospitalizations, and 920 ACBS hospitalizations among Mexican-American and Non-Hispanic White Nueces County residents. Acute MI mortality decreased from 24.7 in the first quarter of 1985 to 12.1 in the fourth quarter of 1990, a 51.2% decrease. All three hospitalization rates increased: the MI hospitalization rate increased from 44.1 to 61.3, a 38.9% increase; PTCA use increased from 7.1 to 23.2, a 228.0% increase; and ACBS use increased from 18.8 to 29.5, a 56.6% increase. In linear regression analyses, the change in MI mortality rate was negatively associated with the change in PTCA use (β = −0.266 ± 0.103, p = 0.017) but was not associated with the changes in MI hospitalization rate and in ACBS use. The results of this ecologic research support the idea that the increasing use of PTCA, but not ACBS, has been associated with decreases in MI mortality. The contrast in associations between these two revascularization procedures and MI mortality highlights the need for research aimed at clarifying the proper roles of these procedures in the treatment of patients with CHD. The association between change in PTCA use and change in MI mortality supports the idea that some changes in medical treatment may be partially responsible for trends in CHD mortality. Differences in the use of therapies such as PTCA may be related to differences between geographical sites in CHD rates and trends.
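A minimal sketch of the kind of ecologic regression described above, assuming quarterly age-adjusted rates are available in a table; the variable names, the synthetic numbers, and the use of statsmodels are illustrative assumptions and not the study's actual code or data.

```python
# Illustrative only: regress quarterly change in MI mortality on changes in
# PTCA use, MI hospitalization, and ACBS use (synthetic numbers, not study data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
quarters = 24  # 1985 Q1 .. 1990 Q4
df = pd.DataFrame({
    "d_ptca": rng.normal(0.7, 0.5, quarters),     # quarterly change in PTCA rate
    "d_mi_hosp": rng.normal(0.7, 1.0, quarters),  # quarterly change in MI hospitalization rate
    "d_acbs": rng.normal(0.4, 0.8, quarters),     # quarterly change in ACBS rate
})
# Synthetic outcome: change in MI mortality, negatively related to the PTCA change
df["d_mi_mort"] = -0.27 * df["d_ptca"] + rng.normal(0, 0.6, quarters)

X = sm.add_constant(df[["d_ptca", "d_mi_hosp", "d_acbs"]])
model = sm.OLS(df["d_mi_mort"], X).fit()
print(model.summary())  # the d_ptca coefficient plays the role of beta = -0.266 in the abstract
```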
Abstract:
Transcriptional enhancers are genomic DNA sequences that contain clustered transcription factor (TF) binding sites. When combinations of TFs bind to enhancer sequences they act together with basal transcriptional machinery to regulate the timing, location and quantity of gene transcription. Elucidating the genetic mechanisms responsible for differential gene expression, including the role of enhancers, during embryological and postnatal development is essential to an understanding of evolutionary processes and disease etiology. Numerous methods are in use to identify and characterize enhancers. Several high-throughput methods generate large datasets of enhancer sequences with putative roles in embryonic development. However, few enhancers have been deleted from the genome to determine their roles in the development of specific structures, such as the limb. Manipulation of enhancers at their endogenous loci, such as the deletion of such elements, leads to a better understanding of the regulatory interactions, rules and complexities that contribute to faithful and variant gene transcription – the molecular genetic substrate of evolution and disease. To understand the endogenous roles of two distinct enhancers known to be active in the mouse embryo limb bud, we deleted them from the mouse genome. I hypothesized that deletion of these enhancers would lead to aberrant limb development. The enhancers were selected because of their association with p300, a protein associated with active transcription, and because the human enhancer sequences drive distinct lacZ expression patterns in limb buds of embryonic day (E) 11.5 transgenic mice. To confirm that the orthologous mouse enhancers, mouse 280 and 1442 (M280 and M1442, respectively), regulate expression in the developing limb, we generated stable transgenic lines and examined lacZ expression. In M280-lacZ mice, expression was detected in E11.5 fore- and hindlimbs in a region that corresponds to digits II-IV. M1442-lacZ mice exhibited lacZ expression in posterior and anterior margins of the fore- and hindlimbs that overlapped with digits I and V and several wrist bones. We generated mice lacking the M280 and M1442 enhancers by gene targeting. Intercrosses between M280 -/+ and M1442 -/+, respectively, generated M280 and M1442 null mice, which are born at expected Mendelian ratios and manifest no gross limb malformations. Quantitative real-time PCR of mutant E11.5 limb buds indicated that significant changes in the transcriptional output of enhancer-proximal genes accompanied the deletion of both M280 and M1442. In neonatal null mice we observed that all limb bones are present in their expected positions, an observation also confirmed by histology of E18.5 distal limbs. Fine-scale measurement of E18.5 digit bone lengths found no differences between mutant and control embryos. Furthermore, when the developmental progression of cartilaginous elements was analyzed in M280 and M1442 embryos from E13.5-E15.5, no transient developmental defects were detected. These results demonstrate that M280 and M1442 are not required for mouse limb development. Though M280 is not required for embryonic limb development, it is required for the development and/or maintenance of body size – adult M280 mice are significantly smaller than control littermates. These studies highlight the importance of experiments that manipulate enhancers in situ to understand their contribution to development.
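The abstract reports quantitative real-time PCR changes in enhancer-proximal gene expression; the comparative threshold-cycle relation below is the most common way such changes are quantified and is given here as background only. The thesis may have used a different quantification scheme (e.g., a standard-curve method).

\[
\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{mutant}} - \Delta C_t^{\text{control}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_t}.
\]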
Abstract:
This paper addresses the dynamics of world wine consumption over the past 50 years in 26 countries, verifying whether or not there is a macro-tendency towards a common consumption style, despite differences in taxation, economic policies and distribution systems among countries. From an empirical point of view, the σ and β convergence hypotheses were formally tested. Model results confirm the existence of both types of convergence. Per capita consumption of wine first experienced a reduction in differences between countries and then converged toward a central value. "Traditional" countries, with historically high levels of consumption, showed a decrease in wine consumption, while emerging countries with historically lower consumption levels showed an increase. These findings not only provide further support to the theory of international convergence of wine consumption on a volume basis, as already observed by other researchers in the European market, but they also offer support for the theory in major world markets. Furthermore, convergence appears to be happening not only at a quantitative level but also at a qualitative level, and this phenomenon may very well reflect the changing tastes of worldwide consumers towards a generalized structure of wine consumption.
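As a point of reference, σ- and β-convergence are usually operationalized as below; this is the textbook formulation, not necessarily the exact specification estimated in the paper.

\[
\sigma\text{-convergence:}\quad \sigma_t = \operatorname{sd}_i\!\left(\ln c_{i,t}\right)\ \text{declines over time},
\]
\[
\beta\text{-convergence:}\quad \frac{1}{T}\ln\frac{c_{i,t_0+T}}{c_{i,t_0}} = \alpha + \beta \ln c_{i,t_0} + \varepsilon_i, \qquad \beta < 0,
\]

where c_{i,t} is per capita wine consumption in country i at time t; a significantly negative β indicates that countries with low initial consumption catch up with, and high-consumption countries fall toward, a common level.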
Abstract:
Inter-individual variation in diet within generalist animal populations is thought to be a widespread phenomenon but its potential causes are poorly known. Inter-individual variation can be amplified by the availability and use of allochthonous resources, i.e., resources coming from spatially distinct ecosystems. Using a wild population of arctic fox as a study model, we tested hypotheses that could explain variation in both population and individual isotopic niches, used here as a proxy for the trophic niche. The arctic fox is an opportunistic forager, dwelling in terrestrial and marine environments characterized by strong spatial (arctic-nesting birds) and temporal (cyclic lemmings) fluctuations in resource abundance. First, we tested the hypothesis that generalist foraging habits, in association with temporal variation in prey accessibility, should induce temporal changes in isotopic niche width and diet. Second, we investigated whether within-population variation in the isotopic niche could be explained by individual characteristics (sex and breeding status) and environmental factors (spatiotemporal variation in prey availability). We addressed these questions using isotopic analysis and Bayesian mixing models in conjunction with linear mixed-effects models. We found that: i) arctic fox populations can simultaneously undergo short-term (i.e., within a few months) reductions in both isotopic niche width and inter-individual variability in isotopic ratios, ii) individual isotopic ratios were higher and more representative of a marine-based diet for non-breeding than breeding foxes early in spring, and iii) lemming population cycles did not appear to directly influence the diet of individual foxes after taking their breeding status into account. However, lemming abundance was correlated with the proportion of breeding foxes, and could thus indirectly affect the diet at the population scale.
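For context, the stable-isotope mixing models mentioned above are typically built on mass-balance equations of the form below (shown for two tracers and k prey sources); the specific priors, trophic discrimination factors, and source groupings used in the study are not reproduced here.

\[
\delta^{13}\mathrm{C}_{\text{fox}} = \sum_{j=1}^{k} p_j\left(\delta^{13}\mathrm{C}_j + \Delta^{13}\mathrm{C}\right), \qquad
\delta^{15}\mathrm{N}_{\text{fox}} = \sum_{j=1}^{k} p_j\left(\delta^{15}\mathrm{N}_j + \Delta^{15}\mathrm{N}\right), \qquad
\sum_{j=1}^{k} p_j = 1,
\]

where p_j is the estimated dietary proportion of source j (e.g., lemmings, arctic-nesting birds, marine prey) and Δ denotes the trophic discrimination factor; Bayesian mixing models place priors on the p_j and propagate the isotopic uncertainty of sources and consumers.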
Abstract:
Recently, Poland and Hungary have maintained steady economic growth rates. Money supplies are growing rather rapidly in these economies, and exchange rates have broadly tended to depreciate. Exports and prices have shown steady growth. Per capita GDPs are at a similar level and the two countries are at similar stages of development. It is assumed that the two economies share the same export market and that their export goods compete in it. If one country expands its monetary policy, its price level rises and its interest rate falls; the exchange rate then depreciates, and exports and GDP increase through this mechanism. At the same time, this monetary expansion affects the other country through trade. This mutual relationship between the two countries can be expressed as a Nash equilibrium in game theory. In this paper, macro-econometric models of the Polish and Hungarian economies are built and the Nash equilibrium is introduced into them.
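A stylized sketch of the game-theoretic layer described above: two countries choose a rate of monetary expansion, each country's payoff depends on both choices (its own expansion boosts output, the partner's expansion erodes its export advantage, and large expansions carry an inflation cost), and the Nash equilibrium is found by iterating best responses on a grid. The payoff form and parameter values are invented for illustration and are not the paper's estimated macro-econometric models.

```python
# Stylized two-country monetary policy game solved by iterated best responses.
# Payoffs are illustrative quadratic forms, not the estimated Poland/Hungary models.
import numpy as np

grid = np.linspace(0.0, 10.0, 201)  # candidate money-supply growth rates (%)

def payoff(own, other, a=1.0, b=0.4, c=0.08, d=0.05):
    # Own expansion helps, the partner's expansion hurts (shared export market),
    # an interaction term couples the two choices, and expansion has an inflation cost.
    return a * own - b * other - c * own**2 - d * own * other

def best_response(other):
    # Best reply of one country given the other's money-supply growth rate.
    return grid[np.argmax(payoff(grid, other))]

m_pl, m_hu = 5.0, 5.0  # initial guesses
for _ in range(100):
    new_pl, new_hu = best_response(m_hu), best_response(m_pl)
    if abs(new_pl - m_pl) < 1e-9 and abs(new_hu - m_hu) < 1e-9:
        break  # neither country wants to deviate: Nash equilibrium reached
    m_pl, m_hu = new_pl, new_hu

print(f"Illustrative Nash equilibrium: Poland {m_pl:.2f}%, Hungary {m_hu:.2f}%")
```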
Abstract:
The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
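Schematically, the problem described above can be written as follows; the notation is generic rather than the paper's own, and the exact form of the matrices involved is an assumption made for illustration.

\[
s_j[n] = (f * h_j)(nT), \quad j = 1,\dots,s, \qquad
f(t) = \sum_{j=1}^{s}\sum_{n\in\mathbb{Z}} s_j[n]\, S_j(t - nT),
\]

and, after transferring the problem to the polynomial (Z-transform) domain, compactly supported reconstruction functions S_j exist whenever the polynomial matrix G(z) associated with the sampling scheme admits a polynomial left inverse,

\[
A(z)\,G(z) = I.
\]

With oversampling (more channels than strictly necessary) G(z) is rectangular and, for a suitable choice of the sampling period, takes the form of a matrix pencil G(z) = G_0 + z\,G_1, which is what makes the computation of A(z), and hence of the FIR reconstruction filters, tractable.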
Abstract:
This paper presents a physically cogent model for electrical noise in resistors obtained on thermodynamic grounds. The new model, derived from the work of Johnson and Nyquist, also agrees with the quantum model for noisy systems developed by Callen and Welton in 1951, thus unifying the two physical viewpoints. It is a complex, 2-D noise model based on an admittance that considers both fluctuation and dissipation of electrical energy, improving on the real, 1-D model currently in use, which considers only dissipation. The new model is presented in the frequency domain through two orthogonal currents linked to a common noise voltage by an admittance function. Its use in the time domain reveals the pitfall behind a paradox of statistical mechanics concerning systems that are considered energy-conserving and deterministic on the microscale yet dissipative and unpredictable on the macroscale, and it also shows how to apply the Fluctuation-Dissipation Theorem properly.
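For reference, the classical and quantum expressions the abstract alludes to are, in one common one-sided convention (the paper's own notation and normalization may differ):

\[
S_V(f) = 4k_BTR \quad \text{(Johnson–Nyquist)}, \qquad
S_V(f) = 2hfR\,\coth\!\left(\frac{hf}{2k_BT}\right) \quad \text{(Callen–Welton)},
\]

which reduces to the Nyquist result for hf ≪ k_BT; an admittance-based generalization of the kind described above replaces the pure resistance by the real part of an impedance or admittance, e.g.

\[
S_I(f) = 4k_BT\,\operatorname{Re}\,Y(f).
\]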
Abstract:
Overhead rail current collector systems for railway traction offer certain features, such as low installation height and reduced maintenance, which make them particularly suitable for use in underground train infrastructures. Due to the increased demands of modern catenary systems and the higher running speeds of new vehicles, a more capable design of the conductor rail is needed. A new overhead conductor rail has been developed and its design has been patented [13]. Modern simulation and modelling techniques were used in the development approach. The new conductor rail profile has a dynamic behaviour superior to that of the system currently in use. Its innovative design permits either an increase of catenary support spacing or a higher vehicle running speed. Both options ensure savings in installation or operating costs. The simulation model used to optimise the existing conductor rail profile included both a finite element model of the catenary and a three-dimensional multi-body system model of the pantograph. The contact force that appears between pantograph and catenary was obtained in simulation. A sensitivity analysis of the key parameters that influence catenary dynamics was carried out, finally leading to the improved design.
Abstract:
El cálculo de relaciones binarias fue creado por De Morgan en 1860 para ser posteriormente desarrollado en gran medida por Peirce y Schröder. Tarski, Givant, Freyd y Scedrov demostraron que las álgebras relacionales son capaces de formalizar la lógica de primer orden, la lógica de orden superior así como la teoría de conjuntos. A partir de los resultados matemáticos de Tarski y Freyd, esta tesis desarrolla semánticas denotacionales y operacionales para la programación lógica con restricciones usando el álgebra relacional como base. La idea principal es la utilización del concepto de semántica ejecutable, semánticas cuya característica principal es el que la ejecución es posible utilizando el razonamiento estándar del universo semántico, este caso, razonamiento ecuacional. En el caso de este trabajo, se muestra que las álgebras relacionales distributivas con un operador de punto fijo capturan toda la teoría y metateoría estándar de la programación lógica con restricciones incluyendo los árboles utilizados en la búsqueda de demostraciones. La mayor parte de técnicas de optimización de programas, evaluación parcial e interpretación abstracta pueden ser llevadas a cabo utilizando las semánticas aquí presentadas. La demostración de la corrección de la implementación resulta extremadamente sencilla. En la primera parte de la tesis, un programa lógico con restricciones es traducido a un conjunto de términos relacionales. La interpretación estándar en la teoría de conjuntos de dichas relaciones coincide con la semántica estándar para CLP. Las consultas contra el programa traducido son llevadas a cabo mediante la reescritura de relaciones. Para concluir la primera parte, se demuestra la corrección y equivalencia operacional de esta nueva semántica, así como se define un algoritmo de unificación mediante la reescritura de relaciones. La segunda parte de la tesis desarrolla una semántica para la programación lógica con restricciones usando la teoría de alegorías—versión categórica del álgebra de relaciones—de Freyd. Para ello, se definen dos nuevos conceptos de Categoría Regular de Lawvere y _-Alegoría, en las cuales es posible interpretar un programa lógico. La ventaja fundamental que el enfoque categórico aporta es la definición de una máquina categórica que mejora e sistema de reescritura presentado en la primera parte. Gracias al uso de relaciones tabulares, la máquina modela la ejecución eficiente sin salir de un marco estrictamente formal. Utilizando la reescritura de diagramas, se define un algoritmo para el cálculo de pullbacks en Categorías Regulares de Lawvere. Los dominios de las tabulaciones aportan información sobre la utilización de memoria y variable libres, mientras que el estado compartido queda capturado por los diagramas. La especificación de la máquina induce la derivación formal de un juego de instrucciones eficiente. El marco categórico aporta otras importantes ventajas, como la posibilidad de incorporar tipos de datos algebraicos, funciones y otras extensiones a Prolog, a la vez que se conserva el carácter 100% declarativo de nuestra semántica. ABSTRACT The calculus of binary relations was introduced by De Morgan in 1860, to be greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first order logic, higher order logic and set theory, among other formal systems. 
Building on those mathematical results, we develop denotational and operational semantics for Constraint Logic Programming using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and technical foundation. We call a semantics executable when program execution can be carried out using the regular theory and tools that define the semantic universe. Throughout this work, the use of pure algebraic reasoning is the basis of denotational and operational results, eliminating all the classical non-equational meta-theory associated to traditional semantics for Logic Programming. All algebraic reasoning, including execution, is performed in an algebraic way, to the point we could state that the denotational semantics of a CLP program is directly executable. Techniques like optimization, partial evaluation and abstract interpretation find a natural place in our algebraic models. Other properties, like correctness of the implementation or program transformation are easy to check, as they are carried out using instances of the general equational theory. In the first part of the work, we translate Constraint Logic Programs to binary relations in a modified version of the distributive relation algebras used by Tarski. Execution is carried out by a rewriting system. We prove adequacy and operational equivalence of the semantics. In the second part of the work, the relation algebraic approach is improved by using allegory theory, a categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture the number of logical variables used by a predicate or program state in a declarative way. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation. Thus, we develop a categorical machine directly from the semantics. The machine is based on relation composition, with a pullback calculation algorithm at its core. The algorithm is defined with the help of a notion of diagram rewriting. In this operational interpretation, types represent information about memory allocation and the execution mechanism is more efficient, thanks to the faithful representation of shared state by categorical projections. We finish the work by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, like abstract data types, and strict and lazy functions.
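To give a flavour of the relational reading of logic programs described above, a standard textbook example is the transitive-closure program; the exact translation scheme used in the thesis (including typing and constraint handling) may differ in its details. The clauses path(X,Y) :- edge(X,Y). and path(X,Y) :- edge(X,Z), path(Z,Y). read, with E and P the binary relations denoted by edge and path, as the least fixed point

\[
P \;=\; \mu X.\,\bigl(E \,\cup\, (E \,;\, X)\bigr),
\]

where ; is relational composition and ∪ is union; answering a query then amounts to rewriting such relational terms using the equational laws of the distributive relation algebra extended with the fixed-point operator.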
Abstract:
The development of a global instability analysis code is presented, coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes. The key advantage of the time-stepping method over matrix-formation approaches is that the former circumvents the computer-storage issues associated with the latter methodology. To date both approaches have been successfully used to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal eigenvalue problem (EVP), and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability has been successfully developed, based on finite-volume spatial discretization and the time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
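The essence of the time-stepping (matrix-free) approach is to extract the leading eigenvalues from the action of the flow propagator rather than from an explicitly assembled Jacobian. Below is a minimal sketch, using a small linear model problem in place of a flow solver and SciPy's implicitly restarted Arnoldi solver; the propagator here is formed with a matrix exponential purely for illustration, whereas in practice it would be the result of time-integrating the linearized equations with the aerodynamics code.

```python
# Matrix-free (time-stepping) global stability sketch: Arnoldi on the propagator
# exp(A*T) instead of on the Jacobian A itself.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import LinearOperator, eigs

n, T = 200, 0.5
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stand-in for the linearized operator

phi = expm(A * T)  # stand-in propagator; a real code would time-march the linearized equations

def apply_propagator(v):
    # In a flow solver this would be: integrate the linearized equations over time T
    # starting from perturbation v, and return the perturbation at time T.
    return phi @ v

op = LinearOperator((n, n), matvec=apply_propagator)
mu, V = eigs(op, k=6, which="LM")   # leading propagator (Floquet-like) multipliers
lam = np.log(mu) / T                # corresponding global-mode eigenvalues
print("leading growth rates:", np.sort(lam.real)[::-1])
```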
Abstract:
Las Tecnologías de la Información y la Comunicación en general e Internet en particular han supuesto una revolución en nuestra forma de comunicarnos, relacionarnos, producir, comprar y vender acortando tiempo y distancias entre proveedores y consumidores. A la paulatina penetración del ordenador, los teléfonos inteligentes y la banda ancha fija y/o móvil ha seguido un mayor uso de estas tecnologías entre ciudadanos y empresas. El comercio electrónico empresa–consumidor (B2C) alcanzó en 2010 en España un volumen de 9.114 millones de euros, con un incremento del 17,4% respecto al dato registrado en 2009. Este crecimiento se ha producido por distintos hechos: un incremento en el porcentaje de internautas hasta el 65,1% en 2010 de los cuales han adquirido productos o servicios a través de la Red un 43,1% –1,6 puntos porcentuales más respecto a 2010–. Por otra parte, el gasto medio por comprador ha ascendido a 831€ en 2010, lo que supone un incremento del 10,9% respecto al año anterior. Si segmentamos a los compradores según por su experiencia anterior de compra podemos encontrar dos categorías: el comprador novel –que adquirió por primera vez productos o servicios en 2010– y el comprador constante –aquel que había adquirido productos o servicios en 2010 y al menos una vez en años anteriores–. El 85,8% de los compradores se pueden considerar como compradores constantes: habían comprado en la Red en 2010, pero también lo habían hecho anteriormente. El comprador novel tiene un perfil sociodemográfico de persona joven de entre 15–24 años, con estudios secundarios, de clase social media y media–baja, estudiante no universitario, residente en poblaciones pequeñas y sigue utilizando fórmulas de pago como el contra–reembolso (23,9%). Su gasto medio anual ascendió en 2010 a 449€. El comprador constante, o comprador que ya había comprado en Internet anteriormente, tiene un perfil demográfico distinto: estudios superiores, clase alta, trabajador y residente en grandes ciudades, con un comportamiento maduro en la compra electrónica dada su mayor experiencia –utiliza con mayor intensidad canales exclusivos en Internet que no disponen de tienda presencial–. Su gasto medio duplica al observado en compradores noveles (con una media de 930€ anuales). Por tanto, los compradores constantes suponen una mayoría de los compradores con un gasto medio que dobla al comprador que ha adoptado el medio recientemente. Por consiguiente es de interés estudiar los factores que predicen que un internauta vuelva a adquirir un producto o servicio en la Red. La respuesta a esta pregunta no se ha revelado sencilla. En España, la mayoría de productos y servicios aún se adquieren de manera presencial, con una baja incidencia de las ventas a distancia como la teletienda, la venta por catálogo o la venta a través de Internet. Para dar respuesta a las preguntas planteadas se ha investigado desde distintos puntos de vista: se comenzará con un estudio descriptivo desde el punto de vista de la demanda que trata de caracterizar la situación del comercio electrónico B2C en España, poniendo el foco en las diferencias entre los compradores constantes y los nuevos compradores. Posteriormente, la investigación de modelos de adopción y continuidad en el uso de las tecnologías y de los factores que inciden en dicha continuidad –con especial interés en el comercio electrónico B2C–, permiten afrontar el problema desde la perspectiva de las ecuaciones estructurales pudiendo también extraer conclusiones de tipo práctico. 
Este trabajo sigue una estructura clásica de investigación científica: en el capítulo 1 se introduce el tema de investigación, continuando con una descripción del estado de situación del comercio electrónico B2C en España utilizando fuentes oficiales (capítulo 2). Posteriormente se desarrolla el marco teórico y el estado del arte de modelos de adopción y de utilización de las tecnologías (capítulo 3) y de los factores principales que inciden en la adopción y continuidad en el uso de las tecnologías (capítulo 4). El capítulo 5 desarrolla las hipótesis de la investigación y plantea los modelos teóricos. Las técnicas estadísticas a utilizar se describen en el capítulo 6, donde también se analizan los resultados empíricos sobre los modelos desarrollados en el capítulo 5. El capítulo 7 expone las principales conclusiones de la investigación, sus limitaciones y propone nuevas líneas de investigación. La primera parte corresponde al capítulo 1, que introduce la investigación justificándola desde un punto de vista teórico y práctico. También se realiza una breve introducción a la teoría del comportamiento del consumidor desde una perspectiva clásica. Se presentan los principales modelos de adopción y se introducen los modelos de continuidad de utilización que se estudiarán más detalladamente en el capítulo 3. En este capítulo se desarrollan los objetivos principales y los objetivos secundarios, se propone el mapa mental de la investigación y se planifican en un cronograma los principales hitos del trabajo. La segunda parte corresponde a los capítulos dos, tres y cuatro. En el capítulo 2 se describe el comercio electrónico B2C en España utilizando fuentes secundarias. Se aborda un diagnóstico del sector de comercio electrónico y su estado de madurez en España. Posteriormente, se analizan las diferencias entre los compradores constantes, principal interés de este trabajo, frente a los compradores noveles, destacando las diferencias de perfiles y usos. Para los dos segmentos se estudian aspectos como el lugar de acceso a la compra, la frecuencia de compra, los medios de pago utilizados o las actitudes hacia la compra. El capítulo 3 comienza desarrollando los principales conceptos sobre la teoría del comportamiento del consumidor, para continuar estudiando los principales modelos de adopción de tecnología existentes, analizando con especial atención su aplicación en comercio electrónico. Posteriormente se analizan los modelos de continuidad en el uso de tecnologías (Teoría de la Confirmación de Expectativas; Teoría de la Justicia), con especial atención de nuevo a su aplicación en el comercio electrónico. Una vez estudiados los principales modelos de adopción y continuidad en el uso de tecnologías, el capítulo 4 analiza los principales factores que se utilizan en los modelos: calidad, valor, factores basados en la confirmación de expectativas –satisfacción, utilidad percibida– y factores específicos en situaciones especiales –por ejemplo, tras una queja– como pueden ser la justicia, las emociones o la confianza. La tercera parte –que corresponde al capítulo 5– desarrolla el diseño de la investigación y la selección muestral de los modelos. En la primera parte del capítulo se enuncian las hipótesis –que van desde lo general a lo particular, utilizando los factores específicos analizados en el capítulo 4– para su posterior estudio y validación en el capítulo 6 utilizando las técnicas estadísticas apropiadas. 
A partir de las hipótesis, y de los modelos y factores estudiados en los capítulos 3 y 4, se definen y vertebran dos modelos teóricos originales que den respuesta a los retos de investigación planteados en el capítulo 1. En la segunda parte del capítulo se diseña el trabajo empírico de investigación definiendo los siguientes aspectos: alcance geográfico–temporal, tipología de la investigación, carácter y ambiente de la investigación, fuentes primarias y secundarias utilizadas, técnicas de recolección de datos, instrumentos de medida utilizados y características de la muestra utilizada. Los resultados del trabajo de investigación constituyen la cuarta parte de la investigación y se desarrollan en el capítulo 6, que comienza analizando las técnicas estadísticas basadas en Modelos de Ecuaciones Estructurales. Se plantean dos alternativas, modelos confirmatorios correspondientes a Métodos Basados en Covarianzas (MBC) y modelos predictivos. De forma razonada se eligen las técnicas predictivas dada la naturaleza exploratoria de la investigación planteada. La segunda parte del capítulo 6 desarrolla el análisis de los resultados de los modelos de medida y modelos estructurales construidos con indicadores formativos y reflectivos y definidos en el capítulo 4. Para ello se validan, sucesivamente, los modelos de medida y los modelos estructurales teniendo en cuenta los valores umbrales de los parámetros estadísticos necesarios para la validación. La quinta parte corresponde al capítulo 7, que desarrolla las conclusiones basándose en los resultados del capítulo 6, analizando los resultados desde el punto de vista de las aportaciones teóricas y prácticas, obteniendo conclusiones para la gestión de las empresas. A continuación, se describen las limitaciones de la investigación y se proponen nuevas líneas de estudio sobre distintos temas que han ido surgiendo a lo largo del trabajo. Finalmente, la bibliografía recoge todas las referencias utilizadas a lo largo de este trabajo. Palabras clave: comprador constante, modelos de continuidad de uso, continuidad en el uso de tecnologías, comercio electrónico, B2C, adopción de tecnologías, modelos de adopción tecnológica, TAM, TPB, IDT, UTAUT, ECT, intención de continuidad, satisfacción, confianza percibida, justicia, emociones, confirmación de expectativas, calidad, valor, PLS. ABSTRACT Information and Communication Technologies in general, but more specifically those related to the Internet in particular, have changed the way in which we communicate, relate to one another, produce, and buy and sell products, reducing the time and shortening the distance between suppliers and consumers. The steady breakthrough of computers, Smartphones and landline and/or wireless broadband has been greatly reflected in its large scale use by both individuals and businesses. Business–to–consumer (B2C) e–commerce reached a volume of 9,114 million Euros in Spain in 2010, representing a 17.4% increase with respect to the figure in 2009. This growth is due in part to two different facts: an increase in the percentage of web users to 65.1% en 2010, 43.1% of whom have acquired products or services through the Internet– which constitutes 1.6 percentage points higher than 2010. On the other hand, the average spending by individual buyers rose to 831€ en 2010, constituting a 10.9% increase with respect to the previous year. 
If we select buyers according to whether or not they have previously made some type of purchase, we can divide them into two categories: the novice buyer–who first made online purchases in 2010– and the experienced buyer: who also made purchases in 2010, but had done so previously as well. The socio–demographic profile of the novice buyer is that of a young person between 15–24 years of age, with secondary studies, middle to lower–middle class, and a non–university educated student who resides in smaller towns and continues to use payment methods such as cash on delivery (23.9%). In 2010, their average purchase grew to 449€. The more experienced buyer, or someone who has previously made purchases online, has a different demographic profile: highly educated, upper class, resident and worker in larger cities, who exercises a mature behavior when making online purchases due to their experience– this type of buyer frequently uses exclusive channels on the Internet that don’t have an actual store. His or her average purchase doubles that of the novice buyer (with an average purchase of 930€ annually.) That said, the experienced buyers constitute the majority of buyers with an average purchase that doubles that of novice buyers. It is therefore of interest to study the factors that help to predict whether or not a web user will buy another product or use another service on the Internet. The answer to this question has proven not to be so simple. In Spain, the majority of goods and services are still bought in person, with a low amount of purchases being made through means such as the Home Shopping Network, through catalogues or Internet sales. To answer the questions that have been posed here, an investigation has been conducted which takes into consideration various viewpoints: it will begin with a descriptive study from the perspective of the supply and demand that characterizes the B2C e–commerce situation in Spain, focusing on the differences between experienced buyers and novice buyers. Subsequently, there will be an investigation concerning the technology acceptance and continuity of use of models as well as the factors that have an effect on their continuity of use –with a special focus on B2C electronic commerce–, which allows for a theoretic approach to the problem from the perspective of the structural equations being able to reach practical conclusions. This investigation follows the classic structure for a scientific investigation: the subject of the investigation is introduced (Chapter 1), then the state of the B2C e–commerce in Spain is described citing official sources of information (Chapter 2), the theoretical framework and state of the art of technology acceptance and continuity models are developed further (Chapter 3) and the main factors that affect their acceptance and continuity (Chapter 4). Chapter 5 explains the hypothesis behind the investigation and poses the theoretical models that will be confirmed or rejected partially or completely. In Chapter 6, the technical statistics that will be used are described briefly as well as an analysis of the empirical results of the models put forth in Chapter 5. Chapter 7 explains the main conclusions of the investigation, its limitations and proposes new projects. First part of the project, chapter 1, introduces the investigation, justifying it from a theoretical and practical point of view. It is also a brief introduction to the theory of consumer behavior from a standard perspective. 
Technology acceptance models are presented and then continuity and repurchase models are introduced, which are studied in more depth in Chapter 3. In this chapter, both the main and the secondary objectives are developed through a mind map and a timetable which highlights the milestones of the project. The second part of the project corresponds to Chapters Two, Three and Four. Chapter 2 describes B2C e–commerce in Spain from the perspective of its demand, citing secondary official sources. A diagnosis concerning the e–commerce sector and the status of its maturity in Spain is taken on, as well as the barriers and alternative methods of e–commerce. Subsequently, the differences between experienced buyers, which are of particular interest to this project, and novice buyers are analyzed, highlighting the differences between their profiles and their main transactions. In order to study both groups, aspects such as the place of purchase, the frequency with which online purchases are made, the payment methods used and the attitudes of the purchasers concerning online purchases are taken into consideration. Chapter 3 begins by developing the main concepts of consumer behavior theory and then continues with the study of the main existing acceptance models (among others, TPB, TAM, IDT, UTAUT and other models derived from them), paying special attention to their application in e–commerce. Subsequently, the models of technology reuse are analyzed (CDT, ECT; Theory of Justice), focusing again specifically on their application in e–commerce. Once the main technology acceptance and reuse models have been studied, Chapter 4 analyzes the main factors that are used in these models: quality, value, factors based on the confirmation of expectations (satisfaction, perceived usefulness) and specific factors pertaining to special situations (for example, after a complaint), such as justice, emotions or trust. The third part, which appears in Chapter 5, develops the plan for the investigation and the sample selection for the models that have been designed. In the first section of the chapter, the hypotheses are presented, moving from general ideas to more specific ones based on the detailed factors analyzed in Chapter 4, for their later study and validation in Chapter 6 with the appropriate statistical techniques. Based on the hypotheses and the models and factors studied in Chapters 3 and 4, two original theoretical models are defined and organized in order to answer the questions posed in Chapter 1. In the second part of the chapter, the empirical investigation is designed, defining the following aspects: geographic–temporal scope, type of investigation, nature and setting of the investigation, primary and secondary sources used, data gathering methods, measurement instruments used and characteristics of the sample. The results of the project constitute the fourth part of the investigation and are developed in Chapter 6, which begins by analyzing the statistical techniques based on Structural Equation Models. Two alternatives are put forth: confirmatory models, corresponding to Methods Based on Covariance (MBC), and predictive models (Methods Based on Components). In a well-reasoned manner, the predictive techniques are chosen given the exploratory nature of the investigation.
The second part of Chapter 6 explains the results of the analysis of the measurement models and structural models built from the formative and reflective indicators defined in Chapter 4. In order to do so, the measurement models and the structural models are validated one by one, taking into account the threshold values of the statistical parameters required for their validation. The fifth part corresponds to Chapter 7, which presents the conclusions of the study, basing them on the results found in Chapter 6 and analyzing them from the perspective of the theoretical and practical contributions, and consequently deriving conclusions for business management. The limitations of the investigation are then described and new lines of research on various topics that arose during the project are proposed. Lastly, all of the references used during the project are listed in a final bibliography. Key Words: constant buyer, repurchase models, continuity of use of technology, e–commerce, B2C, technology acceptance, technology acceptance models, TAM, TPB, IDT, UTAUT, ECT, repurchase intention, satisfaction, perceived trust, justice, emotions, confirmation of expectations, quality, value, PLS.
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within the industry to model fluid flow phenomena. Several fluid flow model equations have been employed in recent decades to simulate and predict forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be struck for engineering applications. A discretisation technique widely used within the industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, adaptive methods currently in use still leave an open question: how to efficiently drive the adaptation? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been made in developing numerical error-based sensors, like for instance adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation (a schematic definition is sketched after this abstract). It consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec.
1.2 introduces the definitions of errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes (RANS) equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
Hoy en día, la Dinámica de Fluidos Computacional (CFD) es ampliamente utilizada dentro de la industria para obtener información sobre fenómenos fluidos. La Dinámica de Fluidos Computacional considera distintas modelizaciones de las ecuaciones fluidas (Potencial, Euler, Navier-Stokes, etc) para simular y predecir las fuerzas que actúan, por ejemplo, sobre una configuración de aeronave. El tiempo de cálculo y la precisión en la solución dependen en gran medida de los modelos utilizados, así como de la dimensión espacial del problema considerado. Mientras que modelos simples basados en flujos perfectos, como modelos de flujos potenciales, se pueden resolver rápidamente, por lo general adolecen de una baja precisión a la hora de simular flujos reales (viscosos, transónicos, etc). Por otro lado, modelos más complejos tales como el conjunto de ecuaciones de Navier-Stokes proporcionan predicciones de alta fidelidad, a expensas de un coste computacional mucho más elevado. Por lo tanto, en términos de aplicaciones de ingeniería se debe fijar un buen compromiso entre precisión y tiempo de cálculo. Una técnica de discretización ampliamente utilizada en la industria es el método de los Volúmenes Finitos en mallas no estructuradas. Esta técnica discretiza espacialmente las ecuaciones del movimiento del flujo sobre un conjunto de elementos que forman una malla, una representación discreta del dominio continuo. Utilizando este enfoque, para una ecuación de flujo dado, la precisión y el tiempo computacional dependen principalmente de la distribución de los nodos que forman la malla. Por consiguiente, un buen compromiso entre precisión y tiempo de cálculo se podría obtener definiendo cuidadosamente la malla, concentrando sus elementos en aquellas zonas donde sea estrictamente necesario. Sin embargo, la definición de una malla óptima para corrientes y geometrías complejas requiere un nivel muy alto de experiencia en la mecánica de fluidos y el análisis numérico, así como un conocimiento previo de la solución. Aspecto que en la mayoría de los casos no está disponible.
Por tanto, es deseable tener una herramienta que permita adaptar los elementos de malla de forma automática, acorde a la solución fluida (remallado). Esta herramienta es generalmente más flexible en mallas no estructuradas que con su homóloga estructurada. No obstante, los métodos de adaptación actualmente en uso todavía dejan una pregunta abierta: cómo conducir de manera eficiente la adaptación. Sensores pioneros basados en las características del flujo en general, adolecen de una falta de fiabilidad, por lo que en la última década se han realizado grandes esfuerzos en el desarrollo numérico de sensores basados en el error, como por ejemplo los sensores basados en el adjunto. A pesar de ser muy eficientes en la adaptación de mallas para un determinado funcional, este último método resulta muy costoso, pues requiere resolver un doble conjunto de ecuaciones: la solución y su adjunta. Por tanto, es deseable desarrollar un método numérico de estimación de error más asequible. El presente trabajo tiene como objetivo estimar el error local de truncación, que aparece cuando se discretiza una ecuación en derivadas parciales. Estos son los términos de orden superior olvidados en la construcción del esquema numérico. El error de truncación proporciona una información muy útil sobre la solución: es una medida muy fiable de la calidad de la malla, obteniendo información que permite llevar a cabo un procedimiento de adaptación de malla. Está fuertemente relacionado al modelo matemático fluido, de modo que una estimación precisa garantiza la idoneidad de dicho modelo en un campo fluido, lo que puede ser útil en el contexto de modelado zonal. Por último, permite mejorar la precisión de la solución resolviendo un nuevo sistema donde el error local actúa como término fuente (τ-extrapolación). El presente trabajo se organiza de la siguiente manera: Cap. 1 contiene una breve reseña de las técnicas de adaptación de malla, así como de los métodos de predicción de los errores numéricos. En la primera sección, Sec. 1.1, se examinan las estrategias básicas de refinamiento y se presenta la principal contribución a la adaptación de malla estructurada y no estructurada. Sec. 1.2 introduce las definiciones de los errores encontrados en la resolución de problemas de Dinámica Computacional de Fluidos y se examinan los enfoques más comunes para predecirlos. Cap. 2 está dedicado a la formulación matemática de la estimación del error de truncación en el contexto de la metodología de Volúmenes Finitos, así como a un procedimiento de verificación completo. Se estudian varias características que influyen en su estimación: la influencia de la falta de uniformidad de la malla, el efecto de las no linealidades del modelo matemático, diferentes condiciones de contorno y soluciones numéricas no convergidas. Esta parte de verificación ha sido presentada y aceptada para su publicación en el Journal of Computational Physics. Cap. 3 presenta un algoritmo de adaptación de malla basado en la estimación del error de truncación y compara los resultados con sensores feature-based y adjoint-based (en colaboración con Jorge Ponsín del INTA). Se consideran casos en dos y tres dimensiones, relevantes para la validación en la industria aeronáutica. Este trabajo ha sido presentado y aceptado en el AIAA Journal. También se incluye una extensión de estos métodos a las ecuaciones RANS (Reynolds-Averaged Navier-Stokes), en donde la adaptación de malla basada en la estimación de τ y la τ-extrapolación se aplican a perfiles de ala viscosos.
Este último trabajo se ha presentado en las Actas de la Institución de Ingenieros Mecánicos, Parte G: Journal of Aerospace Engineering. Palabras clave: adaptación de malla, predicción del error numérico, volúmenes finitos
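The truncation-error quantity that drives the adaptation described in the preceding abstract can be written schematically as follows; this is a generic finite-volume formulation, and the thesis' precise operators and coarse-grid estimate may differ in detail.

\[
\text{Exact problem: } \mathcal{N}(u) = 0, \qquad \text{discrete problem: } \mathcal{N}_h(u_h) = 0,
\]
\[
\tau_h \;=\; \mathcal{N}_h(I_h u) \;-\; I_h\,\mathcal{N}(u) \;=\; \mathcal{N}_h(I_h u),
\]

where I_h restricts the exact solution u to the mesh. Since u is unknown, τ is commonly estimated by inserting a fine-grid solution into a coarser discretization, \(\tau_H \approx \mathcal{N}_H(I_H^{h} u_h)\); the resulting cell-wise map of τ then serves both as a mesh-adaptation sensor and as a source term for τ-extrapolation.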
Abstract:
This paper describes an interactive set of tools used to determine the safety of tunnels and to provide data for decision making about their maintenance. Although there are, no doubt, still several drawbacks in the difficult procedures in use, it is clear that the approach is promising, and future improvements in both experimental and analytical methods will increase our understanding of this matter.
Abstract:
The presented work aims at proposing a methodology for the simulation of offshore wind conditions using CFD. The main objective is the development of a numerical model for the characterization of atmospheric boundary layers of different stability levels, the most important issue in offshore wind resource assessment. Based on Monin-Obukhov theory, the steady standard k-ε turbulence model is modified to take thermal stratification in the surface layer into account. The validity of Monin-Obukhov theory in offshore conditions is discussed through an analysis of a three-day episode at the FINO-1 platform.
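For context, Monin-Obukhov similarity prescribes the stability-corrected surface-layer profile below; this is the standard form that such k-ε modifications are typically built to reproduce, and it is not quoted from the paper itself.

\[
U(z) = \frac{u_*}{\kappa}\left[\ln\frac{z}{z_0} - \psi_m\!\left(\frac{z}{L}\right)\right], \qquad
L = -\,\frac{u_*^3\,\theta_v}{\kappa\, g\, \overline{w'\theta_v'}},
\]

where u_* is the friction velocity, κ ≈ 0.4 the von Kármán constant, z_0 the (sea-surface) roughness length, ψ_m the stability correction function, and L the Obukhov length, whose sign distinguishes unstable (L < 0), neutral (|L| → ∞), and stable (L > 0) stratification.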