938 results for Theoretical models
Abstract:
The study of the links between different communities has given rise to a variety of theoretical models and interpretations developed to examine the exchange relations that existed among ancient populations. Broadly, these theoretical positions can be divided into those that focus specifically on the exchange links established between different communities and regions, and those that analyze the effect of such links on the internal processes of a single community. This work focuses on the first aspect, since our aim is to address the exchange relations that ran from Upper Nubia to the Levant, passing through Lower Nubia, Upper Egypt and Lower Egypt, during the period from 3400 BC to 3000 BC, applying world-systems theory and world-systems analysis.
Abstract:
Context. Young, nearby stars are ideal targets for direct imaging searches for giant planets and brown dwarf companions. After the first imaged-planet discoveries, vast efforts have been devoted to the statistical analysis of the occurrence and orbital distributions of giant planets and brown dwarf companions on wide (>= 5-6 AU) orbits. Aims. In anticipation of the VLT/SPHERE planet-imager guaranteed-time programs, we conducted a preparatory survey of 86 stars between 2009 and 2013 to identify new faint comoving companions and ultimately analyze the occurrence of giant planets and brown dwarf companions on wide (10-2000 AU) orbits around young, solar-type stars. Methods. We used NaCo at the VLT to explore the occurrence rate of giant planets and brown dwarfs between typically 0.1 and 8 arcsec. Diffraction-limited observations in the H band combined with angular differential imaging enabled us to reach primary star-companion brightness ratios as small as 10^-6 at 1.5 arcsec. Repeated observations at several epochs enabled us to discriminate comoving companions from background objects. Results. During our survey, twelve systems were resolved as new binaries, including the discovery of a new white dwarf companion to the star HD8049. Around 34 stars, at least one companion candidate was detected in the observed field of view. More than 400 faint sources were detected; 90% of them were in four crowded fields. With the exception of HD8049 B, we did not identify any new comoving companions. The survey also led to spatially resolved images of the thin debris disk around HD61005, which have been published earlier. Finally, considering the survey detection limits, we derive a preliminary upper limit on the frequency of giant planets for semi-major axes of [10, 2000] AU: typically less than 15% between 100 and 500 AU and less than 10% between 50 and 500 AU for exoplanets more massive than 5 M_Jup and 10 M_Jup respectively, assuming a uniform input distribution and a confidence level of 95%. Conclusions. The results from this survey agree with earlier programs in emphasizing that massive, gas-giant companions on wide orbits around solar-type stars are rare. These results will be part of a broader analysis of a total of ~210 young, solar-type stars to bring further statistical constraints to theoretical models of planetary formation and evolution.
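The kind of upper limit quoted in the Results can in principle be derived from per-star detection probabilities; the sketch below shows the standard calculation for a null detection under a uniform input distribution. The detection probabilities used here are made-up illustrative values, not the survey's actual completeness.

```python
import numpy as np

def frequency_upper_limit(p_detect, confidence=0.95):
    """Smallest companion frequency f that can be excluded after zero detections.

    p_detect: per-star probability of detecting a companion if one is present
              (in practice derived from each star's contrast/detection limits).
    """
    f_grid = np.linspace(0.0, 1.0, 100001)
    # Probability of detecting nothing if each star hosts a companion with
    # probability f and that companion is detected with probability p_i.
    p_null = np.prod(1.0 - np.outer(f_grid, p_detect), axis=1)
    excluded = p_null <= 1.0 - confidence
    return f_grid[np.argmax(excluded)] if excluded.any() else np.nan

# Hypothetical example: 86 stars with detection probabilities around 0.3
# in a given mass/separation regime (illustrative values only).
rng = np.random.default_rng(0)
p_i = rng.uniform(0.2, 0.4, size=86)
print(f"95% upper limit on the companion frequency: {frequency_upper_limit(p_i):.1%}")
```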
Abstract:
This paper investigates how exchange rates affect the utilization of a free trade agreement (FTA) scheme in trading. Changes in exchange rates affect FTA utilization in two ways: first, by changing the excess profits gained by utilizing the FTA scheme, and second, by promoting compliance with rules of origin. Our theoretical models predict that a depreciation of the exporter's currency against the importer's enhances the likelihood of FTA utilization through both channels. Furthermore, our empirical analysis, based on rich tariff-line-level data on the utilization of FTA schemes in Korea's imports from ASEAN countries, supports the theoretical prediction. We also show that the effects are smaller for more differentiated products.
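The first channel (excess profit from the preferential margin) can be illustrated with a toy decision rule: the FTA is used when the tariff saving, converted into the exporter's currency at the prevailing exchange rate, exceeds a fixed cost of documenting rules-of-origin compliance. The functional form and all numbers below are illustrative assumptions, not the paper's estimated model.

```python
def fta_is_used(export_value_importer_ccy, mfn_tariff, fta_tariff,
                exchange_rate, compliance_cost_exporter_ccy):
    """Toy decision rule for FTA utilization.

    exchange_rate: units of exporter currency per unit of importer currency,
                   so a larger value means a weaker (depreciated) exporter currency.
    """
    tariff_saving_importer_ccy = export_value_importer_ccy * (mfn_tariff - fta_tariff)
    tariff_saving_exporter_ccy = tariff_saving_importer_ccy * exchange_rate
    return tariff_saving_exporter_ccy > compliance_cost_exporter_ccy

# Depreciation of the exporter's currency (a higher rate) raises the saving
# measured in the exporter's currency and makes utilization more likely.
for rate in (1000.0, 1200.0):   # illustrative exchange rates
    print(rate, fta_is_used(10_000, 0.08, 0.0, rate, 900_000))
```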
Abstract:
We present an analysis of the space-time dynamics of oceanic sea states exploiting stereo imaging techniques. In particular, a novel Wave Acquisition Stereo System (WASS) has been developed and deployed at the oceanographic tower Acqua Alta in the Northern Adriatic Sea, off the Venice coast in Italy. The analysis of WASS video measurements yields accurate estimates of the oceanic sea state dynamics, the associated directional spectra and wave surface statistics that agree well with theoretical models. Finally, we show that a space-time extreme, defined as the expected largest surface wave height over an area, is considerably larger than the maximum crest observed in time at a point, in agreement with theoretical predictions.
Abstract:
The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. In contrast to the best-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved with a Bayesian approach, but it is intractable in the general case. Nevertheless, a particle-based approximation (via nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach feasible for inference in sensor networks. The best-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. We therefore propose four novel algorithms that alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly-reweighted NBP (URW-NBP). We also extend NBP for cooperative localization in mobile networks. In contrast to previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object that cannot localize itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis includes an experimental analysis of some of the proposed algorithms, in which we found that the results based on real measurements are very similar to the results based on theoretical models.
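As a minimal illustration of the particle-based (nonparametric) representation these algorithms rely on, the sketch below encodes one sensor's position belief as a particle set and updates it with noisy range measurements from known anchors; it is a single-node toy example, not the NBP/NGBP message-passing schemes developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Particle representation of an unknown sensor's position belief (uniform prior).
particles = rng.uniform(0, 10, size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

def range_update(particles, weights, anchor, measured_range, sigma=0.3):
    """Reweight particles with the likelihood of a noisy range measurement."""
    predicted = np.linalg.norm(particles - anchor, axis=1)
    likelihood = np.exp(-0.5 * ((predicted - measured_range) / sigma) ** 2)
    w = weights * likelihood
    return w / w.sum()

true_pos = np.array([3.0, 7.0])
for anchor in (np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])):
    z = np.linalg.norm(true_pos - anchor) + rng.normal(0, 0.3)
    weights = range_update(particles, weights, anchor, z)

estimate = weights @ particles        # posterior mean of the particle belief
print("estimated position:", estimate)
```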
Abstract:
This work presents a theoretical and experimental study of the behavior of reinforced concrete columns and beams strengthened with carbon fiber (CFRP). The analysis considers columns strengthened by the CFRP wrapping technique, which produces a confinement effect, and beams strengthened by the addition of bars of the same material, together with shear reinforcement. The objective is to compare the analytical study of this type of strengthening with experimental results obtained prior to this document and to draw conclusions from any differences; the experimental models themselves are not part of this study. The column tests were performed on square and circular sections, evaluating compressive failure of the specimens, which had been scaled down by a factor of 2.3. The beam tests were performed on rectangular sections, focusing on the evaluation of bending failure, with the specimens likewise scaled down, but by a factor of 1:2. The document is divided into four chapters, whose content is summarized below. Chapter one, the theoretical framework, sets out the principles of behavior and the typologies of reinforced concrete columns and beams, the theoretical basis of their strengthening and confinement, and the various existing strengthening techniques; the FRP technique is described in detail, comparing and analyzing its advantages and disadvantages. Chapter two describes the manufacturing process, the strengthening, and the results of the experimental models for both structural elements.
Chapter three presents the derivation of the theoretical models, which are compared with the experimental results in chapter four. Finally, the last chapter presents the conclusions drawn from this comparison for the strengthening of beams and columns with carbon fiber.
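For the confinement effect mentioned above, one widely used design-oriented expression for FRP-wrapped circular columns is the Lam-and-Teng-type model f'cc = f'co + 3.3 f_l, with the lateral pressure f_l supplied by the jacket. The sketch below evaluates it with made-up input values; it is a generic textbook expression and not necessarily the analytical model adopted in this work.

```python
def frp_confined_strength(fco, D, n_layers, t_frp, E_frp, eps_fe, k3=3.3):
    """Design-oriented estimate of FRP-confined concrete strength (circular section).

    fco     : unconfined concrete strength [MPa]
    D       : column diameter [mm]
    t_frp   : thickness of one FRP layer [mm]
    E_frp   : FRP elastic modulus [MPa]
    eps_fe  : effective FRP hoop rupture strain [-]
    k3      : confinement effectiveness coefficient (3.3 in the Lam-Teng form)
    """
    f_l = 2.0 * n_layers * t_frp * E_frp * eps_fe / D   # lateral confining pressure [MPa]
    return fco + k3 * f_l

# Illustrative values only.
print(frp_confined_strength(fco=30.0, D=200.0, n_layers=2,
                            t_frp=0.17, E_frp=230_000.0, eps_fe=0.01))
```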
Abstract:
Surface-initiated cracking of asphalt pavements is one of the most frequent and important distress modes in flexible bituminous pavements, as the theoretical and experimental studies carried out over the past decade have demonstrated. However, this failure mechanism has not been taken into consideration in traditional flexible pavement design methods. The concept of long-lasting pavements is based on adequate monitoring of the depth and extent of these deteriorations and on intervention at the most appropriate moment, so as to keep them confined in the surface layer as easily accessible and repairable partial-depth top-down cracks, thereby prolonging the durability and serviceability of the pavement and reducing the overall cost of its life cycle. Therefore, to select the optimal maintenance strategy for perpetual pavements, it is essential to have methodologies that enable precise on-site identification, monitoring and control of top-down cracks and that also permit a reliable, high-performance determination of their extent and depth. This PhD thesis presents the results of systematic laboratory and in situ research carried out to obtain data on top-down cracking in asphalt pavements and to study methods for evaluating the depth of this type of cracking using ultrasonic techniques. These results demonstrate that the proposed non-destructive methodology (cost-effective, fast and easy to implement, and mainly used to date on concrete and metal structures because of the difficulties introduced by the viscoelastic nature of bituminous materials) can be applied with sufficient reliability and repeatability to asphalt pavements. The measurements are also independent of the total pavement thickness. Furthermore, it overcomes some of the common drawbacks of other methods used to evaluate pavement cracking, such as core extraction (a destructive and expensive procedure that requires prolonged traffic interruptions) and other non-destructive techniques, such as those based on deflection measurements or ground-penetrating radar, which are not sufficiently precise to investigate surface cracks. To obtain these results, extensive tests were performed on laboratory specimens covering different empirical conditions: various types of hot bituminous mixtures (AC, SMA and PA), different asphalt thicknesses and interlayer bond conditions, temperatures, surface textures, filling materials and water inside the crack, different sensor positions, and a wide range of possible crack depths. The methods employed are based on a series of measurements of ultrasonic pulse velocity or transmission time over a single accessible face or surface of the material, so that a signal transmission coefficient can be obtained (relative or self-compensated readings). Measurements were taken at low excitation frequencies with two ultrasonic devices: one equipped with dry point contact transducers (DPC) and the other with flat contact transducers that require a specially selected coupling material (CPC). In this way, some of the traditional drawbacks of conventional transducers were overcome and no prior surface preparation was required. The self-compensating technique eliminates systematic errors and the need for prior local calibration, demonstrating the potential of this technology. The experimental results have been compared with simplified theoretical models that simulate ultrasonic wave propagation in cracked bituminous materials; these models were previously derived through an analytical approach and allowed the correct interpretation of the empirical data. They were subsequently calibrated against the laboratory results, and generalized mathematical expressions and charts are provided for routine use in practical applications. Through a series of on-site ultrasonic test campaigns, accompanied by core extraction, the proposed models were evaluated: the maximum average relative error in the estimated crack depth did not exceed 13% over the whole set of tests, at a 95% confidence level. The in situ verification of the models made it possible to establish the criteria and recommendations for their use on in-service pavements. The experience obtained makes it possible to integrate this methodology among the survey techniques used in pavement maintenance management.
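The relative (self-compensated) transmission measurement can be illustrated as follows: combining crossed readings across the crack with direct reference readings cancels the unknown transducer coupling factors, leaving a transmission coefficient that is then mapped to crack depth through a calibrated curve. Both the measurement combination and, especially, the calibration function below are simplified placeholders, not the models fitted in the thesis.

```python
import numpy as np

def transmission_coefficient(crossed_ab, crossed_ba, direct_a, direct_b):
    """Self-compensated transmission coefficient from four amplitude readings.

    crossed_*: signals transmitted across the crack in the two directions,
    direct_* : reference signals over an uncracked path of comparable length.
    Coupling factors appear in both numerator and denominator and cancel out.
    """
    return np.sqrt((crossed_ab * crossed_ba) / (direct_a * direct_b))

def crack_depth_from_Tc(Tc, layer_thickness, a=1.0):
    """Placeholder calibration: transmission assumed to decay as exp(-a*d/h).

    A real application would replace this with the empirically calibrated
    curve obtained from laboratory specimens.
    """
    return -layer_thickness / a * np.log(np.clip(Tc, 1e-6, 1.0))

Tc = transmission_coefficient(0.42, 0.46, 0.95, 0.90)   # illustrative readings
print(f"Tc = {Tc:.2f}, estimated depth ~ {crack_depth_from_Tc(Tc, 50.0):.1f} mm")
```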
Abstract:
Attentional control and information processing speed are central concepts in cognitive psychology and neuropsychology. Functional neuroimaging and neuropsychological assessment have shaped theoretical models that regard attention as a complex, non-unitary process. One of its component processes, attentional set-shifting ability, is commonly assessed using the Trail Making Test (TMT). Performance on the TMT decreases with increasing age in adults, in Mild Cognitive Impairment (MCI) and in Alzheimer's disease (AD). In addition, speed of information processing (SIP) seems to modulate attentional performance. While the neural correlates of attentional control have been widely studied, there is little evidence about the neural substrates of SIP in these groups of patients. Several authors have suggested that SIP could be a property of cerebral white matter; thus, deterioration of the white matter tracts that connect the brain regions involved in set-shifting may underlie the decrease in performance seen with ageing, MCI and AD. The aim of this study was to examine the anatomical dissociation of attentional and speed mechanisms. Diffusion tensor imaging (DTI) provides a unique insight into the cellular integrity of the brain, offering an in vivo view of the microarchitecture of cerebral white matter. At the same time, the study of ageing, characterized by white matter decline, provides the opportunity to study the anatomical substrates of speeded or slowed information processing. We hypothesized that FA values would be inversely correlated with time to completion of Parts A and B of the TMT, but not with the derived scores B/A and B-A.
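The derived TMT scores mentioned in the hypothesis are simple arithmetic transforms of the raw completion times; the sketch below computes them and their correlation with fractional anisotropy (FA) values using synthetic data, purely to make the quantities concrete.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40                                        # synthetic participants
tmt_a = rng.normal(35, 8, n)                  # seconds to complete Part A
tmt_b = tmt_a * 2.2 + rng.normal(0, 10, n)    # Part B, correlated with A
fa = 0.55 - 0.002 * tmt_b + rng.normal(0, 0.01, n)   # synthetic FA values

scores = {
    "A": tmt_a,
    "B": tmt_b,
    "B-A": tmt_b - tmt_a,      # set-shifting cost with speed partially removed
    "B/A": tmt_b / tmt_a,      # ratio score, largely speed-independent
}
for name, s in scores.items():
    r = np.corrcoef(fa, s)[0, 1]
    print(f"FA vs TMT {name:>3}: r = {r:+.2f}")
```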
Abstract:
This thesis investigates the acoustic properties of microperforated panels as an alternative for passive noise control. The first chapters are devoted to a review of analytical models for obtaining the acoustic impedance and absorption coefficient of perforated panels. The use of panels perforated with circular holes or with slits is discussed. The theoretical models are presented and some modifications are proposed to improve the modeling of the physical phenomena occurring at the perforations of the panels. The absorption band is widened through the use of multiple-layer microperforated panels and/or the combination of a millimetric panel with a porous layer, which can be a fibrous material or a nylon mesh. A commercial micrometric mesh placed downstream of a millimetric panel is proposed as a very efficient and low-cost solution for controlling noise in small spaces. A simulated annealing algorithm is used to optimize the panel construction so as to provide maximum absorption over a prescribed wide frequency band. Experiments are carried out under normal sound incidence with plane waves. One example is shown for a double-layer microperforated panel subjected to grazing flow. Good agreement is achieved between theory and experiment.
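A standard starting point for the analytical models reviewed here is Maa's impedance expression for a single microperforated panel backed by an air cavity; the sketch below implements the form usually quoted in the literature (hole diameter d, thickness t, perforation ratio sigma, cavity depth D). It is offered as an indicative reference implementation with illustrative parameter values, not the exact model variants or modifications studied in the thesis.

```python
import numpy as np

def mpp_absorption(f, d, t, sigma, D, rho0=1.21, c=343.0, eta=1.81e-5):
    """Normal-incidence absorption of a microperforated panel + air cavity (Maa-type model).

    f     : frequency array [Hz]
    d, t  : hole diameter and panel thickness [m]
    sigma : perforation ratio (open area fraction)
    D     : air cavity depth [m]
    """
    omega = 2 * np.pi * f
    k = d * np.sqrt(omega * rho0 / (4 * eta))          # perforate constant
    r = (32 * eta * t) / (sigma * rho0 * c * d**2) * (
        np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)
    x = (omega * t) / (sigma * c) * (
        1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)
    z = r + 1j * (x - 1 / np.tan(omega * D / c))       # cavity adds -j*cot(kD)
    return 4 * z.real / (np.abs(1 + z) ** 2)           # normal-incidence absorption

f = np.array([250.0, 500.0, 1000.0, 2000.0])
print(mpp_absorption(f, d=0.4e-3, t=0.5e-3, sigma=0.01, D=0.05))
```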
Abstract:
Information and Communication Technologies in general, and those related to the Internet in particular, have changed the way in which we communicate, relate to one another, produce, and buy and sell products, reducing the time and shortening the distance between suppliers and consumers. The steady penetration of computers, smartphones and fixed and/or wireless broadband has been reflected in large-scale use of these technologies by both individuals and businesses. Business-to-consumer (B2C) e-commerce reached a volume of 9,114 million euros in Spain in 2010, a 17.4% increase with respect to the figure for 2009. This growth is due to two facts: an increase in the percentage of Internet users to 65.1% in 2010, 43.1% of whom acquired products or services through the Internet (1.6 percentage points more than the previous year); and an increase in average spending per buyer to 831 euros in 2010, 10.9% more than the previous year. If we segment buyers according to whether or not they had previously made online purchases, we obtain two categories: the novice buyer, who made online purchases for the first time in 2010, and the experienced (constant) buyer, who made purchases in 2010 but had also done so in previous years. The socio-demographic profile of the novice buyer is that of a young person between 15 and 24 years of age, with secondary studies, of middle to lower-middle class, a non-university student, resident in smaller towns, who still uses payment methods such as cash on delivery (23.9%); in 2010, their average annual spending was 449 euros. The experienced buyer, who had previously made purchases online, has a different demographic profile: higher education, upper class, employed and resident in larger cities, with mature online purchasing behavior owing to greater experience; this type of buyer makes more intensive use of Internet-only channels that do not have a physical store. Their average spending doubles that of the novice buyer (930 euros per year on average). Experienced buyers thus constitute the majority of buyers (85.8%), with an average spend double that of buyers who have adopted the medium recently. It is therefore of interest to study the factors that predict whether a web user will again buy a product or service on the Internet. The answer to this question has proven not to be simple. In Spain, most goods and services are still bought in person, with a low incidence of distance selling such as teleshopping, catalogue sales or Internet sales. To answer the questions posed, the problem has been investigated from several viewpoints: the work begins with a descriptive study from the demand side that characterizes the B2C e-commerce situation in Spain, focusing on the differences between experienced and novice buyers. Subsequently, the study of technology acceptance and continuance-of-use models, and of the factors that affect continuance (with a special focus on B2C e-commerce), allows the problem to be approached through structural equation modeling, from which practical conclusions can also be drawn. This work follows the classic structure of scientific research: Chapter 1 introduces the research topic; Chapter 2 describes the state of B2C e-commerce in Spain using official sources; Chapter 3 develops the theoretical framework and the state of the art of technology acceptance and continuance models; and Chapter 4 covers the main factors affecting acceptance and continued use of technologies. Chapter 5 states the research hypotheses and poses the theoretical models. The statistical techniques to be used are described in Chapter 6, where the empirical results for the models developed in Chapter 5 are also analyzed. Chapter 7 sets out the main conclusions of the research, its limitations, and new lines of research. The first part corresponds to Chapter 1, which introduces the research and justifies it from a theoretical and practical point of view. It also gives a brief introduction to consumer behavior theory from a classical perspective, presents the main acceptance models, and introduces the continuance-of-use models that are studied in more detail in Chapter 3. In this chapter, the main and secondary objectives are developed, a mind map of the research is proposed, and the main milestones of the work are scheduled in a timetable. The second part corresponds to Chapters 2, 3 and 4. Chapter 2 describes B2C e-commerce in Spain from the demand side, using secondary official sources; it offers a diagnosis of the e-commerce sector and its degree of maturity in Spain, as well as the barriers to and alternative channels of e-commerce. The differences between experienced buyers, the main interest of this work, and novice buyers are then analyzed, highlighting the differences in profiles and usage. For both segments, aspects such as place of purchase, purchase frequency, payment methods used and attitudes toward online purchasing are studied. Chapter 3 begins by developing the main concepts of consumer behavior theory and goes on to study the main existing technology acceptance models (among others, TPB, TAM, IDT, UTAUT and models derived from them), paying special attention to their application in e-commerce. The models of continuance of technology use (Expectation Confirmation Theory; Theory of Justice) are then analyzed, again focusing on their application in e-commerce. Once the main acceptance and continuance models have been studied, Chapter 4 analyzes the main factors used in those models: quality, value, factors based on the confirmation of expectations (satisfaction, perceived usefulness), and factors specific to special situations, for example after a complaint, such as justice, emotions or trust. The third part, Chapter 5, develops the research design and the sample selection for the models. In the first section of the chapter the hypotheses are stated, moving from the general to the particular and using the specific factors analyzed in Chapter 4, for later study and validation in Chapter 6 with the appropriate statistical techniques. Based on the hypotheses and on the models and factors studied in Chapters 3 and 4, two original theoretical models are defined and articulated to answer the research questions posed in Chapter 1. In the second part of the chapter the empirical work is designed, defining the following aspects: geographic and temporal scope, type of research, nature and setting of the research, primary and secondary sources used, data-collection techniques, measurement instruments and characteristics of the sample used. The results of the research constitute the fourth part and are developed in Chapter 6, which begins by reviewing statistical techniques based on structural equation models. Two alternatives are considered: confirmatory, covariance-based methods (MBC) and predictive, component-based methods; the predictive techniques are chosen, with justification, given the exploratory nature of the research. The second part of Chapter 6 presents the analysis of the measurement and structural models built with the formative and reflective indicators defined in Chapter 4. To this end, the measurement models and then the structural models are validated, taking into account the threshold values of the statistical parameters required for validation. The fifth part corresponds to Chapter 7, which develops the conclusions based on the results of Chapter 6, analyzing them in terms of theoretical and practical contributions and drawing conclusions for business management. The limitations of the research are then described and new lines of study on topics that arose during the work are proposed. Finally, the bibliography lists all the references used throughout the work. Keywords: constant buyer, repurchase models, continuity of use of technology, e-commerce, B2C, technology acceptance, technology acceptance models, TAM, TPB, IDT, UTAUT, ECT, continuance intention, satisfaction, perceived trust, justice, emotions, confirmation of expectations, quality, value, PLS.
Abstract:
The refractive index changes induced by swift ion-beam irradiation in silica have been measured either by spectroscopic ellipsometry or through the effective indices of the optical modes propagating through the irradiated structure. The optical response has been analyzed by considering an effective homogeneous medium to simulate the nanostructured irradiated system, which consists of cylindrical tracks, associated with the ion impacts, embedded in a virgin material. The roles of both irradiation fluence and stopping power have been investigated. Above a certain electronic stopping power threshold (∼2.5 keV/nm), every ion impact creates an axial region around the trajectory with a fixed refractive index (around n = 1.475) corresponding to a particular structural phase that is independent of stopping power. The results have been compared with previous data measured by infrared spectroscopy and small-angle X-ray scattering; possible mechanisms and theoretical models are discussed.
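The effective-homogeneous-medium description can be sketched in two steps: the areal fraction occupied by the track phase follows Poisson overlap statistics, f = 1 - exp(-pi*R^2*Phi), and an effective index is then obtained from a mixing rule. The simple volume-average mixing rule, the assumed track radius and the fluence values below are illustrative assumptions, not the fitted parameters of the study.

```python
import numpy as np

def track_fill_fraction(fluence_cm2, radius_nm):
    """Areal fraction covered by randomly placed cylindrical tracks (Poisson overlap)."""
    radius_cm = radius_nm * 1e-7
    return 1.0 - np.exp(-np.pi * radius_cm**2 * fluence_cm2)

def effective_index(n_virgin, n_track, fill):
    """Simple volume-average mixing rule for the effective refractive index."""
    return fill * n_track + (1.0 - fill) * n_virgin

fluences = np.array([1e12, 5e12, 2e13, 1e14])       # ions/cm^2, illustrative
f = track_fill_fraction(fluences, radius_nm=4.0)    # assumed track radius
print(effective_index(n_virgin=1.452, n_track=1.475, fill=f))
```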
Abstract:
Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) to the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set to provide a global space-time imaging of the sea surface, viz. the simultaneous reconstruction of several spatial snapshots of the surface so as to guarantee continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and point wave statistics that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shapes of large waves are well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.
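The key comparison (largest elevation over an area versus largest elevation in time at a single point) can be reproduced qualitatively with a toy Monte Carlo simulation of a linear Gaussian sea surface. The spectrum, patch size and duration below are arbitrary illustrative choices and are not meant to reproduce the WASS sea states.

```python
import numpy as np

rng = np.random.default_rng(3)
g = 9.81
N = 200                                        # random spectral components
omega = rng.uniform(0.5, 1.5, N)               # angular frequencies [rad/s], toy band
k = omega**2 / g                               # deep-water dispersion relation
theta = rng.normal(0.0, 0.4, N)                # directional spreading
phase = rng.uniform(0.0, 2.0 * np.pi, N)
amp = np.full(N, 0.5 / np.sqrt(N))             # component amplitudes, toy scaling

x = np.linspace(0.0, 100.0, 30)                # 100 m x 100 m patch
y = np.linspace(0.0, 100.0, 30)
X, Y = np.meshgrid(x, y)

point_max, area_max = -np.inf, -np.inf
for t in np.arange(0.0, 600.0, 1.0):           # 10-minute stationary record
    arg = (k[:, None, None] * (np.cos(theta)[:, None, None] * X
                               + np.sin(theta)[:, None, None] * Y)
           - omega[:, None, None] * t + phase[:, None, None])
    eta = np.sum(amp[:, None, None] * np.cos(arg), axis=0)
    area_max = max(area_max, eta.max())        # space-time maximum over the patch
    point_max = max(point_max, eta[15, 15])    # maximum at a single "probe" point

print(f"maximum elevation in time at one point: {point_max:.2f} m")
print(f"maximum elevation over the whole area : {area_max:.2f} m")  # larger, as expected
```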
Abstract:
Architectural acoustics is the branch of acoustics devoted to the study and control of the sound field in spaces intended for music and speech. This science aims to recreate the best listening conditions in a room. The acoustic qualities an enclosure must have vary depending on several factors, such as the type of reproduced signal, the activity to be carried out in the room, or even the tastes, preferences and customs of the audience. Acoustic conditioning is the process intended to impart that acoustic character to the room by means of materials and devices with absorbent and diffusing properties. This work is devoted entirely to the study of absorbent and diffusing devices for the acoustic conditioning of rooms and is intended as a guide for the creative design of this type of device. It includes a theoretical part, which develops the concepts of room acoustics, absorbers and diffusers, and a practical part, which includes the design of a hybrid absorber-diffuser acoustic device, the corresponding absorption and diffusion measurements, and simulations. For the latter, both theoretical models and boundary element method (BEM) software for sound field prediction are used.
Abstract:
Algebraic topology (homology) is used to analyze the state of spiral defect chaos in both laboratory experiments and numerical simulations of Rayleigh-Bénard convection. The analysis reveals topological asymmetries that arise when non-Boussinesq effects are present. The asymmetries are found in different flow fields in the simulations and are robust to substantial alterations to flow visualization conditions in the experiment. However, the asymmetries are not observable using conventional statistical measures. These results suggest homology may provide a new and general approach for connecting spatiotemporal observations of chaotic or turbulent patterns to theoretical models.
Abstract:
As a consequence of cinema screens being placed in front of screen loudspeakers, a reduction in sound quality has been noticed. Cinema screens not only let sound pass through them, but also absorb a small amount of it and reflect part of the sound that impinges on the screen back towards the loudspeaker, from which it can be sent forward again. This backward reflection, added to the signal coming directly from the loudspeaker, can lead to constructive or destructive interference at certain frequencies, which usually results in comb filtering. In this project, this effect has been studied by reviewing data sheets provided by different manufacturers, by acoustical measurements carried out in the large anechoic chamber of the ISVR, and with theoretical models developed in MatLab. If the results obtained with MatLab are sufficiently accurate compared with the measurements taken in the anechoic chamber, this provides a good way to predict the attenuation added to the system at each frequency, given that not all manufacturers provide an attenuation curve, but only an average attenuation. Such an average attenuation can be of little use, since sound waves of different wavelengths propagate through partitions differently: high frequencies have short wavelengths and are usually easier to attenuate, whereas low frequencies have longer wavelengths. This information would be of great value both to screen manufacturers, who could offer much more precise data in their data sheets, and to customers, who would have more information at their disposal before purchasing and installing anything in their cinemas and could judge for themselves which screen or loudspeaker best meets their expectations. The digitization of film soundtracks makes it possible to improve the sound quality of cinemas; however, one aspect to take into account is the transmission of sound through the screen, since the loudspeakers are normally located behind it. The acoustic properties vary depending on the type of screen used, and there is little information available with which to assess their behavior. In this project, three screen samples donated by different manufacturers were analyzed in order to determine, for each type of screen, the optimal distance and inclination of the screen with respect to the loudspeaker. The analysis was carried out in the anechoic chamber of the ISVR (University of Southampton) by building a 2x2 m wooden frame on which to tension the cinema screens, together with a loudspeaker whose behavior is as close as possible to that of real screen loudspeakers. Data were captured by four microphones placed at different positions and connected to Brüel & Kjær Pulse software, from which the frequency responses of the loudspeaker were obtained without a screen and with the screen at different distances from the loudspeaker. The data were then analyzed in MatLab, computing the attenuation, the pressure transmission factor (PTF) and a cepstrum analysis. Finally, a theoretical model of the behavior of perforated screens was developed, based on the perforated plates used to attenuate sound between rooms. It was concluded that curved screens are acoustically more transparent than perforated screens, which become acoustically more opaque above 6 kHz. In perforated screens the attenuation depends on the number of perforations per unit area and on their diameter: the attenuation is reduced if the perforation diameter is reduced or if the number of perforations is increased. Regarding the comb-filter effect, to minimize its amplitude the screen should be placed at a distance of between 15 and 30 cm from the loudspeaker; at 30 cm the last reflection identified through the cepstrum analysis arrives 5 ms after the direct signal, so it should not degrade the sound or speech clarity.
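The comb-filter mechanism referred to above is straightforward to model: the direct sound plus a single attenuated reflection, delayed by the loudspeaker-screen round trip, yields a magnitude response with regularly spaced peaks and notches. The reflection coefficient and distances below are illustrative values (covering the 15-30 cm range discussed above), not the measured properties of the tested screens.

```python
import numpy as np

def comb_response(freqs, distance_m, reflection_coeff, c=343.0):
    """Magnitude response of direct sound plus one reflection off the screen back.

    distance_m: loudspeaker-to-screen distance; the reflected path is assumed
    to travel to the screen and back, i.e. an extra 2*distance_m.
    """
    tau = 2.0 * distance_m / c                       # extra delay of the reflection
    H = 1.0 + reflection_coeff * np.exp(-2j * np.pi * freqs * tau)
    return 20.0 * np.log10(np.abs(H))                # level relative to direct sound [dB]

freqs = np.array([100.0, 300.0, 572.0, 1000.0, 2000.0])
for d in (0.15, 0.30):                               # illustrative screen distances [m]
    print(f"d = {d} m ->", np.round(comb_response(freqs, d, reflection_coeff=0.3), 1))
```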