931 results for Recent Structural Models



Organic-inorganic hybrid nanocomposites are widely studied and applied across broad areas because they combine the flexibility and low density of organic materials with the hardness, strength, thermal stability, and good optical and electronic properties of inorganic materials. Polydimethylsiloxane (PDMS), owing to its excellent elasticity, transparency, and biocompatibility, has been extensively employed as the organic host matrix for nanocomposites. For the inorganic component, titanium dioxide and barium titanate are widely explored because of their outstanding physical, optical, and electronic properties. In this work, PDMS-TiO2 and PDMS-BaTiO3 hybrid nanocomposites were fabricated by an in-situ sol-gel technique. By varying the amount of metal precursors, transparent and homogeneous PDMS-TiO2 and PDMS-BaTiO3 hybrid films of various compositions were obtained. Two structural models for these two types of hybrids were proposed and verified against the characterization results. The structures of the hybrid films were examined by a combination of FTIR and FT-Raman spectroscopy, and the morphologies of the film cross sections were characterized by FESEM. An ellipsometer and an automatic capacitance meter were used to evaluate the refractive index and dielectric constant of the composites, respectively, and a simultaneous DSC/TGA instrument was used to measure the thermal properties. For the PDMS-TiO2 hybrids, the higher the ratio of titanium precursor added, the higher the refractive index and dielectric constant of the composite. The highest values achieved were a refractive index of 1.74 and a dielectric constant of 15.5, for sample PDMS-TiO2 (1-6). However, when the ratio of titanium precursor to PDMS reached 20 to 1, phase separation occurred, as evidenced by SEM images, and the refractive index and dielectric constant decreased.
For the PDMS-BaTiO3 hybrids, the refractive index and dielectric constant of the composites increased with the amount of barium and titanium precursors in the system. The highest values were attained in sample PDMS-BaTiO3 (1-6), with a refractive index of 1.6 and a dielectric constant of 12.2. However, phase separation appeared in SEM images for sample PDMS-BaTiO3 (1-8), and the refractive index and dielectric constant dropped to lower values. Different compositions of the PDMS-TiO2 and PDMS-BaTiO3 hybrid films were annealed at 60 °C and 100 °C, and the influence on refractive index, dielectric constant, and thermal properties was investigated.
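The abstract above does not state which mixing model underlies its two structural models, but a common first-order way to reason about how a composite's dielectric constant rises with inorganic loading is the logarithmic (Lichtenecker) mixing rule. The sketch below is purely illustrative: the phase permittivities and volume fractions are hypothetical, not the paper's measured values.

```python
import math

def lichtenecker(eps_matrix, eps_filler, v_filler):
    """Logarithmic (Lichtenecker) mixing rule for a two-phase composite.
    eps_matrix, eps_filler: dielectric constants of the two phases
    v_filler: volume fraction of the inorganic filler (0..1)
    """
    log_eps = (1 - v_filler) * math.log(eps_matrix) + v_filler * math.log(eps_filler)
    return math.exp(log_eps)

# Hypothetical values: PDMS is often quoted near 2.7; bulk BaTiO3 values are
# far higher, but nanoscale/amorphous inclusions are typically much lower.
for v in (0.1, 0.3, 0.5):
    print(v, round(lichtenecker(2.7, 150.0, v), 1))
```

The rule reduces to each pure phase at v = 0 and v = 1 and increases monotonically in between, matching the qualitative trend reported until phase separation sets in.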


Evaluation of: Noorman M, Hakim S, Kessler E et al. Remodeling of the cardiac sodium channel, connexin43, and plakoglobin at the intercalated disk in patients with arrhythmogenic cardiomyopathy. Heart Rhythm 10(3), 412-419 (2013). Arrhythmogenic cardiomyopathy (AC) is a heart muscle disease characterized by progressive replacement of the ventricular myocardium with adipose and fibrous tissue. In the majority of patients, the disease is associated with mutations in genes encoding desmosomal proteins. Based on results from recent experimental models, a disturbed distribution of gap junction proteins and cardiac sodium channels may also be observed in AC phenotypes, secondary to desmosomal dysfunction. Noorman et al. examined heart sections from patients diagnosed with AC and performed immunohistochemical analyses of N-cadherin, PKP2, PKG, Cx43 and the cardiac sodium channel NaV1.5. Altered expression and/or distribution of Cx43, PKG and NaV1.5 was found in most patients with AC. The altered expression and/or distribution of NaV1.5 channels in AC hearts may play a mechanistic role in the arrhythmias leading to sudden cardiac death in AC patients; NaV1.5 should therefore be considered a supplemental element in the evaluation of risk stratification and management strategies. However, additional experiments are required to clarify the mechanisms leading to AC phenotypes.


OBJECTIVE To examine the degree to which use of β blockers, statins, and diuretics in patients with impaired glucose tolerance and other cardiovascular risk factors is associated with new onset diabetes. DESIGN Reanalysis of data from the Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) trial. SETTING NAVIGATOR trial. PARTICIPANTS Patients who at baseline (enrolment) were treatment naïve to β blockers (n=5640), diuretics (n=6346), statins (n=6146), and calcium channel blockers (n=6294). Calcium channel blockers were used as a metabolically neutral control. MAIN OUTCOME MEASURES Development of new onset diabetes, diagnosed by standard plasma glucose level in all participants and confirmed with glucose tolerance testing within 12 weeks after the raised glucose value was recorded. The relation between each treatment and new onset diabetes was evaluated with marginal structural models for causal inference, to account for time dependent confounding in treatment assignment. RESULTS During a median five years of follow-up, β blockers were started in 915 (16.2%) patients, diuretics in 1316 (20.7%), statins in 1353 (22.0%), and calcium channel blockers in 1171 (18.6%). After adjustment for baseline characteristics and time varying confounders, diuretics and statins were both associated with an increased risk of new onset diabetes (hazard ratio 1.23, 95% confidence interval 1.06 to 1.44, and 1.32, 1.14 to 1.48, respectively), whereas β blockers and calcium channel blockers were not (1.10, 0.92 to 1.31, and 0.95, 0.79 to 1.13, respectively). CONCLUSIONS Among people with impaired glucose tolerance and other cardiovascular risk factors and with serial glucose measurements, diuretics and statins were associated with an increased risk of new onset diabetes, whereas the effect of β blockers was non-significant.
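The marginal structural models mentioned above adjust for time-dependent confounding by re-weighting patients with stabilized inverse-probability-of-treatment weights. A minimal sketch of that building block, with hypothetical propensities in place of the fitted treatment model a real analysis would use:

```python
# Minimal sketch of stabilized inverse-probability-of-treatment weights,
# the core ingredient of a marginal structural model. In practice the
# conditional propensities come from a logistic model of treatment given
# time-varying confounders; the numbers below are hypothetical.

def stabilized_weight(treated, p_treat_given_confounders, p_treat_marginal):
    """w = P(A=a) / P(A=a | L): up-weights treatment/confounder
    combinations that are under-represented in the data."""
    if treated:
        return p_treat_marginal / p_treat_given_confounders
    return (1 - p_treat_marginal) / (1 - p_treat_given_confounders)

# Toy cohort: (treated?, propensity of treatment given confounders)
cohort = [(True, 0.8), (True, 0.4), (False, 0.7), (False, 0.2)]
p_marginal = sum(t for t, _ in cohort) / len(cohort)  # marginal P(treated)

weights = [stabilized_weight(t, p, p_marginal) for t, p in cohort]
print(weights)  # patients treated despite a low propensity get weight > 1
```

Fitting the outcome model (here, a Cox model for new onset diabetes) on the weighted sample emulates a population in which treatment is independent of the measured confounders.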


The city of Bath is a World Heritage site, and its thermal waters, the Roman Baths and the new spa development rely on undisturbed flow of the springs (45 °C). The current investigations provide an improved understanding of the residence times and flow regime as a basis for source protection. Trace gas indicators, including the noble gases (helium, neon, argon, krypton and xenon) and chlorofluorocarbons (CFCs), together with a more comprehensive examination of chemical and stable isotope tracers, are used to characterise the sources of the thermal water and any modern components. The use of 39Ar shows conclusively that the bulk of the thermal water has been in circulation within the Carboniferous Limestone for at least 1000 years. Other stable isotope and noble gas measurements confirm previous findings and strongly suggest recharge within the Holocene (i.e. the last 12 kyr). Measurements of dissolved 85Kr and chlorofluorocarbons constrain previous indications from tritium that a small proportion (<5%) of the thermal water originates from modern leakage into the spring pipe as it passes through the Mesozoic valley fill underlying Bath. This leakage introduces small amounts of O2 into the system, resulting in the Fe precipitation seen in the King’s Spring. Silica geothermometry indicates that the water is likely to have reached a maximum temperature of between 69 and 99 °C, implying a most probable maximum circulation depth of ∼3 km, in line with recent geological models. The rise of the water to the surface is sufficiently indirect that a temperature loss of >20 °C is incurred. There is overwhelming evidence that the water has evolved within the Carboniferous Limestone, although the chemistry alone cannot pinpoint the geometry of the recharge area or the circulation route.
For a likely residence time of 1–12 kyr, volumetric calculations imply a large storage volume and circulation pathway if typical porosities of the limestone at depth are used, indicating that much of the Bath-Bristol basin must be involved in the water storage.
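Silica geothermometry of the kind mentioned above exploits the temperature dependence of quartz solubility. One widely used calibration is the Fournier quartz geothermometer; the study's own calibration and measured silica concentrations are not given here, so the sketch below applies the Fournier formula to hypothetical concentrations chosen to bracket the quoted 69-99 °C range.

```python
import math

def quartz_temperature_c(sio2_mg_per_kg):
    """Fournier (1977) quartz geothermometer (conductive cooling, no
    steam loss), nominally valid up to ~250 °C. Input is dissolved
    SiO2 in mg/kg."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

# Hypothetical dissolved-silica values for illustration only
for sio2 in (23.0, 35.0, 47.0):
    print(sio2, round(quartz_temperature_c(sio2), 1))
```

Because quartz re-equilibrates slowly on ascent, the inferred temperature records conditions at depth, which is why it can exceed the 45 °C discharge temperature by the >20 °C loss noted in the text.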


The VirB/D4 type IV secretion system (T4SS) of Agrobacterium tumefaciens transfers substrates to infected plant cells through assembly of a translocation channel and a surface structure termed the T-pilus. This thesis focuses on identifying the contributions of VirB10 to substrate transfer and T-pilus formation through a mutational analysis. VirB10 is a bitopic protein with several domains: (i) a cytoplasmic N-terminus, (ii) a single transmembrane (TM) α-helix, (iii) a proline-rich region (PRR), and (iv) a large C-terminal modified β-barrel. I introduced cysteine insertion and substitution mutations throughout the length of VirB10 in order to: (i) test a predicted transmembrane topology, (ii) identify residues and domains contributing to VirB10 stability, oligomerization, and function, and (iii) monitor structural changes accompanying energy activation or substrate translocation. These studies were aided by the recent structural resolution of a periplasmic domain of a VirB10 homolog and of a ‘core’ complex composed of homologs of VirB10 and two outer membrane associated subunits, VirB7 and VirB9. Using the substituted cysteine accessibility method (SCAM), I confirmed the bitopic topology of VirB10. Through phenotypic studies of Ala-Cys insertion mutations, I identified “uncoupling” mutations in the TM and β-barrel domains that blocked T-pilus assembly but permitted substrate transfer. Cysteine replacements in the C-terminal periplasmic domain yielded a variety of phenotypes with respect to protein accumulation, oligomerization, substrate transfer, and T-pilus formation. By SCAM, I also gained further evidence that VirB10 adopts different structural states during machine biogenesis. Finally, I showed that VirB10 supports substrate transfer even when its TM domain is extensively mutagenized or substituted with heterologous TM domains.
By contrast, specific residues most probably involved in oligomerization of the TM domain are required for biogenesis of the T-pilus.


The Penninic nappes in the Swiss Alps formed during continental collision between the Adriatic and European plates in Cenozoic times. Although intensely studied, the finite geometry of the basement-bearing Penninic nappes in western Switzerland has remained a matter of debate for decades (e.g., the “Siviez-Mischabel dilemma”), and the paleogeographic origin of various nappes has been disputed. Here, we present new structural data for the central part of the Penninic Bernard nappe complex, which contains pre-Permian basement and Permo-Mesozoic metasedimentary units. Our lithological and structural observations indicate that the discrepancy between the different structural models proposed for the Bernard nappe complex can be explained by a lateral discontinuity. In the west, the presence of a Permian graben caused complex isoclinal folding, whereas in the east, the absence of such a graben resulted mainly in imbricate thrusting. The overall geometry of the Bernard nappe complex is the result of three main deformation phases: (1) detachment of Mesozoic cover sediments along Triassic evaporites (Evolène phase) during the early stages of collision, (2) Eocene top-to-the-N(NW) nappe stacking (Anniviers phase), and (3) subsequent backfolding and backshearing (Mischabel phase). The southward localized backshearing is key to understanding the structural position and paleogeographic origin of units such as the Frilihorn and Cimes Blanches “nappes” and the Antrona ophiolites. Based on these observations, we present a new tectonic model for the entire Penninic region of western Switzerland and discuss this model in terms of continental collision zone processes.


Background: Atazanavir boosted with ritonavir (ATV/r) and efavirenz (EFV) are both recommended as first-line therapies for HIV-infected patients. We compared the two therapies for virologic efficacy and immune recovery. Methods: We included all treatment-naïve patients in the Swiss HIV Cohort Study starting therapy after May 2003 with either ATV/r or EFV and a backbone of tenofovir and either emtricitabine or lamivudine. We used Cox models to assess time to virologic failure and repeated measures models to assess the change in CD4 cell counts over time. All models were fit as marginal structural models using both point-of-treatment and censoring weights. Intent-to-treat and various as-treated analyses were carried out; in the latter, patients were censored at their last recorded measurement if they changed therapy or were no longer adherent to it. Results: Patients starting EFV (n = 1,097) and ATV/r (n = 384) were followed for a median of 35 and 37 months, respectively. During follow-up, 51% of patients on EFV and 33% of patients on ATV/r remained adherent and made no change to their first-line therapy. Although intent-to-treat analyses suggest virologic failure was more likely with ATV/r, there was no evidence for this disadvantage in patients who adhered to first-line therapy. Patients starting ATV/r had a greater increase in CD4 cell count during the first year of therapy, but this advantage disappeared after one year. Conclusions: In this observational study, there was no good evidence of any intrinsic advantage of one therapy over the other, consistent with earlier clinical trials. Differences between therapies may arise in a clinical setting because of differences in adherence to therapy.
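The censoring weights mentioned in the abstract above (inverse probability of censoring, IPCW) keep the as-treated analyses honest: a patient who stays in the analysis despite covariates that predict drop-out is up-weighted to stand in for similar patients who were censored. A minimal sketch, with hypothetical visit-level probabilities in place of the pooled logistic model a real analysis would fit:

```python
# Sketch of stabilized inverse-probability-of-censoring weights (IPCW).
# All probabilities below are hypothetical placeholders for fitted values.

def ipcw(p_uncensored_given_covariates, p_uncensored_marginal):
    """Cumulative stabilized censoring weight up to the current visit:
    product over visits of P(uncensored) / P(uncensored | covariates)."""
    w = 1.0
    for p_cond, p_marg in zip(p_uncensored_given_covariates, p_uncensored_marginal):
        w *= p_marg / p_cond
    return w

# A patient whose covariates predict early drop-out (low conditional
# probabilities of remaining under observation) receives weight > 1.
print(ipcw([0.7, 0.8, 0.9], [0.9, 0.9, 0.9]))
```

The outcome models (here, Cox and repeated-measures models) are then fit on the re-weighted person-visits, combining these censoring weights with the treatment weights.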


Sodium-proton antiporters rapidly exchange protons and sodium ions across the membrane to regulate intracellular pH, cell volume, and sodium concentration. How ion binding and release are coupled to the conformational changes associated with transport is not clear. Here, we report a crystal form of the prototypical sodium-proton antiporter NhaA from Escherichia coli in which the protein is seen as a dimer. In this new structure, we observe a salt bridge between an essential aspartic acid (Asp163) and a conserved lysine (Lys300). An equivalent salt bridge is present in the homologous transporter NapA, but not in the only other known crystal structure of NhaA, which provides the foundation of most existing structural models of electrogenic sodium-proton antiport. Molecular dynamics simulations show that the stability of the salt bridge is weakened by sodium ions binding to Asp164 and the neighboring Asp163. This suggests that the transport mechanism involves Asp163 switching between forming a salt bridge with Lys300 and interacting with the sodium ion. pKa calculations suggest that Asp163 is highly unlikely to be protonated when involved in the salt bridge. As Asp163 has previously been suggested to be one of the two residues through which proton transport occurs, these results have clear implications for current mechanistic models of sodium-proton antiport in NhaA.


OBJECTIVE To determine the effect of nonadherence to antiretroviral therapy (ART) on virologic failure and mortality in naive individuals starting ART. DESIGN Prospective observational cohort study. METHODS Eligible individuals enrolled in the Swiss HIV Cohort Study, started ART between 2003 and 2012, and provided adherence data at at least one biannual clinical visit. Adherence was defined as missed doses (none, one, two, or more than two) and percentage adherence (>95%, 90-95%, and <90%) in the previous 4 weeks. Inverse probability weighting of marginal structural models was used to estimate the effect of nonadherence on viral failure (HIV-1 viral load >500 copies/ml) and mortality. RESULTS Of 3150 individuals followed for a median of 4.7 years, 480 (15.2%) experienced viral failure and 104 (3.3%) died; 1155 (36.6%) reported missing one dose, 414 (13.1%) two doses, and 333 (10.6%) more than two doses of ART. The risk of viral failure increased with each missed dose (one dose: hazard ratio [HR] 1.15, 95% confidence interval 0.79-1.67; two doses: 2.15, 1.31-3.53; more than two doses: 5.21, 2.96-9.18). The risk of death increased with more than two missed doses (HR 4.87, 2.21-10.73). Missing one to two doses of ART increased the risk of viral failure in those starting once-daily regimens (HR 1.67, 1.11-2.50) compared with those starting twice-daily regimens (HR 0.99, 0.64-1.54; interaction P = 0.09). Consistent results were found for percentage adherence. CONCLUSION Self-report of two or more missed doses of ART is associated with an increased risk of both viral failure and death. A simple adherence question helps identify patients at risk of negative clinical outcomes and offers opportunities for intervention.


The goal of this paper is to establish whether unemployment insurance policies are more generous in Europe than in the United States, and by how much. We take the examples of France and one American state, Ohio, and use the methodology of Pallage, Scruggs and Zimmermann (2008) to find, for each region, a unique parameter value that fully characterizes the generosity of the system. These two values can then be used in structural models that compare the regions, for example to explain differences in unemployment rates.


To better understand how the human prostacyclin receptor (IP) mediates vasodilation and platelet anti-aggregation through Gs protein coupling, a strategy integrating multiple approaches, including high resolution NMR experiments, synthetic peptides, fluorescence spectroscopy, molecular modeling, and recombinant proteins, was developed and used to characterize the structure/function relationship of important segments and residues of the IP receptor and the α-subunit of the Gs protein (Gαs). The first (iLP1) and third (iLP3) intracellular loops of the IP receptor, as well as the Gαs C-terminal domain, relevant to Gs-mediated IP receptor signaling, were first identified by observing the effects of minigene-expressed corresponding protein segments in HEK293 cells co-expressing the receptor and Gαs. Evidence that the IP iLP1 domain interacts with the Gαs C-terminal domain was obtained in fluorescence and NMR spectroscopic studies using a constrained synthetic peptide mimicking the IP iLP1 domain and a synthetic peptide mimicking the Gαs C-terminal domain. The solution structural models and the peptide-peptide interaction of the two synthetic protein segments were determined by high resolution NMR spectroscopy. The important residues in the corresponding domains of the IP receptor and Gαs predicted by NMR chemical shift mapping were used to guide the identification of their protein-protein interaction in cells. A profile of residues Arg42-Ala48 of the IP iLP1 domain and the three residues Glu392-Leu394 of the Gαs C-terminal domain involved in IP/Gs protein coupling was confirmed with recombinant proteins. The data suggest an intriguing hypothesis for how the signal of the ligand-activated IP receptor is transmitted to the Gs protein in regulating vascular function and homeostasis, and also provide substantial insights into other prostanoid receptor signaling.


Controversy has surrounded the issue of whether mantle plume activity was responsible for Pangaean continental rifting and massive flood volcanism (resulting in the Central Atlantic Magmatic Province or CAMP, emplaced around 200 Ma) preceding the opening of the central Atlantic Ocean in the Early Mesozoic. Our new Sr-Nd-Pb isotopic and trace element data for the oldest basalts sampled from central Atlantic oceanic crust by deep-sea drilling show that oceanic crust generated from about 160 to 120 Ma displays clear isotopic and chemical signals of plume contamination (e.g., 87Sr/86Sr(i) = 0.7032-0.7036, εNd(t) = +6.2 to +8.2, incompatible element patterns with positive Nb anomalies), but these signals are muted or absent in crust generated between 120 and 80 Ma, which resembles young Atlantic normal mid-ocean ridge basalt. The plume-affected pre-120 Ma Atlantic crustal basalts are isotopically similar to lavas from the Ontong Java Plateau, and may represent one isotopic end-member for CAMP basalts. The strongest plume signature is displayed near the center of CAMP magmatism, but the hotspots presently located nearest this location in the mantle reference frame do not appear to be older than latest Cretaceous and are isotopically distinct from the oldest Atlantic crust. The evidence for widespread plume contamination of the nascent Atlantic upper mantle, combined with a lack of evidence for a long-lived volcanic chain associated with this plume, leads us to propose that the enriched signature of early Atlantic crust, and possibly the eruption of the CAMP, were caused by a relatively short-lived but large-volume plume feature that was not rooted at a mantle boundary layer. Such a phenomenon has been predicted by recent numerical models of mantle circulation.


The fate of subducted sediment and the extent to which it is dehydrated and/or melted before incorporation into arc lavas has profound implications for the thermo-mechanical nature of the mantle wedge and models for crustal evolution. In order to address these issues, we have undertaken the first measurements of 10Be and light elements in lavas from the Tonga-Kermadec arc and the sediment profile at DSDP site 204 outboard of the trench. The 10Be/9Be ratios in the Tonga lavas are lower than predicted from flux models but can be explained if (a) previously estimated sediment contributions are too high by a factor of 2-10, (b) the top 1-22 m of the incoming sediment is accreted, (c) large amounts of sediment erosion are proposed, or (d) the sediment component takes several Myr longer than the subducting plate to reach the magma source region beneath Tonga. The lavas form negative Th/Be-Li/Be arrays that extend from a depleted mantle source composition to lower Th/Be and Li/Be ratios than that of the bulk sediment. Thus, these arrays are not easily explained by bulk sediment addition and, using partition coefficients derived from experiments on the in-coming sediment, we show that they are also unlikely to result from fluid released during dehydration of the sediment (or altered oceanic crust). However, partial melts of the dehydrated sediment residue formed at ~800 °C during the breakdown of amphibole +/- plagioclase and in the absence of cordierite have significantly lowered Th/Be ratios. The lava arrays can be successfully modelled as 10-15% partial melts of depleted mantle after it has been enriched by the addition of 0.2-2% of these partial melts. Phase relations suggest that this requires that the top of the subducting crust reaches temperatures of ~800 °C by the time it attains ~ 80 km depth which is in excellent agreement with the results of recent numerical models incorporating a temperature-dependent mantle viscosity. 
Under these conditions the wet basalt solidus is also crossed, yet there is no recognisable eclogitic signal in the lavas, suggesting that ongoing dehydration or strong thermal gradients in the upper part of the subducting plate inhibit partial melting of the altered oceanic crust.
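The source-enrichment modelling described above rests on simple two-component mass balance: a depleted-mantle source plus a small mass fraction of sediment-derived melt, with the mixture's trace element ratios shifting toward the melt end-member. The sketch below illustrates the arithmetic; every concentration is hypothetical, not the paper's measured Tonga-Kermadec or DSDP site 204 values.

```python
# Illustrative two-component mixing mass balance for source enrichment.
# Concentrations (ppm) are hypothetical placeholders, not measured data.

def mix(c_mantle, c_melt, f_melt):
    """Concentration in a source containing mass fraction f_melt of
    sediment-derived partial melt."""
    return (1 - f_melt) * c_mantle + f_melt * c_melt

th_mantle, be_mantle = 0.01, 0.05  # hypothetical depleted mantle
th_melt, be_melt = 5.0, 2.0        # hypothetical sediment partial melt

for f in (0.002, 0.01, 0.02):      # 0.2-2% melt addition, as in the text
    ratio = mix(th_mantle, th_melt, f) / mix(be_mantle, be_melt, f)
    print(f, round(ratio, 3))
```

Because the melt end-member dominates the budget of highly incompatible elements even at sub-percent additions, the mixture's Th/Be moves steadily from the mantle value toward the melt value as f increases.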


Information and Communication Technologies in general, and the Internet in particular, have revolutionized the way we communicate, interact, produce, buy and sell, shortening times and distances between suppliers and consumers. The gradual penetration of computers, smartphones, and fixed and/or mobile broadband has been followed by greater use of these technologies by citizens and businesses. Business-to-consumer (B2C) e-commerce in Spain reached a volume of 9,114 million euros in 2010, an increase of 17.4% over the figure recorded in 2009. This growth has several drivers: an increase in the percentage of Internet users to 65.1% in 2010, of whom 43.1% purchased products or services online (1.6 percentage points more than in 2010). In addition, average spending per buyer rose to €831 in 2010, an increase of 10.9% over the previous year. Segmenting buyers by previous purchase experience yields two categories: the novice buyer, who purchased products or services online for the first time in 2010, and the repeat buyer, who purchased in 2010 and at least once in previous years. Of all buyers, 85.8% can be considered repeat buyers: they bought online in 2010 but had also done so before. The novice buyer's socio-demographic profile is that of a young person aged 15-24, with secondary education, of middle or lower-middle class, a non-university student, living in small towns, and still using payment methods such as cash on delivery (23.9%). Their average annual spending in 2010 was €449.
The repeat buyer, who had already bought online before, has a different demographic profile: higher education, upper class, employed, resident in large cities, with mature online purchasing behaviour given their greater experience, making heavier use of Internet-only channels that have no physical store. Their average spending is double that of novice buyers (an average of €930 per year). Repeat buyers therefore make up the majority of buyers, with average spending double that of buyers who have only recently adopted the medium. It is consequently of interest to study the factors that predict whether an Internet user will purchase a product or service online again. The answer to this question has not proved simple. In Spain, most products and services are still purchased in person, with only a low incidence of distance sales such as TV shopping, catalogue sales, or Internet sales. To answer the questions posed, the problem has been investigated from several angles: the work begins with a descriptive, demand-side study characterizing the state of B2C e-commerce in Spain, focusing on the differences between repeat buyers and new buyers. Subsequently, research on models of technology adoption and continued use, and on the factors that influence that continuity, with particular attention to B2C e-commerce, allows the problem to be approached through structural equation modelling, from which practical conclusions can also be drawn. This work follows the classic structure of scientific research: chapter 1 introduces the research topic, followed by a description of the state of B2C e-commerce in Spain based on official sources (chapter 2).
The theoretical framework and state of the art of models of technology adoption and use are then developed (chapter 3), together with the main factors influencing adoption and continued use of technologies (chapter 4). Chapter 5 develops the research hypotheses and sets out the theoretical models. The statistical techniques used are described in chapter 6, where the empirical results for the models developed in chapter 5 are also analysed. Chapter 7 presents the main conclusions of the research and its limitations, and proposes new lines of research. The first part corresponds to chapter 1, which introduces the research and justifies it from a theoretical and practical standpoint. It also gives a brief introduction to classical consumer behaviour theory, presents the main adoption models, and introduces the continuance-of-use models studied in more detail in chapter 3. This chapter sets out the main and secondary objectives, proposes the mind map of the research, and schedules the main milestones of the work in a timeline. The second part corresponds to chapters 2, 3, and 4. Chapter 2 describes B2C e-commerce in Spain using secondary sources, offering a diagnosis of the e-commerce sector and its degree of maturity in Spain. It then analyses the differences between repeat buyers, the main focus of this work, and novice buyers, highlighting differences in profiles and usage. For both segments, aspects such as place of access to purchasing, purchase frequency, payment methods used, and attitudes towards buying are studied.
Chapter 3 begins by developing the main concepts of consumer behaviour theory, then studies the main existing technology adoption models, paying particular attention to their application in e-commerce. It goes on to analyse models of continued technology use (Expectation-Confirmation Theory; Justice Theory), again with particular attention to their application in e-commerce. Having studied the main models of technology adoption and continued use, chapter 4 analyses the main factors used in these models: quality, value, factors based on expectation confirmation (satisfaction, perceived usefulness), and factors specific to special situations, for example after a complaint, such as justice, emotions, or trust. The third part, chapter 5, develops the research design and the sample selection for the models. The first part of the chapter states the hypotheses, moving from the general to the particular and using the specific factors analysed in chapter 4, for later study and validation in chapter 6 with the appropriate statistical techniques. From the hypotheses, and from the models and factors studied in chapters 3 and 4, two original theoretical models are defined and structured to address the research challenges posed in chapter 1. The second part of the chapter designs the empirical research, defining the following aspects: geographic and temporal scope, type of research, nature and setting of the research, primary and secondary sources used, data collection techniques, measurement instruments, and characteristics of the sample.
The results of the research constitute the fourth part and are developed in chapter 6, which begins by analysing statistical techniques based on structural equation models. Two alternatives are considered: confirmatory models corresponding to covariance-based methods, and predictive models; the predictive techniques are chosen, with justification, given the exploratory nature of the research. The second part of chapter 6 analyses the results of the measurement and structural models built with formative and reflective indicators and defined in chapter 4. To this end, the measurement models and then the structural models are validated in turn, taking into account the threshold values of the statistical parameters required for validation. The fifth part, chapter 7, develops the conclusions based on the results of chapter 6, analysing them in terms of theoretical and practical contributions and drawing conclusions for business management. The limitations of the research are then described, and new lines of study are proposed on various topics that emerged over the course of the work. Finally, the bibliography lists all the references used throughout this work. Keywords: repeat buyer, continuance-of-use models, continued use of technologies, e-commerce, B2C, technology adoption, technology adoption models, TAM, TPB, IDT, UTAUT, ECT, continuance intention, satisfaction, perceived trust, justice, emotions, expectation confirmation, quality, value, PLS.
ABSTRACT Information and Communication Technologies in general, and those related to the Internet in particular, have changed the way in which we communicate, relate to one another, produce, and buy and sell products, reducing the time and shortening the distance between suppliers and consumers. The steady breakthrough of computers, smartphones and landline and/or wireless broadband is clearly reflected in their large-scale use by both individuals and businesses. Business-to-consumer (B2C) e-commerce reached a volume of 9,114 million euros in Spain in 2010, a 17.4% increase with respect to the 2009 figure. This growth is due in part to two facts: on the one hand, the percentage of web users rose to 65.1% in 2010, 43.1% of whom had acquired products or services through the Internet, 1.6 percentage points more than in 2009. On the other hand, the average spending per individual buyer rose to 831€ in 2010, a 10.9% increase with respect to the previous year. If we classify buyers according to whether or not they had previously made some type of online purchase, we can divide them into two categories: the novice buyer, who made his or her first online purchase in 2010, and the experienced buyer, who made purchases in 2010 but had also done so previously. The socio-demographic profile of the novice buyer is that of a young person between 15 and 24 years of age, with secondary studies, middle to lower-middle class, a non-university student who resides in smaller towns and still uses payment methods such as cash on delivery (23.9%). In 2010, their average purchase grew to 449€.
The experienced buyer, who has previously made purchases online, has a different profile: highly educated, upper class, a resident of and worker in larger cities, who shows mature behavior when buying online thanks to that experience and who frequently uses Internet-only channels without a physical store. Experienced buyers constitute the majority of buyers, and their average purchase doubles that of the novice buyer (an average of 930€ annually). It is therefore of interest to study the factors that help to predict whether or not a web user will buy another product or use another service on the Internet. The answer to this question has proven not to be simple. In Spain, the majority of goods and services are still bought in person, with few purchases being made through channels such as teleshopping, catalogues or Internet sales. To answer the questions posed here, an investigation has been conducted from several viewpoints: it begins with a descriptive study, from the perspective of supply and demand, of the B2C e-commerce situation in Spain, focusing on the differences between experienced and novice buyers. Subsequently, technology acceptance and continuity-of-use models are investigated, together with the factors that affect continuity of use, with a special focus on B2C e-commerce; this allows a theoretical approach to the problem through structural equations, from which practical conclusions can be drawn.
This investigation follows the classic structure of a scientific investigation: the subject is introduced (Chapter 1); the state of B2C e-commerce in Spain is described citing official sources of information (Chapter 2); the theoretical framework and state of the art of technology acceptance and continuity models are developed (Chapter 3), together with the main factors that affect acceptance and continuity (Chapter 4). Chapter 5 states the hypotheses behind the investigation and poses the theoretical models that will be confirmed or rejected, partially or completely. In Chapter 6, the statistical techniques to be used are briefly described, followed by an analysis of the empirical results of the models put forth in Chapter 5. Chapter 7 presents the main conclusions of the investigation and its limitations, and proposes new projects. The first part of the project, Chapter 1, introduces the investigation, justifying it from a theoretical and practical point of view, and gives a brief introduction to consumer behavior theory from a standard perspective. Technology acceptance models are presented, and then continuity and repurchase models are introduced; both are studied in more depth in Chapter 3. In this chapter, the main and secondary objectives are developed through a mind map and a timetable that highlights the milestones of the project. The second part of the project corresponds to Chapters 2, 3 and 4. Chapter 2 describes B2C e-commerce in Spain from the perspective of its demand, citing secondary official sources. A diagnosis of the e-commerce sector and the status of its maturity in Spain is carried out, along with the barriers to and alternative forms of e-commerce. Subsequently, the differences between experienced buyers, who are of particular interest to this project, and novice buyers are analyzed, highlighting the differences between their profiles and their main transactions.
In order to study both groups, aspects such as the place of purchase, the frequency of online purchases, the payment methods used and the purchasers' attitudes towards buying online are taken into consideration. Chapter 3 begins by developing the main concepts of consumer behavior theory and then studies the main existing acceptance models (among others, TPB, TAM, IDT, UTAUT and models derived from them), paying special attention to their application in e-commerce. Subsequently, the models of continued technology use are analyzed (CDT, ECT; Theory of Justice), focusing again on their application in e-commerce. Once the main technology acceptance and continuity models have been studied, Chapter 4 analyzes the main factors used in these models: quality, value, factors based on the confirmation of expectations, such as satisfaction and perceived usefulness, and factors specific to special situations (for example, after a complaint), such as justice, emotions or trust. The third part, Chapter 5, develops the research design and the sample selection for the models. In the first section of the chapter, the hypotheses are presented, proceeding from the general to the specific using the factors analyzed in Chapter 4, for their later study and validation in Chapter 6 using the appropriate statistical techniques. Based on the hypotheses and on the models and factors studied in Chapters 3 and 4, two original theoretical models are defined and organized in order to answer the questions posed in Chapter 1.
In the second part of the chapter, the empirical investigation is designed, defining the following aspects: geographic and temporal scope, type of investigation, nature and setting of the investigation, primary and secondary sources used, data gathering methods, measurement instruments used and characteristics of the sample. The results of the project constitute the fourth part of the investigation and are developed in Chapter 6, which begins by analyzing the statistical techniques based on Structural Equation Models. Two alternatives are put forth: confirmatory models, corresponding to Methods Based on Covariance (MBC), and predictive, component-based models. The predictive techniques are chosen, with justification, given the exploratory nature of the investigation. The second part of Chapter 6 presents the results of the analysis of the measurement models and structural models built with the formative and reflective indicators defined in Chapter 4. To do so, the measurement models and the structural models are validated one by one, keeping in mind the threshold values of the statistical parameters required for their validation. The fifth part corresponds to Chapter 7, which presents the conclusions of the study, basing them on the results found in Chapter 6 and analyzing them from the perspective of their theoretical and practical contributions, thereby obtaining conclusions for business management. The limitations of the investigation are then described, and new research lines on various topics that arose during the project are proposed. Lastly, all references used during the project are listed in a final bibliography.
Key Words: constant buyer, repurchase models, continuity of technology use, e-commerce, B2C, technology acceptance, technology acceptance models, TAM, TPB, IDT, UTAUT, ECT, repurchase intention, satisfaction, perceived trust, justice, emotions, expectation confirmation, quality, value, PLS.
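The component-based (PLS-style) approach the abstract favors can be illustrated with a minimal sketch: reflective indicators are collapsed into standardized composite scores, and the inner (structural) path coefficient is then estimated by regression between composites. This is a deliberate simplification, not the thesis's actual models: the construct names (satisfaction, continuance intention), the loadings and the equal outer weights are all illustrative assumptions, and real PLS estimates the outer weights iteratively.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical constructs (illustrative, not the thesis data):
# satisfaction drives continuance intention; each is measured by
# three reflective indicators.
satisfaction = rng.normal(size=n)
intention = 0.6 * satisfaction + rng.normal(scale=0.8, size=n)

def reflective_block(latent, loadings, noise=0.5):
    """Each indicator column = loading * latent + measurement noise."""
    return np.column_stack([w * latent + rng.normal(scale=noise, size=latent.size)
                            for w in loadings])

X_sat = reflective_block(satisfaction, [0.9, 0.8, 0.7])
X_int = reflective_block(intention, [0.9, 0.8, 0.7])

def composite(X):
    """Unit-variance composite with equal outer weights (a crude stand-in
    for the iteratively estimated PLS outer weights)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    score = Z.mean(axis=1)
    return score / score.std()

s_sat, s_int = composite(X_sat), composite(X_int)

# Inner (structural) model: the path coefficient is the correlation of the
# standardized composites; measurement error attenuates it below the true 0.6.
beta = float(s_sat @ s_int) / n
print(f"estimated path coefficient: {beta:.2f}")
```

Note how the estimate lands below the true structural coefficient: composites carry measurement error, which is exactly the kind of issue the measurement-model validation step (threshold checks on loadings and reliability) is meant to catch before the structural model is interpreted.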

Relevance:

80.00%

Publisher:

Abstract:

Structural dynamics studies the response of a structure to loads or phenomena that vary over time. In many cases, these phenomena require parametric analyses of the structure considering a large number of design configurations or structural modifications. These changes, whether in early design stages or in later redesign stages, alter the physical properties of the structure and therefore of the model used for its analysis, whose dynamic behavior changes accordingly. One case study involving this kind of modification is structural health monitoring, which seeks to identify the presence of structural damage and to predict the behavior of the structure after that damage: for example, the change in dynamic behavior due to a delamination, the onset or growth of a crack, a blade loss suffered by an aircraft engine in flight, or the dynamic response of civil structures such as bridges or buildings under seismic loads. When, in addition to the complexity of the dynamic analyses required for large structures, certain parameters are varied in search of a given dynamic response or to simulate damage, it becomes necessary to find ways of simplifying or accelerating the set of analyses, which would otherwise be intractable both in computation time and in the capacity required to store and handle large volumes of data files. This doctoral thesis reviews the most common finite element reduction methods for the dynamic analysis of large structures. For the type of structures and modifications described, the results of case studies using the most suitable of these methods are compared with those obtained by applying a recently proposed reduction method.
Among the former are Guyan static condensation, extended to the case of non-proportional damping, and later implementations of dynamic condensation in different vector spaces. The recently proposed reduction method is referred to in this thesis as DACMAM (Dynamic Analysis in Complex Modal space Acceleration Method); it is a simplified analysis that provides a solution for the dynamic response of a structure, computed in the complex modal space, and that admits structural modifications. DACMAM allows the selection of a reduced number of degrees of freedom relevant to the dynamics of the phenomenon under study, such as the load application points, the locations of the structural changes, or the points where the response is of interest, so that when structural modifications are implemented, the necessary analyses are run only for those degrees of freedom, without loss of accuracy. The method can handle changes in mass, stiffness and damping, as well as the addition of new degrees of freedom. Given the size of the resulting system of equations, the parameterization of the analyses is not only feasible but also manageable and controllable, thanks to the straightforward implementation of the procedure in standard finite element codes. This work demonstrates the quality and efficiency of the method in comparison with some of the reduction methods for large structural models, verifying the differences between their results and with respect to the true response of the structure, and assessing the resources they require in both execution time and electronic file size. The influence of the various factors considered makes it possible to identify the limits and applicability of the method through its exhaustive comparison with the other procedures.
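The Guyan static condensation mentioned above is easy to demonstrate on a toy model. The sketch below, with an illustrative 6-dof spring-mass chain and an arbitrary choice of master dofs (neither taken from the thesis), builds the transformation u = T u_m in which the slave dofs follow the masters statically, reduces K and M by congruence, and checks that the lowest natural frequency is nearly unchanged.

```python
import numpy as np
from scipy.linalg import eigh

# Toy model: a chain of 6 equal masses and springs, fixed at one end
# (values and dof selection are illustrative).
n, k, m = 6, 1000.0, 1.0
K = np.zeros((n, n))
for i in range(n):
    K[i, i] += k                      # spring to the previous node (or ground)
    if i + 1 < n:
        K[i, i] += k                  # spring to the next node
        K[i, i + 1] = K[i + 1, i] = -k
M = m * np.eye(n)

# Partition into master (retained) and slave (condensed) dofs.
masters = [1, 3, 5]
slaves = [i for i in range(n) if i not in masters]
order = masters + slaves
Kp = K[np.ix_(order, order)]
Mp = M[np.ix_(order, order)]
nm = len(masters)

# Guyan transformation: slaves follow the masters statically,
# u_s = -Kss^-1 Ksm u_m, so the full vector is u = T u_m.
Kss = Kp[nm:, nm:]
Ksm = Kp[nm:, :nm]
T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])
Kr = T.T @ Kp @ T
Mr = T.T @ Mp @ T

w2_full = eigh(K, M, eigvals_only=True)    # full-model eigenvalues (omega^2)
w2_red = eigh(Kr, Mr, eigvals_only=True)   # reduced-model eigenvalues
print("first natural frequency, full vs Guyan:",
      np.sqrt(w2_full[0]), np.sqrt(w2_red[0]))
```

Static condensation is exact at zero frequency, so the lowest modes are reproduced accurately while the higher ones degrade; this is precisely the limitation that the dynamic condensation methods, and the complex-modal approach of DACMAM, aim to overcome.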
ABSTRACT Structural dynamics studies the response of a structure under loads or phenomena which vary over time. In many cases, these phenomena require parametric analyses taking into consideration several design configurations or modifications of the structure. This is a typical need in an engineering office, whether the structural design is in an early or a final stage. These changes modify the physical properties of the structure and, therefore, the finite element model used to analyse it. A case study that exemplifies this circumstance is structural health monitoring, which aims to predict the variation of the dynamic behaviour after damage, such as a delaminated structure, a crack onset or growth, an aircraft that suffers a blade loss event, or civil structures (buildings or bridges) under seismic loads. Large structures require complex analyses to obtain an accurate solution, and the variation of certain parameters further increases the cost. There is thus a need to simplify the analytical process in order to keep CPU time, data files and the management of solutions within a reasonable size. In the current doctoral thesis, the most common finite element reduction methods for large structures are reviewed. Results of case studies are compared between a recently proposed method, herein named DACMAM (Dynamic Analysis in Complex Modal space Acceleration Method), and different condensation methods, namely static or Guyan condensation and dynamic condensation in different vector spaces. All these methods are suitable for considering non-classical damping. The reduction method DACMAM consists of a structural modification in the complex modal domain which provides a dynamic response solution for the reduced models. This process allows the selection of a few degrees of freedom that are relevant for the dynamic response of the system: the load application points, relevant structural points, or points at which it is important to know the response.
Consequently, an analysis with structural modifications implies only the calculation of the dynamic response of the selected degrees of freedom, with no loss of information. Mass, stiffness or damping modifications are easily considered, as are new degrees of freedom. Given the size of the equations to be solved, the parameterization of the dynamic solutions is not only possible but also manageable and controllable, due to the easy implementation of the procedure in standard finite element solvers. In this thesis, the proposed reduction method for large structural models is compared with other published model order reduction methods. The comparison shows and underlines the efficiency of the new method, and verifies the differences in the response when compared with the response of the full model. The CPU time, the data files and the scope of the parameterization are also addressed.
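The abstract places DACMAM in the complex modal space with non-classical damping but does not give the algorithm, so the following shows only the standard preliminary step shared by such methods: casting M x'' + C x' + K x = f into first-order (state-space) form and extracting the complex eigenvalues, from which natural frequencies and modal damping ratios follow. The 2-dof system and its matrices are invented for illustration.

```python
import numpy as np

# Illustrative 2-dof system with non-proportional (non-classical) damping:
# C is not a linear combination of M and K, so the real undamped modes do
# not decouple the equations and complex modes are required.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])
C = np.array([[2.0, 0.0],
              [0.0, 0.1]])

# First-order (state-space) form z' = S z with z = [x, x'].
n = M.shape[0]
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

# Complex poles come in conjugate pairs lambda = -zeta*wn +/- i*wd.
lam = np.linalg.eig(S)[0]
wn = np.abs(lam)            # undamped-equivalent natural frequencies
zeta = -lam.real / wn       # modal damping ratios
for l, w, z in sorted(zip(lam, wn, zeta), key=lambda t: t[1]):
    print(f"pole {l:.3f}  wn = {w:.2f} rad/s  zeta = {z:.4f}")
```

A reduction method working in this complex modal space keeps only the eigenpairs (and, as in DACMAM, the degrees of freedom) relevant to the response of interest, instead of carrying the full state-space problem through every parametric re-analysis.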