184 results for Bilinear pairings.
Abstract:
Explosions affecting buildings, whether accidental or intentional, are infrequent, but their effects can be catastrophic. It is desirable to be able to predict with sufficient accuracy the consequences of these dynamic actions on civil buildings, among which reinforced concrete frame structures are a common typology. This doctoral thesis explores different practical options for the numerical modeling and computation of reinforced concrete structures subjected to explosions. Finite element models with explicit time integration are employed, which demonstrate their effective capacity to simulate the fast-dynamic, highly nonlinear physical and structural phenomena involved, predicting the damage caused both by the explosion itself and by the possible progressive collapse of the structure. The work has been carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), in which several types of calculation model were developed, classified into two main types: 1) models based on continuum finite elements, in which the continuous medium is discretized directly through nodal displacement degrees of freedom; 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for linear or surface members. These models are developed and discussed at three different levels: 1) material behaviour; 2) the response of structural elements such as columns, beams, and slabs; and 3) the response of complete buildings or significant parts of them.
Very detailed 3D continuum finite element models are developed in which the plain concrete and the reinforcing steel are modeled separately. Concrete is represented with the CSCM constitutive model (Murray et al., 2007), which exhibits inelastic behaviour with different responses in tension and compression, hardening, cracking and crushing damage, and failure. Steel is represented with a bilinear elastic-plastic constitutive model with failure. The actual geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements placed at its exact position within the concrete mass. The mesh is built by superimposing the concrete continuum elements and the beam elements of the segregated reinforcement, which are forced to follow the deformation of the solid at each point by a penalty algorithm, thus reproducing the behaviour of reinforced concrete. In this work these models are referred to simply as continuum FE models. With these continuum FE models, the structural response of construction elements (columns, slabs, and frames) under explosive actions is analyzed. The models have also been compared against experimental results from tests on beams and slabs with different explosive charges, showing acceptable agreement and allowing calibration of the model parameters. However, such detailed models are not recommended for analyzing complete buildings, since the large number of finite elements required raises their computational cost to the point of being unviable with current computing resources.
In addition, structural finite element models (beams and shells) are developed which, at a reduced computational cost, are able to reproduce the global behaviour of the structure with similar accuracy. Plain concrete and reinforcing steel are again modeled separately. Concrete is represented with the EC2 constitutive model (Hallquist et al., 2013), which also exhibits inelastic behaviour with different tension and compression responses, hardening, cracking and crushing damage, and failure, and is used with shell-type finite elements. Steel is again represented with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and reinforcement is modeled, taking into account the relative position of the steel within the concrete mass. The two meshes are joined through common nodes, producing a joint response. In this work these models are referred to simply as structural FE models. With these structural FE models, the same construction elements are simulated as with the continuum FE models, and by comparing their structural responses under blast loading the former are calibrated, obtaining similar structural behaviour at a reduced computational cost. It is verified that both the continuum FE models and the structural FE models are also accurate for analyzing progressive collapse, and that they can be used to study simultaneously the damage caused by an explosion and the subsequent collapse. For this purpose, formulations are included that account for self-weight, live loads, and contact between parts of the structure. Both models are validated against a full-scale test in which a module with six columns and two floors collapses after the removal of one of its columns.
The computational cost of the continuum FE model for the simulation of this test is much higher than that of the structural FE model, which makes its application to complete buildings unviable, whereas the structural FE model yields a sufficiently accurate global response at an affordable cost. Finally, the structural FE models are used to analyze explosions on multi-story buildings, and two blast scenarios are simulated for a complete building at a moderate computational cost.
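The bilinear elastic-plastic steel law with failure used in both model families can be sketched as a one-dimensional stress-strain function. The yield stress, hardening modulus, and failure strain below are illustrative placeholders, not values from the thesis, and rupture (element erosion in the FE code) is reduced here to returning None:

```python
def bilinear_steel_stress(strain, E=200e9, fy=500e6, Eh=2e9, eps_fail=0.10):
    """Uniaxial bilinear elastic-plastic law with failure (illustrative values).

    Linear elastic up to the yield strain fy/E, then linear hardening
    with modulus Eh; past the failure strain the material has ruptured.
    """
    eps_y = fy / E
    e = abs(strain)
    if e >= eps_fail:
        return None                     # rupture: element would be eroded
    if e <= eps_y:
        sigma = E * e                   # elastic branch
    else:
        sigma = fy + Eh * (e - eps_y)   # hardening branch
    return sigma if strain >= 0 else -sigma

# Stress at 1% tensile strain for this illustrative 500 MPa steel
s = bilinear_steel_stress(0.01)
```

In an explicit FE code this law would be evaluated at every beam integration point per time step; the version here only illustrates the shape of the response.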
Abstract:
In line with the trend toward "sustainability" that now extends to all fields of science, an area of study has emerged around the inevitable deterioration of existing structures and the management of the actions required to maintain the service conditions of bridges and extend their useful life. Investment patterns in advanced countries with a long tradition of infrastructure development clearly show the new framework we are heading toward: spending shifts increasingly toward conservation and maintenance, while budget allocations for new construction shrink, since the territorial network in these countries, Spain among them, is essentially complete. This large stock of road infrastructure, which in turn contains a substantial number of structures, makes the management and maintenance of its bridges a necessity. Under these premises, the thesis addresses the state of development and implementation of bridge management systems, current trends and fields still to be developed, and the specific application to road networks with scarce resources beyond the national network. Besides analyzing the various methodologies for building inventories, carrying out inspections, and assessing bridge condition, the main objective has been the development of a specific system for deterioration prediction and decision support. In addition to the traditional criteria for building databases of structures and inspections, this system proposes, with justification, a ranking of the whole managed network according to condition state.
This enables, through optimization techniques, sound decision making by the engineers in charge of the network. Among the various methods for predicting the evolution of deterioration of each bridge, a simplified bilinear method enveloping the empirical fit, together with Markov models, is proposed as the most effective solution for analyzing the prediction of damage propagation. All of this exploits the experimental campaign carried out, which, from a series of "technical snapshots" of the state of the managed bridge network obtained through inspections, is able to improve the usual decision-making process. The theoretical basis presented in the document is complemented by the implementation of a specific Bridge Management System (BMS), adapted to the needs and limitations of the administration to which it has been applied, namely the General Highways Directorate of the Junta de Comunidades de Castilla-La Mancha, for a representative sample of the bridges in the network of the province of Albacete, starting from a situation in which no formal bridge management system currently exists. After a careful analysis of the state of the art in Chapters 2 and 3, a deterioration prediction model is proposed in Chapter 4, "Deterioration Prediction Model". Likewise, to solve the optimization problem, Chapter 5 justifies the use of a novel sequential optimization approach, "Evolutionary Algorithms" in their different variants, as the most suitable mathematical tool for properly distributing the economic resources for maintenance and conservation that this administration can allocate in its medium-term budgets.
Chapter 6, together with several appendices, presents the data and results obtained from the specific application developed for the analyzed local network, using the deterioration model and sequential optimization, which guarantees the correct allocation of the scarce resources available to the regional networks in Spain. The implementation of these systems in the Spanish secondary network is of particular interest, since it has recently been given greater management responsibility with increasingly limited resources. Finally, Chapter 7 presents a series of conclusions that invite reflection on the need, in infrastructure management, to move from theoretical studies and conferences toward application and practice, an approach that should lead to important changes in how the engineer's work is conceived and in the teaching given in engineering schools. The original contributions of the document with respect to the current state of the art are also listed, as are the lines of research on Bridge Management Systems that may help refine and improve the systems currently in use.
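The Markovian deterioration prediction used in systems like the one described can be sketched as a discrete condition-state transition matrix propagated over annual inspection cycles. The five states and all transition probabilities below are illustrative assumptions, not the calibrated values of the thesis:

```python
# Illustrative 5-state bridge condition model (1 = good ... 5 = poor).
# Each row gives the probabilities of staying in, or degrading from,
# that state after one year; values are assumptions, not real data.
P = [
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
]

def step(dist, P):
    """One annual transition of a condition-state distribution."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def predict_condition(dist, years):
    """Distribution over condition states after `years` annual steps."""
    for _ in range(years):
        dist = step(dist, P)
    return dist

dist10 = predict_condition([1.0, 0, 0, 0, 0], 10)   # bridge starting in state 1
expected_state = sum((i + 1) * p for i, p in enumerate(dist10))
```

An optimizer (here, the thesis's evolutionary algorithms) would then search over maintenance actions that reset or slow these transitions subject to the budget.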
Abstract:
Praying mantids use binocular cues to judge whether their prey is in striking distance. When there are several moving targets within their binocular visual field, mantids need to solve the correspondence problem. They must select between the possible pairings of retinal images in the two eyes so that they can strike at a single real target. In this study, mantids were presented with two targets in various configurations, and the resulting fixating saccades that precede the strike were analyzed. The distributions of saccades show that mantids consistently prefer one out of several possible matches. Selection is in part guided by the position and the spatiotemporal features of the target image in each eye. Selection also depends upon the binocular disparity of the images, suggesting that insects can perform local binocular computations. The pairing rules ensure that mantids tend to aim at real targets and not at “ghost” targets arising from false matches.
Abstract:
Several regulators of G protein signaling (RGS) proteins contain a G protein γ-subunit-like (GGL) domain, which, as we have shown, binds to Gβ5 subunits. Here, we extend our original findings by describing another GGL-domain-containing RGS, human RGS6. When RGS6 is coexpressed with different Gβ subunits, only RGS6 and Gβ5 interact. The expression of mRNA for RGS6 and Gβ5 in human tissues overlaps. Predictions of α-helical and coiled-coil character within GGL domains, coupled with measurements of Gβ binding by GGL domain mutants, support the contention that Gγ-like regions within RGS proteins interact with Gβ5 subunits in a fashion comparable to conventional Gβ/Gγ pairings. Mutation of the highly conserved Phe-61 residue of Gγ2 to tryptophan, the residue present in all GGL domains, increases the stability of the Gβ5/Gγ2 heterodimer, highlighting the importance of this residue to GGL/Gβ5 association.
Abstract:
We present new methods for identifying and analyzing statistically significant residue clusters that occur in three-dimensional (3D) protein structures. Residue clusters of different kinds occur in many contexts. They often feature the active site (e.g., in substrate binding), the interface between polypeptide units of protein complexes, regions of protein-protein and protein-nucleic acid interactions, or regions of metal ion coordination. The methods are illustrated with 3D clusters centering on four themes. (i) Acidic or histidine-acidic clusters associated with metal ions. (ii) Cysteine clusters including coordination of metals such as zinc or iron-sulfur structures, cysteine knots prominent in growth factors, multiple sets of buried disulfide pairings that putatively nucleate the hydrophobic core, or cysteine clusters of mostly exposed disulfide bridges. (iii) Iron-sulfur proteins and charge clusters. (iv) 3D environments of multiple histidine residues. Study of diverse 3D residue clusters offers a new perspective on protein structure and function. The algorithms can aid in rapid identification of distinctive sites, suggest correlations among protein structures, and serve as a tool in the analysis of new structures.
Abstract:
Structurally neighboring residues are categorized according to their separation in the primary sequence as proximal (1-4 positions apart) and otherwise distal, which in turn is divided into near (5-20 positions), far (21-50 positions), very far (>50 positions), and interchain (from different chains of the same structure). These categories describe the linear distance histogram (LDH) for three-dimensional neighboring residue types. Among the main results are the following: (i) nearest-neighbor hydrophobic residues tend to be increasingly distally separated in the linear sequence, thus most often connecting distinct secondary structure units. (ii) The LDHs of oppositely charged nearest-neighbors emphasize proximal positions with a subsidiary maximum for very far positions. (iii) Cysteine-cysteine structural interactions rarely involve proximal positions. (iv) The greatest numbers of interchain specific nearest-neighbors in protein structures are composed of oppositely charged residues. (v) The largest fraction of side-chain neighboring residues from beta-strands involves near positions, emphasizing associations between consecutive strands. (vi) Exposed residue pairs are predominantly located in proximal linear positions, while buried residue pairs principally correspond to far or very far distal positions. The results are principally invariant to protein sizes, amino acid usages, linear distance normalizations, and over- and underrepresentations among nearest-neighbor types. Interpretations and hypotheses concerning the LDHs, particularly those of hydrophobic and charged pairings, are discussed with respect to protein stability and functionality. The pronounced occurrence of oppositely charged interchain contacts is consistent with many observations on protein complexes where multichain stabilization is facilitated by electrostatic interactions.
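The separation categories defined in this abstract map directly onto a small classification rule; a sketch using the stated thresholds:

```python
def ldh_category(pos_i, pos_j, chain_i="A", chain_j="A"):
    """Classify a structurally neighboring residue pair by its separation
    in the primary sequence: proximal (1-4), near (5-20), far (21-50),
    very far (>50), or interchain (different chains of the structure)."""
    if chain_i != chain_j:
        return "interchain"
    sep = abs(pos_i - pos_j)
    if sep <= 4:
        return "proximal"
    if sep <= 20:
        return "near"       # distal: 5-20 positions
    if sep <= 50:
        return "far"        # distal: 21-50 positions
    return "very far"       # distal: >50 positions
```

Tallying these labels over all spatial nearest-neighbor pairs of a given residue-type pairing yields the linear distance histogram (LDH) analyzed in the paper.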
Abstract:
Purpose. Mice rendered hypoglycemic by a null mutation in the glucagon receptor gene Gcgr display late-onset retinal degeneration and loss of retinal sensitivity. Acute hyperglycemia induced by dextrose ingestion does not restore their retinal function, which is consistent with irreversible loss of vision. The goal of this study was to establish whether long-term administration of high dietary glucose rescues retinal function and circuit connectivity in aged Gcgr−/− mice. Methods. Gcgr−/− mice were administered a carbohydrate-rich diet starting at 12 months of age. After 1 month of treatment, retinal function and structure were evaluated using electroretinographic (ERG) recordings and immunohistochemistry. Results. Treatment with a carbohydrate-rich diet raised blood glucose levels and improved retinal function in Gcgr−/− mice. Blood glucose increased from moderate hypoglycemia to euglycemic levels, whereas ERG b-wave sensitivity improved approximately 10-fold. Because the b-wave reflects the electrical activity of second-order cells, we examined rod-to-bipolar cell synapses for changes. Gcgr−/− retinas have 20% fewer synaptic pairings than Gcgr+/− retinas. Remarkably, most of the lost synapses were located farthest from the bipolar cell body, near the distal boundary of the outer plexiform layer (OPL), suggesting that apical synapses are most vulnerable to chronic hypoglycemia. Although treatment with the carbohydrate-rich diet restored retinal function, it did not restore these synaptic contacts. Conclusions. Prolonged exposure to diet-induced euglycemia improves retinal function but does not reestablish synaptic contacts lost by chronic hypoglycemia. These results suggest that retinal neurons have a homeostatic mechanism that integrates energetic status over prolonged periods of time and allows them to recover functionality despite synaptic loss.
Abstract:
Identity-Based Cryptography makes use of elliptic curves that satisfy certain conditions (pairing-friendly curves); in particular, the embedding degree of these curves must be small. In this work, explicit families of elliptic curves suitable for this setting are obtained. This kind of cryptography is based on the computation of pairings on curves, a computation made feasible by Miller's algorithm. We propose a more efficient version of this algorithm than the classical one, using the non-adjacent form (NAF) representation of an integer.
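The non-adjacent form mentioned here is a signed-digit recoding with digits in {-1, 0, 1} and no two adjacent nonzero digits; its advantage in Miller's algorithm is that it has fewer nonzero digits on average than the binary expansion, so fewer addition steps are needed. A minimal sketch of the standard recoding loop:

```python
def naf(n):
    """Non-adjacent form of a positive integer n: digits in {-1, 0, 1},
    least significant first, with no two adjacent nonzero digits."""
    digits = []
    while n > 0:
        if n % 2 == 1:
            d = 2 - (n % 4)   # 1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 7 = 8 - 1, so its NAF is [-1, 0, 0, 1] (least significant digit first)
```

A NAF-based Miller loop then performs a doubling step per digit and an addition (or subtraction) step only at the sparse nonzero digits.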
Abstract:
Since the beginning of 3D computer vision, techniques that reduce the data to a tractable size while preserving the important aspects of the scene have been necessary. With the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is becoming even more relevant. Many applications that use these sensors need a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., reducing noise or enhancing important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color- and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained with a purely homogeneous sampling.
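As a point of reference for the homogeneous strategies discussed (the bilinear- and GNG-based methods), a minimal grid-based point cloud downsampling can be sketched as follows. This is an illustrative stand-in, not one of the five methods compared in the paper, and the leaf size is arbitrary; feature-based methods would instead keep more points where normals or colors vary:

```python
def voxel_downsample(points, leaf=0.05):
    """Keep one representative point (the centroid) per cubic cell of
    side `leaf`, yielding a roughly homogeneous point density."""
    cells = {}
    for p in points:
        key = tuple(int(c // leaf) for c in p)   # integer cell coordinates
        cells.setdefault(key, []).append(p)
    return [
        tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
        for pts in cells.values()
    ]

# Two nearby points collapse into one centroid; the distant point survives.
cloud = [(0.01, 0.0, 0.0), (0.02, 0.0, 0.0), (1.0, 1.0, 1.0)]
reduced = voxel_downsample(cloud, leaf=0.1)
```

Running such a filter per frame is what makes a 30 fps RGB-D stream tractable for downstream tasks like the non-rigid registration evaluated in the paper.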
Abstract:
Hydrophobins are small (~100 aa) proteins that have an important role in the growth and development of mycelial fungi. They are surface active and, after secretion by the fungi, self-assemble into amphipathic membranes at hydrophobic/hydrophilic interfaces, reversing the hydrophobicity of the surface. In this study, molecular dynamics simulation techniques have been used to model the process by which a specific class I hydrophobin, SC3, binds to a range of hydrophobic/hydrophilic interfaces. The structure of SC3 used in this investigation was modeled based on the crystal structure of the class II hydrophobin HFBII, using the assumption that the disulfide pairings of the eight conserved cysteine residues are maintained. The proposed model for SC3 in aqueous solution is compact and globular, containing primarily β-strand and coil structures. The behavior of this model of SC3 was investigated at an air/water, an oil/water, and a hydrophobic solid/water interface. It was found that SC3 preferentially binds to the interfaces via the loop region between the third and fourth cysteine residues and that binding is associated with an increase in α-helix formation, in qualitative agreement with experiment. Based on a combination of the available experimental data and the current simulation studies, we propose a possible model for SC3 self-assembly on a hydrophobic solid/water interface.
Abstract:
Corporate sponsorship of events contributes significantly to marketing aims, including brand awareness as measured by recall and recognition of sponsor-event pairings. Unfortunately, resultant advantages accrue disproportionately to brands having a natural or congruent fit with the available sponsorship properties. In three cued-recall experiments, the effect of articulation of sponsorship fit on memory for sponsor-event pairings is examined. While congruent sponsors have a natural memory advantage, results demonstrate that memory improvements via articulation are possible for incongruent sponsor-event pairings. These improvements are, however, affected by the presence of competitor brands and the way in which memory is accessed.
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems, by use of digital signal processing techniques, are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
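The numerical inversion described above can be sketched briefly: sampling a probability generating function on the unit circle and applying a discrete Fourier transform recovers the queue-length probabilities. This is a generic DSP-style inversion, not the thesis's specific method, and the M/M/1 generating function used to exercise it below is a stand-in example, not the derived M/G^Y/1 transform.

```python
import numpy as np

def invert_pgf(pgf, n_terms=64):
    """Recover p_0 .. p_{n_terms-1} from a probability generating
    function G(z) = sum_n p_n z^n by sampling G on the unit circle
    and applying a DFT. Accurate when the tail beyond n_terms is
    negligible (otherwise aliasing folds it back in)."""
    z = np.exp(2j * np.pi * np.arange(n_terms) / n_terms)
    samples = pgf(z)
    # G(z_k) is an inverse DFT of p_n, so a forward FFT inverts it.
    return np.fft.fft(samples).real / n_terms

# Stand-in example: M/M/1 queue-length pgf with utilisation rho = 0.5,
# whose exact distribution is p_n = (1 - rho) * rho**n.
p = invert_pgf(lambda z: 0.5 / (1.0 - 0.5 * z))
```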
Abstract:
Over the full visual field, contrast sensitivity is fairly well described by a linear decline in log sensitivity as a function of eccentricity (expressed in grating cycles). However, many psychophysical studies of spatial visual function concentrate on the central ±4.5 deg (or so) of the visual field. As the details of the variation in sensitivity have not been well documented in this region we did so for small patches of target contrast at several spatial frequencies (0.7–4 c/deg), meridians (horizontal, vertical, and oblique), orientations (horizontal, vertical, and oblique), and eccentricities (0–18 cycles). To reduce the potential effects of stimulus uncertainty, circular markers surrounded the targets. Our analysis shows that the decline in binocular log sensitivity within the central visual field is bilinear: The initial decline is steep, whereas the later decline is shallow and much closer to the classical results. The bilinear decline was approximately symmetrical in the horizontal meridian and declined most steeply in the superior visual field. Further analyses showed our results to be scale-invariant and that this property could not be predicted from cone densities. We used the results from the cardinal meridians to radially interpolate an attenuation surface with the shape of a witch's hat that provided good predictions for the results from the oblique meridians. The witch's hat provides a convenient starting point from which to build models of contrast sensitivity, including those designed to investigate signal summation and neuronal convergence of the image contrast signal. Finally, we provide Matlab code for constructing the witch's hat.
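The bilinear decline in log sensitivity can be illustrated with a short sketch (in Python rather than the authors' Matlab). The knee location and the two slopes are illustrative placeholders, not the paper's fitted estimates:

```python
import numpy as np

def bilinear_log_attenuation(ecc_cycles, knee=2.0, steep=0.2, shallow=0.05):
    """Bilinear attenuation of log contrast sensitivity as a function
    of eccentricity expressed in grating cycles: a steep initial
    decline up to the knee, then a shallower decline beyond it.
    All parameter values are hypothetical placeholders."""
    e = np.asarray(ecc_cycles, dtype=float)
    return np.where(e <= knee,
                    steep * e,
                    steep * knee + shallow * (e - knee))
```

Revolving such a profile around fixation (interpolating radially between meridians, as the abstract describes) yields the "witch's hat" attenuation surface.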
Abstract:
The research presented in this paper is part of an ongoing investigation into how best to incorporate speech-based input within mobile data collection applications. In our previous work [1], we evaluated the ability of a single speech recognition engine to support accurate, mobile, speech-based data input. Here, we build on our previous research to compare the achievable speaker-independent accuracy rates of a variety of speech recognition engines; we also consider the relative effectiveness of different speech recognition engine and microphone pairings in terms of their ability to support accurate text entry under realistic mobile conditions of use. Our intent is to provide some initial empirical data derived from mobile, user-based evaluations to support technological decisions faced by developers of mobile applications that would benefit from, or require, speech-based data entry facilities.
Abstract:
The visual system dissects the retinal image into millions of local analyses along numerous visual dimensions. However, our perceptions of the world are not fragmentary, so further processes must be involved in stitching it all back together. Simply summing up the responses would not work because this would convey an increase in image contrast with an increase in the number of mechanisms stimulated. Here, we consider a generic model of signal combination and counter-suppression designed to address this problem. The model is derived and tested for simple stimulus pairings (e.g. A + B), but is readily extended over multiple analysers. The model can account for nonlinear contrast transduction, dilution masking, and signal combination at threshold and above. It also predicts nonmonotonic psychometric functions where sensitivity to signal A in the presence of pedestal B first declines with increasing signal strength (paradoxically dropping below 50% correct in two-interval forced choice), but then rises back up again, producing a contour that follows the wings and neck of a swan. We looked for and found these "swan" functions in four different stimulus dimensions (ocularity, space, orientation, and time), providing some support for our proposal.