930 results for Spatial Query Processing And Optimization


Relevance:

100.00%

Publisher:

Abstract:

In recent years, the trend in the telecommunications sector has been an increase and diversification in the transmission of voice, video and, above all, data. To reach the required transmission rates, the new communication standards demand a larger bandwidth and have a higher peak-to-average ratio, which contributes to the low efficiency of the radio-frequency power amplifier (RFPA). Another factor behind the low efficiency is the design of the RF amplifier itself. Traditionally, linear amplifiers have been used because of their good performance; however, given the high peak-to-average ratio of the transmitted signals, the efficiency of this type of amplifier is low. The low efficiency of the system brings additional disadvantages, such as higher cost and a larger cooling system in the case of a base station, or shorter operating time and higher equipment temperature for battery-powered portable systems. For these reasons, several solutions to increase RFPA efficiency have been developed over the last decades, such as the Outphasing technique, power combiners or the Doherty technique. These solutions improve RFPA performance and in some cases have been widely used commercially, as with the Doherty technique, which reaches efficiencies of up to 50% for the complete system for bandwidths of up to 20 MHz. Despite the improvements obtained with these solutions, the highest system efficiencies are achieved with solutions based on modulating the supply voltage of the power amplifier, such as Envelope Tracking or EER. The Envelope Tracking technique is based on modulating the supply voltage of a linear power amplifier to improve the system efficiency compared to a constant-supply solution. Its implementation requires an additional stage, the envelope amplifier, which adds complexity to the RF amplifier. In an amplifier designed with this technique, losses increase because of the additional envelope-amplifier stage, but losses in the power amplifier decrease; if the design is properly optimized, the overall system efficiency can exceed what is achieved with the techniques mentioned above. The technique also has advantages in the envelope-amplifier design, since the required bandwidth can be lower than the bandwidth of the envelope signal if the design is properly optimized. Additionally, because the synchronization between the envelope and phase signals does not have to be perfect, the integration process has certain advantages over other techniques such as EER. The Envelope Elimination and Restoration technique, called EER or the Kahn technique, is based on the simultaneous modulation of the envelope and the phase of the signal using a switched-mode, non-linear power amplifier that enables high efficiency.
This solution was proposed in 1952, but for many years it was not successfully implemented because of the demanding requirements on the synchronization between phase and envelope, on the control and correction techniques for the errors and non-linearities of each stage, and on the equipment needed to implement those techniques, which requires high computing and processing capabilities. Within the design of an RFPA, the envelope amplifier is of great importance because of its influence on the efficiency and bandwidth of the complete system. Additionally, the linearity and the quality of the transmitted signal must be high in order to comply with the various telecommunication standards. This thesis focuses on the envelope amplifier, and its main objective is the development of solutions that increase the total system efficiency while satisfying the requirements on bandwidth, transmitted-signal quality and linearity. Because of the high efficiency that can potentially be reached with the EER technique, it has been widely analyzed, and numerous references in the state of the art study its design and propose different implementations. At a high level, the proposed envelope-amplifier solutions can be grouped into single-stage and multiple-stage architectures. Multiple-stage envelope amplifiers combine a high-efficiency switched-mode converter with a high-bandwidth linear regulator, in a series or parallel configuration. Because they combine the characteristics of both stages, these solutions offer a good trade-off between efficiency and RF-amplifier performance; on the other hand, system complexity increases due to the larger number of components and control signals required, and the achievable efficiency improvement is limited. A single-stage configuration has the advantage of greater simplicity, but because of the high bandwidth required, the switching frequency must be greatly increased, which implies low efficiency and degraded envelope-amplifier performance. Several single-stage solutions can be found in the state of the art, such as increasing the switching frequency and implementing the converter as an integrated circuit, which performs better at high frequencies, or using topological techniques and/or high-order filters, which allow the switching frequency to be reduced. This thesis originally proposes the use of the ripple cancellation technique, applied to a synchronous buck converter, to reduce the switching frequency compared with an equivalent conventional buck design. Additionally, two topological variants based on this solution have been developed to increase its robustness and performance. Another point of interest in RFPA design is the difficulty of estimating the influence of the envelope-amplifier design parameters on the final integrated amplifier.
This thesis addresses that problem by developing a design tool that obtains the main figures of merit of the integrated EER amplifier from the envelope-amplifier design. Using this tool, the effect of the bandwidth, the output-voltage ripple or the non-linearities of the envelope-amplifier design can be evaluated for several digital modulations. The main original contributions of this thesis are the following: the application of the ripple cancellation technique to a synchronous buck converter for a high-efficiency envelope amplifier for an RFPA linearized with the EER technique; a 66% reduction in the switching frequency compared with the equivalent conventional buck converter, validated experimentally and yielding an efficiency improvement between 12.4% and 16% for the specifications of this work; the topology and design of a buck converter with two cascaded ripple cancellation networks to improve the performance and robustness of the single-network solution; the combination of a multiphase buck converter with the ripple cancellation technique to obtain a topology that reduces the ratio between switching frequency and signal bandwidth; the optimization of the closed-loop control of the envelope amplifier to improve performance over the open-loop solution of the ripple-cancellation buck converter; a simulation tool that optimizes the envelope-amplifier design process by estimating the figures of merit of the EER-based RFPA from the envelope-amplifier design; and the integration and characterization of the envelope amplifier, based on a buck converter with a ripple cancellation network, in the complete radio-frequency transmitter, achieving a high efficiency, between 57% and 70.6% for output powers of 14.4 W and 40.7 W respectively. This thesis is divided into six chapters. The first chapter introduces the application, radio-frequency power amplifiers, and the main problems, challenges and existing solutions. Chapter two presents the state of the art of RF power amplifiers, describing the main design techniques, the causes of non-linearity and the optimization techniques. Chapter three focuses on the proposed envelope-amplifier solutions; the control scheme is addressed and a closed-loop design optimization is presented for the conventional buck converter and for the buck converter with a ripple cancellation network. Chapter four deals with the envelope-amplifier design process; a design tool has been developed to evaluate the influence of the envelope amplifier on the RFPA figures of merit. Chapter five presents the integration process, the tests carried out for the different modulations, and the complete characterization and analysis of the RF amplifier. Chapter six describes the main conclusions of the thesis and future work.
ABSTRACT The trend in the telecommunications sector in recent years has been a sharp increase in the transmission of voice, video and, above all, data. To achieve the required data rates, the new modulation standards demand higher bandwidths and exhibit a higher peak-to-average power ratio (PAPR). These specifications have a direct impact on the low efficiency of the RFPA. An additional factor behind the low efficiency is the power amplifier design itself. Traditionally, linear classes have been used to implement the power amplifier because they comply with the technical requirements; however, their efficiency is low, especially when operating with high-PAPR signals. The low efficiency of the transmitter brings additional disadvantages: for a base station, higher cost and size because the cooling system must be enlarged; for battery-powered portable devices, higher temperature and shorter operating time. Several solutions have been proposed in the state of the art to improve transmitter efficiency, such as Outphasing, power combiners or the Doherty technique. However, the highest potential efficiency improvement is obtained by modulating the power supply of the power amplifier, as in the Envelope Tracking and EER techniques. The Envelope Tracking technique is based on modulating the power supply of a linear power amplifier to improve the overall efficiency compared to a fixed supply voltage. Its implementation requires an additional stage, the envelope amplifier, which increases the complexity of the RFPA; however, the efficiency of the linear power amplifier increases and, if designed properly, the RFPA efficiency improves. The advantages of this technique are that the envelope amplifier does not require as high a bandwidth as the envelope signal and that perfect synchronization between envelope and phase is not required during integration. The Envelope Elimination and Restoration (EER) technique, also known as Kahn's technique, is based on the simultaneous modulation of envelope and phase using a high-efficiency switched-mode power amplifier. This solution has the highest potential efficiency improvement but also the most challenging specifications. Proposed in 1952, it was not successfully implemented until the last two decades because of the highly demanding requirements of each stage and the processing and computation capabilities needed. At the system level, very precise synchronization is required between the envelope and phase paths to avoid degrading the linearity of the system. Several techniques, such as predistortion, feedback and feed-forward, are used to compensate the non-linear effects in amplitude and phase and to improve the rejection of out-of-band noise. To obtain a high-bandwidth, efficient RFPA using either ET or EER, the envelope amplifier stage is of critical importance. The requirements for this stage are very demanding in terms of bandwidth, linearity and quality of the transmitted signal. Additionally, its efficiency should be as high as possible, since the envelope amplifier has a direct impact on the efficiency of the overall system. This thesis is focused on the envelope amplifier stage, and its main objective is the development of high-efficiency envelope amplifier solutions that comply with the requirements of the RFPA application.
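As a minimal sketch of the signal decomposition underlying both ET and EER (the standard polar form; the symbols below are generic and are not taken from the thesis):

```latex
% Polar (envelope/phase) decomposition used by ET and EER transmitters
\[
  s(t) \;=\; A(t)\,\cos\!\bigl(2\pi f_c t + \varphi(t)\bigr),
  \qquad A(t) \ge 0 .
\]
% In EER, A(t) modulates the supply of a switched-mode PA while the
% constant-envelope carrier cos(2*pi*f_c*t + phi(t)) drives its input;
% in ET, a shaped/band-limited version of A(t) modulates the supply of
% a linear PA that still amplifies the full signal s(t).
```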
The design and optimization of an envelope amplifier for an RFPA application is a heavily referenced research topic, and many solutions that address the envelope amplifier and the RFPA design and optimization can be found in the state of the art. At a high level, multiple-stage and single-stage envelope amplifiers can be identified. Multiple-stage envelope amplifiers for EER combine a linear-assisted stage and a switched-mode stage, either in a series or a parallel configuration, to achieve a very high performance RFPA; however, the complexity of the system increases and the efficiency improvement is limited. A single-stage envelope amplifier has the advantage of lower complexity, but in order to achieve the required bandwidth the switching frequency has to be greatly increased, and therefore the performance and the efficiency are degraded. Several techniques are used to overcome this limitation, such as integrated-circuit designs capable of switching at very high rates, or topological solutions, high-order filters, or a combination of both to reduce the switching-frequency requirements. This thesis originally proposes the use of the ripple cancellation technique, applied to a synchronous buck converter, to reduce the switching-frequency requirements of an envelope amplifier compared to a conventional buck converter. Three original proposals for the envelope amplifier stage, based on the ripple cancellation technique, are presented; one of them has been experimentally validated and integrated in the complete amplifier, showing a large total efficiency increase compared to other solutions in the state of the art. Additionally, the proposed envelope amplifier has been integrated in the complete RFPA, achieving a high total efficiency. The design-process optimization has also been analyzed in this thesis. Because the envelope amplifier and the complete RFPA have different figures of merit, it is very difficult to obtain an optimized envelope-amplifier design. To reduce the design uncertainties, a design tool has been developed that estimates the RFPA figures of merit from the design of the envelope amplifier. The main contributions of this thesis are: the application of the ripple cancellation technique to a synchronous buck converter for an envelope amplifier, to achieve a high-efficiency, high-bandwidth EER RFPA; a 66% reduction of the switching frequency, validated experimentally, compared to the equivalent conventional buck converter, which translates into an efficiency improvement between 12.4% and 16% for the specifications of this work; the topology and design of a synchronous buck converter with two cascaded ripple cancellation networks (RCNs) to improve the robustness and performance of the envelope amplifier; the combination of a phase-shifted multi-phase buck converter with the ripple cancellation technique to improve the envelope amplifier's switching-frequency-to-signal-bandwidth ratio; the optimization of the control loop of the envelope amplifier to improve on the open-loop design for the conventional and the ripple-cancellation buck converter; and a simulation tool to optimize the envelope-amplifier design process which, using the envelope-amplifier design as input, obtains the main figures of merit of the complete EER RFPA for several digital modulations.
The envelope amplifier based on an RCN buck converter has been successfully integrated in the complete RFPA, obtaining a high-efficiency integrated amplifier: the efficiency is between 57% and 70.6% for output powers of 14.4 W and 40.7 W respectively, and the main figures of merit for the different modulations have been characterized and analyzed. This thesis is organized in six chapters. Chapter 1 provides an introduction to the RFPA application, describing the main problems, challenges and solutions. Chapter 2 presents the technical background of radio-frequency (RF) power amplifiers: the main techniques used to implement an RFPA are described and analyzed, and the state-of-the-art techniques to improve RFPA performance are identified, as well as the main sources of non-linearity. Chapter 3 is focused on the envelope amplifier stage: the three solutions originally proposed in this thesis are presented and analyzed, the control stage design is discussed, and an optimization is proposed for both the conventional and the RCN buck converter. Chapter 4 is focused on the design and optimization process of the envelope amplifier, and a design tool to evaluate the impact of the envelope-amplifier design on the RFPA is presented. Chapter 5 shows the integration process of the complete amplifier. Chapter 6 addresses the main conclusions of the thesis and the future work.

Relevance:

100.00%

Publisher:

Abstract:

Data centers can now be found in every sector of the world economy. They are composed of thousands of servers, serving users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced very significant development. The need to handle efficiently the computing demands of next-generation applications, together with the growing demand for resources from traditional applications, has driven the rapid growth and proliferation of data centers. The main drawback of this capacity increase has been the fast and dramatic rise in the energy consumption of these infrastructures. In 2010, the electricity bill of data centers represented 1.3% of worldwide electricity consumption. In 2012 alone, data center power consumption grew by 63%, reaching 38 GW, and a further 17% growth, up to 43 GW, was estimated for 2013. In addition, data centers are responsible for more than 2% of total carbon dioxide emissions. This doctoral thesis tackles the energy problem by proposing proactive and reactive temperature- and energy-aware techniques that contribute to more efficient data centers. The work develops energy models and uses knowledge about the energy demand of the workload to be executed and about the computing and cooling resources of the data center to optimize consumption. Furthermore, data centers are considered a crucial element within the framework of the application being executed, optimizing not only the consumption of the data center itself but the global energy consumption of the application. The main contributors to consumption in data centers are the computing power used by the IT equipment and the cooling needed to keep the servers within the operating temperature range that ensures correct operation. Because of the cubic relation between fan speed and fan power, solutions based on over-provisioning cold air to the server generally result in energy inefficiencies. On the other hand, higher processor temperatures lead to higher leakage consumption, because of the exponential relation between leakage and temperature. In addition, the workload characteristics and the resource allocation policies have an important impact on the trade-offs between leakage current and cooling consumption. The first major contribution of this work is the development of power and temperature models that describe these leakage-cooling trade-offs, together with the proposal of strategies to minimize server consumption through the joint allocation of cooling and workload from a multivariate perspective. When scaling up to the data center level, a similar behaviour is observed in terms of the balance between leakage currents and cooling: as the room temperature increases, cooling efficiency improves, but the higher room temperature raises the CPU temperature and therefore also the leakage consumption.
Moreover, the dynamics of the data room are highly uneven and unbalanced, due to the workload allocation and to the heterogeneity of the IT equipment. The second contribution of this thesis is the proposal of temperature- and heterogeneity-aware allocation techniques that jointly optimize the assignment of tasks and cooling to the servers. These strategies need to be backed by flexible models that can work at runtime and describe the system from a high level of abstraction. In the context of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, for example the data center. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the overall energy cost of the system. The third contribution of this thesis is the development of energy optimizations for the global application by evaluating the cost of executing part of the required processing at other abstraction levels, from the nodes to the data center, by means of load-balancing techniques. In summary, the work presented in this thesis makes contributions to the modelling and optimization of servers aware of leakage and cooling; to data center modelling and heterogeneity-aware allocation policies; and to mechanisms for the energy optimization of next-generation applications across several abstraction levels. ABSTRACT Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation.
Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature also rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
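A minimal sketch of the two relations invoked above, using generic symbols and common approximations (the exact models fitted in the thesis are not reproduced here):

```latex
% Fan power grows roughly with the cube of fan speed
\[
  P_{\mathrm{fan}} \;\approx\; k_f\,\omega^{3},
\]
% while leakage power is often approximated as growing exponentially
% with chip temperature T around a reference point T_0
\[
  P_{\mathrm{leak}}(T) \;\approx\; P_{\mathrm{leak}}(T_0)\, e^{\beta\,(T - T_0)} .
\]
% Raising the room temperature lowers P_fan but raises T and hence
% P_leak; this is the leakage-cooling trade-off exploited by the joint
% cooling/workload management strategies described above.
```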

Relevance:

100.00%

Publisher:

Abstract:

The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

Relevance:

100.00%

Publisher:

Abstract:

Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
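The abstract does not give the storage scheme, but the core idea of serving geometry at the coarsest level that still satisfies an application's tolerance can be sketched as follows (illustrative only; the fixed resolution levels, the tolerance values and the shapely-based simplification are assumptions, not the paper's method):

```python
from shapely.geometry import LineString

# Hypothetical pre-computed resolution levels: simplification tolerance
# in map units, coarsest first. A real multiresolution database would
# store these variants (or a progressive structure) server-side.
LEVELS = [100.0, 10.0, 1.0, 0.0]  # 0.0 = full resolution

def build_pyramid(geom):
    """Precompute one simplified variant of a geometry per level."""
    return {tol: (geom if tol == 0.0
                  else geom.simplify(tol, preserve_topology=True))
            for tol in LEVELS}

def select_level(pyramid, required_tolerance):
    """Return the coarsest stored variant that is fine enough for the
    requesting application (less data is sent when tolerance is large)."""
    for tol in LEVELS:                      # coarsest to finest
        if tol <= required_tolerance:
            return pyramid[tol]
    return pyramid[0.0]

# Usage: a road centreline served to a small-scale overview map
road = LineString([(0, 0), (40, 5), (80, 3), (120, 60), (200, 58)])
pyramid = build_pyramid(road)
coarse = select_level(pyramid, required_tolerance=50.0)
print(len(road.coords), "->", len(coarse.coords), "vertices sent")
```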

Relevance:

100.00%

Publisher:

Abstract:

This study explores the relationship between attentional processing mediated by visual magnocellular (MC) processing and reading ability. Reading ability in a group of primary school children was compared to performance on a visually cued coherent motion detection task. The results showed that a brief spatial cue was more effective in drawing attention either away from or towards a visual target in the group of readers ranked in the upper 25% of the sample compared to lower-ranked readers. Regression analysis showed a significant relationship between attentional processing and reading when the effects of age and intellectual ability were removed. The results suggested a stronger relationship of visual attentional processing with non-word reading than with irregular word reading. (C) 2004 Lippincott Williams & Wilkins, Inc.

Relevance:

100.00%

Publisher:

Abstract:

Attention defines our mental ability to select and respond to stimuli, internal or external, on the basis of behavioural goals in the presence of competing, behaviourally irrelevant stimuli. The frontal and parietal cortices are generally agreed to be involved in attentional processing, in what is termed the 'fronto-parietal' network. The left parietal cortex has been seen as the site for temporal attentional processing, whereas the right parietal cortex has been seen as the site for spatial attentional processing. There is much debate about when the modulation of the primary visual cortex occurs: whether it is modulated in the feedforward sweep of processing or by feedback projections from extrastriate and higher cortical areas. MEG and psychophysical measurements were used to study spatially selective covert attention, using dual-task and cue-based paradigms. It was found that the posterior parietal cortex (PPC), in particular the SPL and IPL, was the main site of activation during these experiments, and that the left parietal lobe was activated more strongly than the right parietal lobe throughout. The levels of activation in both parietal and occipital areas were modulated in accordance with attentional demands. It is likely that spatially selective covert attention is dominated by the left parietal lobe, and that this takes the form of the proposed sensory-perceptual lateralization within the parietal lobes. Another form of lateralization is proposed, termed motor-processing lateralization, in which the side of dominance is determined by handedness, being reversed in left- relative to right-handers. In terms of the modulation of the primary visual cortex, it was found to be unlikely that V1 is modulated initially; rather, the modulation takes the form of feedback from higher extrastriate and parietal areas. This fits with the widely accepted idea of preattentive visual processing, which in itself argues against initial modulation of V1.

Relevance:

100.00%

Publisher:

Abstract:

Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics, low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows, 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows, 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in time and frequency domains. © 2012 McNab, Hillebrand, Swithenby and Rippon.

Relevance:

100.00%

Publisher:

Abstract:

Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses, and allows applications to support composite location-aware searches; for example, in a real estate database: "Find the nearest homes for sale to my current location that have a backyard and whose prices are between $50,000 and $80,000". Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (spatial filter followed by non-spatial filter, or vice versa), which can incur large performance overheads. On the other hand, the amount of geolocation data in databases has recently grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. The latter poses potential data-ingestion challenges of large data volumes for practical GIS databases. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled in MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms in a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual and numeric data are developed to that end. Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
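To make the real-estate example concrete, here is a minimal single-machine sketch of the conventional two-filter baseline (spatial R-tree filter followed by aspatial filter) that the dissertation improves upon with combined indexing; it uses the Python rtree package, and the toy schema, attribute names and values are assumptions, not the dissertation's data or algorithms:

```python
from rtree import index

# Toy homes-for-sale table: id -> (lon, lat, price, has_backyard)
homes = {
    1: (-80.20, 25.80, 75_000, True),
    2: (-80.21, 25.79, 95_000, True),
    3: (-80.19, 25.81, 60_000, False),
    4: (-80.25, 25.77, 55_000, True),
}

# Spatial filter: an R-tree over the point geometries
idx = index.Index()
for hid, (lon, lat, _, _) in homes.items():
    idx.insert(hid, (lon, lat, lon, lat))

def nearest_homes(lon, lat, price_range, k=10):
    """Nearest-neighbour candidates from the R-tree, then aspatial filter."""
    lo, hi = price_range
    candidates = idx.nearest((lon, lat, lon, lat), k)
    return [hid for hid in candidates
            if homes[hid][3]                      # has a backyard
            and lo <= homes[hid][2] <= hi]        # price constraint

# "Nearest homes to my location with a backyard, priced $50,000-$80,000"
print(nearest_homes(-80.20, 25.80, (50_000, 80_000)))
```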

Relevance:

100.00%

Publisher:

Abstract:

Elemental and isotopic composition of leaves of the seagrass Thalassia testudinum was highly variable across the 10,000 km2 and 8 years of this study. The data reported herein expand the ranges of carbon:nitrogen (C:N) and carbon:phosphorus (C:P) ratios and of δ13C and δ15N values reported for this species worldwide: 13.2–38.6 for C:N and 411–2,041 for C:P. The 981 determinations in this study generated a range of −13.5‰ to −5.2‰ for δ13C and −4.3‰ to 9.4‰ for δ15N. The elemental and isotope ratios displayed marked seasonality, and the seasonal patterns could be described with a simple sine wave model. C:N, C:P, δ13C, and δ15N values all had maxima in the summer and minima in the winter. Spatial patterns in the summer maxima of these quantities suggest that there are large differences in the relative availability of N and P across the study area and that there are differences in the processing and the isotopic composition of C and N. This work calls into question the interpretation of studies about nutrient cycling and food webs in estuaries based on few samples collected at one time, since we document natural variability greater than the signal often used to imply changes in the structure or function of ecosystems. The data and patterns presented in this paper make it clear that there is no threshold δ15N value for marine plants that can be used as an unambiguous indicator of human sewage pollution without a thorough understanding of local temporal and spatial variability.
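A minimal sketch of the kind of simple sine-wave seasonal model referred to above (generic form and symbols assumed, not the fitted coefficients from the study):

```latex
% Seasonal cycle of an elemental or isotopic quantity y at day-of-year t
\[
  y(t) \;=\; \bar{y} \;+\; A\,\sin\!\Bigl(\tfrac{2\pi}{365}\,(t - t_{0})\Bigr),
\]
% where \bar{y} is the annual mean, A the seasonal amplitude, and t_0 a
% phase shift placing the maxima in summer and the minima in winter.
```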

Relevance:

100.00%

Publisher:

Abstract:

Homomorphic encryption is a particular type of encryption method that enables computing over encrypted data. This has a wide range of real-world ramifications, such as being able to blindly compute a search result for a remote server without revealing its content. In the first part of this thesis, we discuss how database search queries can be made secure using a homomorphic encryption scheme based on the ideas of Gahi et al. Gahi's method is based on the integer-based fully homomorphic encryption scheme proposed by van Dijk et al. We propose a new database search scheme called the Homomorphic Query Processing Scheme, which can be used with the ring-based fully homomorphic encryption scheme proposed by Brakerski. In the second part of this thesis, we discuss the cybersecurity of the smart electric grid. Specifically, we use the Homomorphic Query Processing Scheme to construct a keyword search technique in the smart grid. Our work is based on the Public Key Encryption with Keyword Search (PEKS) method introduced by Boneh et al. and a Multi-Key Homomorphic Encryption scheme proposed by López-Alt et al. A summary of the results of this thesis (specifically the Homomorphic Query Processing Scheme) was published at the 14th Canadian Workshop on Information Theory (CWIT).
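As a rough illustration of the integer-based somewhat-homomorphic idea mentioned above (a toy, insecure sketch in the spirit of the van Dijk et al. construction; the parameters and helper names are made up for readability and are far too small to be secure):

```python
import random

# Toy DGHV-style encryption of single bits: c = p*q + 2*r + m
# (tiny, insecure parameters chosen only to make the arithmetic visible)
P = 1_000_003            # odd secret key

def encrypt(m, p=P):
    q = random.randint(1, 1_000)     # multiple of the key
    r = random.randint(0, 10)        # small noise
    return p * q + 2 * r + m

def decrypt(c, p=P):
    return (c % p) % 2

# Homomorphic properties (valid while the accumulated noise stays small
# relative to p): ciphertext addition decrypts to XOR, multiplication to AND.
a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)
assert decrypt(ca + cb) == a ^ b     # XOR
assert decrypt(ca * cb) == a & b     # AND
print("homomorphic XOR/AND verified on encrypted bits")
```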

Relevance:

100.00%

Publisher:

Abstract:

In this project, models were fitted to describe the spatial and temporal distribution and the diversity of demersal fishes in the Sea of Oman, including Trichiuridae, Nemipteridae, Haemulidae, Ariidae, Synodontidae, batoid fishes, Carangidae, Sciaenidae, Carcharhiniformes and Serranidae. The study used catch data from 2003 to 2013 (no survey was carried out in 2007 because the research vessel was unavailable). Processing and calculations were performed with Excel, SPSS, ArcGIS and TableCurve 3D. The highest biomass and abundance were observed in strata A and C, and the 10-30 m depth layers showed the highest biomass; in other words, biomass was higher in the eastern region of the Sea of Oman than in the central and western regions. Batoid fishes and Trichiuridae had the highest biomass. Depth showed a significant correlation with biomass. Sciaenidae, Serranidae and Haemulidae showed a large decline, whereas Synodontidae showed a very large increase. The highest Shannon diversity index values belonged to the central and western regions of the Sea of Oman and to the 10-20 m and 50-100 m depth layers, respectively. Distribution maps based on biomass were produced with ArcGIS, so that, for the first time over a ten-year period, the catch stations of each economically important group were identified. In conclusion, depth shapes the pattern of distribution, abundance and diversity of the fish, which follow a specific pattern with distance from the shore.
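For reference, a minimal statement of the Shannon diversity index used above (standard form; p_i is the proportion of individuals belonging to species i among S species):

```latex
\[
  H' \;=\; -\sum_{i=1}^{S} p_i \,\ln p_i .
\]
```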

Relevance:

100.00%

Publisher:

Abstract:

Since the last century, the rising interest in value-added and advanced functional materials has spurred ceaseless development of industrial processes and applications. Among the emerging technologies, thanks to their unique features and their versatility in terms of supported processes, non-equilibrium plasma discharges appear as a key solvent-free, high-throughput and cost-efficient technique. Nevertheless, applied research studies are needed to address the potential of plasmas, optimizing devices and processes for future industrial applications. In this framework, the aim of this dissertation is to report on the activities carried out and the results achieved in the development and optimization of plasma techniques for nanomaterial synthesis and processing for the biomedical field. In the first section, the design and investigation of a plasma-assisted process for the production of silver (Ag) nanostructured multilayer coatings exhibiting anti-biofilm and anti-clot properties is described. With the aim of enabling in-situ and on-demand deposition of Ag nanoparticles (NPs), the optimization of a continuous in-flight aerosol process for particle synthesis is reported. The stability and promising biological performance of the deposited coatings spurred further investigation through in-vitro and in-vivo tests, whose results are reported and discussed. With the aim of addressing the unanswered questions and tuning NP functionality, the second section concerns the study of the conversion of silver-containing droplets in a flow-through plasma reactor. The presented results, obtained by combining different analysis techniques, support a formation mechanism based on droplet-to-particle conversion driven by plasma-induced precursor reduction. Finally, the third section deals with the development of a combined simulation and experimental approach used to investigate in-situ droplet evaporation inside the plasma discharge, addressing the main contributions to liquid evaporation with a view to industrial scale-up of the process.

Relevance:

100.00%

Publisher:

Abstract:

By means of continuous topology optimization, this paper discusses the influence of material gradation and layout on the overall stiffness behavior of functionally graded structures. The formulation includes symmetry and pattern-repetition constraints, together with material gradation effects at both global and local levels. For instance, constraints associated with pattern repetition are applied by considering material gradation either over the global structure or locally over the specific pattern. By means of pattern repetition, we recover previous results in the literature that were obtained using homogenization and optimization of cellular materials.

Relevance:

100.00%

Publisher:

Abstract:

This article presents a systematic study of the topology-optimized design, microfabrication, and static/dynamic performance characterization of an electro-thermo-mechanical microgripper. The microgripper is designed using a topology optimization algorithm based on a spatial filtering technique, with different penalization coefficients for different material properties during the optimization cycle. The design has a symmetric, monolithic 2D structure consisting of a complex combination of rigid links that integrates both the actuating and the gripping mechanisms. The numerical simulation studies the effects of convective heat transfer, of the thermal boundary conditions at the fixed anchors, and of temperature-dependent versus temperature-independent material properties on microgripper performance. The microgripper is fabricated from a 25 µm thick nickel foil using laser microfabrication technology, and its static/dynamic performance is experimentally evaluated. The static and dynamic electro-mechanical characteristics are analyzed as step-response functions with respect to tweezing/actuating displacements, applied current/power, and actual electric resistance. A microgripper prototype with overall dimensions of 1 mm (L) x 2.5 mm (W) delivers maximum tweezing and actuating displacements of 25.5 µm and 33.2 µm along the X and Y axes, respectively, under an applied power of 2.32 W. The experimental performance is compared with finite element simulation results.
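The "different penalization coefficients for different material properties" mentioned above can be illustrated with a SIMP-style interpolation (a generic sketch with assumed symbols, not the exact formulation of the article):

```latex
% Density-based interpolation with separate penalization exponents for
% the stiffness (E), thermal conductivity (k) and electrical
% conductivity (sigma) of each design element with density rho in [0,1]
\[
  E(\rho) = \rho^{\,p_E} E_0, \qquad
  k(\rho) = \rho^{\,p_k} k_0, \qquad
  \sigma(\rho) = \rho^{\,p_\sigma} \sigma_0,
\]
% with p_E, p_k and p_sigma chosen independently, so that intermediate
% densities are penalized differently in the mechanical, thermal and
% electrical sub-problems of the electro-thermo-mechanical model.
```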

Relevance:

100.00%

Publisher:

Abstract:

The rheological behavior of milk cream was studied for different fat contents (0.10 to 0.31) and over a wide temperature range (2 to 87°C) using a rotational rheometer. Newtonian behavior was observed, except for fat contents between 0.20 and 0.31 and temperatures between 2 and 33°C, where viscoplastic behavior was remarkable. The rheological parameters (Newtonian viscosity, plastic viscosity and yield stress) and density were well correlated with temperature and fat content. The tube friction factor during flow of cream was experimentally obtained at various flow rates, temperatures and tube diameters (86 < Re < 2.3 × 10^4, 38 < Re_B < 8.8 × 10^3, 1.1 × 10^3 < He < 6.7 × 10^3). The proposed correlations for density and rheological parameters were applied to the prediction of the friction factor for laminar and turbulent flow of cream using well-known equations for Newtonian and viscoplastic flow. The good agreement between experimental and predicted values confirms the reliability of the proposed correlations for describing the flow behavior of cream. PRACTICAL APPLICATIONS This paper presents correlations for the calculation of density and rheological parameters (Newtonian viscosity, Bingham plastic viscosity and yield stress) of milk cream as functions of temperature (2-87°C) and fat content (0.10-0.31). Because of the large temperature range, the proposed correlations are useful for process design and optimization in dairy processing. An example of practical application is presented in the text, where the correlations were applied for the prediction of the friction factor for laminar and turbulent tube flow of cream using well-known equations for Newtonian and viscoplastic flow, which are summarized in the text. The comparison with experimental data obtained at various flow rates, temperatures and tube diameters showed good agreement, which confirms the reliability of the proposed correlations.
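A minimal statement of the two constitutive models referred to above (standard Newtonian and Bingham forms; the symbols are generic, and the fitted correlations themselves are not reproduced here):

```latex
% Newtonian fluid: shear stress proportional to shear rate
\[
  \tau \;=\; \mu\,\dot{\gamma},
\]
% Bingham (viscoplastic) fluid: flow occurs only above the yield stress
\[
  \tau \;=\; \tau_{0} + \mu_{p}\,\dot{\gamma} \quad (\tau > \tau_{0}),
  \qquad
  \dot{\gamma} = 0 \quad (\tau \le \tau_{0}),
\]
% with mu the Newtonian viscosity, mu_p the Bingham plastic viscosity,
% and tau_0 the yield stress, each correlated with temperature and fat
% content in the paper's proposed correlations.
```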