899 results for "Integrated circuits Very large scale integration Design and construction".


Relevance: 100.00%

Publisher:

Abstract:

We propose WEAVE, a geographical 2D/3D routing protocol that maintains information on a small number of waypoints and checkpoints for forwarding packets to any destination. Nodes obtain the routing information from partial traces gathered in incoming packets and use a system of checkpoints, along with segments of routes, to weave end-to-end paths close to the shortest ones. WEAVE does not generate any control traffic, is suitable for routing in both 2D and 3D networks, and does not require strong assumptions on the underlying network graph, such as unit-disk or planar-graph assumptions. WEAVE compares favorably with existing protocols in both testbed experiments and simulations.
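
To make the waypoint-based forwarding idea concrete, here is a minimal sketch (not the authors' implementation) of greedy forwarding in 3D toward a chain of waypoints recovered from packet traces; the function name next_hop, the neighbor table, and all coordinates are hypothetical illustration data.

```python
import numpy as np

def next_hop(current, neighbors, waypoints, destination, reach=1.0):
    """Pick the neighbor closest to the next target (waypoint or destination)."""
    # Consume waypoints the packet has effectively reached.
    while waypoints and np.linalg.norm(current - waypoints[0]) < reach:
        waypoints = waypoints[1:]
    target = waypoints[0] if waypoints else destination
    best = min(neighbors, key=lambda n: np.linalg.norm(neighbors[n] - target))
    # Only forward if the chosen neighbor makes progress toward the target;
    # otherwise report a local minimum (a real protocol would then fall back
    # on its checkpoint/segment machinery).
    if np.linalg.norm(neighbors[best] - target) < np.linalg.norm(current - target):
        return best, waypoints
    return None, waypoints

# Toy 3D example with hypothetical coordinates.
cur = np.array([0.0, 0.0, 0.0])
nbrs = {"a": np.array([1.0, 0.0, 0.0]), "b": np.array([0.0, 1.0, 1.0])}
wps = [np.array([2.0, 0.0, 0.0])]
dst = np.array([5.0, 0.0, 0.0])
print(next_hop(cur, nbrs, wps, dst))
```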

Relevance: 100.00%

Publisher:

Abstract:

Although analyses of large-scale land acquisitions (LSLA) often contain an explicit or implicit normative judgment about such projects, they rarely derive that judgment from a nuanced balancing of pros and cons. This paper uses assessments of a well-researched LSLA in Sierra Leone to show that a utilitarian approach tends to lead to the conclusion that positive effects prevail, whereas deontological approaches lead to an emphasis on negative aspects. LSLA are probably the most radical land-use change in the history of humankind, and this process of radical transformation poses a challenge for balanced evaluations. We therefore outline a framework that focuses on the options of local residents but sets boundaries of acceptability through the core contents of human rights. In addition, the systemic implications of a project need to be taken into account.

Relevance: 100.00%

Publisher:

Abstract:

Context. During its most recent perihelion passage in 2009, comet 67P/Churyumov-Gerasimenko (67P) showed in ground-based observations an anisotropic dust coma in which jet-like features were detected at ~1.3 AU from the Sun. The current perihelion passage is exceptional, as the Rosetta spacecraft has been monitoring the nucleus activity since March 2014, when a clear dust coma already surrounded the nucleus at 4.3 AU from the Sun. Subsequently, the OSIRIS camera also witnessed an outburst in activity between April 27 and 30, and since mid-July the dust coma at r_h ~ 3.7-3.6 AU pre-perihelion has clearly been non-isotropic, pointing to the existence of dust jet-like features. Aims. We aim to locate on the nucleus surface the origin of the dust jet-like features detected as early as mid-July 2014. This will help to establish how the localized nucleus activity compares with that seen in previous apparitions and to follow its evolution as the comet approaches perihelion, the phase at which most of the jets were detected from ground-based observations. Determining these areas also allows them to be located in nucleus regions with distinct spectroscopic or geomorphological characteristics. Methods. Three series of dust images of comet 67P obtained with the Wide Angle Camera (WAC) of the OSIRIS instrument onboard the Rosetta spacecraft were processed with different enhancement techniques to clearly show the jet-like features in the dust coma, whose appearance toward the observer changed as a result of the rotation of the comet nucleus and of the changing observing geometry from the spacecraft. The position angles of these features in the coma, together with information on the observing geometry, nucleus shape, and rotation, allowed us to determine the most likely locations on the nucleus surface from which the jets originate. Results. Geometrical tracing of the jet sources indicates that the activity of the nucleus of 67P during July and August 2014 gave rise to large-scale jet-like features from the Hapi, Hathor, Anuket, and Aten regions, confirming that active regions may be present on the nucleus at about 60° northern latitude, as deduced from previous comet apparitions. There are also hints that the large-scale jets observed from the ground are possibly composed, at their place of origin on the nucleus surface, of numerous small-scale features.
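
As an illustration of the geometric bookkeeping this kind of jet tracing relies on, the sketch below computes the position angle of a coma feature (measured from north through east) from pixel coordinates; the coordinates and the north orientation are hypothetical, not values from the OSIRIS/WAC frames.

```python
import numpy as np

def position_angle(center_xy, feature_xy, north_deg=0.0):
    """Position angle of a coma feature, from north through east.

    center_xy, feature_xy -- (x, y) pixel coordinates (y increasing upward)
    north_deg             -- angle of celestial north in the image, degrees
                             counter-clockwise from the +y axis
    East is assumed toward -x, as on a sky map with north up.
    """
    dx = feature_xy[0] - center_xy[0]
    dy = feature_xy[1] - center_xy[1]
    ang = np.degrees(np.arctan2(-dx, dy))  # 0 deg = north, 90 deg = east
    return (ang - north_deg) % 360.0

print(position_angle((512, 512), (480, 600)))  # hypothetical frame coordinates
```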

Relevance: 100.00%

Publisher:

Abstract:

The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning: Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum and, further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning: response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer mf → nuc synapses with resistance to extinction. Thus, while extinction processes erase plasticity at the first site, residual plasticity at mf → nuc synapses remains. The residual plasticity at the mf → nuc site confers the cerebellum with the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.
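
The following toy sketch (not the authors' simulation) illustrates the two-site logic described above: a cortical gr → Pkj weight that is built up during paired trials and erased by extinction, and an mf → nuc weight gated by Purkinje activity that decays only slowly, leaving the residual plasticity that supports rapid reacquisition. All learning rates and thresholds are arbitrary illustrative values.

```python
def run(trials, paired, w_grpkj, w_mfnuc, lr_cx=0.10, lr_nuc=0.05):
    """Advance both synaptic weights over a block of trials.

    paired=True  -> CS paired with US (acquisition trials)
    paired=False -> CS presented alone (extinction trials)
    """
    for _ in range(trials):
        if paired:
            w_grpkj = min(1.0, w_grpkj + lr_cx * (1.0 - w_grpkj))   # cortical learning
        else:
            w_grpkj = max(0.0, w_grpkj - lr_cx * w_grpkj)           # cortical extinction
        # Purkinje inhibition of the nuclei falls as cortical learning grows; the
        # nuclear site strengthens only when that inhibition is low, and decays
        # very slowly otherwise (its "resistance to extinction").
        pkj_inhibition = 1.0 - w_grpkj
        if pkj_inhibition < 0.5:
            w_mfnuc = min(1.0, w_mfnuc + lr_nuc * (1.0 - w_mfnuc))
        else:
            w_mfnuc = max(0.0, w_mfnuc - 0.1 * lr_nuc * w_mfnuc)
    return w_grpkj, w_mfnuc

w_cx, w_nuc = 0.0, 0.0
w_cx, w_nuc = run(100, True, w_cx, w_nuc)    # acquisition
print("after acquisition:", round(w_cx, 2), round(w_nuc, 2))
w_cx, w_nuc = run(100, False, w_cx, w_nuc)   # extinction erases the cortical site
print("after extinction: ", round(w_cx, 2), round(w_nuc, 2))
w_cx, w_nuc = run(10, True, w_cx, w_nuc)     # residual mf->nuc weight supports fast relearning
print("after brief retraining:", round(w_cx, 2), round(w_nuc, 2))
```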

Relevance: 100.00%

Publisher:

Abstract:

In a large health care system, accurate feedback about performance is necessary at many levels, from senior management to service-level managers, for valid decision making. The implementation of dashboards is one way to remedy the problem of data overload by providing up-to-date, accurate, and concise information. As this health care system seeks to put an organized, systematic review mechanism in place, dashboards are being created in a variety of the hospital service departments to monitor performance indicators. The Infection Control Administration of this health care system does not currently utilize a dashboard but seeks to implement one. The purpose of this project is to research and design a clinical dashboard for the Infection Control Administration. The intent is that the implementation and usefulness of the clinical dashboard translate into improvement in the measurement of health care quality.

Relevance: 100.00%

Publisher:

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities. Random Forests can handle large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method, in the context of Random Forests, that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments. When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect size of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
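
As a hedged sketch of noise-based predictor selection in the spirit described above (not the thesis's exact procedure), the example below adds permuted "shadow" copies of the predictors to estimate the importance level attributable to pure noise and uses that level as the cut-off for the real predictors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

# Shadow features: same marginal distributions, association with y destroyed.
X_shadow = np.array([rng.permutation(col) for col in X.T]).T
X_aug = np.hstack([X, X_shadow])

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_aug, y)

imp = rf.feature_importances_
real_imp, shadow_imp = imp[:X.shape[1]], imp[X.shape[1]:]
cutoff = shadow_imp.max()              # importance level explained by noise alone
selected = np.where(real_imp > cutoff)[0]
print("selected predictors:", selected)
```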

Relevance: 100.00%

Publisher:

Abstract:

A high-resolution stratigraphy is essential for deciphering climate variability in detail and for understanding causality arguments for events in Earth history. Because the highly dynamic middle to late Eocene provides a suitable testing ground for carbon cycle models of a waning warm world, an accurate time scale is needed to decode climate-driving mechanisms. Here we present new results from ODP Site 1260 (Leg 207), which covers a uniquely expanded middle Eocene section (magnetochrons C18r to C20r, late Lutetian to early Bartonian) of the tropical western Atlantic, including the chron C19r transient hyperthermal event and the Middle Eocene Climate Optimum (MECO). To establish a detailed cyclostratigraphy, we acquired a distinctive iron intensity record by XRF scanning of the Site 1260 cores. We revise the shipboard composite section, establish a cyclostratigraphy, and use the exceptional eccentricity-modulated precession cycles for orbital tuning. The new astrochronology revises the ages of magnetic polarity chrons C19n to C20n, validates the position of the very long eccentricity minima at 40.2 and 43.0 Ma in the orbital solutions, and extends the Astronomically Tuned Geological Time Scale back to 44 Ma. For the first time, the new data provide clear evidence for an orbital pacing of the chron C19r event and a likely involvement of the very long eccentricity cycle in the evolution of the MECO.
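
A minimal sketch of the kind of signal processing such a cyclostratigraphy involves: band-pass a (here synthetic) iron-intensity series around the precession band and recover its amplitude envelope, which carries the eccentricity modulation used for tuning. The filter design and the synthetic series are illustrative choices, not the paper's processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

dt = 1.0                      # sample spacing, kyr
t = np.arange(0, 4000, dt)    # 4 Myr of record
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * t / 405.0)        # "eccentricity" modulation
fe = (envelope * np.sin(2 * np.pi * t / 21.0)
      + 0.3 * np.random.default_rng(0).standard_normal(t.size))

# Band-pass around precession periods of roughly 19-23 kyr.
b, a = butter(4, [1 / 23.0, 1 / 19.0], btype="bandpass", fs=1 / dt)
prec = filtfilt(b, a, fe)
amp = np.abs(hilbert(prec))   # amplitude envelope ~ eccentricity

print("correlation of recovered envelope with input eccentricity:",
      round(np.corrcoef(amp, envelope)[0, 1], 2))
```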

Relevance: 100.00%

Publisher:

Abstract:

This study combined data on fin whale Balaenoptera physalus, humpback whale Megaptera novaeangliae, minke whale B. acutorostrata, and sei whale B. borealis sightings from large-scale visual aerial and ship-based surveys (248 and 157 sightings, respectively) with synoptic acoustic sampling of krill Meganyctiphanes norvegica and Thysanoessa sp. abundance in September 2005 in West Greenland to examine the relationships between whales and their prey. Krill densities were obtained by converting relationships of volume backscattering strengths at multiple frequencies to a numerical density using an estimate of krill target strength. Krill data were vertically integrated in 25 m depth bins between 0 and 300 m to obtain water column biomass (g/m²) and translated to density surfaces using ordinary kriging. Standard regression models (Generalized Additive Modeling, GAM, and Generalized Linear Modeling, GLM) were developed to identify important explanatory variables relating the presence, absence, and density of large whales to the physical and biological environment and different survey platforms. Large baleen whales were concentrated in 3 focal areas: (1) the northern edge of Lille Hellefiske bank between 65 and 67°N, (2) north of Paamiut at 63°N, and (3) in South Greenland between 60 and 61°N. There was a bimodal pattern of mean krill density between depths, with one peak between 50 and 75 m (mean 0.75 g/m², SD 2.74) and another between 225 and 275 m (mean 1.2 to 1.3 g/m², SD 23 to 19). Water column krill biomass was 3 times higher in South Greenland than at any other site along the coast. Total depth-integrated krill biomass was 1.3 × 10⁹ (CV 0.11). Models indicated the most important parameter in predicting large baleen whale presence was integrated krill abundance, although this relationship was only significant for sightings obtained on the ship survey. This suggests that a high degree of spatio-temporal synchrony in observations is necessary for quantifying predator-prey relationships. Krill biomass was most predictive of whale presence at depths >150 m, suggesting a threshold depth below which it is energetically optimal for baleen whales to forage on krill in West Greenland.
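
A sketch of the presence/absence modelling step using a binomial GLM, assuming a per-cell table of whale sightings and kriged krill biomass; the data frame below is synthetic and the column names are illustrative, not those of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "krill_gm2": rng.gamma(2.0, 2.0, n),      # depth-integrated krill biomass, g/m2
    "depth_m": rng.uniform(50, 400, n),       # bottom depth of the survey cell
})
# Synthetic truth: presence more likely with more krill, especially in deep water.
logit = -2.0 + 0.4 * df["krill_gm2"] + 0.004 * df["depth_m"]
df["whale_present"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["krill_gm2", "depth_m"]])
glm = sm.GLM(df["whale_present"], X, family=sm.families.Binomial()).fit()
print(glm.summary().tables[1])                # coefficients for krill and depth
```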

Relevance: 100.00%

Publisher:

Abstract:

1. The spatial distribution of individual plants within a population and the population’s genetic structure are determined by several factors, such as dispersal, reproductive mode or biotic interactions. The role of interspecific interactions in shaping the spatial genetic structure of plant populations remains largely unknown. 2. Species with a common evolutionary history are known to interact more closely with each other than unrelated species, owing to the greater number of traits they share. We hypothesize that plant interactions may shape the fine-scale genetic structure of closely related congeners. 3. We used spatial statistics (georeferenced design) and molecular techniques (ISSR markers) to understand how two closely related congeners, Thymus vulgaris (a widespread species) and T. loscosii (a narrow endemic), interact at the local scale. The specific cover and number of individuals of both study species, together with several community attributes, were measured in a 10 × 10 m plot. 4. Both species showed similar levels of genetic variation but differed in their spatial genetic structure. Thymus vulgaris showed spatial aggregation but no spatial genetic structure, while T. loscosii showed spatial genetic structure (positive genetic autocorrelation) at short distances. The spatial pattern of T. vulgaris cover was significantly dissociated from that of T. loscosii. The same was true for the spatial patterns of T. vulgaris cover versus T. loscosii abundance, and for the abundances of the two species. Most importantly, we found a correlation between the genetic structure of T. loscosii and the abundance of T. vulgaris: T. loscosii plants were genetically more similar when they were surrounded by a similar number of T. vulgaris plants. 5. Synthesis. Our results reveal spatially complex genetic structures of both congeners at small spatial scales. The negative association between the spatial patterns of the two species and the genetic structure found for T. loscosii in relation to the abundance of T. vulgaris indicate that competition between the two species may account for the presence of T. loscosii ecotypes adapted to the local abundance of a competing congener. This suggests that the presence and abundance of close congeners can influence the spatial genetic structure of plant species at fine scales.
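
The sketch below shows a simple distance-class analysis of spatial genetic structure of the kind described above, using a binary band matrix (ISSR-like presence/absence scores) and plant coordinates; the data are simulated and the similarity measure (simple matching) and distance classes are illustrative choices, not the paper's methods.

```python
import numpy as np

rng = np.random.default_rng(2)
n_plants, n_loci = 60, 40
xy = rng.uniform(0, 10, size=(n_plants, 2))            # positions in a 10 x 10 m plot
bands = rng.integers(0, 2, size=(n_plants, n_loci))    # ISSR-like 0/1 band scores

# Pairwise genetic similarity (simple matching) and geographic distance.
gen_sim = np.array([[np.mean(bands[i] == bands[j]) for j in range(n_plants)]
                    for i in range(n_plants)])
geo = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

iu = np.triu_indices(n_plants, k=1)                    # unique pairs only
edges = [0, 1, 2, 4, 8, 15]                            # distance classes in metres
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (geo[iu] >= lo) & (geo[iu] < hi)
    if mask.any():
        print(f"{lo:>2}-{hi:<2} m: mean genetic similarity = {gen_sim[iu][mask].mean():.3f}")
```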

Relevance: 100.00%

Publisher:

Abstract:

Providing experimental facilities for the Internet of Things (IoT) world is of paramount importance to materialise the Future Internet (FI) vision. The level of maturity achieved at the networking level in Sensor and Actuator Networks (SAN) justifies the increasing demand from the research community to shift IoT testbed facilities from the network to the service and information management areas. In this paper we present an Experimental Platform that fulfils these needs by integrating heterogeneous SAN infrastructures in a homogeneous way, providing mechanisms to handle information, and facilitating the development of experimental services. It has already been used to deploy applications in three different field trials (smart metering, smart places, and environmental monitoring), and it will be one of the components on which the SmartSantander project, which targets a large-scale IoT experimental facility, will rely.
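
A hypothetical sketch of how heterogeneous sensor infrastructures can be exposed homogeneously: each infrastructure implements a common adapter interface that yields observations in one normalized record format. The class and field names are illustrative and are not the platform's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    sensor_id: str
    phenomenon: str       # e.g. "temperature", "energy_consumption"
    value: float
    unit: str
    timestamp: datetime

class SensorAdapter(ABC):
    """Common contract every SAN infrastructure adapter must fulfil."""
    @abstractmethod
    def observations(self) -> list[Observation]: ...

class SmartMeteringAdapter(SensorAdapter):
    def observations(self) -> list[Observation]:
        # In a real deployment this would poll the metering gateway.
        return [Observation("meter-42", "energy_consumption", 1.37, "kWh",
                            datetime.now(timezone.utc))]

class EnvironmentalAdapter(SensorAdapter):
    def observations(self) -> list[Observation]:
        return [Observation("env-7", "temperature", 18.2, "degC",
                            datetime.now(timezone.utc))]

# Experimental services see a single homogeneous observation stream.
for adapter in (SmartMeteringAdapter(), EnvironmentalAdapter()):
    for obs in adapter.observations():
        print(obs)
```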

Relevance: 100.00%

Publisher:

Abstract:

Modern power converters must fulfill many requirements. Most of the applications where these converters are used demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads like microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps, the supply voltage of the microprocessor should be kept within tight limits in order to ensure its correct operation. Accomplishing these requirements is not an easy task; complex solutions such as advanced topologies (for example, multiphase converters) and advanced control strategies are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of power converters does not rely only on the control strategy: the power topology should also be suited to enable a fast dynamic response. Moreover, in recent years, a fast dynamic response has come to mean not only accomplishing fast load steps; output voltage steps are gaining importance as well. At least two applications that require fast voltage changes can be named. The first is low-power microprocessors: in these devices, the supply voltage is changed according to the workload while the operating frequency of the microprocessor is changed at the same time, which yields an important reduction in voltage-dependent losses. This technique is known as Dynamic Voltage Scaling (DVS). The second application where important energy savings can be achieved by changing the supply voltage is radio-frequency power amplifiers. For example, RF architectures based on 'Envelope Tracking' and 'Envelope Elimination and Restoration' techniques can take advantage of supply-voltage modulation and accomplish important energy savings in the power amplifier. However, in order to achieve these efficiency improvements, a power converter with high efficiency and a high enough bandwidth (hundreds of kHz or even tens of MHz) is necessary to ensure an adequate supply voltage.

The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology. Here, the term dynamic response refers both to load steps and to voltage steps; it is also of interest to modulate the output voltage of the converter with a specific bandwidth. To accomplish this, the question of what limits the dynamic response of power converters should be answered. Analyzing this question leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance of the converter, which sits in series between the input and the output of the converter. The series inductance determines the gain of the converter and provides the regulation capability. Although the energy stored in the filter inductance enables regulation and the capability of filtering the output voltage, it imposes the limitation that is the concern of this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the slew rate of the current through the inductor. Different solutions are proposed in the literature to reduce the limit imposed by the filter inductor, including many publications proposing new topologies and improvements to known topologies. Complex control strategies are also proposed with the objective of improving the dynamic response of power converters. In the proposed topologies, the energy stored in the series inductor is reduced; examples are multiphase converters, the Buck converter operating at very high frequency, or adding a low-impedance path in parallel with the series inductance. Control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples are hysteresis control, V² control, and minimum-time control. In some of the proposed topologies, a reduction in the value of the series inductance is achieved and, with it, a reduction of the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter), the dynamic response is improved at the cost of worsening the efficiency.

In this Thesis, a drastic solution is proposed: to completely eliminate the series inductance of the converter. This is a more radical solution than those proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make it difficult to use the topology in single-converter solutions; however, the topology is suitable for power architectures where the energy conversion is done by more than one converter. When the series inductor is eliminated from the converter, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is therefore to propose an energy conversion strategy that works without a series inductance. Without a series inductance, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all devices were ideal. If the energy transfer from the input to the output were done instantaneously when a load step occurs, conceptually it would not be necessary to store energy at the output of the converter (no output capacitor COUT would be needed) and, if the input source were ideal, the input capacitor CIN would not be necessary. This last feature (no CIN with an ideal VIN) is common to all power converters. However, when the concept is actually implemented, parasitic inductances such as the leakage inductance of the transformer and the parasitic inductance of the PCB cannot be avoided, because they are inherent to the implementation of the converter. These parasitic elements do not significantly affect the proposed concept. Operating the converter without a series inductance improves its dynamic response; on the other hand, the continuous regulation capability of the converter is lost. It is said continuous because, as explained throughout the Thesis, it is indeed possible to achieve discrete regulation: a converter without a filter inductance and without energy stored in the magnetic element is capable of reaching a limited number of output voltage levels, and the changes between these levels are achieved quickly.

The proposed energy conversion strategy is implemented by means of a multiphase converter where the coupling of the phases is done by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of this energy conversion strategy is first analyzed and then verified by simulation and by the implementation of experimental prototypes. Once the strategy is proved valid, different options to implement the magnetic structure are analyzed, and three different discrete transformer arrangements are studied and implemented. A converter based on this energy conversion strategy would be designed with a different approach than classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency. Low operating frequencies can be chosen to favor efficiency; on the other hand, high operating frequencies (MHz) can be chosen to favor the size of the converter. For this reason, a particular design procedure is proposed for the 'inductorless' conversion strategy. Finally, applications where the features of the proposed conversion strategy (high efficiency with a fast dynamic response) are advantageous are proposed: for example, two-stage power architectures where a high-efficiency converter is needed as the first stage and a second stage provides the fine regulation, or RF power amplifiers where the voltage is modulated following an envelope reference in order to save power; in this application, a high-efficiency converter capable of achieving fast voltage steps is required. The main contributions of this Thesis are the following: the proposal of a conversion strategy that, ideally, stores no energy in the magnetic element; the validation and implementation of the proposed energy conversion strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the proposed strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed conversion strategy open the possibility of solving the problems related to microprocessor powering in a different way; for example, the high efficiency achieved makes it a good candidate for power conditioning as the first stage in a two-stage power architecture for powering microprocessors.
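
As a rough numerical illustration of the trade-off described above (assuming, purely for illustration, that an N-phase transformer-coupled stage can reach the output levels k·VIN/N, which is not necessarily the thesis's exact level set), the sketch below lists the discrete levels available without a series inductor and, for contrast, the current slew-rate limit that a series inductor would impose on a conventional Buck stage.

```python
# Illustrative numbers only; the reachable level set k*VIN/N is an assumption.
VIN = 12.0
N_PHASES = 4

levels = [k * VIN / N_PHASES for k in range(N_PHASES + 1)]
print("assumed reachable output levels (V):", levels)

def closest_level(v_target):
    """Discrete regulation: deliver the nearest reachable level."""
    return min(levels, key=lambda v: abs(v - v_target))

print("requested 5.0 V -> delivered", closest_level(5.0),
      "V (fine regulation left to a second stage)")

# For contrast, the current slew-rate limit a series inductor imposes on a Buck
# converter during a load step: di/dt = (VIN - VOUT) / L.
L = 300e-9   # 300 nH, an arbitrary illustrative value
print("Buck slew-rate limit:", (VIN - 1.0) / L / 1e6, "A/us")
```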

Relevance: 100.00%

Publisher:

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.

Relevance: 100.00%

Publisher:

Abstract:

It is easy to get frustrated at spoken conversational agents (SCAs), perhaps because they seem to be callous. By and large, the quality of human-computer interaction is affected by the inability of SCAs to recognise and adapt to the user's emotional state. With the mass appeal of artificially mediated communication, there has been an increasing need for SCAs to be socially and emotionally intelligent, that is, to infer and adapt to their human interlocutors' emotions on the fly in order to achieve an affective, empathetic and naturalistic interaction. An enhanced quality of interaction would reduce users' frustration and consequently increase their satisfaction. These reasons have motivated the development of SCAs that include socio-emotional elements, turning them into affective and socially sensitive interfaces. One barrier to the creation of such interfaces has been the lack of methods for modelling emotions in a task-independent environment: most emotion models for spoken dialog systems are task-dependent and thus cannot be used "as-is" in different applications. This Thesis addresses that gap; it concerns the computational modelling of emotion, personality and their interrelationship for task-independent autonomous SCAs, in which the generation of emotion is driven by needs, inspired by human motivational systems.

The work in this Thesis is organised in three stages, each with its own contribution. The first stage involved defining, integrating and quantifying the motivational and emotional models adopted from psychology; these were then turned into a computational model by implementing them as software entities, and the computational model was incorporated and put to the test with an existing SCA host, a HiFi-control agent. The second stage concerned automatic prediction of affect, which has been the main challenge towards the greater aim of infusing social intelligence into the HiFi agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, in which emotions are subtler; perceiving subtler emotions is more challenging, and this is demonstrated in tasks such as labelling and machine prediction. In this stage, we attempted to address part of this challenge by considering user satisfaction ratings and conversational/dialog features as the respective target and predictors for discriminating contentment and frustration, two types of emotions that are known to be prevalent within spoken human-computer interaction. The final stage concerned the evaluation of the emotional model through the HiFi agent. A series of user studies with 70 subjects was conducted in a real-time environment, each in a different phase and with its own conditions, and all of them compared the baseline non-modified agent with the modified agent. The findings have gone some way towards enhancing our understanding of the utility of emotion in spoken dialog systems: first, an SCA should not express its emotions blindly, even positive ones; rather, it should adapt its emotions to user states. Second, low performance in an SCA may be compensated by the exploitation of emotion. Third, expressing emotion through prosody can improve users' perceptions of an SCA more than expressing emotion through lexical content alone. Taken together, these findings not only support the success of the emotional model but also provide substantial evidence of the benefits of adding emotion to an SCA, especially in mitigating users' frustration and ultimately improving their satisfaction.
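
A sketch of the affect-prediction stage described above, with dialog features as predictors and a binarised satisfaction label as the target for discriminating contentment from frustration; the feature names and data are synthetic illustrations, not the thesis's corpus or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.poisson(1.0, n),          # hypothetical: number of ASR rejections per dialog
    rng.poisson(0.5, n),          # hypothetical: number of user re-phrasings
    rng.uniform(2, 30, n),        # hypothetical: dialog length in turns
])
# Synthetic ground truth: more rejections and re-phrasings -> frustration (label 1).
p = 1 / (1 + np.exp(-(-1.5 + 0.8 * X[:, 0] + 1.0 * X[:, 1] + 0.02 * X[:, 2])))
y = rng.binomial(1, p)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```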

Relevance: 100.00%

Publisher:

Abstract:

An aerodynamic optimization of the ICE 2 high-speed train nose in terms of front wind action sensitivity is carried out in this paper. The nose is parametrically defined by Bézier curves, and a three-dimensional representation of the nose is obtained using thirty-one design variables. This implies a more complete parametrization, allowing the representation of a real model. In order to perform this study, a genetic algorithm (GA) is used. Using a GA involves a large number of evaluations before finding the optimum. Hence, the use of metamodels or surrogate models is proposed to replace the Navier-Stokes solver and speed up the optimization process. Adaptive sampling is considered to optimize surrogate model fitting and minimize the computational cost when dealing with a very large number of design parameters. The paper shows the feasibility of using a GA in combination with metamodels for real high-speed train geometry optimization.
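
The sketch below illustrates the GA-plus-metamodel loop in a hedged form: a Gaussian-process regressor plays the role of a kriging-like surrogate for the expensive Navier-Stokes evaluations, and adaptive sampling re-evaluates the surrogate's best candidate with the "true" solver each generation. The cfd function is a cheap stand-in, and the 31-variable Bézier parametrization is reduced to 5 variables for brevity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
DIM = 5

def cfd(x):
    # Cheap stand-in for an expensive Navier-Stokes evaluation of the nose shape.
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x[0])

# Initial design of experiments evaluated with the "true" solver.
X = rng.uniform(0, 1, size=(20, DIM))
y = np.array([cfd(x) for x in X])

for gen in range(15):
    # Fit the surrogate (kriging-like metamodel) on all solver evaluations so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)
    # Simplified GA step: mutate the current best individuals and rank the
    # offspring on the surrogate instead of the expensive solver.
    parents = X[np.argsort(y)[:5]]
    children = np.clip(parents[rng.integers(0, 5, 40)]
                       + rng.normal(0, 0.1, (40, DIM)), 0, 1)
    best_child = children[np.argmin(gp.predict(children))]
    # Adaptive sampling: verify the surrogate's favourite with the true solver
    # and add it to the training set for the next generation.
    X = np.vstack([X, best_child])
    y = np.append(y, cfd(best_child))

print("best objective found:", y.min().round(4))
print("the stand-in objective's optimum lies near x = 0.3 in every variable")
```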