19 results for Welfare To Work

at Universidad Politécnica de Madrid


Relevance:

90.00%

Publisher:

Abstract:

The European Higher Education Area (EHEA) has led to a change in the way subjects are taught. One of the most important aspects of the EHEA is its support for students' autonomous study. Taking this new approach into account, the virtual laboratory of the subject Mechanisms of the Aeronautical studies at the Technical University of Madrid is being migrated to an on-line scheme. This virtual laboratory consists of two practices: the design of cam-follower mechanisms and the design of gear trains. Both practices are software applications that, at present, need to be installed on each computer, and the students carry out the practice in the school's computer classroom under the supervision of a teacher. During this year the cam-follower design practice has been moved to a web application using Java and the Google Web Toolkit. In this practice the students have to design and study the running of a cam that performs a specific displacement diagram with a selected follower, taking into account that the mechanism must be able to work properly at high speed. The practice has kept its objectives on the new platform while taking advantage of the new methodology and avoiding the inconveniences the previous version had shown. Once the new practice was ready, a pilot study was carried out to compare the two approaches: on-line and in-lab. This paper presents the adaptation of the cam-and-follower practice to an on-line methodology. Both practices are described and the changes made to the initial one are shown. They are compared and the weak and strong points of each are analyzed. Finally, we explain the pilot study carried out, the students' impressions and the results obtained.
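
As a hedged illustration of what the students compute in this practice, the sketch below evaluates a cycloidal displacement law, a profile commonly chosen for high-speed cams because velocity and acceleration remain continuous. The lift, rise angle and camshaft speed are invented values, not taken from the actual exercise.

```python
import numpy as np

# Cycloidal rise: s(theta) = h * (x - sin(2*pi*x) / (2*pi)), with x = theta/beta.
# Velocity and acceleration stay finite and continuous, which is why this
# law suits high-speed operation. All parameter values are hypothetical.

h = 20.0                      # total follower lift [mm] (hypothetical)
beta = np.radians(120.0)      # cam rotation angle for the rise [rad]
omega = 50.0                  # camshaft speed [rad/s] (hypothetical)

theta = np.linspace(0.0, beta, 200)        # cam angle during the rise
x = theta / beta

s = h * (x - np.sin(2 * np.pi * x) / (2 * np.pi))                  # displacement
v = (h / beta) * (1 - np.cos(2 * np.pi * x)) * omega               # velocity
a = (2 * np.pi * h / beta**2) * np.sin(2 * np.pi * x) * omega**2   # acceleration

print(f"max velocity     = {v.max():8.2f} mm/s")
print(f"max acceleration = {a.max():8.2f} mm/s^2")
```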


Relevance:

90.00%

Publisher:

Abstract:

Cognitive linguistics has conscientiously pointed out the pervasiveness of the conceptual mappings, particularly conceptual blending and integration, that underlie language and are unconsciously used in everyday speech (Fauconnier 1997; Fauconnier & Turner 2002; Rohrer 2007; Grady, Oakley & Coulson 1999). Moreover, as a further development of this work, there is growing interest in research devoted to the conceptual mappings that make up specialized technical disciplines. Lakoff & Núñez (2000), for example, produced a major breakthrough in the understanding of concepts in mathematics through conceptual metaphor, as a result not of purely abstract concepts but of embodiment. On the engineering and architecture front, analyses of the use of metaphor, blending and categorization in English and Spanish have likewise appeared in recent times (Úbeda 2001; Roldán 1999; Caballero 2003a, 2003b; Roldán & Úbeda 2006; Roldán & Protasenia 2007). The present paper seeks to show a number of significant conceptual mappings underlying the language of architecture and civil engineering that seem to shape the way engineers and architects communicate. In order to work with a significant segment of linguistic expressions in this field, a corpus drawn from a widely used Spanish technical engineering journal was collected and analysed. Examination of the data obtained indicates that many tokens make direct reference to therapeutic conceptual mappings, highlighting medical domains such as diagnosing, treating and curing. The paper illustrates how this notion is instantiated by the corresponding bodily conceptual integration. In addition, we underline the function of visual metaphors in the world of modern architecture, which evoke parts of human or animal anatomy, and show how this is visibly noticeable in contemporary buildings and public works structures.
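
To make the corpus procedure concrete, here is a minimal, hypothetical sketch of the kind of token tally involved: counting occurrences of therapeutic-domain terms in a text. The term list and the sample sentence are invented for illustration and are not the study's actual lexicon or corpus.

```python
import re
from collections import Counter

# Hypothetical lexicon of therapeutic-domain terms; the real study worked
# on Spanish journal text with its own lexicon.
THERAPEUTIC_TERMS = {"diagnose", "diagnosis", "treat", "treatment",
                     "cure", "heal", "pathology", "symptom"}

text = """Engineers diagnose the pathology of the structure before
deciding on a treatment that can cure the damaged concrete."""

# Lowercase tokenization, keeping accented letters for Spanish corpora.
tokens = re.findall(r"[a-záéíóúñ]+", text.lower())
hits = Counter(t for t in tokens if t in THERAPEUTIC_TERMS)
print(hits)   # tallies of therapeutic-domain tokens in the sample
```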

Relevance:

90.00%

Publisher:

Abstract:

Twelve years ago a group of teachers began to work in educational innovation. In 2002 we received an award for educational innovation, and the group has gone through several stages since. Recently, we decided to focus on being teachers of educational innovation. We created a website built with Joomla that offers various services, among which we emphasize courses on educational innovation. The Instituto de Ciencias de la Educación at the Universidad Politécnica de Madrid has recently incorporated two of these courses, and they have been highly praised. These courses will be offered again in new editions, and we are going to offer them to more universities. We are in contact with several institutions, radio programs and the UNESCO Chair of Mining and Industrial Heritage, and we are working with them on the creation of heritage courses using the methods we have developed.

Relevance:

90.00%

Publisher:

Abstract:

Fractal and multifractal concepts have grown increasingly popular in soil analysis in recent years, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, usually by the least squares method. This should not be a problem; however, with experimental data the researcher often has to select the range of scales at which to work, neglecting the remaining points, in order to achieve the linearity that this type of analysis requires. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With it, we do not have to assume that an outlying point is simply an extreme observation drawn from the tail of a normal distribution that does not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the experimental data points to use, trying to avoid subjective choices. Based on this analysis we have developed a new working methodology with two basic steps: (i) evaluating the improvement of the linear fit when consecutive points are eliminated, based on the R value and p-value of the fit, thereby weighing the implications of reducing the number of points; and (ii) evaluating the significance of the difference between the slope fitted with the two extreme points included and the slope fitted with the selected points. We compare the results of applying this methodology with those of the commonly used least squares approach. The data selected for these comparisons come from experimental soil roughness transects and from simulations based on the midpoint displacement method with trends and noise added. The results are discussed, indicating the advantages and disadvantages of each methodology.
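
A minimal sketch of the comparison described above, assuming simulated log-log scaling data: it contrasts the ordinary least squares slope with a robust (Theil-Sen) slope and applies a naive version of the endpoint-elimination step. This is an illustration of the general technique, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Simulated log-log scaling data: true slope 1.6, with the two largest
# scales breaking linearity, as often happens in fractal analysis.
rng = np.random.default_rng(0)
log_r = np.linspace(0, 3, 30)                   # log of scale
log_n = 1.6 * log_r + rng.normal(0, 0.05, 30)   # log of measure + noise
log_n[-2:] += 0.8                               # linearity break at large scales

ols = stats.linregress(log_r, log_n)
ts_slope, ts_icept, _, _ = stats.theilslopes(log_n, log_r)   # robust slope
print(f"OLS slope    = {ols.slope:.3f}  (R = {ols.rvalue:.4f})")
print(f"robust slope = {ts_slope:.3f}")

# Naive endpoint elimination: drop the last point while the fit improves.
lo, hi = 0, len(log_r)
while hi - lo > 10:
    full = stats.linregress(log_r[lo:hi], log_n[lo:hi])
    trim = stats.linregress(log_r[lo:hi - 1], log_n[lo:hi - 1])
    if trim.rvalue <= full.rvalue:
        break
    hi -= 1                                     # the last point hurt the fit
final = stats.linregress(log_r[lo:hi], log_n[lo:hi])
print(f"kept points {lo}..{hi - 1}, slope = {final.slope:.3f}")
```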

Relevance:

90.00%

Publisher:

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular accidents, that are considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for the comprehensive assessment of measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demands a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) comes in, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology extends classical PSA with accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. Applying the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the approach so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work.

This document presents the work done to investigate more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced. Therefore, some assumptions were made in order to work in simplified scenarios best suited to a first approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
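
The following sketch illustrates, in a toy setting, the kind of gain that motivates this work: estimating a small damage probability with far fewer runs by importance sampling. It is not the ISA/TSD method itself; the "simulation" is a stand-in for an expensive transient code, and all numbers are invented.

```python
import numpy as np

# Toy damage model: damage occurs when the (random) time to recover a
# safety system exceeds a limit. A real study would replace simulate()
# with a full thermal-hydraulic transient run. All numbers are invented.
rng = np.random.default_rng(1)

LIMIT = 30.0                       # damage threshold [min] (hypothetical)
MU, SIGMA = 12.0, 5.0              # nominal recovery-time distribution

def simulate(t_recovery):
    return t_recovery > LIMIT      # stand-in for an expensive transient code

# Crude Monte Carlo: needs many runs because damage is rare.
n = 100_000
t = rng.normal(MU, SIGMA, n)
p_crude = simulate(t).mean()

# Importance sampling: draw from a density shifted to the threshold,
# then reweight each run by the likelihood ratio f(t)/g(t).
m = 2_000
t_is = rng.normal(LIMIT, SIGMA, m)
w = np.exp(((t_is - LIMIT)**2 - (t_is - MU)**2) / (2 * SIGMA**2))
p_is = np.mean(simulate(t_is) * w)

print(f"crude MC ({n} runs):            {p_crude:.2e}")
print(f"importance sampling ({m} runs): {p_is:.2e}")
```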

Relevance:

90.00%

Publisher:

Abstract:

This Ph.D. work describes the investigations performed between 2008 and 2011 in preparation for the study and design of a coaxial cryogenic reference noise standard. Reliable and traceable measurement underpins the welfare of a modern society and plays a critical role in supporting economic competitiveness, manufacturing and trade, as well as quality of life. In our modern world, a well-developed measurement infrastructure gives confidence in many aspects of daily life, for example by enabling the development and manufacturing of reliable, high-quality and innovative products; by supporting industry in being competitive and sustainable in its production; by removing technical barriers to trade and supporting fair trade; by ensuring the safety and effectiveness of healthcare; and by responding to the major challenges in key sectors such as energy and the environment.

With all this in mind, we have developed a thermal noise primary standard with the aim of providing the Spanish metrology system with a new primary reference standard for noise. This standard will allow the development of reliable and traceable measurements in the field of calibration and measurement of RF and microwave electromagnetic noise devices. The standard has been designed to work in the frequency range from 10 MHz to 26.5 GHz, meeting the following specifications: 1. A nominal noise temperature output of approximately 83 K. 2. A noise temperature uncertainty of less than ±1 K over the frequency range from 0.01 to 26.5 GHz. 3. Broadband performance, requiring as low a reflection coefficient as possible from 0.01 to 26.5 GHz.

The present Ph.D. work is divided into three clearly differentiated parts. The first, comprising Chapters 1 to 5, presents the whole process of simulation and adjustment of the main parameters of the device in order to identify those that are critical for its manufacturing. The second part, Chapter 6, carries out the computations needed to obtain the output noise temperature of the device. The third and last part, Chapter 7, is devoted to the estimation of the uncertainty of the noise temperature of the primary noise standard obtained in the preceding chapter.

More specifically, Chapter 1 provides a thorough introduction to the scientific and technological environment in which this research takes place; it also details the objectives to be achieved and presents the methodology used to achieve them. Chapter 2 describes the characterization and selection of the dielectric bead material inside the transmission line, intended to put the two coaxial conductors in thermal contact, equalizing their temperatures while keeping the characteristic impedance constant along the whole standard; in addition, the dielectric properties of liquid nitrogen are analyzed in order to assess their influence on the impedance of the transmission line. Chapter 3 analyzes the behavior of two commercial loads and a commercial airline under cryogenic working conditions. This study is intended to obtain the variation of the reflection coefficient when the temperature changes from room to cryogenic temperature, and to check whether these devices are damaged by working at cryogenic temperatures; it also examines whether the load changes its behavior after successive cooling and heating cycles, in order to bound the allowed variation and select the load with the lowest reflection coefficient and the lowest variability.

Chapter 4 analyzes the dielectric bead ring structure used in NBS Technical Note 1074 of NIST, in order to obtain its scattering parameters, which are then used to compute its effect upon the reflection coefficient of the whole coaxial structure. Subsequently, a further investigation is performed with the aim of improving the design of that technical note, and modifications are introduced in the geometry of the transition area in order to reduce the reflection it produces. We first analyze the ring, specifically the radius of the inner conductor in the bead region, and then adjust its geometry so that it presents the same characteristic impedance as the line. Finally, the relationship between the inner conductor radius and the radius of the thermal ring transition is obtained analytically, guaranteeing the same characteristic impedance at every point of the transition while meeting realistic robustness and manufacturing requirements.

Chapter 5 analyzes the thermal behavior of the noise standard and its influence on the conductivity of the metallic materials. Both possibilities are considered: that the liquid nitrogen remains outside the line, or that it penetrates inside it. In both cases, given the rotational symmetry of the structure, a section of the coaxial line has been simulated thermally, i.e. the equivalent two-dimensional problem has been solved, although the results are applicable to the actual three-dimensional structure. The Matlab PDE Toolbox was used for the thermal simulation.

In Chapter 6 the output noise temperature of the device is computed. The starting point is the analysis of the contribution of each section making up the standard to the overall noise temperature. Moreover, the influence of variations in the parameters of the elements of the standard is analyzed, specifically the variation of the reflection coefficient along the entire device. Once the electromagnetic noise standard has been described and analyzed, Chapter 7 describes the steps followed to estimate the uncertainty of the output electromagnetic noise temperature. This is done using two methods: the classical analytical approach of the Guide to the Expression of Uncertainty in Measurement [GUM95], and numerical simulations made with the Monte Carlo method.

Chapter 8 discusses the conclusions and achievements. During the development of this thesis, a novel and potentially patentable device was obtained, which was registered with the Spanish Patent and Trademark Office (O.E.P.M.) in Madrid, in accordance with the provisions of Article 20 of Law 11/1986 on Patents, dated March 20th, 1986. It was registered under the denomination Patrón Primario de Ruido Térmico de Banda Ancha (Broadband Thermal Noise Primary Standard, Reference P-101061), dated February 7th, 2011.
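
As a hedged illustration of the Chapter 6 and 7 machinery, the sketch below cascades the standard passive-section relation T_out = G*T_in + (1 - G)*T_phys through a few lossy sections and evaluates the output noise temperature uncertainty by Monte Carlo. The section gains, temperatures and uncertainties are invented, not the thesis values.

```python
import numpy as np

# Monte Carlo propagation of uncertainty through a cascade of passive
# sections: a cold termination followed by lossy coaxial sections, each
# at its own physical temperature. All values below are illustrative.
rng = np.random.default_rng(2)
N = 200_000

T_cold = rng.normal(77.36, 0.05, N)       # termination in liquid nitrogen [K]
sections = [                               # (gain, u(gain), Tphys, u(Tphys))
    (0.995, 0.001, 77.4, 0.1),             # cold line section
    (0.985, 0.002, 180.0, 5.0),            # thermal-gradient section
    (0.990, 0.001, 296.0, 0.5),            # room-temperature section
]

T = T_cold
for g, ug, tp, utp in sections:
    G = rng.normal(g, ug, N)
    Tp = rng.normal(tp, utp, N)
    T = G * T + (1.0 - G) * Tp             # cascade one lossy section

print(f"output noise temperature = {T.mean():.2f} K")
print(f"standard uncertainty     = {T.std(ddof=1):.2f} K")
```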

Relevance:

90.00%

Publisher:

Abstract:

Although the aim of empirical software engineering is to provide evidence for selecting the appropriate technology, there appears to be a lack of recognition of this work in industry. Results from empirical research only rarely seem to find their way to company decision makers. If information relevant to software managers were provided in reports on experiments, such reports could serve as a source of information when managers face decisions about the selection of software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing the information needs of software managers in order to show empirical software engineering researchers which information is relevant for decision making, and thus enable them to make this information available. We empirically investigated decision makers' information needs to identify which information they need to judge the appropriateness and impact of a software technology, and we empirically developed a model that characterizes these needs. To ensure that researchers provide relevant information when reporting results from experiments, we extended existing reporting guidelines accordingly. We performed an experiment to evaluate the effectiveness of our model. Software managers who read an experiment report written according to the proposed model judged the technology's appropriateness significantly better than those reading a report about the same experiment that did not explicitly address their information needs. Our research shows that information regarding a technology, the context in which it is supposed to work and, most importantly, the impact of this technology on development costs and schedule, as well as on product quality, is crucial for decision makers.

Relevance:

90.00%

Publisher:

Abstract:

Whenever a framework is laid down for creating and running a business correctly, there are always people who see those foundations as a challenge and look for a loophole. Dealing with such situations is not a matter of law; it is a matter of devoting time to identifying them. It is often said that evil goes a step ahead. Business ethics have for quite some time been bent by would-be entrepreneurs who have learned to play with ethical standards in order to present their business as prosperous, to make it stand out, and to embellish its results quickly. Once a company reaches an international dimension it takes on global responsibility, and in these cases one can see whether the objective has been a rapid capital increase or whether growth is in line with the company's proportions. Business ethics rest on establishing a strong base so that trust is encouraged from an early stage. Good staff and a sound organization should be achieved, not only inside the company but outside it too. In this way a secure base can be created to convince potential investors and employees about the business. There are no freeways in business ethics, and every shortcut is either a stroke of genius or a path to failure. We must find where these jumps over the rules of business ethics occur, so that we can distinguish a company or an entrepreneur that is working correctly from one that is merely cloaked. Starting from the basics of business ethics, and studying the different levels from the personal one to the image the company projects to the world, we examine where these changes are occurring, how we can fight them, and how the market can anticipate possible cases of fraud or strange movements that seek to attract the unwary.


Relevance:

90.00%

Publisher:

Abstract:

This paper tries to show how artisans could have discovered all uniform tilings, and other very interesting ones, using artisanal combinatorial procedures, without resorting to mathematical procedures beyond their reach. Plane geometry started its way through history by means of fundamental drawing tools: ruler and compass. Artisans used the same tools to carry out their ornamental patterns, but at some point they began to work manually, using physical representations of figures or tiles previously drawn by means of ruler and compass. That was an important step for the craftsman, because it provided tools that let him enter the world of symmetry operations and acquire empirical knowledge of symmetry groups. Artisans began to produce little wooden, ceramic or clay tiles and to experiment with them by joining pieces, whether edge to edge or vertex to vertex, in such a way that they could cover the plane without gaps. Economy in making floor or ceramic tiles may have been the most important reason to develop these procedures. This empirical way of developing tilings led not only to the discovery of all uniform tilings but, later, to the discovery of aperiodic tilings.
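
The artisan's experiment of joining tiles around a point can be reproduced exhaustively: regular p-gons fit around a vertex exactly when their interior angles, 180*(p-2)/p degrees each, sum to 360. Below is a minimal sketch in exact rational arithmetic; the search bounds are explained in the comments.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Interior angle of a regular p-gon, kept exact to avoid floating-point misses.
def interior(p):
    return Fraction(180 * (p - 2), p)

# Between 3 and 6 polygons fit around a point (each angle is at least 60).
# With k polygons, each angle is at most 360 - 60*(k-1), which bounds the
# largest usable polygon: 42 suffices for k = 3, 4 (the extreme solution
# is 3.7.42), 6 for k = 5, and 3 for k = 6.
CAP = {3: 42, 4: 42, 5: 6, 6: 3}

solutions = []
for k in range(3, 7):
    for combo in combinations_with_replacement(range(3, CAP[k] + 1), k):
        if sum(interior(p) for p in combo) == 360:
            solutions.append(combo)

for s in solutions:
    print(".".join(map(str, s)))
print(len(solutions), "combinations")   # 17 multisets; only some extend
                                        # to the 11 uniform tilings
```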

Relevance:

90.00%

Publisher:

Abstract:

In order to minimize car-based trips, transport planners have been particularly interested in understanding the factors that explain modal choices. The transport modelling literature has become increasingly aware that socioeconomic attributes and quantitative variables are not sufficient to characterize travelers and forecast their travel behavior. Recent studies have also recognized that users' social interactions and land use patterns influence travel behavior, especially when changes to transport systems are introduced, but links between international and Spanish perspectives are rarely dealt with. The overall objective of the thesis is to develop a stepped methodology that integrates diverse perspectives to evaluate the willingness to change patterns of urban mobility in Madrid, based on four steps: (1st) analysis of causal relationships between objective and subjective personal variables and travel behavior, to capture pro-car and pro-public-transport intentions; (2nd) exploration of the potential influence of individual trip characteristics and social influence variables on transport mode choice; (3rd) identification of built environment dimensions in travel behavior; and (4th) exploration of the potential influence on transport mode choice of extrinsic characteristics of the individual trip using panel data, of land use variables using spatial characteristics, and of social influence variables. The data used in this thesis were collected from a two-wave smartphone-based panel survey (n = 255 and 190 respondents, respectively) carried out in Madrid. Although the steps above are mainly methodological, the application to the Madrid area yields important results that can be used directly to forecast travel demand and to evaluate the benefits of specific policies that might be implemented there. The results demonstrated, respectively: (1st) transport policy actions are more likely to be effective when pro-car intention has been disrupted first; (2nd) the consideration of "helped" and "voluntary" users as tested here could have a positive and a negative impact, respectively, on the use of public transport; (3rd) the importance of density, design, diversity and accessibility as the underlying dimensions behind the land use variables; and (4th) there are clearly different types of combinations of social interactions, land use and time frame in travel behavior studies. Finally, with the objective of studying the impact of demand measures on urban mobility behavior, those previous results were brought together in a 5th step, in which a hybrid discrete choice model was used. It can be concluded that urban mobility behavior is not ruled by the maximum-utility criterion alone, but also by a strong psychological-environmental component, developed without the mediation of cognitive processes during choice; that is, many people using public transport on their way to work do not do so for utilitarian reasons, but because no other choice is available. Regarding built environment dimensions, the more diverse the place of residence, the more difficult the use of public transport or walking.
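
As a hedged sketch of the modelling baseline, the snippet below computes mode choice probabilities with a plain multinomial logit; the hybrid model of the 5th step adds latent psychological variables on top of a core like this. Coefficients and trip attributes are invented for illustration.

```python
import numpy as np

# Plain multinomial logit: utility is linear in time and cost plus an
# alternative-specific constant; probabilities follow from the softmax.
# All taste parameters and trip attributes are hypothetical.
BETA_TIME, BETA_COST = -0.08, -0.40           # marginal (dis)utilities
ASC = {"car": 0.0, "public_transport": -0.3, "walk": -0.8}

trip = {                                       # (time [min], cost [EUR])
    "car":              (25.0, 3.5),
    "public_transport": (40.0, 1.5),
    "walk":             (55.0, 0.0),
}

v = {m: ASC[m] + BETA_TIME * t + BETA_COST * c for m, (t, c) in trip.items()}
expv = {m: np.exp(u) for m, u in v.items()}
total = sum(expv.values())
for mode, e in expv.items():
    print(f"P({mode}) = {e / total:.3f}")      # probabilities sum to 1
```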

Relevance:

90.00%

Publisher:

Abstract:

Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular and now boasts many public services, datasets and lexical resources. Unfortunately, there are few lexical and semantic resources available for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, is aligned with the Provenance Ontology, and has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. This approach also provides a new and interesting way of working with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.
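
A minimal sketch of what an Onyx-style annotation of a lemon lexical entry might look like, built with rdflib. The property and class names (onyx:hasEmotionSet, onyx:hasEmotion, onyx:hasEmotionCategory, onyx:hasEmotionIntensity, lemon:LexicalEntry) and the namespace URIs are quoted from memory of the published vocabularies and should be checked against them; the entry itself is invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Assumed namespaces; verify against the published Onyx and lemon specs.
ONYX = Namespace("http://www.gsi.dit.upm.es/ontologies/onyx/ns#")
LEMON = Namespace("http://lemon-model.net/lemon#")
WNA = Namespace("http://www.gsi.dit.upm.es/ontologies/wnaffect/ns#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("onyx", ONYX)
g.bind("lemon", LEMON)

# A lemon lexical entry carrying an emotion annotation with a category
# aligned to a WordNet-Affect concept (names assumed, see lead-in).
g.add((EX.delighted, RDF.type, LEMON.LexicalEntry))
g.add((EX.delighted, ONYX.hasEmotionSet, EX.set1))
g.add((EX.set1, ONYX.hasEmotion, EX.e1))
g.add((EX.e1, ONYX.hasEmotionCategory, WNA.joy))
g.add((EX.e1, ONYX.hasEmotionIntensity, Literal(0.8)))

print(g.serialize(format="turtle"))
```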

Relevance:

90.00%

Publisher:

Abstract:

Research into software engineering teams focuses on human and social team factors. Social psychology deals with the study of team formation and has found that personality factors and group processes such as team climate are related to team effectiveness. However, there are only a handful of empirical studies dealing with personality and team climate and their relationship to software development team effectiveness. Objective: We present the aggregate results of a twice-replicated quasi-experiment that evaluates the relationships between personality, team climate, product quality and satisfaction in software development teams. Method: Our experimental study measures team members' personalities based on the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism), together with their preferences and perceptions regarding the team climate factors (participative safety, support for innovation, team vision and task orientation). We aggregate the results of the three studies through a meta-analysis of correlations. The study was conducted with students. Results: The aggregation of results from the baseline experiment and the two replications corroborates the following findings. There is a positive relationship between all four climate factors and satisfaction in software development teams. Teams whose members score highest on the agreeableness personality factor have the highest satisfaction levels. The results unveil a significant positive correlation between the extraversion personality factor and software product quality. High participative safety and task orientation climate perceptions are significantly related to quality. Conclusions: First, more efficient software development teams can be formed by heeding personality factors like agreeableness and extraversion. Second, the team climate generated in software development teams should be monitored for team member satisfaction. Finally, aspects like people feeling safe to give their opinions, or encouraging team members to work hard at their job, can have an impact on software quality. Software project managers can take advantage of these factors to promote developer satisfaction and improve the resulting product.
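
A minimal sketch of the aggregation step, assuming a fixed-effect meta-analysis of correlations: each study's r is Fisher z-transformed, weighted by n - 3, pooled, and transformed back. The r and n values below are invented, not the study's data.

```python
import numpy as np

# Fixed-effect meta-analysis of correlations via the Fisher z-transform.
# Hypothetical per-study correlations and sample sizes for illustration.
r = np.array([0.31, 0.42, 0.27])      # correlation observed in each study
n = np.array([35, 24, 31])            # participants per study

z = np.arctanh(r)                     # Fisher transform of each r
w = n - 3.0                           # inverse-variance weights, var(z) = 1/(n-3)
z_pool = np.sum(w * z) / np.sum(w)    # pooled effect on the z scale
se = 1.0 / np.sqrt(np.sum(w))         # standard error of the pooled z

r_pool = np.tanh(z_pool)              # back-transform to the r scale
ci = np.tanh([z_pool - 1.96 * se, z_pool + 1.96 * se])
print(f"pooled r = {r_pool:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```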

Relevance:

90.00%

Publisher:

Abstract:

Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. The work develops energy models and uses knowledge about the energy demand of the workload to be executed, and about the computational and cooling resources available at the data center, to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application.

The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within the temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.

When scaling to the data center level, a similar behavior can be observed in terms of leakage-temperature tradeoffs. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective.

Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques.

In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, to data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
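
As a hedged illustration of the server-level tradeoff described above, the sketch below minimizes the sum of fan power (cubic in fan speed) and leakage power (exponential in the CPU temperature that a given fan speed allows). The thermal model and all constants are invented for illustration, not the thesis models.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy leakage-cooling tradeoff: P_fan = K_FAN * rpm^3 grows with the cube
# of fan speed, while P_leak = A * exp(B * T_cpu) grows exponentially with
# the temperature that a slower fan allows. All constants are invented.
T_ROOM = 25.0          # data room temperature [C]
P_IT = 80.0            # dynamic IT power to evacuate [W]
K_FAN = 2.0e-10        # fan power coefficient [W / rpm^3]
A, B = 1.5, 0.035      # leakage model parameters [W, 1/C]

def cpu_temp(rpm):
    # Convective thermal resistance falls as the fan speeds up (toy model).
    r_th = 0.8 * (3000.0 / rpm) ** 0.8     # [C/W]
    return T_ROOM + P_IT * r_th

def total_power(rpm):
    return K_FAN * rpm**3 + A * np.exp(B * cpu_temp(rpm))

res = minimize_scalar(total_power, bounds=(1500.0, 9000.0), method="bounded")
rpm = res.x
print(f"optimal fan speed ~ {rpm:.0f} rpm, CPU at {cpu_temp(rpm):.1f} C, "
      f"fan + leakage = {total_power(rpm):.1f} W")
```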