28 results for dependency of attributes

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The implementation of a charging policy for heavy goods vehicles in European Union (EU) member countries has been imposed to reflect the costs of construction and maintenance of infrastructure as well as externalities such as congestion, accidents and environmental impact. In this context, EU countries approved the Eurovignette directive (1999/62/EC) and its amending directive (2006/38/EC), which established a legal framework to regulate the system of tolls. Even if that regulation seeks to increase the efficiency of freight transport, it will trigger direct and indirect effects on Spain’s regional economies by increasing transport costs. This paper presents the development of a multiregional Input-Output (MRIO) methodology with elastic trade coefficients to predict interregional trade, using transport attributes integrated in multinomial logit models. This method is highly useful for the ex-ante evaluation of transport policies because it incorporates road freight transport cost sensitivity and determines the regional distributive and substitution economic effects in a country like Spain, characterized by socio-demographic and economic attributes that differ region by region. It thus becomes possible to determine cost-effective strategies under different policy scenarios. The MRIO model is then used to determine the impact on the employment rate of imposing a charge in the Madrid-Sevilla corridor in Spain. This methodology is important for measuring the impact on the employment rate, since it is one of the main macroeconomic indicators of Spain’s regional and national economic situation. A previous research project (DESTINO) used an MRIO method to estimate the employment impacts of a road pricing policy across Spanish regions, considering a fuel tax charge (€/liter) applied to the entire shortest-cost-path network for freight transport. It found that the variation in employment is expected to be substantial for some regions and negligible for others; in that Spanish case study, regional employment showed reductions ranging from 16.1% (Rioja) to 1.4% (Madrid region). This variation range seems to be related either to the intensity of freight transport in each region or to the dependency of regions on transport-intensive economic sectors. In fact, regions with freight-transport-intensive sectors will lose more jobs, while regions with a predominantly service economy undergo a fairly insignificant loss of employment. This paper focuses on evaluating a freight transport vehicle-kilometer charge (€/km) in a non-tolled motorway corridor (A-4) between Madrid and Sevilla (517 km). The consequences of implementing the road pricing policy show that the employment reductions are not as large as those stated in the previous research, because this corridor does not affect the whole freight transport system of Spain.
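To make the elastic-trade-coefficient idea concrete, the sketch below shows how origin shares for a destination region could be recomputed from generalized transport costs through a multinomial logit (softmax) form; the regions, cost figures and the sensitivity parameter beta are invented for illustration and are not taken from the paper's MRIO model.

```python
# Illustrative sketch (not the paper's actual model): elastic trade coefficients
# from a multinomial logit over generalized transport costs. All numbers are
# made up for demonstration.
import numpy as np

def trade_coefficients(costs, beta=0.05):
    """Share of goods a destination sources from each origin (rows: destinations,
    columns: origins). Lower generalized cost -> higher share (softmax of -beta*cost)."""
    utility = -beta * np.asarray(costs, dtype=float)
    exp_u = np.exp(utility - utility.max(axis=1, keepdims=True))  # numerical stability
    return exp_u / exp_u.sum(axis=1, keepdims=True)

# Hypothetical costs (EUR/tonne) from 3 origin regions to 2 destination regions,
# before and after a km-charge is added on one corridor served by origin 1.
base_costs = np.array([[40.0, 55.0, 70.0],
                       [60.0, 45.0, 50.0]])
charged_costs = base_costs + np.array([[5.0, 0.0, 0.0],
                                       [5.0, 0.0, 0.0]])

print(trade_coefficients(base_costs))
print(trade_coefficients(charged_costs))  # shares shift away from the charged origin
```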

Relevance:

90.00%

Publisher:

Abstract:

The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field because of the rich information it carries about human tasks. This study describes a new approach based on: (i) building reference models from a set of time series through the analysis of the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; in order to deal with events, each event is characterized by a set of attributes; and (ii) applying the discrete wavelet transform to the EEG data in order to extract temporal information in the form of changes in the frequency domain over time, that is, to extract non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
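As a rough illustration of the wavelet step, the following hedged sketch extracts sub-band statistics from a single EEG epoch using the PyWavelets library; the wavelet family, decomposition level, summary statistics and the synthetic epoch are illustrative assumptions, not the settings of the study.

```python
# Hedged sketch: DWT-based features from one EEG epoch (PyWavelets assumed).
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose a 1-D epoch and summarise each sub-band with simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for band in coeffs:  # approximation + detail coefficients per level
        features.extend([np.mean(band), np.std(band), np.sum(band ** 2)])  # incl. energy
    return np.array(features)

rng = np.random.default_rng(0)
epoch = rng.standard_normal(256)      # stand-in for one EEG epoch (e.g. 1 s at 256 Hz)
print(dwt_features(epoch).shape)      # fixed-length feature vector per epoch
```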

Relevance:

90.00%

Publisher:

Abstract:

Once the advantages of object-based classification over pixel-based classification are accepted, the need arises for simple and affordable methods to define and characterize the objects to be classified. This paper presents a new methodology for the identification and characterization of objects at different scales, through the integration of the spectral information provided by the multispectral image and the textural information from the corresponding panchromatic image. In this way, a set of objects is defined that yields a simplified representation of the information contained in the two source images. These objects can be characterized by different attributes that allow discriminating between different spectral and textural patterns. This methodology facilitates information processing from both a conceptual and a computational point of view. Thus, the attribute vectors defined can be used directly as training patterns for certain classifiers, such as artificial neural networks. Growing Cell Structures have been used to classify the merged information.
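The following sketch illustrates, under stated assumptions, how per-object attribute vectors combining spectral means and a simple texture proxy could be assembled; the synthetic images, the local standard deviation texture measure and the segment map are placeholders, and the Growing Cell Structures classifier itself is not implemented here.

```python
# Hedged sketch: per-object attribute vectors from multispectral bands plus a
# texture measure derived from the panchromatic band. Shapes and labels are
# illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
multispectral = rng.random((4, 100, 100))                        # 4 spectral bands
panchromatic = rng.random((100, 100))
texture = ndimage.generic_filter(panchromatic, np.std, size=5)   # local std as texture proxy
labels = np.repeat(np.arange(1, 5), 2500).reshape(100, 100)      # stand-in object/segment map
ids = np.unique(labels)

# One attribute vector per object: mean of each spectral band plus mean texture.
attributes = np.column_stack(
    [ndimage.mean(band, labels=labels, index=ids) for band in multispectral]
    + [ndimage.mean(texture, labels=labels, index=ids)]
)
print(attributes.shape)  # (n_objects, n_bands + 1) -> usable as classifier training vectors
```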

Relevance:

90.00%

Publisher:

Abstract:

Quantitative descriptive analysis (QDA) is used to describe the nature and intensity of sensory properties from a single evaluation of a product, whereas temporal dominance of sensations (TDS) is primarily used to identify dominant sensory properties over time. Previous studies with TDS have focused on model systems, but this is the first study to use a sequential approach, i.e. QDA then TDS, in measuring the sensory properties of a commercial product category, using the same set of trained assessors (n = 11). The main objectives of this study were to: (1) investigate the benefits of using a sequential approach of QDA and TDS and (2) explore the impact of sample composition on taste and flavour perceptions in blackcurrant squashes. The present study proposes an alternative way of determining the choice of attributes for TDS measurement, based on data obtained from previous QDA studies where available. Both methods indicated that the flavour profile was primarily influenced by the level of dilution and the complexity of the sample composition combined with the blackcurrant juice content. In addition, artificial sweeteners were found to modify the quality of sweetness and could also contribute to bitter notes. Using QDA and TDS in tandem was shown to be more beneficial than using either on its own, enabling a more complete sensory profile of the products.
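For readers unfamiliar with TDS data, the hedged sketch below shows one common way dominance rates can be computed from long-format panel records (the proportion of assessors citing an attribute as dominant at each time point); the column names and example records are illustrative, not the study's data.

```python
# Hedged sketch: TDS dominance rates from long-format panel records.
import pandas as pd

records = pd.DataFrame({
    "assessor": [1, 2, 3, 1, 2, 3],
    "time_s":   [5, 5, 5, 10, 10, 10],
    "dominant": ["sweet", "sweet", "blackcurrant", "bitter", "sweet", "bitter"],
})
n_assessors = records["assessor"].nunique()

dominance = (records.groupby(["time_s", "dominant"]).size()
             .div(n_assessors)
             .unstack(fill_value=0.0))
print(dominance)  # rows: time points, columns: attributes, values: dominance rates
```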

Relevance:

90.00%

Publisher:

Abstract:

The singularities in Dromo are characterized in this paper, both from an analytical and a numerical perspective. When the angular momentum vanishes, Dromo may encounter a singularity in the evolution equations. The cancellation of the angular momentum occurs in very specific situations and may be caused by the action of strong perturbations. The gravitational attraction of a perturbing planet may lead to rapid changes in the angular momentum of the particle. In practice, this situation may be encountered during deep planetocentric flybys. The performance of Dromo is evaluated in different scenarios. First, Dromo is validated for integrating the orbit of Near Earth Asteroids. Resulting errors are of the order of the diameter of the asteroid. Second, a set of theoretical flybys are designed for analyzing the performance of the formulation in the vicinity of the singularity. New sets of Dromo variables are proposed in order to minimize the dependency of Dromo on the angular momentum. A slower time scale is introduced, leading to a more stable description of the flyby phase. Improvements in the overall performance of the algorithm are observed when integrating orbits close to the singularity.
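As a minimal illustration of the quantity driving the singularity, the sketch below evaluates the specific angular momentum h = r × v along sampled states and flags near-rectilinear conditions; it is not the Dromo formulation itself, and the states and threshold are invented.

```python
# Hedged sketch: monitoring specific angular momentum to flag states near h -> 0.
import numpy as np

def angular_momentum(r, v):
    """Specific angular momentum h = r x v for arrays of position/velocity rows."""
    return np.cross(r, v)

positions = np.array([[7000.0, 0.0, 0.0], [100.0, 5.0, 0.0]])   # km (second state nearly radial)
velocities = np.array([[0.0, 7.5, 0.0], [10.0, 0.01, 0.0]])     # km/s
h_norm = np.linalg.norm(angular_momentum(positions, velocities), axis=1)
print(h_norm)
print(h_norm < 50.0)  # flag quasi-rectilinear states where angular-momentum-based variables degrade
```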

Relevance:

90.00%

Publisher:

Abstract:

With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this scale, it is quite evident that people rely on these applications on a daily basis, for tasks ranging from simple ones like keeping track of the weather to rather complex ones like managing one’s bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the sheer number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work on developing different testing methods for Android applications in the computer science community. Android attracts researchers because of its open source nature: the research process is more streamlined when the code of both the applications and the platform is readily available for analysis. There has therefore been a great deal of research in testing and static analysis of Android applications, much of it focused on input test generation. As a result, several testing tools are now available that focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use for generating test cases. But there is still very little work on comparing these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. This was done by running the tools on a total of 60 real-world Android applications. The results showed that, although effective, the strategies used by these tools face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented direction. Attributes refer to the tasks that can be completed using the Android platform; an attribute can be anything ranging from a basic system call for receiving an SMS to a more complex task like sending the user from the current application to another one. The idea is to develop a benchmark for Android testing tools based on their performance with respect to these attributes, which allows the tools to be compared attribute by attribute. For example, if there is an application that plays an audio file, will the testing tool be able to generate a test input that triggers the playback of that audio file?
By using multiple applications exercising different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, 9 attributes covering basic kinds of tasks were targeted for the assessment of three testing tools; later, this can be extended to many more attributes and testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all written specifically for this research. The reason for doing that is to analyze only the specific attribute each application focuses on, in isolation, and not allow the tool to get bottlenecked by something trivial that is not the attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are: • A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications. • A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study. • The analysis of the results of the study carried out, and a comparison of the performance of the selected tools.
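A minimal sketch of the attribute-oriented benchmark idea is given below: each tool is launched against each single-attribute application and the run is checked for a marker indicating that the attribute's code path was exercised. The tool commands, APK names and marker mechanism are hypothetical placeholders, not the tools or applications used in the thesis.

```python
# Hedged sketch of an attribute-oriented benchmark driver. All commands, package
# names and the success marker are hypothetical placeholders.
import subprocess

TOOLS = {
    "tool_a": "run_tool_a.sh {apk}",      # placeholder launch commands
    "tool_b": "run_tool_b.sh {apk}",
    "tool_c": "run_tool_c.sh {apk}",
}
ATTRIBUTE_APPS = {
    "receive_sms": "sms_app.apk",         # one purpose-built app per attribute
    "play_audio": "audio_app.apk",
    "open_other_app": "intent_app.apk",
}

def attribute_reached(log_path="attribute.log"):
    """Placeholder check: the app writes a marker line when the attribute code runs."""
    try:
        return "ATTRIBUTE_EXECUTED" in open(log_path).read()
    except FileNotFoundError:
        return False

results = {}
for tool, cmd in TOOLS.items():
    for attribute, apk in ATTRIBUTE_APPS.items():
        subprocess.run(cmd.format(apk=apk), shell=True, check=False)
        results[(tool, attribute)] = attribute_reached()
print(results)  # a (tool, attribute) -> reached? matrix for comparison
```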

Relevance:

90.00%

Publisher:

Abstract:

Despite the benefits of exchanging experiences among planners at the global scale, the strong context dependency of urban planning creates, in many instances, significant difficulties in extrapolating experiences from one geographical context to another. If progress is to be achieved in international cooperation programmes, differences and commonalities should be assessed before launching any academic initiative. In that respect, this paper carries out a brief foresight exercise on how future trends and challenges that may affect the urban planning field should be taken into consideration in two different contexts: Spain and Latin America. A segmentation matrix is used to expose and discuss the different effects of future trends on both contexts. Some tentative conclusions are drawn for the development of international educational programmes.

Relevance:

80.00%

Publisher:

Abstract:

Deep level defects in n-type unintentionally doped a-plane MgxZn1−xO, grown by molecular beam epitaxy on r-plane sapphire, were fully characterized using deep level optical spectroscopy (DLOS) and related methods. Four compositions of MgxZn1−xO were examined, with x = 0.31, 0.44, 0.52, and 0.56, together with a control ZnO sample. DLOS measurements revealed the presence of five deep levels in each Mg-containing sample, with energy levels of Ec − 1.4 eV, 2.1 eV, and 2.6 eV, and Ev + 0.3 eV and 0.6 eV. For all Mg compositions, the activation energies of the first three states were constant with respect to the conduction band edge, whereas the latter two revealed constant activation energies with respect to the valence band edge. In contrast to the ternary materials, only three levels, at Ec − 2.1 eV, Ev + 0.3 eV, and 0.6 eV, were observed for the ZnO control sample in this systematically grown series of samples. Substantially higher concentrations of the deep levels at Ev + 0.3 eV and Ec − 2.1 eV were observed in ZnO compared with the Mg-alloyed samples. Moreover, the trap concentrations of the Ev + 0.3 eV and 0.6 eV levels are generally invariant with Mg content, whereas the concentrations of the Ec − 1.4 eV and Ec − 2.6 eV levels show at least an order of magnitude dependency on Mg content in the alloyed samples.

Relevance:

80.00%

Publisher:

Abstract:

The influence of anemometer rotor shape parameters, such as the cups’ front area or their center rotation radius, on the anemometer’s performance was analyzed. This analysis was based on calibrations performed on two different anemometers (one with a magnet-based output signal and the other with an opto-electronic output signal), tested with 21 different rotors. The results were compared with those obtained from classical analytical models. The results clearly showed a linear dependency of both calibration constants, the slope and the offset, on the cups’ center rotation radius, the influence of the front area of the cups also being observed. The analytical model of Kondo et al. proved to be accurate when based on precise data on the aerodynamic behavior of a rotor’s cup.
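Expressed as a hedged sketch, the reported trend can be written as a transfer function V = A·f + B whose slope and offset grow linearly with the cups' center rotation radius; the numerical coefficients below are invented placeholders, not the fitted values from the 21-rotor calibrations.

```python
# Hedged sketch: calibration slope A and offset B modelled as linear in the cups'
# center rotation radius R_rc. Coefficients are illustrative placeholders.
def calibration_constants(r_rc, a1=0.9, a0=0.05, b1=0.5, b0=0.1):
    """Return (A, B) for a rotor with center rotation radius r_rc (in metres)."""
    return a1 * r_rc + a0, b1 * r_rc + b0

def wind_speed(freq_hz, r_rc):
    """Standard cup anemometer transfer function V = A*f + B (V in m/s, f in Hz)."""
    A, B = calibration_constants(r_rc)
    return A * freq_hz + B

print(wind_speed(10.0, 0.06))  # e.g. a rotor with a 60 mm center rotation radius
```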

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new methodology, simple and affordable, for the definition and characterization of objects at different scales in high-spatial-resolution images. The objects have been generated by integrating texturally and spectrally homogeneous segments. The former have been obtained from the segmentation of the wavelet coefficients of the panchromatic image; the multi-scale character of this transform yields texturally homogeneous segments of different sizes at each scale. The spectrally homogeneous segments have been obtained by segmenting the classified corresponding multispectral image. In this way, a set of objects has been defined, characterized by different attributes that give the objects a semantic meaning and allow the similarities and differences between them to be determined. To demonstrate the capabilities of the proposed methodology, different experiments of unsupervised classification of a QuickBird image have been carried out, using different subsets of attributes and a 1-D ascendant (agglomerative) hierarchical classifier. The results obtained show the capability of the proposed methodology to separate semantic objects at different scales, as well as its advantages over pixel-based image interpretation.
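The sketch below illustrates the kind of multi-scale texture information a 2-D wavelet decomposition of the panchromatic image can provide, thresholding detail energy into coarse texture masks at each scale; the wavelet, level, threshold and synthetic image are assumptions for illustration, and the full segment-merging step is not reproduced.

```python
# Hedged sketch: multi-scale texture masks from the wavelet decomposition of a
# panchromatic image (PyWavelets assumed). Parameters are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(2)
pan = rng.random((256, 256))

coeffs = pywt.wavedec2(pan, "haar", level=3)
for scale, (cH, cV, cD) in enumerate(coeffs[1:], start=1):   # coarsest to finest detail levels
    detail_energy = cH**2 + cV**2 + cD**2                    # local texture strength at this scale
    textured = detail_energy > np.percentile(detail_energy, 90)
    print(scale, textured.shape, textured.mean())            # fraction of textured cells per scale
```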

Relevance:

80.00%

Publisher:

Abstract:

European energy strategies have sought for years to reduce Europe's dependency on fossil fuels. Underlying this effort are geopolitical, economic and environmental reasons, as well as the reality that oil reserves will run dry some day. Renewable energies have become a cornerstone of this strategy because their huge potential has emerged after years of uncertainty. One of the best developed renewable sources, and one of the closest to commercial maturity, is solar-thermal energy. In this paper, the current state of this technology is described, as well as the developments that may be expected in the short and mid term, including the thermoelectric solar megaproject DESERTEC, a German proposal to ensure energy resources for the major areas of the EU-MENA countries. The reader will acquire a picture of the current state of the market, of the technical challenges already overcome, and of those remaining.

Relevance:

80.00%

Publisher:

Abstract:

This paper introduces a semantic language developed to be used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate(subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
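As an illustration of the triple-based representation, the hedged sketch below encodes an absolute, fuzzy-interval quantitative assessment as RDF triples using the rdflib library; the namespace, predicate names and example sentence are invented and do not reproduce the ETAP-3 semantic language.

```python
# Hedged sketch: an RDF-type triple representation of a quantitative attribute
# assessment (rdflib assumed). Predicate names and the example are illustrative.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/sem/")
g = Graph()

# "The box weighs approximately 20 kg": an absolute assessment of a parametric
# attribute (weight), with fuzzy interval boundaries.
box = EX.box1
g.add((box, EX.hasAttribute, EX.weight))
g.add((box, EX.attributeValue, Literal(20)))
g.add((box, EX.unit, Literal("kg")))
g.add((box, EX.assessmentType, Literal("absolute, fuzzy-interval")))

print(g.serialize(format="turtle"))
```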

Relevance:

80.00%

Publisher:

Abstract:

This thesis is developed in the framework of the distributed execution of mobile services and contributes to the definition and development of the concept of the prosumer user. The prosumer user is characterized by using his or her mobile phone to create, provide and execute services. This new user model contributes to the advancement of the information society, as the prosumer is transformed from a producer of content into a producer of services (the latter consisting of content plus the logic to access, process and represent it). The overall goal of this thesis is to provide a model for the creation, distribution and execution of services in the mobile environment that enables non-programmers (prosumer users) who are experts in a given domain to create and execute their own applications and services. For this purpose, I define, develop and implement methodologies, processes, algorithms and mechanisms, adaptable to specific domains, to build environments for the distributed execution of mobile services for prosumer users. The provision of creation tools adapted to non-expert users is a current trend being developed in several research works; however, no service development methodology has yet been proposed that involves the prosumer user in the process of design, development, implementation and validation of services. This thesis studies the most innovative methodologies and technologies related to co-creation and relies on this analysis to define and validate a methodology that enables the user to be responsible for the creation of final services. Since mobile prosumer environments are a particular case of environments for the distributed execution of mobile services, this thesis investigates techniques for service adaptation, distribution, coordination and resource access, identifying as requirements the challenges of such environments and the characteristics of the participating users. I contribute to service adaptation by defining a variability model that supports the interdependency between users' personalization decisions, incorporating guidance and error detection mechanisms. Service distribution is implemented using SPQR-tree decomposition techniques, quantifying the impact of splitting any service across different domains. Considering the communication plane for the coordination of distributed service execution, I have identified several problems, such as link losses, connections, disconnections and the discovery of participants, which I solve using dissemination techniques based on publish-subscribe models and gossip algorithms. To achieve flexible execution of distributed services in mobile environments, I support adaptation to changes in resource availability, providing a communication infrastructure for uniform and efficient access to resources. Experimental validations have been conducted to assess the feasibility of the proposed solutions, defining relevant application scenarios (the new intelligent universe, service prosumerization in hospital environments, and emergencies in the Web of Things).
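To illustrate one of the coordination techniques mentioned (gossip-based dissemination for informing and discovering participants), the hedged sketch below pushes a message through a set of mobile nodes by repeated random peer selection; the topology, fanout and round count are illustrative, not the thesis implementation.

```python
# Hedged sketch of push-based gossip dissemination among mobile nodes.
import random

def gossip(nodes, source, fanout=2, rounds=4, seed=0):
    """Return the set of nodes reached after push-gossiping a message from `source`."""
    rng = random.Random(seed)
    informed = {source}
    for _ in range(rounds):
        for node in list(informed):
            peers = rng.sample([n for n in nodes if n != node],
                               k=min(fanout, len(nodes) - 1))
            informed.update(peers)       # push the message to randomly chosen peers
    return informed

nodes = [f"device-{i}" for i in range(20)]
print(len(gossip(nodes, "device-0")))    # most nodes are typically reached after a few rounds
```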

Relevance:

80.00%

Publisher:

Abstract:

This letter presents a temperature-sensing technique based on the temperature dependency of MOSFET leakage currents. To mitigate the effects of process variation, the ratio of two different leakage current measurements is calculated. Simulations show that this ratio is robust to process spread. The resulting sensor is quite small (0.0016 mm², including an analog-to-digital conversion) and very energy efficient, consuming less than 640 pJ per conversion. After a two-point calibration, the error over a range of 40 °C to 110 °C is less than 1.5 °C, which makes the technique suitable for thermal management applications.
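The sketch below illustrates, under a textbook subthreshold-leakage model, why the ratio of two leakage measurements can cancel a process-dependent prefactor while remaining temperature dependent; the bias values, ideality factor and prefactor spread are invented, and the actual measurement scheme of the letter may differ.

```python
# Hedged sketch: ratio of two leakage currents under a simple subthreshold model
# I = I0 * exp(Vgs / (n * kT/q)); the process-dependent prefactor I0 cancels.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def leakage(vgs, temp_k, i0, n=1.3):
    vt = K_B * temp_k / Q_E                 # thermal voltage kT/q
    return i0 * math.exp(vgs / (n * vt))

for temp_c in (40.0, 110.0):
    t = temp_c + 273.15
    for i0 in (1e-12, 3e-12):               # process spread in the prefactor
        ratio = leakage(0.10, t, i0) / leakage(0.05, t, i0)
        print(temp_c, i0, round(ratio, 3))  # ratio tracks temperature, not I0
```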

Relevance:

80.00%

Publisher:

Abstract:

The use of a bare conductive tape electrically floating in LEO as an effective e-beam source that produces artificial auroras, and is free of the problems that have marred standard beams, is considered. Ambient ions impacting the tape with keV energies over most of its length liberate secondary electrons, which race down the magnetic field and excite neutrals in the E-layer, resulting in auroral emissions. The tether would operate at night-time with both the power supply and the plasma contactor switched off; power and contactor would be switched on during daytime for reboost. The optimal tape thickness yielding a minimum mass for an autonomous system is determined; the alternative use of an electric thruster for daytime reboost, depending on mission duration, is discussed. Measurements of emission brightness from the spacecraft could allow determination of the vertical profile of the neutral density in the critical E-layer; the flux and energy in the beam, which vary along the tether, allow imaging of line-of-sight integrated emissions that mix beam effects with the altitude-dependent neutral density and lead to a brightness peak in the beam footprint at the E-layer. Difficulties in the tomographic inversion to determine the density profile result from beam broadening due to elastic collisions, which flattens the peak, and from the highly nonlinear functional dependency of the line-of-sight brightness. Some dynamical issues are discussed.