942 results for Quality models


Relevance: 30.00%

Abstract:

This paper describes a novel method to enhance the current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows the automatic calibration of measurement models and an enhanced detection of non-ideal situations, increasing the integrity of the surveillance products. It is based on the definition of a set of observables extracted from the surveillance processing chain and a rule-based expert system aimed at adapting the data processing methods.
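The abstract does not detail the rule base itself; as a rough illustration of how observables from the processing chain could drive a rule-based layer, the following Python sketch checks a few hypothetical observables (names and thresholds are invented here, not taken from the paper) and suggests calibration or integrity actions.

```python
# Minimal sketch of a rule layer over surveillance-chain observables.
# Observable names and thresholds are hypothetical placeholders.
def check_surveillance_chain(obs):
    """obs: dict of observables extracted from the processing chain."""
    actions = []
    if obs["plot_track_association_rate"] < 0.95:
        actions.append("recalibrate sensor measurement model")
    if abs(obs["mean_position_residual_m"]) > 7.5:
        actions.append("switch to robust bias-estimation mode")
    if obs["duplicate_track_rate"] > 0.02:
        actions.append("flag non-ideal situation to supervisor")
    return actions or ["nominal"]

print(check_surveillance_chain({
    "plot_track_association_rate": 0.91,
    "mean_position_residual_m": 3.0,
    "duplicate_track_rate": 0.01,
}))
```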

Relevance: 30.00%

Abstract:

Nowadays, a wide offer of mobile augmented reality (mAR) applications is available on the market, and the user base of mobile AR-capable devices (smartphones) is growing rapidly. Nevertheless, as happens in other mobile segments, the business models needed to turn mAR into value are not clearly defined yet. In this paper, we focus on sketching the big picture of the commercial offer of mAR applications, in order to inspire a subsequent analysis of the business models that may successfully support the evolution of mAR. We have gathered more than 400 mAR applications from the Android Market and analyzed the offer as a whole, taking into account technology aspects, pricing schemes and user-adoption factors. The results show, for example, that application providers do not expect to generate revenue from direct downloads, even though they are producing high-quality applications that are well rated by users.

Relevance: 30.00%

Abstract:

The province of Salta is located in the northwest of Argentina, on the border with Bolivia, Chile and Paraguay. Its capital is the city of Salta, which concentrates half of the inhabitants of the province and has grown to 600,000 inhabitants from the small but active Spanish town founded in 1583. The city is crossed by the Arenales River, which descends from the nearby mountains to the north and is both a source of water and the outlet of the sewers. With the city's present growth, however, the river has become a focus of infection and of remarkable unhealthiness. It is necessary to undertake a plan for the recovery of the river, aimed at achieving the well-being of the community and improving its quality of life. The fundamental idea of the plan is to achieve an ordering of the river basin and an integral management of the channel and its surroundings, including its cleaning out. The improvement of water quality, the healthiness of the surroundings and the improvement of the environment must go hand in hand with the development of sport, recreation and tourism activities, the establishment of breeding grounds, kitchen gardens and micro-enterprises with clean production, and other actions that make the river beneficial to society, this being a basic factor for its care and sustainable use. The present pollution is organic, chemical, industrial and domestic, caused by the dumping of refuse and sewage effluents, and it affects not only the flora and small fauna, destroying the biodiversity, but also the health of the people living on the river's banks. Besides the hydric and environmental cleanup and the prevention of floods, the plan must consider the planning of aggregate extraction, the infrastructure and bank-consolidation works, and the arrangement of the whole river basin. Public intervention at the national, provincial and local levels, as well as private intervention, must also be considered. The model includes a sub-model for the selection of the entity that would be the optimal instrument to reach the proposed objectives while responding to the social, environmental and economic requirements. To this end the authors have used multi-criteria decision methods to rate and select alternatives and to schedule their implementation. The model contemplates short-, medium- and long-term actions. Together they form a Pareto-optimal alternative that secures the orderly, integral and suitable management of the Arenales River basin, focusing on its passage through the city of Salta.
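The multi-criteria selection of the managing entity could, in its simplest form, look like the weighted-sum sketch below; the candidate entities, criterion scores and weights are hypothetical placeholders, not values from the study.

```python
# Toy weighted-sum multi-criteria ranking of candidate managing entities.
# All names, scores (0-10) and weights are illustrative only.
criteria_weights = {"social": 0.35, "environmental": 0.40, "economic": 0.25}

alternatives = {
    "river-basin authority":     {"social": 8, "environmental": 9, "economic": 5},
    "municipal agency":          {"social": 7, "environmental": 6, "economic": 7},
    "public-private consortium": {"social": 6, "environmental": 7, "economic": 9},
}

def score(alt):
    # Weighted sum of the criterion scores of one alternative.
    return sum(criteria_weights[c] * v for c, v in alt.items())

ranked = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print(ranked)
```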

Relevance: 30.00%

Abstract:

Customer evolution and changes in consumer behaviour mean that the quality of the interface between marketing and sales may represent a true competitive advantage for the firm. Building on multidimensional theoretical and empirical models developed in Europe and on social network analysis, we study the organizational interface between the marketing and sales departments of a multinational, high-growth company with operations in Argentina, Uruguay and Paraguay. Both attitudinal and social network measures of information exchange are used to operationalize the nature and quality of the interface and its impact on performance. The results show a positive relationship of formalization, joint planning, teamwork, trust and information transfer with interface quality, as well as a positive relationship between interface quality and business performance. We conclude that the efficient design and organizational management of the exchange network are essential for the successful performance of consumer-goods companies that seek to develop distinctive capabilities to adapt to markets undergoing vertiginous change.

Relevance: 30.00%

Abstract:

A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major challenges still to be solved in the multimedia environment. Video quality has a very strong impact on the end user's (consumer's) perception of services based on the provision of multimedia content, and is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system; the most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that, on the contrary, opt for an engineering approach in which the quality computation is based on extracting intrinsic parameters of the image and comparing them. Despite the advances made in this field in recent years, research on video quality metrics, whether with the full reference available (so-called full-reference models), with only part of it (reduced-reference models) or even without it (no-reference models), still has ample room for improvement and objectives to reach. Among them, the measurement of high-definition signals, especially the very high quality ones used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist for it. This doctoral thesis presents a full-reference quality measurement model, called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic features of the image: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity definition of the Haralick texture statistics. The novelty of PARMENIA lies in the use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been linked to remote sensing and object segmentation. In addition, the formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical ones based on the perceptibility of the error generated by the degradation of the signal associated with compression. PARMENIA shows a very high correlation with the MOS ratings obtained from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sets of sequences, so that the results provided are of the highest possible quality and rigour.

The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining their subjective ratings through subjective quality tests (based on International Telecommunication Union Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimized and their high correlation with perception confirmed, a second review was carried out on sequences from the HDTV test dataset 1 of the Video Quality Experts Group (VQEG), with the results obtained showing its clear advantages.

Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions together with increasing quality requirements (e.g. high definition and better image quality) calls for a redefinition of quality measurement models. Given the growing interest in multimedia services delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality, to sum up the state of the art. Then, this doctoral thesis describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information that provides a normalized metric, called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), with a highly correlated MOS score. The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric performance is excellent, and it improves the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits the variation of structural information to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD material. All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed; the Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
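The thesis abstract does not publish the exact ratio formulas or weights, so the following Python sketch only illustrates the general construction: four intrinsic-feature ratios (morphological gradient, significant points from local contrast, Haralick contrast and Haralick homogeneity) computed on a reference and a degraded grey-level frame and pooled with illustrative equal weights.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, uniform_filter
from skimage.feature import graycomatrix, graycoprops

def beucher_gradient(img):
    # Morphological (Beucher) gradient: grey dilation minus grey erosion.
    return grey_dilation(img, size=(3, 3)) - grey_erosion(img, size=(3, 3))

def haralick(img, prop):
    # Single-distance, single-angle GLCM feature on an 8-bit frame.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, prop)[0, 0]

def significant_points(img, k=2.0):
    # Visually significant points approximated as pixels of high local contrast.
    x = img.astype(float)
    mu = uniform_filter(x, size=7)
    sigma = np.sqrt(np.maximum(uniform_filter(x * x, size=7) - mu ** 2, 0.0))
    return (sigma > k * sigma.mean()).sum()

def ratio(a, b, eps=1e-9):
    # Bounded similarity ratio in (0, 1]; 1 means identical feature values.
    return (min(a, b) + eps) / (max(a, b) + eps)

def parmenia_like(ref, deg, weights=(0.25, 0.25, 0.25, 0.25)):
    """ref, deg: 8-bit greyscale frames (np.uint8). Returns a score in (0, 1]."""
    ratios = (
        ratio(beucher_gradient(ref).mean(), beucher_gradient(deg).mean()),  # fidelity
        ratio(significant_points(ref), significant_points(deg)),            # visual similarity
        ratio(haralick(ref, "contrast"), haralick(deg, "contrast")),        # sharpness
        ratio(haralick(ref, "homogeneity"), haralick(deg, "homogeneity")),  # complexity
    )
    return float(np.dot(weights, ratios))
```

Frame-level scores would then be averaged over each sequence and correlated with the MOS values (e.g. with scipy.stats.pearsonr) to reproduce the kind of validation described above.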

Relevance: 30.00%

Abstract:

Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm and then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, on the Yeast data set, and on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including a comparison with additional Bayesian network-based approaches, back-propagation for multi-label learning, multi-label k-nearest neighbour, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
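The MBC learning procedure itself (HITON Markov blankets plus subgraph orientation) is not reproduced here; as a minimal, hypothetical stand-in for the multi-dimensional setting and for one of the baselines mentioned (multi-label k-nearest neighbour), the sketch below predicts five class variables jointly from 39 item scores with scikit-learn, using synthetic data of the same shape as the Parkinson's set (488 patients).

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(488, 39))   # synthetic PDQ-39-like item scores (0-4, assumed scale)
Y = rng.integers(1, 4, size=(488, 5))    # synthetic EQ-5D-like dimensions (levels 1-3, assumed)

per_dim, joint = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=5))
    clf.fit(X[train], Y[train])
    pred = clf.predict(X[test])
    per_dim.append((pred == Y[test]).mean(axis=0))      # accuracy per class variable
    joint.append((pred == Y[test]).all(axis=1).mean())   # exact-match (joint) accuracy

print(np.mean(per_dim, axis=0), np.mean(joint))
```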

Relevance: 30.00%

Abstract:

There are many industries that use highly technological solutions to improve quality in all of their products; the steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material subject to the assessment process. This paper focuses on promoting a strategy that considers all defects in an integrated fashion. It does so by managing the uncertainty about the exact position of a defect, due to varying process conditions, by means of Gaussian additive influence functions. The relevance of the approach lies in making consistency and reliability between surface-inspection systems possible. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process. It shows the improvement gained by reducing the 40% of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology facilitates a 50% increase in the accuracy of the estimate of defect survival after the cleaning facility in comparison with the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions, which makes possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
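The exact form of the influence functions used in the paper is not given in the abstract; the sketch below merely illustrates the idea of spreading each detected defect as an additive Gaussian along the coil length, so that downstream routing rules operate on the aggregated profile rather than on point detections (all positions, severities, widths and thresholds are hypothetical).

```python
import numpy as np

def defect_influence(length_m, defects, resolution=0.1):
    """Aggregate Gaussian influence of detected defects along a coil.

    defects: list of (position_m, severity, sigma_m); sigma encodes the
    positional uncertainty introduced by the process conditions.
    """
    x = np.arange(0.0, length_m, resolution)
    influence = np.zeros_like(x)
    for pos, severity, sigma in defects:
        influence += severity * np.exp(-0.5 * ((x - pos) / sigma) ** 2)
    return x, influence

# Illustrative defects on a 1200 m coil: two nearby detections reinforce each other.
x, infl = defect_influence(1200.0, [(310.0, 1.0, 4.0), (317.0, 0.6, 6.0), (905.0, 0.8, 3.0)])

# A simple routing rule with a hypothetical threshold on the aggregated profile.
print("downgrade" if infl.max() > 1.2 else "accept")
```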

Relevance: 30.00%

Abstract:

Many cities in Europe have difficulty meeting the air quality standards set by European legislation, most particularly the annual mean limit value for NO2. Road transport is often the main source of air pollution in urban areas and therefore there is an increasing need to estimate current and future traffic emissions as accurately as possible. As a consequence, a number of specific emission models and emission factor databases have been developed recently. They present important methodological differences and may produce largely diverging emission figures, and thus may lead to alternative policy recommendations. This study compares two approaches to estimating road traffic emissions in Madrid (Spain): the COmputer Programme to calculate Emissions from Road Transport (COPERT4 v.8.1) and the Handbook Emission Factors for Road Transport (HBEFA v.3.1), representative of the ‘average-speed’ and ‘traffic situation’ model types respectively. The input information (e.g. fleet composition, vehicle kilometres travelled, traffic intensity, road type, etc.) was provided by the traffic model developed by the Madrid City Council along with observations from field campaigns. Hourly emissions were computed for nearly 15,000 road segments distributed in 9 management areas covering the city of Madrid and its surroundings. Total annual NOX emissions predicted by HBEFA were 21% higher than those of COPERT. The discrepancies for NO2 were lower (13%), since the resulting average NO2/NOX ratios are lower for HBEFA. The largest differences are related to diesel vehicle emissions under “stop & go” traffic conditions, very common on the distributor/secondary roads of the Madrid metropolitan area. In order to understand the representativeness of these results, the resulting emissions were integrated into an urban-scale inventory used to drive mesoscale air quality simulations with the Community Multiscale Air Quality (CMAQ) modelling system (1 km2 resolution). Modelled NO2 concentrations were compared with observations through a series of statistics. Although there are no remarkable differences between the two model runs, the results suggest that HBEFA may overestimate traffic emissions. However, the results are strongly influenced by methodological issues and limitations of the traffic model. This study was useful to provide a first alternative estimate to the official emission inventory in Madrid and to identify the main features of the traffic model that should be improved to support the application of an emission system based on “real world” emission factors.
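As a reminder of how an ‘average-speed’ model of the COPERT type combines traffic data with speed-dependent emission factors, the sketch below computes hourly NOx for two hypothetical road segments; the polynomial coefficients and traffic figures are illustrative only and are not actual COPERT or HBEFA values.

```python
def ef_nox_g_per_km(v_kmh, a=1.6, b=-0.03, c=0.0003):
    # Hypothetical average-speed NOx emission factor curve (g/km per vehicle).
    return max(a + b * v_kmh + c * v_kmh ** 2, 0.05)

# (vehicles per hour, segment length in km, mean travel speed in km/h) - illustrative values
segments = [
    (1800, 0.45, 22.0),  # congested distributor road ("stop & go" conditions)
    (3200, 1.10, 74.0),  # urban motorway
]

hourly_nox_g = sum(q * length * ef_nox_g_per_km(v) for q, length, v in segments)
print(f"hourly NOx emissions: {hourly_nox_g / 1000:.2f} kg")
```

A ‘traffic situation’ model such as HBEFA would instead look up factors by road type and level-of-service class rather than by mean speed, which is precisely where the two estimates diverge for congested diesel traffic.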

Relevance: 30.00%

Abstract:

The present research focuses on the application of hyperspectral images to the supervision of quality deterioration in ready-to-use leafy spinach (Spinacia oleracea) during storage. Two sets of samples of packed leafy spinach were considered: (a) a first set of samples was stored at 20 °C (E-20) in order to accelerate the degradation process, and these samples were measured on the day of reception in the laboratory and after 2 days of storage; (b) a second set of samples was kept at 10 °C (E-10), and the measurements were taken throughout storage, beginning on the day of reception and repeating the acquisition of images 3, 6 and 9 days later. Twenty leaves per test were analyzed. Hyperspectral images were acquired with a push-broom CCD camera equipped with a VNIR spectrograph (400–1000 nm). The calibration set of spectra was extracted from the E-20 samples and contained three classes of degradation: class A (optimal quality), class B, and class C (maximum deterioration). Reference average spectra were defined for each class. Three models computed on the calibration set, of decreasing complexity, were compared according to their ability to segregate leaves at different quality stages (fresh, with incipient and non-visible symptoms of degradation, and degraded): the spectral angle mapper distance (SAM), partial least squares discriminant analysis models (PLS-DA), and a non-linear index (Leafy Vegetable Evolution, LEVE) combining five wavelengths chosen from among those previously selected by the CovSel procedure. In sets E-10 and E-20, artificial images of the membership degree, according to the distance of each pixel to the reference classes, were computed by assigning each pixel to the closest reference class. The three methods were able to show the degradation of the leaves with storage time.
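Of the three approaches, the spectral angle mapper is simple enough to sketch directly: each pixel spectrum is assigned to the reference class (A, B or C) whose mean spectrum forms the smallest angle with it. The code below is a generic NumPy implementation under that assumption; the band count and the reference spectra would come from the E-20 calibration set described above.

```python
import numpy as np

def sam_classify(cube, references):
    """Assign each pixel of a hyperspectral cube to the closest reference class.

    cube: array of shape (rows, cols, bands)
    references: dict mapping class label -> mean spectrum of shape (bands,)
    """
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    norms = np.linalg.norm(flat, axis=1) + 1e-12
    labels = list(references)
    angles = np.stack([
        np.arccos(np.clip(flat @ references[c] /
                          (norms * (np.linalg.norm(references[c]) + 1e-12)), -1.0, 1.0))
        for c in labels
    ], axis=1)                                    # (pixels, classes) spectral angles
    return np.array(labels)[angles.argmin(axis=1)].reshape(h, w)

# Usage sketch: class_map = sam_classify(cube, {"A": ref_a, "B": ref_b, "C": ref_c})
```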

Relevance: 30.00%

Abstract:

This paper presents an analysis of different models used to assess the quality of training activities, considering both classroom learning and distance-education courses. Taking one of the analyzed models as its starting point, the paper argues for the need to develop a new model able to measure the quality of a blended learning process, by selecting the applicable indicators and proposing new ones. The model is composed of seven different categories, which comprise a total of thirty-five indicators. They will be used to represent course quality levels in Kiviat diagrams. This model is currently being put into practice in a real university environment.
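As an illustration of the last step, the following matplotlib sketch draws a Kiviat (radar) diagram from seven category scores; the category names and values are invented placeholders, since the abstract does not list them.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical category scores (0-10), one value per category of the model.
categories = ["Planning", "Content", "Tutoring", "Platform",
              "Interaction", "Assessment", "Support"]
scores = [7.5, 8.0, 6.0, 9.0, 5.5, 7.0, 8.5]

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
angles = np.concatenate([angles, angles[:1]])   # close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 10)
plt.show()
```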

Relevance: 30.00%

Abstract:

ATM, SDH or satellite links were used in the last century as broadcasters' contribution networks. However, the attractive price of IP networks has been changing this infrastructure over the last decade. Nowadays, IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to errors on the line. In the case of streaming, correction is attempted at the destination, while in file transfer, retransmissions are carried out and a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority this type of traffic usually has on the networks. Whereas in streaming the image quality is adapted to the line speed and line errors result in a decrease of quality at the destination, in a file copy the difference between coding speed and line speed, together with transmission errors, is reflected in an increase in transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is workflow optimization and image quality maximization; for that reason, a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transfer with those of streaming while setting aside the disadvantages of both models. The method is based on two patents and consists of the reliable transfer of the headers and of the data considered vital for reproduction; the rest of the data is sent by streaming, which allows recovery operations and error concealment. Using this model, image quality is maximized for the available time window. In this paper we first give a brief overview of the broadcasters' requirements and the solutions offered by IP networks. We then focus on a different solution for video file transfer. We take the example of a broadcast center with mobile units (unidirectional video link) and regional headends (bidirectional link), and we also present a video file transfer method that satisfies the broadcasters' requirements.
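The patented method itself is not reproduced here; as a minimal sketch of the split it relies on, the function below separates the main header of a raw JPEG2000 codestream (everything from the SOC marker up to the first SOT marker, 0xFF90) from the remaining tile-part data, the idea being that the first part travels over a reliable channel while the rest is streamed with error concealment at the receiver.

```python
def split_vital(codestream: bytes):
    """Split a JPEG2000 codestream into a 'vital' part and a streamable remainder.

    The vital part sketched here is just the main header (SOC up to the first
    SOT marker); the actual method also protects whatever data is considered
    vital for reproduction, e.g. the first quality layers.
    """
    sot = codestream.find(b"\xff\x90")                        # SOT: start of tile-part
    if not codestream.startswith(b"\xff\x4f") or sot == -1:   # SOC: start of codestream
        raise ValueError("input does not look like a raw JPEG2000 codestream")
    return codestream[:sot], codestream[sot:]

# Usage sketch: vital, rest = split_vital(open("clip.j2c", "rb").read())
# 'vital' would be sent with retransmissions; 'rest' by streaming.
```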

Relevance: 30.00%

Abstract:

Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses, including run-time prediction, adaptive participant selection, or design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
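In the spirit of the derived models, the toy sketch below composes size-dependent latency bounds along a sequential interaction path; in the paper such per-participant functions are obtained automatically by abstract interpretation over a size-constrained message typing, whereas here they are hand-written placeholders with invented participant names.

```python
# Hypothetical per-participant latency bounds (ms) as functions of message size (KB).
bounds = {
    "Buyer":    lambda kb: 2.0 + 0.01 * kb,
    "Broker":   lambda kb: 5.0 + 0.05 * kb,
    "Supplier": lambda kb: 8.0 + 0.02 * kb,
}

def end_to_end_bound(path, size_kb):
    """Worst-case latency of a sequential interaction path for a given payload size."""
    return sum(bounds[p](size_kb) for p in path)

# The same composed bound can serve run-time prediction or design-time compliance checks.
print(end_to_end_bound(["Buyer", "Broker", "Supplier"], size_kb=64))
```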

Relevance: 30.00%

Abstract:

Modeling is an essential tool for the development of atmospheric emission abatement measures and air quality plans. Most often, these plans are related to urban environments with high emission density and population exposure. However, air quality modeling in urban areas is a rather challenging task. As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large urban areas across Europe, particularly for NO2. This also implies that emission inventories must satisfy a number of conditions, such as consistency across the spatial scales involved in the analysis, consistency with the emission inventories used for regulatory purposes, and the versatility to match the requirements of different air quality and emission projection models. This study reports the modeling activities carried out in Madrid (Spain), highlighting the development and preparation of the atmospheric emission inventory as an illustrative example of the combination of models and data needed to develop a consistent air quality plan at the urban level. These activities included a series of source apportionment studies to define the contributions of international, national, regional and local sources, in order to understand to what extent local authorities can enforce meaningful abatement measures. Moreover, source apportionment studies were conducted to define the contributions of different sectors and to understand the maximum feasible air quality improvement that can be achieved by reducing emissions from those sectors, thus targeting emission reduction policies at the most relevant activities. Finally, an emission scenario reflecting the effect of such policies was developed and the associated air quality was modeled.

Relevance: 30.00%

Abstract:

Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, where several models and metamodels are included. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.

Relevance: 30.00%

Abstract:

This work describes an analytical approach to determine what degree of accuracy is required in the definition of the rail vehicle models used for dynamic simulations. In this way it would be possible to know in advance how the results of simulations may be altered by errors in the creation of rolling-stock models, while also identifying their critical parameters. This would make it possible to maximize the time available to enhance dynamic analysis and focus efforts on the factors that are strictly necessary. In particular, the parameters related both to track quality and to rolling contact were considered in this study. With this aim, a sensitivity analysis was performed to assess their influence on the vehicle dynamic behaviour. To do this, 72 dynamic simulations were performed modifying, one at a time, the track quality, the wheel-rail friction coefficient and the equivalent conicity of both new and worn wheels. Three values were assigned to each parameter, and two wear states were considered for each type of wheel, one for new wheels and another for reprofiled wheels. After processing the results of these simulations, it was concluded that all the parameters considered have a very high influence, the friction coefficient being the most influential. Therefore, it is recommended that any future simulation work be undertaken with measured track geometry and track irregularities, measured wheel profiles and normative values of the wheel-rail friction coefficient.
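The abstract does not give the exact experiment design that yields the 72 runs, so the sketch below simply enumerates a full-factorial grid over hypothetical levels of the three parameters and the two wheel states, which is the kind of case matrix such a sensitivity study starts from.

```python
from itertools import product

# Hypothetical levels; the real values and the combination scheme behind the
# paper's 72 simulations are not stated in the abstract.
track_quality = ["good", "average", "poor"]
friction_coefficient = [0.25, 0.35, 0.45]
equivalent_conicity = [0.10, 0.20, 0.30]
wheel_state = ["new", "reprofiled"]

cases = [
    {"track": t, "mu": mu, "conicity": c, "wheels": w}
    for t, mu, c, w in product(track_quality, friction_coefficient,
                               equivalent_conicity, wheel_state)
]
print(len(cases), "candidate simulation cases")   # 54 in this illustrative grid
```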