879 results for Quality evaluation and certification
Abstract:
In Latin America, the involvement of international organizations in national policy is a phenomenon that cannot be overlooked. It is a burning issue in the current context and refers to the growing presence, in the countries of the region, of a series of papers, documents and bulletins which, produced within these entities, point out present and future challenges that Latin America will have to address. In the educational field, they set the policy orientations for the region and represent discourses on education that need to be analyzed critically. The aim of this work is to analyze a series of ideas, recommendations and political-pedagogical rhetoric of the international organizations regarding the so-called quality of education in Latin America, with the intention of revealing their discursive productivity, insofar as their influence is linked to signifiers that operate by ordering the political-discursive disputes of the educational field.
Abstract:
In Argentina, concern for quality evaluation in higher education first arose in the early 1990s, in parallel with developments across Ibero-America. It was then that, within the universe of quality policies, the selection and naming of components was introduced into evaluation guidelines, guides and models. This is how the concepts of evaluation, accreditation, certification, etc. appeared, also linked to the main quality management models. This work has two objectives. The first is to present one of the conclusions of a study on the evaluation of the library component in the context of the external evaluation of Argentine universities carried out by the Comisión Nacional de Evaluación y Acreditación Universitaria (CONEAU) in compliance with the quality policy established by the Ley de Educación Superior (LES). This conclusion concerns the importance of consistency and uniformity in the naming of components in evaluation models. The second is to clarify the terminology associated with evaluation and quality management processes. The methodology used for the first objective was based on the analysis of the 42 reports published by CONEAU between 1998 and 2006. For the second, data collection relied on the analysis of different documentary sources, which facilitated the structuring of the content of this work. It is concluded that it is essential to maintain clarity in definitions, to keep terminology consistent, and to avoid alternating between terms.
Abstract:
This work is motivated by providing and evaluating an algorithm for the fusion of remotely sensed images, i.e. the fusion of a high spatial resolution panchromatic (PAN) image with a multispectral (MS) image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for conducting an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model that establishes how information is extracted from the PAN band and how that information is injected into the MS bands of low spatial resolution. The approach was applied to SPOT 5 images, where some bands fall outside the PAN spectrum. We propose an optional step in the quality evaluation protocol, namely studying the quality of the fusion by regions, where each region represents a specific feature of the image. The results show that the DT-CWT based approach offers good spatial quality while retaining the spectral information of the original SPOT 5 images. The additional step facilitates the identification of the regions most affected by the fusion process.
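The detail-injection idea behind this kind of wavelet pansharpening can be sketched as follows. The snippet below is a minimal illustration that uses a plain discrete wavelet transform from PyWavelets as a stand-in for the DT-CWT, with synthetic arrays in place of real SPOT 5 bands; it is not the authors' algorithm or injection model.

```python
# Minimal detail-injection sketch of wavelet pansharpening.
# A standard DWT (PyWavelets) stands in for the DT-CWT used in the paper,
# and random arrays stand in for real PAN/MS bands.
import numpy as np
import pywt

rng = np.random.default_rng(1)
pan = rng.random((256, 256))        # high-resolution panchromatic band
ms_band = rng.random((256, 256))    # one MS band, already resampled to PAN size

levels = 2
pan_coeffs = pywt.wavedec2(pan, "db2", level=levels)
ms_coeffs = pywt.wavedec2(ms_band, "db2", level=levels)

# Injection model (simplified): keep the MS approximation (spectral content)
# and replace the detail sub-bands with those of the PAN image (spatial detail).
fused_coeffs = [ms_coeffs[0]] + pan_coeffs[1:]
fused_band = pywt.waverec2(fused_coeffs, "db2")
print(fused_band.shape)
```

A real scheme would weight the injected details per band (and, for DT-CWT, handle the complex, oriented sub-bands), which is where the injection model mentioned above comes in.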
Abstract:
Accreditation models in the international context mainly consider the evaluation of learning outcomes and the ability of programs (or higher education institutions) to achieve the educational objectives stated in their mission. However, it is not clear whether these objectives, and therefore their outcomes, satisfy real national and regional needs, a critical point for engineering master's programs, especially in developing countries. The aim of this paper is to study the importance of evaluating the local relevancy of these programs and to analyze the main quality assurance models and accreditation bodies of the USA, Europe and Latin America, in order to ascertain whether relevancy is evaluated or not. After a literature review, we found that in a context of free-market economics and international education, the accreditation of master's programs follows an international accreditation model and in most cases does not take into account criteria and indicators for local relevancy. We conclude that both are necessary: international accreditation to ensure the effectiveness of the program (achievement of learning outcomes), and national accreditation through which the local relevancy of programs can be ensured, for which we propose some indicators.
Abstract:
BioMet®Tools is a set of software applications developed for the biometrical characterization of voice in different fields, such as voice quality evaluation in laryngology, speech therapy and rehabilitation, education of the singing voice, forensic voice analysis in court, emotion detection in voice, and secure access to facilities and services. It was initially conceived as plain research code to estimate the glottal source from voice and obtain the biomechanical parameters of the vocal folds from the spectral density of that estimate. This code grew into what is now the Glottex®Engine package (G®E). Further demands from users in the medical and forensic fields prompted the development of different Graphic User Interfaces (GUIs) to encapsulate user interaction with the G®E. This required the personalized design of different GUIs handling the same G®E, so that development costs and time could be saved. The development model leading to commercial production and distribution is described in detail. Study cases from its application to the fields of laryngology and speech therapy are given and discussed.
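The architectural point, several thin front-ends sharing a single analysis engine, can be sketched schematically as below; the class and method names are hypothetical and do not reflect the actual BioMet® or Glottex®Engine code.

```python
# Schematic sketch: one shared analysis engine, multiple thin front-ends.
# All names are hypothetical placeholders, not the real G(R)E API.
class GlottalEngine:
    """Stand-in for the shared analysis core (glottal-source estimation)."""
    def analyze(self, samples):
        # placeholder result instead of real biomechanical parameters
        return {"pitch_hz": 120.0, "fold_stiffness": 0.8}

class ClinicalGUI:
    def __init__(self, engine: GlottalEngine):
        self.engine = engine
    def show(self, samples):
        print("Clinical report:", self.engine.analyze(samples))

class ForensicGUI:
    def __init__(self, engine: GlottalEngine):
        self.engine = engine
    def show(self, samples):
        print("Forensic comparison input:", self.engine.analyze(samples))

engine = GlottalEngine()            # single engine instance shared by all GUIs
ClinicalGUI(engine).show([0.0, 0.1, -0.1])
ForensicGUI(engine).show([0.0, 0.1, -0.1])
```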
Abstract:
The Minimum Weight Triangulation (MWT) problem is known to be NP-hard. The complexity of the Minimum Weight Pseudo-Triangulation (MWPT) problem is unknown, although it is also suspected to be NP-hard. We therefore focus on the development of approximate algorithms to find high quality triangulations and pseudo-triangulations of minimum weight. In this work we propose two metaheuristics to solve these problems: Ant Colony Optimization (ACO) and Simulated Annealing (SA). For the experimental study we created a set of instances for the MWT and MWPT problems, since no benchmarks for these problems were found in the literature. Through experimental evaluation, we assess the applicability of the ACO and SA metaheuristics to the MWT and MWPT problems. The results are compared with those obtained from deterministic algorithms for the same problems (the Delaunay Triangulation for MWT and a Greedy algorithm for MWT and MWPT).
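As a minimal illustration of the objective such metaheuristics minimize, the sketch below computes the total edge weight of a triangulation and evaluates it on the Delaunay triangulation of a random instance, i.e., the deterministic baseline mentioned above; it is not the ACO or SA implementation from the paper, and the instance is generated here for illustration.

```python
# Total edge weight of a triangulation, evaluated on the Delaunay baseline.
# An ACO or SA search would attempt to minimize this same objective.
import numpy as np
from scipy.spatial import Delaunay

def triangulation_weight(points: np.ndarray, simplices: np.ndarray) -> float:
    """Sum of Euclidean lengths over the unique edges of the triangulation."""
    edges = set()
    for a, b, c in simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    return sum(float(np.linalg.norm(points[u] - points[v])) for u, v in edges)

rng = np.random.default_rng(0)
pts = rng.random((30, 2))            # a random point set, as in a benchmark instance
tri = Delaunay(pts)                  # deterministic baseline for MWT
print("Delaunay total edge weight:", triangulation_weight(pts, tri.simplices))
```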
Abstract:
This paper studies the effect of different penetration rates of plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) on the Spanish electrical system. A stochastic model for average trip evaluation and for arrival and departure times is used to determine the availability of the vehicles for charging. A novel advanced charging algorithm is proposed that requires no communication among agents. Its performance is assessed under different charging scenarios.
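For intuition only, the sketch below samples plug-in availability from assumed distributions and has each vehicle spread its energy demand uniformly over its own availability window, a communication-free heuristic in the spirit described above. The distributions, fleet size and charging rule are illustrative assumptions, not the paper's stochastic model or algorithm.

```python
# Illustrative, communication-free charging heuristic on a synthetic fleet.
import numpy as np

rng = np.random.default_rng(42)
n_ev = 1000
arrival = rng.normal(19.0, 1.5, n_ev) % 24               # plug-in hour (assumed)
duration = rng.normal(10.0, 1.0, n_ev).clip(1.0, 23.0)   # hours plugged in (assumed)
energy = rng.normal(8.0, 2.0, n_ev).clip(2.0, 20.0)      # kWh needed (assumed)

load = np.zeros(24)
for a, dur, e in zip(arrival, duration, energy):
    hours = np.arange(int(a), int(a) + max(int(dur), 1)) % 24
    load[hours] += e / len(hours)     # each EV spreads its own demand, no coordination

print("Peak aggregated charging load (kW):", round(float(load.max()), 1))
```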
Abstract:
Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses including run-time prediction, adaptive participant selection, or design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
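To make the idea of data-aware, composable QoS concrete, here is a schematic sketch in which each participant's latency bound is a function of the size of the message it handles, and the end-to-end bound is obtained by composing these functions along the message path. The participants, cost functions and message-growth factor are hypothetical, and the sketch does not reflect the paper's typing system or abstract-interpretation machinery.

```python
# Schematic data-aware QoS composition along a choreography's message path.
from typing import Callable, List, Tuple

# Per-participant latency bound (ms) as a function of message size (KB).
participants: List[Tuple[str, Callable[[float], float]]] = [
    ("Buyer",   lambda size: 2.0 + 0.1 * size),
    ("Seller",  lambda size: 5.0 + 0.4 * size),
    ("Shipper", lambda size: 3.0 + 0.2 * size),
]

def end_to_end_bound(initial_size_kb: float, growth: float = 1.2) -> float:
    """Compose per-participant bounds, letting the message grow at each hop."""
    size, total = initial_size_kb, 0.0
    for _name, cost in participants:
        total += cost(size)
        size *= growth            # assumed message-size growth per interaction
    return total

print(f"End-to-end latency bound: {end_to_end_bound(64.0):.1f} ms")
```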
Abstract:
On 12 January 2010, an earthquake hit the city of Port-au-Prince, capital of Haiti. The earthquake reached a magnitude of Mw 7.0 and the epicenter was located near the town of Léogâne, approximately 25 km west of the capital. The earthquake occurred in the boundary region separating the Caribbean plate and the North American plate. This plate boundary is dominated by left-lateral strike-slip motion and compression, and accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate (DeMets et al., 2000). Initially, the location and focal mechanism of the earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone (EPGFZ); however, Hayes et al. (2010) combined seismological observations, geologic field data and space geodetic measurements to show that, instead, the rupture process involved slip on multiple faults. The authors also showed that the remaining shallow shear strain will be released in future surface-rupturing earthquakes on the EPGFZ. In December 2010, a Spanish cooperation project financed by the Polytechnic University of Madrid started with a clear objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency and resource management. One of the tasks of the project was devoted to the vulnerability assessment of the current building stock and the estimation of seismic risk scenarios. The study was carried out following the capacity spectrum method as implemented in the SELENA software (Molina et al., 2010). The method requires a detailed classification of the building stock into predominant building typologies (according to the materials of the structure and walls, the number of stories and the age of construction) and the use of the buildings (residential, commercial, etc.). Combined with knowledge of the soil characteristics of the city and the simulation of a scenario earthquake, this provides the seismic risk scenarios (damaged buildings). The initial results of the study show that one of the largest sources of uncertainty comes from the difficulty of achieving a precise classification of building typologies, due to informal construction carried out without any regulations. It is also observed that, although the occurrence of large earthquakes usually helps to decrease the vulnerability of cities through the collapse of low-quality buildings and the reconstruction of seismically designed buildings, in the case of Port-au-Prince the seismic risk in most districts remains high, showing very vulnerable areas. Therefore, the local authorities have to direct their efforts towards quality control of new buildings, reinforcement of the existing building stock, the establishment of seismic codes, and the development of emergency planning, including the education of the population.
Abstract:
Since January 1996, an EC-wide project entitled "Mealiness in fruits: consumer perception and means for detection" has been carried out. Mealiness is a sensory attribute that cannot be defined by a single parameter but only through a combination of variables (a multidimensional structure). Previous studies propose defining mealiness as the lack of crispiness, hardness and juiciness. A destructive instrumental procedure combined with an integration technique has already been developed, enabling the identification of mealy fruits by destructive instrumental means (see other contributions by Barreiro and Ortiz to this Ag Eng 98). Current aims are focused on establishing non-destructive tests for mealiness assessment. Magnetic Resonance Imaging (MRI) makes use of the magnetic properties of some atomic nuclei, especially hydrogen nuclei from water molecules, to obtain high quality images. In the field of internal quality evaluation, MRI has been used to assess internal injury due to conservation treatments, such as chilling injury in persimmons (Clark & Forbes, 1994) and water-core in apples (Wang et al., 1998). In the case of persimmons, chilling injury is described as an initial tissue breakdown and lack of cohesion between cells, followed by the formation of a firm gel and by a lack of juiciness, without changes in the total water content. A browning of the flesh is also indicated (Clark & Forbes, 1994). This definition fits the previous description of mealiness.
Abstract:
BioMet®Phon is a software application developed for the characterization of voice in voice quality evaluation. It was initially conceived as plain research code to estimate the glottal source from voice and obtain the biomechanical parameters of the vocal folds from the spectral density of that estimate. This code grew into what is now the Glottex®Engine package (G®E). Further demands from users in the laryngology and speech therapy fields prompted the development of a specific Graphic User Interface (GUI) to encapsulate user interaction with the G®E. This gave rise to BioMet®Phon, an application which extracts the glottal source from voice and offers a complete parameterization of this signal, including distortion, cepstral, spectral, biomechanical, time-domain, contact and tremor parameters. The semantic capabilities of the biomechanical parameters are discussed. Study cases from its application to the fields of laryngology and speech therapy are given and discussed. Validation results in voice pathology detection are also presented. Applications to laryngology, speech therapy, and the monitoring of neurological deterioration in the elderly are proposed.
Abstract:
This paper describes the design and application of the Atmospheric Evaluation and Research Integrated model for Spain (AERIS). Currently, AERIS can provide concentration profiles of NO2, O3, SO2, NH3 and PM in response to emission variations of relevant sectors in Spain. Results are calculated using transfer matrices based on an air quality modelling system (AQMS) composed of the WRF (meteorology), SMOKE (emissions) and CMAQ (atmospheric-chemical processes) models. The AERIS outputs were statistically tested against the conventional AQMS and against observations, revealing good agreement in both cases. At the moment, integrated assessment in AERIS focuses only on the link between emissions and concentrations; the quantification of deposition, impacts (health, ecosystems) and costs will be introduced in the future. In conclusion, the main asset of AERIS is its accuracy in predicting air quality outcomes for different scenarios through a simple yet robust modelling framework, avoiding complex programming and long computing times.
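The transfer-matrix idea can be illustrated with a toy linear surrogate in which concentration changes respond to sectoral emission changes through a fixed matrix. The sectors, pollutants and all numbers below are invented for illustration and do not come from AERIS.

```python
# Toy transfer-matrix surrogate: concentration response to emission changes.
import numpy as np

sectors = ["road_transport", "industry", "agriculture"]
pollutants = ["NO2", "O3", "PM"]

# T[i, j]: change in pollutant i concentration (ug/m3) per unit relative
# change in sector j emissions (hypothetical numbers).
T = np.array([[ 4.0, 1.5, 0.1],
              [-1.2, 0.3, 0.0],
              [ 0.8, 1.1, 0.9]])

baseline = np.array([32.0, 55.0, 18.0])          # ug/m3, illustrative
delta_emissions = np.array([-0.30, -0.10, 0.0])  # -30% road transport, -10% industry

scenario = baseline + T @ delta_emissions
for p, c in zip(pollutants, scenario):
    print(f"{p}: {c:.1f} ug/m3")
```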
Abstract:
Systematic evaluation of Learning Objects is essential to make high quality Web-based education possible. For this reason, several educational repositories and e-Learning systems have developed their own evaluation models and tools. However, the differences in the contexts in which Learning Objects are produced and consumed suggest that no single evaluation model is sufficient for all scenarios. Besides, not much effort has been put into developing open tools that facilitate Learning Object evaluation and use the quality information for the benefit of end users. This paper presents LOEP, an open source web platform that aims to facilitate Learning Object evaluation in different scenarios and educational settings by supporting and integrating several evaluation models and quality metrics. The work presented in this paper shows that LOEP is capable of providing Learning Object evaluation to e-Learning systems in an open, low cost, reliable and effective way. Possible scenarios where LOEP could be used to implement quality control policies and to enhance search engines are also described. Finally, we report the results of a survey conducted among reviewers who used LOEP, showing that they perceived LOEP as a powerful and easy to use tool for evaluating Learning Objects.
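As an illustration of how an evaluation model can be expressed as weighted criteria and aggregated into a single score usable, for example, by a search engine, consider the following sketch. The criteria, weights and ratings are hypothetical (loosely LORI-style), and this is not LOEP's actual data model or API.

```python
# Hypothetical weighted-criteria evaluation model aggregating reviewer ratings.
from statistics import mean

# Criteria and weights of one (made-up) evaluation model.
model = {"content_quality": 0.4, "usability": 0.3, "reusability": 0.3}

# Ratings (1-5) given by several reviewers to one Learning Object.
ratings = {
    "content_quality": [4, 5, 4],
    "usability": [3, 4, 4],
    "reusability": [5, 4, 5],
}

score = sum(weight * mean(ratings[criterion]) for criterion, weight in model.items())
print(f"Aggregated quality score: {score:.2f} / 5")
```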
Abstract:
Ontology evaluation, which includes ontology diagnosis and repair, is a complex activity that should be carried out in every ontology development project, because it checks the technical quality of the ontology. However, there is an important gap between the methodological work on ontology evaluation and the tools that support such an activity. More precisely, not many approaches provide clear guidance about how to diagnose ontologies and how to repair them accordingly. This thesis aims to advance the current state of the art of ontology evaluation, specifically in the ontology diagnosis activity. The main goals of this thesis are (a) to help ontology engineers diagnose their ontologies in order to find common pitfalls and (b) to lessen the effort required from them by providing suitable technological support. This thesis presents the following main contributions:
• A catalogue that describes 41 pitfalls that ontology developers might introduce into their ontologies.
• A quality model for ontology diagnosis that aligns the pitfall catalogue with existing quality models for semantic technologies.
• The design and implementation of 48 methods for detecting 33 out of the 41 pitfalls defined in the catalogue.
• A system called OOPS! (OntOlogy Pitfall Scanner!) that allows ontology engineers to (semi)automatically diagnose their ontologies.
According to the feedback gathered and the satisfaction tests carried out, the approach developed and presented in this thesis effectively helps users to increase the quality of their ontologies.
At the time of writing this thesis, OOPS! has been broadly adopted by a large number of users worldwide and has been used around 3000 times from 60 different countries. OOPS! has been integrated into third-party software and is locally installed in private enterprises, where it is used both for ontology development activities and in training courses.
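To give a flavor of what an automated pitfall check can look like, the sketch below uses rdflib to flag OWL classes that lack an rdfs:label, a check in the spirit of the catalogue's annotation-related pitfalls. The file name is hypothetical and the check is a simplified illustration, not one of the 48 methods implemented in OOPS!.

```python
# Simplified illustration of a pitfall check: OWL classes without an rdfs:label.
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

g = Graph()
g.parse("ontology.owl", format="xml")   # hypothetical ontology file in RDF/XML

unlabeled = [cls for cls in g.subjects(RDF.type, OWL.Class)
             if g.value(cls, RDFS.label) is None]

for cls in unlabeled:
    print(f"Pitfall candidate: class {cls} has no rdfs:label")
```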