860 results for Reliability and availability


Relevance:

90.00%

Publisher:

Abstract:

The Smartcity Málaga project is one of Europe's largest eco-efficient city initiatives. The project has implemented a field trial in 50 households to study the effects of energy monitoring and management technologies on residential electricity consumption. This poster presents lessons learned on energy consumption trends, smart clamp reliability, and the suitability of the power contracted by users, obtained after six months of data analysis.
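
A minimal sketch, under assumed thresholds and data, of how the "suitability of power contracted by users" mentioned above could be screened from smart-clamp measurements; this is an illustration, not the project's actual method.

```python
# Hedged illustration: flag households whose observed peak demand leaves a
# large headroom below the contracted power, suggesting an oversized (and
# more expensive) contract. Margin and figures are assumptions, not
# Smartcity Málaga data.
def oversized_contract(peak_demand_kw: float, contracted_kw: float,
                       margin: float = 0.25) -> bool:
    """True when peak demand stays more than `margin` below contracted power."""
    return peak_demand_kw < contracted_kw * (1.0 - margin)

print(oversized_contract(2.1, 4.6))  # True: the household could contract less power
print(oversized_contract(4.0, 4.6))  # False: the contract is reasonably sized
```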

Relevance:

90.00%

Publisher:

Abstract:

The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste, and excel in reliability and safe operation. Among these novel designs, sodium fast reactors (SFRs) stand out owing to their technological feasibility, as demonstrated in several countries over recent decades. As part of the EURATOM contribution to GIF, the CP-ESFR is a collaborative project whose objectives include extensive analysis of safety issues in renewed SFR demonstrator designs. The verification, by code-to-code comparison, of computational tools able to simulate plant behaviour under postulated accident conditions was identified as a key point for ensuring reactor safety. To this end, several organisations employed coupled neutronic and thermal-hydraulic system codes able to simulate the complex, multi-physics phenomena specific to this fast reactor technology. The "Introduction" of this paper discusses the framework of the study; the second section describes the envisaged plant design and the commonly agreed modelling guidelines; and the third section presents a comparative analysis of the calculations performed by each organisation, applying their models and codes to a commonly agreed transient, with the objective of harmonising the models and validating the implementation of all relevant physical phenomena in the different system codes.

Relevance:

90.00%

Publisher:

Abstract:

Mosaics are high-resolution images obtained aerially and employed in several scientific research areas, such as environmental monitoring and precision agriculture. Although many high-resolution maps are produced commercially on demand, they can also be acquired with commercial aerial vehicles, which provide greater experimental autonomy and availability. As for mosaicing-oriented aerial mission planners, there is little free-of-charge software, if any. This paper therefore presents a framework built with open-source tools and libraries, as an alternative to commercial tools, for carrying out mosaicing tasks.
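
Since the abstract does not name the libraries used, the following sketch shows how a mosaicing step could be carried out with one widely used open-source library, OpenCV, purely as an illustration of the free-of-charge alternative the framework pursues.

```python
# Hedged sketch of an open-source mosaicing step using OpenCV's high-level
# Stitcher API; the paper's actual framework and tool choices may differ.
import glob
import cv2

def build_mosaic(image_dir: str, output_path: str = "mosaic.png") -> bool:
    images = [cv2.imread(p) for p in sorted(glob.glob(f"{image_dir}/*.jpg"))]
    images = [img for img in images if img is not None]
    if len(images) < 2:
        return False  # mosaicing needs at least two overlapping aerial frames
    # SCANS mode suits flat, nadir-looking aerial imagery better than PANORAMA
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        return False  # insufficient overlap or failed feature matching
    cv2.imwrite(output_path, mosaic)
    return True
```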

Relevance:

90.00%

Publisher:

Abstract:

In order to establish rational nitrogen (N) application and reduce groundwater contamination, a clear understanding of the N distribution through the growing season, and of its balance, is crucial. Excessive doses of N and/or water applied to fertigated crops carry a substantial risk of aquifer contamination by nitrate, but knowledge of N cycling and availability within the soil can help avoid this excess. In central Spain, the main horticultural fertigated crop is the "piel de sapo" melon, and it is cultivated in zones vulnerable to nitrate pollution (Directive 91/676/EEC). However, until a few years ago there was no prior work on optimizing nitrogen fertilization together with irrigation. The water and N footprints are indicators that allow the impact generated by different agricultural practices to be assessed, so they can be used to improve management strategies in fertigated cropping systems. The water footprint distinguishes between blue water (sources of water applied to the crop, such as irrigation and precipitation) and green water (water used by the crop and stored in the soil); it is furthermore possible to quantify the impact of pollution by calculating the grey water, defined as the volume of polluted water created by growing and producing the crop. Likewise, the N footprint considers green N (nitrogen consumed by the crop and stored in the soil) and blue N (N available to the crop, such as N applied with mineral and/or organic fertilizers, N applied with irrigation water, and N mineralized during the crop period), whereas grey N is the amount of N-NO3- leached from the soil to the aquifer. All these components are expressed as the ratio between the water or N footprint component and the yield (m3 t-1 or kg N t-1, respectively). The objective of this work was to evaluate the impact derived from different fertilizer practices in a melon crop using the water and N footprints.
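
The footprint definitions above reduce to simple ratios; the sketch below, with illustrative numbers rather than data from the study, shows the grey-water case.

```python
# Each water or N footprint component is the component divided by yield
# (m3 t-1 for water, kg N t-1 for N), as defined in the abstract. The
# values below are illustrative assumptions, not measurements.
def footprint_per_yield(component: float, yield_t: float) -> float:
    """Footprint component per tonne of product."""
    return component / yield_t

grey_water_m3 = 1200.0   # hypothetical volume of polluted water generated (m3)
melon_yield_t = 40.0     # hypothetical melon yield (t)
print(footprint_per_yield(grey_water_m3, melon_yield_t))  # 30.0 m3 t-1 of grey water
```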

Relevance:

90.00%

Publisher:

Abstract:

This article tests a multidimensional model of the marketing-sales organizational interface, based on a previous one tested on European companies (Homburg et al., 2008), in a specific taxonomical configuration: a brand-focused professional multinational, across three successful Latin American branches. Factor reliability and the hypotheses were studied through a confirmatory factor analysis. Results show a positive relationship between formalization, joint planning, teamwork, information sharing, trust, and interface quality. Interface quality and business performance also show a positive relationship. This empirical study contributes to the knowledge of the organizational enhancement of interactions in emerging markets.
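
The article does not state which reliability statistic it used; a common choice in confirmatory factor analysis is composite reliability computed from standardized loadings, sketched here with illustrative numbers.

```python
# Hedged sketch: composite reliability (CR) from standardized CFA loadings.
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
# The loadings are illustrative, not values from the study.
def composite_reliability(loadings: list[float]) -> float:
    sum_l = sum(loadings)
    sum_err = sum(1.0 - l ** 2 for l in loadings)  # standardized error variances
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

print(composite_reliability([0.78, 0.82, 0.71, 0.75]))  # ~0.85, above the usual 0.7 cut-off
```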

Relevance:

90.00%

Publisher:

Abstract:

The overall purpose of this work is to analyse design and optimization techniques for survey networks observed with conventional (non-satellite) surveying, and to develop and implement a software system able to help define the most reliable and precise geometry as a function of the orography of the terrain where the network must be located. First, the least-squares adjustment methodology and the propagation of variances are studied, and their dependence on the geometry adopted by the network is then analysed. It is essential to establish that the redundancy matrix (R) is independent of the observations and depends entirely on the geometry, and to characterise the influence of its main diagonal (rii), the redundancy numbers, in order to guarantee the maximum internal reliability of the network. The behaviour of the redundancy numbers (rii) in the design of a survey network is also analysed: how these values vary with the geometry, their independence from the observations, and the different design levels as a function of the known parameters and data. It should be noted that optimizing the network according to these criteria is subject to the constraints that the network points must be accessible and that points related by observations must be mutually visible, conditions that depend essentially on the terrain relief and on whatever natural or artificial obstacles exist. This makes it necessary to include in the analysis and design at least a digital terrain model (DTM); a digital surface model (DSM) would be more useful, but that option is not always possible. Although the design treatment is based on a two-dimensional system, the possibility of incorporating a DSM is studied; when choosing the locations of the network points, this allows the feasibility of the observations to be assessed as a function of the orography and of the natural and artificial elements located on it. Such a system would provide, in principle, an optimal design of a constrained network, attending to the internal reliability and the final precision of its points while taking the orography into account, which amounts to solving a "two and a half dimensional" design problem, provided a digital surface or terrain model is available. Since obtaining a free DSM of the areas of interest is still costly today, the possibility of combining the design study with a digital terrain model is considered. The activities developed in this thesis are described in this document and are framed within research with the following overall objectives: 1. To establish a mathematical model of the observation process of a survey network, attending to all the factors involved and to their influence on the estimates of the unknowns obtained as a result of the adjustment of the observations. 2. To develop a system that optimizes the results of a survey network by applying design and simulation techniques to the above model. 3. To present an explicit and rigorous formulation of the parameters that assess the reliability of a survey network and of their relations with its design; achieving this objective rests on the search and review of sources and on an intensive effort to unify notation and construct the intermediate steps of the mathematical developments. 4. To build an overall view of the influence of network design on six factors (a posteriori precisions, reliability of the observations, nature and feasibility of the observations, instrumentation, and station set-up methodology) as optimization criteria, in order to frame the specific topic addressed here. 5. To elaborate and program the algorithms needed to develop an application able to handle the variables above in the design and simulation of survey networks, taking the digital surface model into account. The following can be considered secondary objectives: to develop the algorithms needed to interrelate the digital terrain model with those of the design itself, and to implement in the application the possibility for the user to vary the coverage criteria of the parameters (normal or Student's t distribution) as well as their reliability levels.
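
The property the thesis exploits, that the redundancy matrix depends only on the geometry (design matrix) and weights and never on the observed values, follows from the standard Gauss-Markov formulas; the sketch below verifies it numerically on a toy network (the matrices are illustrative assumptions, not a real network).

```python
# R = I - A (A^T P A)^{-1} A^T P for the Gauss-Markov model l = A x + v:
# R is built from the design matrix A and weight matrix P alone, so the
# redundancy numbers r_ii can be studied at design time, before observing.
import numpy as np

def redundancy_matrix(A: np.ndarray, P: np.ndarray) -> np.ndarray:
    N = A.T @ P @ A                                    # normal-equation matrix
    return np.eye(A.shape[0]) - A @ np.linalg.inv(N) @ A.T @ P

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])  # toy design matrix
P = np.diag([1.0, 1.0, 2.0, 0.5])                                # toy observation weights
r_ii = np.diag(redundancy_matrix(A, P))                # one redundancy number per observation
assert np.isclose(r_ii.sum(), A.shape[0] - A.shape[1])  # trace(R) = n - u (degrees of freedom)
print(r_ii)  # higher r_ii = better-controlled observation, higher internal reliability
```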

Relevance:

90.00%

Publisher:

Abstract:

The extraordinary rise of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage has brought great benefits to all areas of society. Alongside these benefits come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal and business worlds as regards access to information. In both worlds, Cryptography has played a key role, providing the tools needed to ensure the confidentiality, integrity and availability of both personal data and information. Biometrics, on the other hand, has proposed and offered different techniques to authenticate individuals through personal traits such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these sciences, Cryptography and Biometrics, provides solutions to specific problems of data protection and user authentication, which would be greatly strengthened if selected characteristics of both were combined towards common objectives. It is therefore imperative to intensify research in this area, combining the mathematical algorithms and primitives of Cryptography with Biometrics, to meet the growing demand for new solutions that are more secure and easier to use and that simultaneously strengthen data protection and user identification. Within this combination, the concept of cancelable biometrics has become a cornerstone of user authentication and identification, since it provides revocation and cancellation properties to biometric traits. The contribution of this thesis addresses the main aspect of Biometrics, namely the secure and efficient authentication of users through their biometric templates, from three different approaches: 1. The design of a fuzzy crypto-biometric scheme that implements the principles of cancelable biometrics to identify users, taking advantage of the fuzziness of the biometric templates while dealing with intra- and inter-user variability, without compromising the templates extracted from legitimate users. 2. The design of a new Similarity Preserving Hash Function (SPHF). Such functions are currently used in digital forensics to find similarities in the content of distinct but similar files, so as to determine to what extent the files could be considered equal. The function defined in this research, besides improving the results of the main functions developed to date, extends their use to the comparison of iris templates. 3. The development of a new mechanism for comparing iris templates that treats them as signals and compares them using the Walsh-Hadamard transform (complemented with three other algorithms). The results obtained are excellent, given the security and privacy requirements mentioned above. Each of the three schemes has been implemented in order to run experiments and test its operational efficacy in scenarios that simulate real situations: the fuzzy crypto-biometric scheme and the SPHF were implemented in Java, while the process based on the Walsh-Hadamard transform was implemented in Matlab. The experiments used a database of iris images (CASIA-IrisV2) to simulate a population of system users. In the particular case of the SPHF, additional experiments tested its usefulness in the digital forensics field by comparing files and images with similar and dissimilar content. For each scheme, the ratios of efficiency and effectiveness regarding user authentication, i.e. the False Non-Match and False Match Rates, were calculated for different parameters and cases to analyse its behaviour.
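
A minimal sketch, assuming binary iris codes of power-of-two length, of comparing templates in the Walsh-Hadamard domain; the thesis's actual Matlab mechanism, complemented with three other algorithms, is not reproduced here.

```python
# Hedged illustration: map {0,1} iris codes to {-1,+1}, take their
# Walsh-Hadamard spectra, and score similarity as normalized correlation.
import numpy as np
from scipy.linalg import hadamard

def wht_spectrum(bits: np.ndarray) -> np.ndarray:
    n = bits.size                        # must be a power of two for hadamard()
    return hadamard(n) @ (2.0 * bits - 1.0) / n

def similarity(t1: np.ndarray, t2: np.ndarray) -> float:
    s1, s2 = wht_spectrum(t1), wht_spectrum(t2)
    return float(np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2)))

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 256)
noisy = code.copy()
noisy[rng.choice(256, 16, replace=False)] ^= 1       # same eye, noisy capture
print(similarity(code, noisy))                       # high (~0.87) for same-eye templates
print(similarity(code, rng.integers(0, 2, 256)))     # near zero for a different eye
```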

Relevance:

90.00%

Publisher:

Abstract:

The Internet is changing everything, and this revolution is especially visible in traditionally offline spaces such as medicine. In recent years, health consumers and health service providers have been actively creating and consuming Web content, stimulated by the emergence of the Social Web. Reliability stands out as the main concern when accessing the overwhelming amount of information available online. Along with this new way of accessing medicine, new concepts such as ubiquitous or pervasive healthcare are appearing, motivated by patients who demand access to healthcare services anytime and anywhere. Trustworthiness assessment is thus gaining relevance: open health provisioning systems require mechanisms that help evaluate individuals' reputation, in pursuit of introducing safety into these open and dynamic environments. Technology Enhanced Learning (TEL) platforms, commonly known as eLearning platforms, arise as a paradigm of this Medicine 2.0: they provide open yet controlled/supervised access to resources generated and shared by users, enhancing what is being called informal learning. TEL systems also facilitate direct interactions among users for consultation, making them a good approach to ubiquitous healthcare. The aforementioned reliability and trustworthiness problems can be addressed by implementing mechanisms for the trusted recommendation of both resources and healthcare service providers. Traditionally, eLearning platforms already integrate recommendation mechanisms, although these are basically focused on providing an ordered classification of resources. For recommending users, the implementation of trust and reputation systems appears to be the best solution. Nevertheless, both approaches base the recommendation on mainly subjective criteria, namely the opinions of other users of the platform about the resources or the users. This PhD work presents a novel trust and reputation model for the recommendation of both resources and users within open environments focused on knowledge exchange, as is the case of TEL systems for ubiquitous healthcare. The proposed solution combines automatic, objective evaluations of the digital resources in an eLearning platform with the subjective opinions users express after interacting with other users or consuming a resource. This combined measure, along with the reliability of its calculation, is used to provide trusted recommendations. The integration of subjective opinions and objective evaluations allows the model to defend itself against subjective punishments by malicious users, while also 'colouring' cold evaluation values with additional quality information, such as the pedagogical capacity of a digital resource or of a person. As a result, the recommendations are always adapted to user requirements and are of the highest technical and educational quality. To our knowledge, the combination of objective assessments and subjective opinions to provide recommendations has not been considered before in the literature. For the evaluation of the trust and reputation model defined in this thesis, a new simulation tool was therefore developed following the agent-oriented programming paradigm, since no existing application allows the simulation of eLearning platforms with mechanisms for recommending resources and people in which the resources are also evaluated objectively. The multi-agent approach allows easy modelling of the independent, proactive, complex behaviours of the users of the system, providing a faithful resemblance of real users of TEL platforms; the tool also simulates the operation of this kind of knowledge-exchange environment. The proposed work was evaluated iteratively, testing the performance of the trust and reputation model while providing recommendations across a wide range of scenarios and user behaviours. A comparison with two traditional recommendation mechanisms was performed: a) using only users' past opinions about a resource and/or other users to compute reputation and, by extension, the recommendation; and b) performing no reputation assessment and providing the recommendation directly from the objective quality of the resources. The results show that the developed model improves on traditional approaches when providing recommendations in TEL platforms, presenting higher flexibility and adaptability to different situations, whereas traditional approaches only perform well under favourable conditions. Furthermore, the promotion-period mechanism implemented successfully helps new users entering the system to be recommended for direct interactions, as well as the resources they create; by contrast, the opinions-only mode (OnlyOpinions) fails completely, never recommending new users, while the traditional approaches only work partially. Finally, the agent-oriented programming (AOP) paradigm has proven its validity for modelling users' behaviour in TEL platforms. The characteristics of intelligent software agents matched the main requirements of the simulation tool: the proactivity, sociability and adaptability of the developed agents made it possible to reproduce real users' actions and attitudes throughout the diverse situations defined in the evaluation framework. The result was a set of independent users accessing different resources and communicating among themselves to fulfil their needs, basing these interactions on the recommendations provided by the reputation engine.
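
A minimal sketch, under assumed weights, of the core combination the model performs: blending an automatic objective evaluation of a resource with users' subjective opinions into one reputation score. The actual model, including its reliability measure and promotion period, is richer than this illustration.

```python
# Hedged illustration: weighted blend of objective quality and the mean
# subjective opinion, all scores in [0, 1]. Weight and values are assumptions.
def reputation(objective_quality: float, opinions: list[float],
               w_obj: float = 0.5) -> float:
    if not opinions:                     # brand-new resource or user:
        return objective_quality         # the objective score keeps it recommendable
    subjective = sum(opinions) / len(opinions)
    return w_obj * objective_quality + (1.0 - w_obj) * subjective

print(reputation(0.9, []))           # 0.9   -> newcomers are not frozen out
print(reputation(0.9, [0.2, 0.1]))   # 0.525 -> malicious low ratings are damped
```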

Relevance:

90.00%

Publisher:

Abstract:

This work is a case study that seeks to research and systematize knowledge about managing the condition of equipment so that it fulfils its productive functions, that is, results-oriented management. The maintenance function has been regarded as the one that aggregates the knowledge and activities needed to ensure the operational availability of production systems, within previously specified performance standards, at the lowest possible cost and in compliance with the relevant safety requirements. In this context, the study aims to characterize how the approaches or policies adopted by selected companies that rely on the maintenance function are linked to availability indicators. Throughout the work, availability is treated as a "result indicator" of maintenance activities. The literature suggests that trend analysis of this indicator should guide decision-making about actions on the equipment. It is therefore necessary to verify whether, and how, this indicator is linked to maintenance decisions, without losing sight of company productivity. The context of maintenance and availability is thus described, and the elements that affect availability are identified, allowing it to be broken down and associated with an analytical framework that supports the collection and analysis of field data, for a better understanding of how availability is handled. To guide the case study and define the research propositions, the maintenance function is treated as a process in which the availability indicator is used as feedback, both to optimize the operation of the maintenance function itself and to specify the input resources of that process. Three propositions are formulated and tested in four companies, typifying two distinct groups of operations, focusing on how availability is handled and, within it, on the treatment of management commitments and of the study of the equipment life cycle. To allow an objective assessment of the research-proposition variables, which are essentially qualitative, the Capability Maturity Model (CMM) provided a conceptual model whose evolutionary characteristics inspired the structure of the required assessment instrument. The research concludes that the study propositions were not fully confirmed, revealing a marked difference in how well they were met when the two groups of companies are compared, thus leaving room for further research.
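
The dissertation treats availability as the result indicator of maintenance without spelling out a formula in this abstract; one common operational definition, shown purely as an assumption, expresses it through mean time between failures (MTBF) and mean time to repair (MTTR).

```python
# Hedged sketch: steady-state availability of a repairable item,
# A = MTBF / (MTBF + MTTR). Figures are illustrative, not study data.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(availability(400.0, 8.0))  # ~0.980: trending this value guides maintenance decisions
```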

Relevance:

90.00%

Publisher:

Abstract:

Background: The harmonization of European health systems brings with it a need for tools that allow the standardized collection of information about medical care. A common coding system and standards for the description of services are needed to allow local data to be incorporated into evidence-informed policy, and to permit equity and mobility to be assessed. The aim of this project has been to design such a classification and a related tool for the coding of services for Long Term Care (DESDE-LTC), based on the European Service Mapping Schedule (ESMS). Methods: The development of DESDE-LTC followed an iterative process using nominal groups in six European countries. Fifty-four researchers and stakeholders in health and social services contributed to this process. In order to classify services, we use the minimal organization unit, or "Basic Stable Input of Care" (BSIC), coded by its principal function, or "Main Type of Care" (MTC). The evaluation of the tool included an analysis of feasibility, consistency, ontology, inter-rater reliability, Boolean Factor Analysis, and a preliminary impact analysis (screening, scoping and appraisal). Results: DESDE-LTC includes an alphanumerical coding system, a glossary and an assessment instrument for mapping and counting LTC services. It shows high feasibility, consistency, inter-rater reliability, and face, content and construct validity. DESDE-LTC is ontologically consistent, and it is regarded by experts as useful and relevant for evidence-informed decision making. Conclusion: DESDE-LTC contributes to establishing a common terminology, taxonomy and coding of LTC services in a European context, and a standard procedure for data collection and international comparison.
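
The abstract reports high inter-rater reliability without naming the statistic; Cohen's kappa is one standard choice for two raters assigning categorical codes, sketched here with hypothetical, made-up code labels rather than real DESDE-LTC data.

```python
# Hedged sketch of Cohen's kappa for two raters assigning categorical codes
# (the "Main Type of Care"-style labels below are invented for illustration).
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)  # chance agreement
    return (observed - expected) / (1.0 - expected)

a = ["R1", "R1", "D2", "O1", "O1", "D2"]  # hypothetical code assignments, rater A
b = ["R1", "R1", "D2", "O1", "D2", "D2"]  # hypothetical code assignments, rater B
print(cohens_kappa(a, b))  # one disagreement in six -> kappa = 0.75
```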

Relevance:

90.00%

Publisher:

Abstract:

A new interview procedure is proposed for collecting valid information on the acquisition of high-level performance in sport. The procedure elicits verifiable information on the development of athletes' achievements in their primary sport, as well as on factors that might influence performance, including involvement in other sporting activities, injuries, physical growth, and the quality of training resources. Interviewed athletes also describe their engagement in specific training and other relevant activities during each year of their development, as well as how they experienced each type of activity. The collected information is then examined to identify those aspects of the athletes' recall of their development that meet criteria of reliability and validity. Recommendations are discussed for how coaches and scientists can use retrospective interviews to uncover aspects of development that distinguish elite from less accomplished athletes.

Relevance:

90.00%

Publisher:

Abstract:

Introduction: Current physical activity levels among children and youth are alarmingly low; a mere 7% of children and youth meet the Canadian Physical Activity Guidelines (Colley et al., 2011), which means that the vast majority of this population is at risk of developing major health problems in adulthood (Janssen & Leblanc, 2010). These high inactivity rates may be related to suboptimal experiences in sport and physical activity stemming from a lack of competence and confidence (Lubans, Morgan, Cliff, Barnett, & Okely, 2010). Developing a foundation of physical literacy can encourage and maintain lifelong physical activity, yet this does not always occur naturally as a part of human growth (Hardman, 2011). An ideal setting in which to foster the growth and development of physical literacy is physical education class, which can offer all children and youth an equal opportunity to learn and practice the skills needed to be active for life (Hardman, 2011). Elementary school teachers are responsible for delivering the physical education curriculum, and it is important to understand their will and capacity as the implementing agents of physical literacy development curriculum (McLaughlin, 1987). Purpose: The purpose of this study was to explore the physical literacy component of the 2015 Ontario Health and Physical Education curriculum policy through the eyes of key informants, and to explore the resources available for the implementation of this new policy. Methods: Qualitative interviews were conducted with seven key informants of the curriculum policy development, including two teachers. In tandem with the interviews, a resource inventory and curriculum review were conducted to assess the content and availability of physical literacy resources. All data were analyzed through the lens of Hogwood and Gunn's (1984) ten preconditions for policy implementation. Results: Participants discussed how implementation is affected by accountability, external capacity, internal capacity, awareness and understanding of physical literacy, implementation expertise, and policy climate. Discussion: Participants voiced similar opinions on most issues, and the overall lack of attention given to physical education programs in schools will remain a major dilemma in combating such high physical inactivity levels.

Relevance:

90.00%

Publisher:

Abstract:

J. M. Coetzee's Foe is not only a post-colonial novel but also a rewriting of a classic, and its main themes are language, authorship, power and identity. Moreover, Foe is narrated by a woman, while written by a male, Nobel Prize-winning South African author. The aim of my thesis is to focus on the question of authorship and the role of language in Foe. Without any claim to be exhaustive, in the first section I examine selected extracts of Coetzee's book in order to provide an analysis of the novel. These quotations are mainly its metalinguistic parts and are analysed in the "theory" sections of my work, relying on literary theory and on previous studies of the novel. Among other things, I cover themes such as the relationship between speech and writing; the connection between writing, history and memory; the role of silence and alternative ways of communicating; and the relationship between literary authority and truth. These arguments form the foundation for my second section, in which I attempt to shed light on the importance of the novel from a linguistic point of view, while always keeping an eye on the implications this has for authorship. While it is true that Foe is less politically charged than Coetzee's previous works, it is above all a "journey of discovery" into the world of language and authorship. In fact, it becomes a warning for anyone immersed in the ocean of language: while everyone naturally tends to trust speech and writing as the only medium through which one can get closer to the truth, authority is never a synonym for reliability, and language is a system of communication behind which structures of power, misconceptions, lies and treacherous tides easily hide.

Relevance:

90.00%

Publisher:

Abstract:

The mass-accumulation rate and grain size of the total eolian component of North Pacific pelagic clays at Deep Sea Drilling Project Sites 576 and 578 have been used to evaluate changes in eolian sedimentation and the intensity of atmospheric circulation over the past 70 m.y. Eolian deposition, an indicator of source-area aridity, was low in the Paleocene, Eocene, and Oligocene, apparently reflecting the humid environments of that time as well as the lack of glacial erosion products. A general increase in eolian accumulation in the Miocene apparently reflects the relative increase in global aridity during the latter part of the Cenozoic. A dramatic increase in eolian accumulation rates in the Pliocene reflects the increased aridity and availability of glacial erosion products associated with Northern Hemisphere glaciation 2.5 m.y. ago. Eolian grain size, an indicator of wind intensity, suggests that Late Cretaceous wind strength was comparable to present-day wind strength. A sharp decrease in eolian grain size across the Paleocene/Eocene boundary is not readily interpreted, but may indicate a significant reduction in the intensity of atmospheric circulation at that time. Fine eolian grain size and low accumulation rates in the Eocene and early Oligocene are in agreement with low early Tertiary thermal gradients and less vigorous atmospheric circulation. Large increases in grain size during the Oligocene, mid-to-late Miocene, and Pliocene appear to be a response to steepening thermal gradients resulting from increasing polar isolation.
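
For readers unfamiliar with the metric, an eolian mass-accumulation rate is conventionally derived from the linear sedimentation rate, the dry-bulk density, and the eolian fraction of the sediment; the sketch below uses illustrative numbers, not values from Sites 576 and 578.

```python
# Hedged sketch: eolian mass-accumulation rate (g cm^-2 kyr^-1) as
# sedimentation rate x dry-bulk density x eolian fraction. Illustrative only.
def eolian_mar(sed_rate_cm_kyr: float, dry_bulk_density_g_cm3: float,
               eolian_fraction: float) -> float:
    return sed_rate_cm_kyr * dry_bulk_density_g_cm3 * eolian_fraction

print(eolian_mar(0.4, 0.55, 0.30))  # 0.066 g cm^-2 kyr^-1 (hypothetical core interval)
```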