976 results for Information Requirements: Data Availability


Relevance:

100.00%

Publisher:

Abstract:

For many learning tasks the duration of data collection can be greater than the time scale on which the underlying data distribution changes. The question we ask is how to include the information that data are aging. Ad hoc methods to achieve this include the use of validity windows that prevent the learning machine from making inferences based on old data, which introduces the problem of how to define the size of the validity windows. In this brief, a new adaptive Bayesian-inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory windows is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size. Thus, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of the posterior distribution of the weights. It was found that the algebraic tails of this posterior distribution give the learning algorithm the ability to cope with an evolving environment by permitting escape from local traps.
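The validity-window idea described above can be illustrated with a much simpler stand-in: an exponential-forgetting estimator of a drifting mean, whose decay factor plays the role of an effective validity window of 1/(1-lam) samples. This is only a sketch of the windowing intuition, not the adaptive Bayesian algorithm of the brief; the data stream and decay factor are invented for the example.

```python
import numpy as np

def forgetting_mean(stream, lam=0.95):
    """Track a drifting mean with exponential forgetting.
    The decay factor lam gives an effective validity window of 1/(1-lam) samples."""
    est, estimates = 0.0, []
    for i, x in enumerate(stream):
        est = lam * est + (1 - lam) * x if i else x
        estimates.append(est)
    return np.array(estimates)

# Invented drifting stream: the true mean jumps from 0 to 3 halfway through.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(3, 0.1, 200)])
est = forgetting_mean(stream)
```

Unlike a hard validity window, the exponential weights never fully discard old data; the brief's Bayesian treatment instead integrates over the window size itself.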

Abstract:

Vehicle activated signs (VAS) display a warning message when drivers exceed a particular speed threshold. VAS are often installed on local roads to display a warning message depending on the speed of the approaching vehicles. VAS are usually powered by mains electricity; however, battery and solar powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which was measured in terms of reduced mean speed and lower standard deviation. A comprehensive understanding of the effect of the VAS trigger speed on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length and direction of each vehicle were collected using Doppler radar installed at the roadside. A data driven calibration method for the radar used in the experiment was also developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers' speed at different sites and at different times of the day. It is evident that the optimal trigger speed should be set near the 85th percentile speed in order to lower the standard deviation. In the case of battery and solar powered VAS, trigger speeds between the 50th and 85th percentile offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles exhibit differences in mean speed and standard deviation; on a highway, the mean speed of cars differs only slightly from the mean speed of trucks, whereas a significant difference was observed between vehicle classes on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data driven approach using Random Forest was found to be appropriate for predicting trigger speeds with respect to vehicle type and traffic conditions.
The fact that the predicted trigger speed was found to be consistently around the 85th percentile speed justifies the choice of the automatic model.
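As a small illustration of the percentile rule discussed above, a trigger speed can be read off the empirical speed distribution; the sample speeds and helper name below are invented for the example, not taken from the thesis.

```python
import numpy as np

def trigger_speed(speeds, percentile=85):
    """Set the VAS trigger speed at a given percentile of observed speeds."""
    return float(np.percentile(speeds, percentile))

# Hypothetical speed observations (km/h) from a local road.
speeds = [42, 45, 47, 48, 50, 51, 53, 55, 58, 62]
print(trigger_speed(speeds))       # 85th percentile, the safety-oriented setting
print(trigger_speed(speeds, 50))   # median, for power-constrained battery/solar signs
```

For a battery or solar powered sign, any percentile between the two calls above trades fewer activations (less power) against a weaker speed-reduction effect.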

Abstract:

Solar-powered vehicle activated signs (VAS) are speed warning signs powered by batteries that are recharged by solar panels. These signs are more desirable than other active warning signs due to their low installation cost and minimal maintenance requirements. However, one problem that can affect a solar-powered VAS is the limited power capacity available to keep the sign operational. In order to operate the sign more efficiently, it is proposed that the sign be appropriately triggered by taking into account the prevailing conditions. Triggering the sign depends on many factors, such as the prevailing speed limit, road geometry, traffic behaviour, the weather and the number of hours of daylight. The main goal of this paper is therefore to develop an intelligent algorithm that helps optimize the trigger point to achieve the best compromise between speed reduction and power consumption. Data have been systematically collected whereby vehicle speed data were gathered whilst varying the value of the trigger speed threshold. A two-stage algorithm is then utilized to extract the trigger speed value. Initially the algorithm employs a Self-Organising Map (SOM) to effectively visualize and explore the properties of the data, which are then clustered in the second stage using the K-means method. Preliminary results achieved in the study indicate that using a SOM in conjunction with K-means performs better than direct clustering of the data by K-means alone. Using a SOM in the current case helped determine the number of clusters in the data set, which is a frequent problem in data clustering.
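The two-stage SOM-plus-K-means procedure can be sketched as follows. This is a minimal, self-contained illustration with synthetic data and simplified training schedules, not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Minimal Self-Organising Map: returns a codebook of shape (rows*cols, dim)."""
    rows, cols = grid
    n_nodes, dim = rows * cols, data.shape[1]
    # Node coordinates on the 2-D grid, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    weights = rng.random((n_nodes, dim))
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)                    # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 1e-3       # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def kmeans(points, k, iters=50):
    """Plain K-means applied in stage two to the SOM codebook vectors."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic clusters standing in for the speed data (invented).
data = np.vstack([rng.normal(0.2, 0.05, (100, 2)), rng.normal(0.8, 0.05, (100, 2))])
codebook = train_som(data)
labels, centers = kmeans(codebook, k=2)
```

Clustering the small codebook instead of the raw observations is what makes the second stage cheap, and inspecting the trained map is how the number of clusters can be judged before K-means is run.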

Abstract:

This paper develops background considerations to help better frame the results of a CGE exercise. Three main criticisms are usually levelled at CGE efforts. First, they are too aggregate: their conclusions fail to shed light on relevant sectors or issues. Second, they imply huge data requirements; timeliness is frequently jeopardised by out-dated sources, with benchmarks referring to realities gone by. Finally, results are meaningless, as they answer wrong or ill-posed questions; modelling demands end up creating a rather artificial context in which the original questions lose content. In spite of a positive outlook on the first two, the crucial questions lie in the third point. After elaborating such questions, and trying to answer some, the text argues that CGE models can come closer to reality. Although their use is still too scarce to give way to a fruitful symbiosis between negotiations and simulation results, they remain the only available technique providing a global, inter-related way of capturing the economy-wide effects of several different policies. International organisations can play a major role in supporting and encouraging improvements. They are also uniquely positioned to enhance information and data sharing, and to bring people from various origins together to share their experiences. Serious and complex homework is, however, required to correct at least the most dangerous present shortcomings of the technique.

Abstract:

Health care for the population in Brazil generates a large volume of data about the health services provided. Proper treatment of these data, using techniques for accessing large masses of data, can allow the extraction of important information for a better understanding of the health sector. Evaluating the performance of health systems using the mass of data produced has been a worldwide trend, since several countries already maintain evaluation programmes based on data and indicators. In this context, the OECD (Organisation for Economic Co-operation and Development), an international organisation that evaluates the economic policies of its 34 member countries, has a biennial publication called Health at a Glance, whose purpose is to compare the health systems of OECD member countries. Although Brazil is not a member, the OECD seeks to include it in the calculation of some indicators when data are available, since it considers Brazil one of the largest non-member economies. The present study aims to propose and implement, based on the methodology of the 2015 Health at a Glance publication, the calculation for Brazil of the 22 health indicators that make up the "utilisation of health services" domain of the OECD publication. To this end, a survey of the main available national health databases was carried out; these were subsequently captured, as needed, using techniques for accessing and processing the large volume of health data in Brazil. The databases used come from three main payment sources: SUS, private health plans, and other payment sources such as public health plans, DPVAT and out-of-pocket payment.
This work showed that the publicly available health data in Brazil can be used to evaluate the performance of the health system. Besides including Brazil in the international benchmark of OECD countries for these 22 indicators, it enabled the comparison of these indicators between Brazil's public health sector, the SUS, and the private health plan sector, the so-called supplementary health sector. In addition, it was also possible to compare the indicators calculated for the SUS for each state (UF), thus demonstrating the differences in public health service provision across Brazilian states. The analysis of the results showed that, in general, Brazil performs below the average of the OECD countries, indicating the need for efforts to reach a higher level in the provision of the health services covered by the calculated indicators. When segmented into SUS and supplementary health, the analysis of Brazil's indicators shows the supplementary health sector approaching the average of the other OECD countries, while the SUS moves away from this average. This highlights the difference in the level of service provision within Brazil between the SUS and the supplementary sector. Finally, as a proposal to improve the quality of the results obtained in this study, the use of the TISS/ANS database is suggested for information from the supplementary health sector, since TISS reflects all the information exchanged between health service providers and private health plan operators for the purpose of payment for the services provided.

Abstract:

This paper presents the use of a learning object (CADILAG), developed to facilitate the understanding of data structure operations by using visual presentations and animations. CADILAG allows visualizing the behavior of algorithms usually discussed in Computer Science and Information Systems courses. For each data structure it is possible to visualize its content and its operation dynamically. Its use was evaluated and the results are presented. © 2012 AISTI.

Abstract:

Graduate programme in Geosciences and Environment - IGCE

Abstract:

Graduate programme in Nursing (professional master's) - FMB

Abstract:

Wireless technologies have evolved rapidly in recent decades, as they are an efficient alternative for transmitting information, whether data, voice, video or other network services. Knowledge of how this information propagates in different environments is a factor of great importance for the planning and development of wireless communication systems. Due to the rapid advance and popularisation of these networks, the services offered have become more complex and therefore carry quality requirements that must be met for them to be delivered satisfactorily to the end user. Consequently, the designers of these systems need a methodology that offers a better evaluation of the indoor environment. This evaluation is performed by analysing the coverage area and the behaviour of multimedia service metrics at any position in the environment receiving the service. The work developed in this dissertation aims to evaluate a methodology for predicting quality of experience metrics. To this end, measurement campaigns of video transmissions over a wireless network were carried out, and several network parameters (packet/frame jitter, packet/frame loss) and quality of experience parameters (PSNR, SSIM and VQM) were evaluated. The results showed good agreement with models from the literature and with the measurements.
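As an illustration of one of the quality-of-experience metrics evaluated above, the PSNR between a reference frame and a degraded frame can be computed as follows; the frames are synthetic 8-bit arrays invented for the example.

```python
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between two 8-bit video frames."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a flat grey frame and a copy with a uniform +5 error.
ref = np.full((48, 64), 128, dtype=np.uint8)
deg = ref + 5
print(round(psnr(ref, deg), 2))  # ≈ 34.15 dB
```

Higher PSNR means the received frame is closer to the transmitted one; in a measurement campaign such as the one described, the metric would be computed per frame and aggregated over the transmission.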

Abstract:

Graduate programme in Computer Science - IBILCE

Abstract:

The term “user study” covers information use patterns, information needs, and information-seeking behaviour. Information-seeking behaviour and information access patterns are areas of active interest among librarians and information scientists. This article reports on a study of the information requirements, the usefulness of library resources and services, and the problems encountered by faculty members of two arts and science colleges, Government Arts & Science College and Sri Raghavendra Arts & Science College, Chidambaram.

Abstract:

Modern control systems are becoming more and more complex, and control algorithms more and more sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to the increasing requirements of availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). Firstly, detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with dynamic models of the actuators and the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels. A set of sensors provides satellite angular velocity, attitude and flywheel spin rate information. Then, general overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are presented, and the design and implementation of a novel diagnosis system is described. The system consists of an FDI module composed of properly organized model-based residual filters, exploiting the available input and output information to detect and localize a fault that has occurred. A proper fault mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, makes it possible to obtain accurate fault estimates. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control.
Several simulation results are given to highlight the performance of the overall designed system for different types of faults affecting the ADCS actuators and sensors.
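The residual-filter idea above can be reduced to a minimal sketch: compare a measured signal against a model prediction and flag samples whose residual exceeds a threshold. This stands in for, and greatly simplifies, the thesis's nonlinear-geometric residual filters; the wheel dynamics, noise level and injected fault below are all invented.

```python
import numpy as np

def detect_fault(measured, predicted, threshold):
    """Flag the time steps where the model-based residual exceeds a fixed threshold."""
    residual = np.abs(measured - predicted)
    return residual > threshold

# Invented example: a reaction-wheel spin rate with a bias fault injected at t >= 60.
t = np.arange(100)
predicted = 100.0 + 0.1 * t                                    # model-predicted spin rate
measured = predicted + np.random.default_rng(1).normal(0, 0.05, 100)  # sensor noise
measured[60:] += 2.0                                           # additive bias fault
flags = detect_fault(measured, predicted, threshold=1.0)
print(flags[:60].any(), flags[60:].all())
```

A real FDI module would use a bank of such residuals, each decoupled from disturbances and sensitive to a specific fault set, so that the pattern of triggered residuals localizes the fault.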

Abstract:

Motivation: Array CGH technologies enable the simultaneous measurement of DNA copy number for thousands of sites on a genome. We developed the circular binary segmentation (CBS) algorithm to divide the genome into regions of equal copy number (Olshen {\it et~al}, 2004). The algorithm tests for change-points using a maximal $t$-statistic with a permutation reference distribution to obtain the corresponding $p$-value. The number of computations required for the maximal test statistic is $O(N^2)$, where $N$ is the number of markers. This makes the full permutation approach computationally prohibitive for the newer arrays that contain tens of thousands of markers and highlights the need for a faster algorithm. Results: We present a hybrid approach to obtain the $p$-value of the test statistic in linear time. We also introduce a rule for stopping early when there is strong evidence for the presence of a change. We show through simulations that the hybrid approach provides a substantial gain in speed with only a negligible loss in accuracy, and that the stopping rule further increases speed. We also present the analysis of array CGH data from a breast cancer cell line to show the impact of the new approaches on the analysis of real data. Availability: An R (R Development Core Team, 2006) version of the CBS algorithm has been implemented in the ``DNAcopy'' package of the Bioconductor project (Gentleman {\it et~al}, 2004). The proposed hybrid method for the $p$-value is available in version 1.2.1 or higher, and the stopping rule for declaring a change early is available in version 1.5.1 or higher.
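The full permutation approach that the hybrid method speeds up can be sketched as follows: compute the maximal $t$-type statistic over candidate change-points, then compare it against the same statistic recomputed on permuted data. This is an illustrative sketch with invented data and a simplified (single change-point) statistic, not the CBS or hybrid implementation in DNAcopy.

```python
import numpy as np

def max_t_stat(x):
    """Maximal two-sample t-like statistic over all candidate change-points."""
    n, best = len(x), 0.0
    for i in range(2, n - 1):  # split into x[:i] and x[i:]
        a, b = x[:i], x[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if se > 0:
            best = max(best, abs(a.mean() - b.mean()) / se)
    return best

def permutation_pvalue(x, n_perm=500, seed=0):
    """Fraction of permutations whose maximal statistic exceeds the observed one."""
    rng = np.random.default_rng(seed)
    observed = max_t_stat(x)
    count = sum(max_t_stat(rng.permutation(x)) >= observed for _ in range(n_perm))
    return count / n_perm

# Invented data with an obvious copy-number shift at position 25.
x = np.concatenate([np.zeros(25), np.full(25, 1.5)])
x += np.random.default_rng(2).normal(0, 0.3, 50)
print(permutation_pvalue(x))
```

Each permutation here costs a pass over all split points, which is exactly the $O(N^2)$ burden per permutation that makes the full approach prohibitive for large arrays and motivates the linear-time hybrid.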

Abstract:

Advances in information technology and global data availability have opened the door for assessments of sustainable development at a truly macro scale. It is now fairly easy to conduct a study of sustainability using the entire planet as the unit of analysis; this is precisely what this work set out to accomplish. The study began by examining some of the best known composite indicator frameworks developed to measure sustainability at the country level today. Most of these were found to value human development factors and a clean local environment, but to gravely overlook consumption of (remote) resources in relation to nature’s capacity to renew them, a basic requirement for a sustainable state. Thus, a new measuring standard is proposed, based on the Global Sustainability Quadrant approach. In a two‐dimensional plot of nations’ Human Development Index (HDI) vs. their Ecological Footprint (EF) per capita, the Sustainability Quadrant is defined by the area where both dimensions satisfy the minimum conditions of sustainable development: an HDI score above 0.8 (considered ‘high’ human development), and an EF below the fair Earth‐share of 2.063 global hectares per person. After developing methods to identify those countries that are closest to the Quadrant in the present‐day and, most importantly, those that are moving towards it over time, the study tackled the question: what indicators of performance set these countries apart? To answer this, an analysis of raw data, covering a wide array of environmental, social, economic, and governance performance metrics, was undertaken. The analysis used country rank lists for each individual metric and compared them, using the Pearson Product Moment Correlation function, to the rank lists generated by the proximity/movement relative to the Quadrant measuring methods. 
The analysis yielded a list of metrics which are, with a high degree of statistical significance, associated with proximity to, and movement towards, the Quadrant; most notably:
Favorable for sustainable development: use of contraception, high life expectancy, high literacy rate, and urbanization.
Unfavorable for sustainable development: high GDP per capita, high language diversity, high energy consumption, and high meat consumption.
A momentary gain, but a burden in the long run: high carbon footprint and debt.
These results could serve as a solid stepping stone for the development of more reliable composite index frameworks for assessing countries' sustainability.
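The Quadrant membership test defined above (an HDI above 0.8 and an Ecological Footprint below the fair Earth-share of 2.063 global hectares per person) can be written directly; the country profiles below are hypothetical, not real data.

```python
def in_sustainability_quadrant(hdi, ef_per_capita):
    """True if a country meets both minimum conditions of the Sustainability
    Quadrant: HDI above 0.8 ('high' human development) and an Ecological
    Footprint below the fair Earth-share of 2.063 global hectares per person."""
    return hdi > 0.8 and ef_per_capita < 2.063

# Hypothetical country profiles (HDI, EF in global hectares per capita).
print(in_sustainability_quadrant(0.85, 1.9))  # True: high development, low footprint
print(in_sustainability_quadrant(0.92, 7.0))  # False: footprint far above fair share
```

Proximity and movement relative to the Quadrant, as used in the study, would additionally require a distance measure in the HDI-EF plane and time-series data per country.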

Abstract:

The recent liberalization of the German energy market has forced the energy industry to develop and install new information systems to support agents on the energy trading floors in their analytical tasks. Besides classical approaches of building a data warehouse that gives insight into the time series to understand market and pricing mechanisms, it is crucial to provide a variety of external data from the web. Weather information, as well as political news and market rumors, is relevant to giving the appropriate interpretation to the variables of a volatile energy market. Starting from a multidimensional data model and a collection of buy and sell transactions, a data warehouse is built that gives analytical support to the agents. Following the idea of web farming, we harvest the web, match the external information sources to the data warehouse objects after a filtering and evaluation process, and present this qualified information on a user interface where market values are correlated with those external sources over the time axis.