979 results for Information Acquisition


Relevância:

20.00%

Publicador:

Resumo:

Sticky information monetary models have been used in the macroeconomic literature to explain some of the observed features of inflation dynamics. In this paper, we explore the consequences of relaxing the rational expectations assumption usually adopted in this type of model; in particular, by considering expectations formed through adaptive learning, it is possible to arrive at results other than the trivial convergence to a fixed-point long-term equilibrium. The results involve the possibility of endogenous cyclical motion (periodic and aperiodic), which emerges essentially in scenarios of hyperinflation. In low-inflation settings, the introduction of learning implies a less severe impact of monetary shocks, which nevertheless tend to last for additional time periods relative to the pure perfect-foresight setup.
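
A minimal sketch of the mechanism described above, under loudly stated assumptions: the nonlinear outcome map and all parameter values below are illustrative, not the paper's model. It shows how a constant-gain adaptive learning rule, which nudges the expectation toward each realized outcome instead of imposing model-consistency, can settle into a persistent cycle rather than a fixed point:

```python
import numpy as np

def simulate(gain=0.8, a=4.0, periods=200, e0=0.2):
    """Constant-gain adaptive learning: the expectation e is updated
    toward each realized outcome x rather than being model-consistent.
    The quadratic map below is an assumed stand-in for the model's
    nonlinear dynamics."""
    e = e0
    path = []
    for _ in range(periods):
        x = a * e * (1.0 - e)        # assumed nonlinear outcome map
        e = e + gain * (x - e)       # adaptive-learning update
        path.append(e)
    return np.array(path)

# With a high gain the learner locks into a persistent cycle;
# with a low gain the same map converges to the fixed point.
cyclic = simulate(gain=0.8)
convergent = simulate(gain=0.1)
```

With these illustrative values, the high-gain path keeps oscillating while the low-gain path collapses onto the long-term equilibrium, mirroring the fixed-point-versus-cycle distinction in the abstract.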

Relevância:

20.00%

Publicador:

Resumo:

The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some reference decoded frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, some interesting conclusions can be derived about the impact of motion field smoothness, and of its correlation to the true motion trajectories, on compression performance.
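
A toy sketch of the MCFI idea underlying the abstract (not the paper's actual estimator; block size, search range and the symmetric-trajectory assumption are illustrative): each block of the missing frame is interpolated by finding the linear motion between the past and future references that minimizes a SAD cost, then averaging the two motion-compensated predictions.

```python
import numpy as np

def mcfi(prev, nxt, block=8, search=4):
    """Toy motion compensated frame interpolation: for each block of the
    frame to interpolate, find the symmetric linear motion between prev
    and nxt minimizing SAD, then average the two predictions."""
    h, w = prev.shape
    out = np.zeros_like(prev, dtype=np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by - dy, bx - dx   # backward end of trajectory
                    y1, x1 = by + dy, bx + dx   # forward end of trajectory
                    if min(y0, x0, y1, x1) < 0:
                        continue
                    if y0 + block > h or y1 + block > h \
                            or x0 + block > w or x1 + block > w:
                        continue
                    p = prev[y0:y0+block, x0:x0+block].astype(float)
                    n = nxt[y1:y1+block, x1:x1+block].astype(float)
                    sad = np.abs(p - n).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            p = prev[by-dy:by-dy+block, bx-dx:bx-dx+block].astype(float)
            n = nxt[by+dy:by+dy+block, bx+dx:bx+dx+block].astype(float)
            out[by:by+block, bx:bx+block] = (p + n) / 2.0
    return out
```

The quality of `out` as side information is exactly what the paper's rate model ties to the accuracy of the recovered motion field.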

Relevância:

20.00%

Publicador:

Resumo:

One of the most efficient approaches to generating the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture in which the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some of the coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique in which a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coding blocks. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
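
One simple way to picture the per-block mode decision is sketched below. This is an illustration, not the paper's criterion: the proxy used here (disagreement between the two motion-compensated reference predictions as a sign of poor MCI correlation) and the threshold are assumptions.

```python
import numpy as np

def choose_si_mode(pred_prev, pred_next, threshold=15.0):
    """Illustrative per-block SI mode decision: when the motion-compensated
    predictions from the past and future references disagree strongly,
    the MCI side information is likely poorly correlated with the source,
    so the block is flagged for the MCQE mode (low-quality Intra block
    sent by the encoder and refined at the decoder). Threshold is assumed."""
    residual = np.abs(pred_prev.astype(float) - pred_next.astype(float))
    return "MCQE" if residual.mean() > threshold else "MCI"

a = np.full((8, 8), 100.0)
print(choose_si_mode(a, a + 2))   # small disagreement -> "MCI"
print(choose_si_mode(a, a + 60))  # large disagreement -> "MCQE"
```

The rate invested in the Intra blocks only pays off, as the abstract notes, on blocks where MCI would otherwise yield low-correlation SI, which is exactly the case this proxy tries to detect.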

Relevância:

20.00%

Publicador:

Resumo:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
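
A common form of motion-field regularization, shown here as a toy sketch (the 3x3 neighbourhood and the vector-median choice are assumptions, not the paper's specific technique), replaces each block's motion vector by the vector median of its neighbours, suppressing outliers that do not follow the true motion trajectories:

```python
import numpy as np

def regularize_motion_field(mv):
    """Toy motion-field regularization: replace each block's motion
    vector (mv has shape [h, w, 2]) by the vector median of its 3x3
    neighbourhood, i.e. the candidate minimizing the summed L2
    distance to all neighbours."""
    h, w, _ = mv.shape
    out = mv.copy()
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - 1), min(h, y + 2))
            xs = slice(max(0, x - 1), min(w, x + 2))
            cand = mv[ys, xs].reshape(-1, 2).astype(float)
            dist = np.linalg.norm(
                cand[:, None, :] - cand[None, :, :], axis=2).sum(axis=1)
            out[y, x] = cand[np.argmin(dist)]
    return out
```

Smoothing the field this way trades a little motion accuracy for consistency with the dominant trajectory, which is the property the abstract links to higher-quality side information.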

Relevância:

20.00%

Publicador:

Resumo:

Amorphous glass/ZnO-Al/p(a-Si:H)/i(a-Si:H)/n(a-Si1-xCx:H)/Al imagers with different n-layer resistivities were produced by the plasma enhanced chemical vapour deposition (PE-CVD) technique. An image is projected onto the sensing element and leads to spatially confined depletion regions that can be read out by scanning the photodiode with a low-power modulated laser beam. The essence of the scheme is the analog readout and the absence of semiconductor arrays or electrode potential manipulations to transfer the information coming from the transducer. The influence of the intensity of the optical image projected onto the sensor surface on the output characteristics (sensitivity, linearity, blooming, resolution and signal-to-noise ratio) is analysed for different material compositions (0.5 < x < 1). The results show that the responsivity and the spatial resolution are limited by the conductivity of the doped layers. An enhancement of one order of magnitude in the image intensity signal and in the spatial resolution is achieved at a 0.2 mW cm(-2) light flux by decreasing the n-layer conductivity by the same amount. A physical model supported by electrical simulation gives insight into the image-sensing technique used.

Relevância:

20.00%

Publicador:

Resumo:

Preliminary version

Relevância:

20.00%

Publicador:

Resumo:

The aim of this paper is to establish some basic guidelines to help draft the information letter sent to individual contributors, should it be decided to use this model in the Spanish public pension system. With this end in mind, and basing our work on the experiences of the most advanced countries in the field and on the pioneering papers by Jackson (2005), Larsson et al. (2008) and Sunden (2009), we look into the concept of "individual pension information" and identify its most relevant characteristics. We then give a detailed description of two models, those of the United States and Sweden, looking in particular at how they are structured, what aspects could be improved and what their limitations are. Finally, we make some recommendations of special interest for designing the model for Spain.

Relevância:

20.00%

Publicador:

Resumo:

Dealing with health problems requires mastery of cognitive processes (reasoning, problem solving and decision making) and of practical performance, which in turn calls for a specific set of attitudes and behaviours. This study implemented and evaluated the impact of pedagogical experiences developed with students of the Nervous System Radiology (RSN) course unit of the Lisbon Health School of Technology (ESTeSL). Mixed (face-to-face and virtual) teaching methodologies were applied in the theoretical and practical sessions of the 2008/2009 academic year. Honey and Mumford's method was used to evaluate the students' learning profiles, and checklists based on the program topics were applied to monitor and evaluate knowledge acquisition; monitoring of the Moodle platform tools supplemented the remaining information. A positive learning progression was observed for a group of students with a predominantly reflective style (mean = 10.6 students). The conclusions point to a positive impact of the hybrid methodologies, with the asynchronous methodology showing the highest success rate. Greater flexibility in accessing content was also observed, though with some limitations, such as initial resistance by the students, an increased workload for teachers, a lack of terminals for accessing the platform, and the limited experience of all involved in handling the platform.

Relevância:

20.00%

Publicador:

Resumo:

7th Mediterranean Conference on Information Systems, MCIS 2012, Guimaraes, Portugal, September 8-10, 2012, Proceedings Series: Lecture Notes in Business Information Processing, Vol. 129

Relevância:

20.00%

Publicador:

Resumo:

Introduction – The measurement of blood pressure (BP) using automatic devices is frequently performed in clinical practice and in self-measurement, providing reliable information for the diagnosis, monitoring and treatment of hypertension. However, many of the automatic devices available on the market have not been validated according to the existing protocols for this purpose. The aim of this study was to confirm the validation of the OMRON® M6 Comfort automatic BP measuring device according to the European Society of Hypertension (ESH) International Protocol revision 2010 for the validation of blood pressure measuring devices in adults. Methodology – The study involved 33 subjects; for each one, 9 sequential BP measurements were performed on the left arm with an aneroid sphygmomanometer alternating with the automatic device. The differences between the values obtained by the two devices for systolic (SBP) and diastolic (DBP) blood pressure were then evaluated and classified into three levels (≤ 5, ≤ 10 or ≤ 15 mmHg). The number of differences at each level was compared with the number required by the Protocol (phase 1.1). For each subject, the number of differences with values ≤ 5 mmHg was also determined: at least 24 of the 33 subjects must have 2 or 3 differences ≤ 5 mmHg, and at most 3 of the 33 subjects may have all 3 differences > 5 mmHg (phase 1.2). Results – The OMRON® M6 Comfort device passed phases 1.1 and 1.2 for both SBP and DBP. The mean difference between the BP measurements obtained by the automatic and manual devices was -0.82 ± 5.62 mmHg for SBP and 2.14 ± 5.15 mmHg for DBP. Conclusion – The OMRON® M6 Comfort device is valid for BP measurement in adults according to the 2010 ESH International Protocol.
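
The phase 1.2 bookkeeping stated in the abstract can be sketched directly in code (only the per-subject rule given in the text is reproduced; the phase 1.1 per-level thresholds are not specified in the abstract and are therefore not implemented):

```python
def esh_phase_1_2(diffs_per_subject):
    """ESH 2010 phase 1.2 check as described in the abstract.
    diffs_per_subject: list of 33 triples of absolute device-vs-reference
    BP differences in mmHg, one triple per subject."""
    # subjects with 2 or 3 differences <= 5 mmHg
    ok = sum(1 for t in diffs_per_subject
             if sum(1 for d in t if d <= 5) >= 2)
    # subjects whose 3 differences are all > 5 mmHg
    bad = sum(1 for t in diffs_per_subject
              if all(d > 5 for d in t))
    return ok >= 24 and bad <= 3

# 30 compliant subjects and 3 subjects with all differences > 5 mmHg
# is exactly at the limit allowed by the protocol.
subjects = [(2, 4, 3)] * 30 + [(6, 7, 8)] * 3
print(esh_phase_1_2(subjects))   # True
```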

Relevância:

20.00%

Publicador:

Resumo:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
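
The paper's joint decoding exchanges soft information between syndrome decoders; a much simpler illustration of the underlying fusion idea (assuming binary symmetric correlation channels with known crossover probabilities, and independence between the hypotheses — both assumptions of this sketch, not claims about the paper) is to add per-bit log-likelihood ratios from the two SI hypotheses:

```python
import math

def si_llr(bit, p):
    """LLR for a source bit given one SI bit, under an assumed binary
    symmetric correlation channel with crossover probability p."""
    l = math.log((1 - p) / p)
    return l if bit == 0 else -l

def fuse_llrs(si_a, si_b, p_a, p_b):
    """Fuse two SI hypotheses by adding their per-bit LLRs
    (valid when the two correlation channels are independent)."""
    return [si_llr(a, p_a) + si_llr(b, p_b) for a, b in zip(si_a, si_b)]

# Hypothesis A (crossover 0.1) is more reliable than B (crossover 0.3),
# so it dominates where the two disagree (third bit).
llrs = fuse_llrs([0, 1, 0], [0, 1, 1], p_a=0.1, p_b=0.3)
decisions = [0 if l > 0 else 1 for l in llrs]
print(decisions)   # [0, 1, 0]
```

In the actual codec these fused soft values would feed the belief-propagation syndrome decoders rather than being thresholded directly.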

Relevância:

20.00%

Publicador:

Resumo:

The large increase of renewable energy sources and Distributed Generation (DG) of electricity gives rise to the Virtual Power Producer (VPP) concept. VPPs can make electricity generation from renewable sources valuable in electricity markets. Information availability and adequate decision-support tools are crucial for achieving VPPs' goals. This involves information concerning the associated producers and market operation. This paper presents ViProd, a simulation tool that allows simulating VPP operation, focusing mainly on the information requirements for adequate decision making.

Relevância:

20.00%

Publicador:

Resumo:

To select each node by device and by context in urban computing, users have to put their plan information and their requests into a computing environment (e.g., PDAs, smart devices, laptops) in advance, and they try to keep the states between users and the computing environment optimized. However, because of bad contexts, users may reach a wrong decision; one of the users' demands may therefore be to request a good server with higher security. To address this issue, we define the structure of Dynamic State Information (DSI), which incorporates a security process including the relevant factors in the sending/receiving contexts and selects the best server during user movement, based on the server quality and security states held in the DSI. Finally, whenever some information changes, users and devices receive notices that include the security factors, so an automatic reaction becomes possible; therefore all users can safely use all devices in urban computing.
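
A minimal sketch of the selection step described above, with loudly flagged assumptions: the DSI field names, normalization and weighting scheme below are illustrative inventions, since the abstract does not specify the DSI record layout or scoring rule.

```python
def select_server(servers, w_quality=0.5, w_security=0.5):
    """Illustrative best-server selection from DSI-style state entries.
    Each entry is assumed to carry a quality score and a security score,
    both normalized to [0, 1]; the weights are also assumptions."""
    return max(servers,
               key=lambda s: w_quality * s["quality"]
                             + w_security * s["security"])

servers = [
    {"name": "A", "quality": 0.9, "security": 0.2},
    {"name": "B", "quality": 0.6, "security": 0.8},
]
print(select_server(servers)["name"])   # "B": better combined score
```

Re-running the selection whenever a DSI change notice arrives corresponds to the automatic reaction the abstract describes.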

Relevância:

20.00%

Publicador:

Resumo:

We describe a novel approach to exploring DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms of fixed-length DNA subsequences. After creating the DNA-related histograms, the correlation between each pair of histograms is computed, producing a global correlation matrix. These data are then used as input to several data processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed, and the extensive results reveal that the proposed method is able to generate significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
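
The histogram-then-correlation pipeline can be sketched as follows (a toy version: the subsequence length k = 2 and the use of Pearson correlation are illustrative choices, not necessarily the article's exact parameters):

```python
import itertools
import numpy as np

def seq_histogram(seq, k=2):
    """Histogram of all fixed-length (k-mer) subsequences of a DNA string,
    in a canonical A/C/G/T ordering."""
    kmers = ["".join(t) for t in itertools.product("ACGT", repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:          # skip windows with ambiguous bases
            counts[sub] += 1
    return np.array([counts[m] for m in kmers], dtype=float)

def correlation_matrix(seqs, k=2):
    """Pearson correlation between every pair of k-mer histograms,
    yielding the global correlation matrix used for further analysis."""
    hists = np.array([seq_histogram(s, k) for s in seqs])
    return np.corrcoef(hists)

m = correlation_matrix(["ACGTACGTACGT", "ACGTACGAACGT", "GGGGGGCCCCCC"])
print(m.shape)   # (3, 3)
```

Sequences with similar composition end up highly correlated, so the matrix naturally groups related chromosomes or species, which is what the downstream tabular/graphical methods exploit.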

Relevância:

20.00%

Publicador:

Resumo:

Introduction: The present paper deals with the increasing use of corporate merger and acquisition strategies within the pharmaceutical industry. The aim is to identify the triggers of this business phenomenon and its immediate impact on the financial outcome of two powerful biopharmaceutical corporations, Pfizer and GlaxoSmithKline, which were sampled due to their successful approach to the tactics in question. Materials and Methods: In order to create an overview of the development steps taken through mergers and acquisitions, the historical data of the two corporations were consulted on their official websites. The most relevant events were then matched with the corresponding information from the financial reports and statements of the two corporations, supplied by web-based financial data providers. Results and Discussions: In the past few decades, Pfizer and GlaxoSmithKline have purchased or merged with various companies in order to monopolize new markets, diversify their product and service portfolios, and survive and surpass competitors. The consequences proved to be positive, although this approach requires a certain capital availability. Conclusions: The results reveal that, as far as the two sampled companies are concerned, acquisitions and mergers are reactions to the pressure of a highly competitive environment. Moreover, the continuous diversification of the market's needs is also a consistent motive. However, the prevalence and prominence of merger and acquisition strategies are conditioned by the tender offer, the caliber of the announcing company, the status of research and development, and other factors determined by the internal and external actors of the market.