975 results for essential information


Relevance:

20.00%

Publisher:

Abstract:

Doctoral thesis, Marine Sciences (Marine Ecology), 26 November 2013, Universidade dos Açores.

Relevance:

20.00%

Publisher:

Abstract:

Knowledge is central to the modern economy and society. Indeed, the knowledge society has transformed the concept of knowledge and is increasingly aware of the need to overcome the lack of knowledge when it has to make choices or address its problems and dilemmas. One's knowledge is less based on exact facts and more on hypotheses, perceptions or indications. Even when we use new computational artefacts and novel methodologies for problem solving, such as Group Decision Support Systems (GDSSs), the question of incomplete information is in most situations marginalized. On the other hand, common sense tells us that when a decision is made it is impossible to have a perception of all the information involved and the nature of its intrinsic quality. Therefore, something has to be done in terms of the information available and the process of its evaluation. It is under this framework that a Multi-valued Extended Logic Programming language will be used for knowledge representation and reasoning, leading to a model that embodies the Quality-of-Information (QoI) and its quantification along the several stages of the decision-making process. In this way, it is possible to provide a measure of the value of the QoI that supports the decision itself. This model is presented here in the context of a GDSS for VirtualECare, a system aimed at sustaining online healthcare services.
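As a rough illustration of how incomplete information can be quantified in such a setting, the sketch below scores each attribute of a decision scenario with a simple QoI value. The particular scheme (1 for exactly known values, 1/n for values known only up to n alternatives, 0 for unknown values) and the record fields are assumptions for illustration, not necessarily the paper's exact model.

```python
# Hypothetical QoI scoring for incomplete knowledge in a decision scenario.
# The scheme (1 known, 1/len(candidates) partially known, 0 unknown) is an
# illustrative assumption, not the exact model used in the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attribute:
    name: str
    value: Optional[str] = None       # exact value, if known
    candidates: Optional[set] = None  # set of possible values, if only partially known

def qoi(attr: Attribute) -> float:
    """Quantify the quality of the information carried by one attribute."""
    if attr.value is not None:
        return 1.0                         # positive, exact knowledge
    if attr.candidates:
        return 1.0 / len(attr.candidates)  # known only up to a set of alternatives
    return 0.0                             # unknown (null) information

def scenario_qoi(attributes: list[Attribute]) -> float:
    """Aggregate QoI of a decision scenario as the mean attribute QoI."""
    return sum(qoi(a) for a in attributes) / len(attributes)

# Example: a fictitious patient record evaluated by the GDSS.
record = [
    Attribute("temperature", value="38.5"),
    Attribute("blood_pressure", candidates={"normal", "high"}),
    Attribute("allergies"),  # unknown
]
print(f"Scenario QoI = {scenario_qoi(record):.2f}")  # (1 + 0.5 + 0) / 3 = 0.50
```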

Relevance:

20.00%

Publisher:

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation methods provide the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform the H.264/AVC Intra solution, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
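As a rough illustration of the "guess" class mentioned above, the sketch below creates side information by block-based motion-compensated temporal interpolation between two decoded key frames. The block size, search range and SAD matching criterion are illustrative assumptions, not the specific techniques evaluated in the paper.

```python
# Sketch of "guess"-style side information creation: block-based
# motion-compensated temporal interpolation between two decoded key frames.
import numpy as np

def interpolate_side_info(prev, nxt, block=8, search=4):
    """Estimate the frame between `prev` and `nxt` (grayscale arrays, same shape)."""
    h, w = prev.shape
    side_info = np.zeros_like(prev, dtype=np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block].astype(np.float64)
            best, best_cost = (0, 0), np.inf
            # Find the motion vector from `prev` to `nxt` with minimum SAD.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = nxt[yy:yy + block, xx:xx + block].astype(np.float64)
                        cost = np.abs(ref - cand).sum()
                        if cost < best_cost:
                            best_cost, best = cost, (dy, dx)
            dy, dx = best
            cand = nxt[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.float64)
            # Simplification: average the two matched blocks and keep the block at its
            # co-located position (true bidirectional interpolation would place it
            # halfway along the motion trajectory).
            side_info[y:y + block, x:x + block] = 0.5 * (ref + cand)
    return side_info.astype(prev.dtype)
```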

Relevance:

20.00%

Publisher:

Abstract:

10th International Phycological Congress, Orlando, Florida, USA, 4-10 August 2013.

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

Deep Ocean Species. The little that is known mostly comes from collected specimens. In their Letter "Specimen collection: An essential tool" (23 May, 344: 814), L.A. Rocha et al. brilliantly discuss the importance of specimen collection and present the evolution of collecting from the mid-19th century to our present strict codes of conduct. However, it is also important to emphasize that the vast majority of deep ocean macro-organisms are known to us only because of collection, and this is a strong argument that should be present in our actions as scientists. If the deep is considered the least known of Earth's habitats (1% or so according to recent estimates), then what an awesome collection of yet-to-be-discovered species must still be there to be properly described? As the authors point out, citing (1), something around 86% of species remain unknown. Voucher specimens are fundamental for the reasons pointed out, and perhaps the vast depths of the world's oceans are the best example of that importance. The summary report of the 2010 Census of Marine Life (2) showed that, among the millions of specimens collected in both familiar and seldom-explored waters, the Census found more than 6,000 potentially new species and completed formal descriptions of more than 1,200 of them. It also found that a number of rare species are in fact common. Voucher specimens are essential and, again agreeing with L.A. Rocha et al.'s Letter (see above), the modern approach to collecting will not be a cause of extinctions but instead a valuable tool for knowledge and description and even, as seen above, a way to find out that supposedly rare species may not be that rare and may even prove to have abundant populations.

Relevance:

20.00%

Publisher:

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
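As a rough sketch of how multiple hypotheses can be combined using virtual-channel statistics, the example below fuses the hypotheses with an inverse-variance (MMSE-style) weighting. This simplified weighting, and estimating the noise variance against already-decoded samples, are assumptions for illustration rather than the paper's exact denoising algorithm.

```python
# Illustrative fusion of multiple side information hypotheses using estimated
# virtual-channel noise statistics (inverse-variance weighting, assumed here
# for simplicity).
import numpy as np

def estimate_variance(hypothesis, partially_decoded):
    """Estimate virtual-channel noise variance from already-decoded samples."""
    residual = hypothesis.astype(np.float64) - partially_decoded.astype(np.float64)
    return float(np.var(residual))

def fuse_hypotheses(hypotheses, noise_variances):
    """Fuse side information hypotheses, weighting each by 1 / estimated variance.

    hypotheses      : list of 2D arrays (candidate estimates of the frame)
    noise_variances : list of per-hypothesis virtual-channel noise variances
    """
    eps = 1e-9
    weights = np.array([1.0 / (v + eps) for v in noise_variances])
    weights /= weights.sum()
    fused = np.zeros_like(hypotheses[0], dtype=np.float64)
    for w, h in zip(weights, hypotheses):
        fused += w * h.astype(np.float64)
    return fused
```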

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation, Biotechnology in Biological Control, 6 June 2013, Universidade dos Açores.

Relevance:

20.00%

Publisher:

Abstract:

Copyright © 2014 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Auditing, supervised by Specialist Professor Carlos Quelhas Martins.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to the Instituto Politécnico do Porto to obtain the degree of Master in Logistics. Supervised by Professor Doutor José António Baptista da Costa; co-supervised by Dr. Lourenço Fernando Gomes Pinheiro.

Relevance:

20.00%

Publisher:

Abstract:

Background - The eukaryotic cytosolic chaperonin CCT is a hetero-oligomeric complex formed by two rings connected back-to-back, each composed of eight distinct subunits (CCTalpha to CCTzeta). The CCT complex mediates the folding of a wide range of newly synthesised proteins, including tubulin (alpha, beta and gamma) and actin as quantitatively major substrates. Methodology/Principal findings - We disrupted the genes encoding the CCTalpha and CCTdelta subunits in the ciliate Tetrahymena. Cells lacking the zygotic expression of either CCTalpha or CCTdelta showed a loss of cell body microtubules, failed to assemble new cilia and died within 2 cell cycles. We also show that loss of CCT subunit activity leads to axoneme shortening and splaying of the tips of axonemal microtubules. An epitope-tagged CCTalpha rescued the gene knockout phenotype and localized primarily to the tips of cilia. A mutation in CCTalpha, G346E, at a residue also present in the related protein implicated in Bardet-Biedl Syndrome, BBS6, also caused defects in cilia and impaired CCTalpha localization in cilia. Conclusions/Significance - Our results demonstrate that the CCT subunits are essential and required for ciliary assembly and maintenance of axoneme structure, especially at the tips of cilia.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Although relative uptake values are not the most important objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential to obtain a reliable result from the quantification process. In DMSA scans, however, the absence of significant background and the lesser attenuation in pediatric patients mean that these attenuation correction techniques are commonly not applied. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which a large number of NM departments do not acquire. This method and the attenuation factors proposed by Tonnesen will be correlated with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods applied. The mean acquisition time (time post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation methods (r=0.73 +/- 0.11), and the mean difference in the uptake values between the different methods was 4 +/- 3. The correlation was higher when the age was lower. The two attenuation correction methods correlated better with each other than with the "no attenuation correction" method (r=0.82 +/- 0.8), and the mean differences in the uptake values were 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is needed, then an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented and thus become a practical alternative, namely when the anterior projection, needed for the geometric mean methodology, is not acquired.
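As a minimal sketch of the two quantification routes discussed above, the example below computes relative kidney uptake both from the geometric mean of anterior/posterior counts and from posterior counts corrected for kidney depth. The attenuation coefficient and the example counts and depths are illustrative assumptions, and Tonnesen's depth formula itself is not reproduced here.

```python
# Sketch of relative kidney uptake: geometric mean vs. depth-corrected posterior
# counts. MU_TC99M (~0.153 cm^-1 at 140 keV) and all numbers are assumptions.
import math

MU_TC99M = 0.153  # assumed linear attenuation coefficient in soft tissue, cm^-1

def relative_uptake_geometric_mean(left_ant, left_post, right_ant, right_post):
    """Relative uptake (%) from background-corrected anterior/posterior counts."""
    gm_left = math.sqrt(left_ant * left_post)
    gm_right = math.sqrt(right_ant * right_post)
    total = gm_left + gm_right
    return 100.0 * gm_left / total, 100.0 * gm_right / total

def relative_uptake_posterior_only(left_post, right_post, depth_left_cm, depth_right_cm):
    """Relative uptake (%) from posterior counts corrected for kidney depth
    (depths could, for instance, come from Tonnesen-style estimates)."""
    corr_left = left_post * math.exp(MU_TC99M * depth_left_cm)
    corr_right = right_post * math.exp(MU_TC99M * depth_right_cm)
    total = corr_left + corr_right
    return 100.0 * corr_left / total, 100.0 * corr_right / total

# Example with made-up counts and depths:
print(relative_uptake_geometric_mean(5200, 6100, 4800, 5600))
print(relative_uptake_posterior_only(6100, 5600, 6.5, 7.0))
```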