73 results for Dunning's Eclectic Paradigm

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: Fatty liver disease (FLD) is an increasingly prevalent disease that can be reversed if detected early. Ultrasound is the safest and most ubiquitous method for identifying FLD. Since expert sonographers are required to accurately interpret liver ultrasound images, a lack of such expertise results in interobserver variability. For more objective interpretation, high accuracy, and quick second opinions, computer-aided diagnostic (CAD) techniques may be exploited. The purpose of this work is to develop one such CAD technique for the accurate classification of normal livers and abnormal livers affected by FLD. METHODS: In this paper, the authors present a CAD technique (called Symtosis) that uses a novel combination of significant features, based on the texture, wavelet transform, and higher-order spectra of the liver ultrasound images, in various supervised learning-based classifiers in order to determine parameters that classify normal and FLD-affected abnormal livers. RESULTS: Evaluating the proposed technique on a database of 58 abnormal and 42 normal liver ultrasound images, the authors achieved a high classification accuracy of 93.3% using the decision tree classifier. CONCLUSIONS: This high accuracy, combined with the completely automated classification procedure, makes the authors' proposed technique highly suitable for clinical deployment and usage.
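The classification step above can be illustrated with a minimal decision-stump sketch: a one-level decision tree that searches for the single feature threshold best separating the two classes. The echo-intensity feature and all values below are hypothetical stand-ins, not the authors' actual Symtosis features or data.

```python
# One-level decision tree ("stump"): search for the feature threshold that
# best separates the two classes on the training data.

def best_stump(samples):
    """samples: list of (feature_value, label), label in {"normal", "fld"}.
    Returns (threshold, training_accuracy)."""
    best_t, best_acc = None, 0.0
    values = sorted(v for v, _ in samples)
    # Candidate thresholds: midpoints between consecutive sorted values.
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2
        correct = sum((label == "fld") == (value > t)
                      for value, label in samples)
        acc = correct / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# FLD livers tend to appear hyperechoic (brighter), so a higher value of
# this hypothetical intensity feature votes for the "fld" class.
data = [(0.2, "normal"), (0.3, "normal"), (0.4, "normal"),
        (0.35, "fld"), (0.7, "fld"), (0.8, "fld"), (0.9, "fld")]
threshold, accuracy = best_stump(data)
```

A real decision tree recurses this threshold search over many features; the stump shows only the core split criterion.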

Relevance:

10.00%

Publisher:

Abstract:

Today, information overload and the lack of systems for locating employees with the right knowledge or skills are common challenges in large organisations. They force knowledge workers to reinvent the wheel and make it difficult to retrieve information from both internal and external resources. In addition, information changes dynamically, and ownership of data is moving from corporations to individuals. However, a set of web-based tools may bring about major progress in the way people collaborate and share their knowledge. This article aims to analyse the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review was conducted to present the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented to business applications, the characteristics of the segments and tools involved were reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm implies not only tools but also changes in the way people collaborate and in the way work is done (processes), and it finally impacts other technologies. Finally, gaps in the literature in this area are outlined.

Relevance:

10.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source correlation at the decoder rather than at the encoder as in predictive video coding. Although many improvements have been made over recent years, the performance of state-of-the-art WZ video codecs still has not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of a considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation to a WZ video codec of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard. The results obtained confirm the subjective quality improvement and objective quality gains of up to 0.63 dB overall for sequences with high motion content when large groups of pictures are used.
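The adaptive idea behind such a deblocking filter can be sketched in one dimension: smooth across a block boundary only when the discontinuity is small enough to be a coding artefact, and leave large jumps (true edges) untouched. The block size, threshold, and sample values below are arbitrary illustrations, not the H.264/AVC filter's actual boundary-strength logic.

```python
# Illustrative 1D deblocking: smooth the sample pair straddling each block
# boundary only when the jump there looks like a blocking artefact.

def deblock_1d(samples, block=4, threshold=8):
    out = list(samples)
    for b in range(block, len(samples), block):
        p, q = out[b - 1], out[b]          # samples on either side of the boundary
        if 0 < abs(p - q) <= threshold:    # small jump -> likely blocking artefact
            avg = (p + q) / 2
            out[b - 1] = (p + avg) / 2     # pull both sides toward the mean
            out[b] = (q + avg) / 2
    return out

# A small step at the first boundary (artefact) and a large one at the
# second (true edge): only the first should be smoothed.
row = [10, 10, 10, 10, 16, 16, 16, 16, 60, 60, 60, 60]
smoothed = deblock_1d(row)
```

The adaptivity is the threshold test: the filter deliberately skips boundaries whose discontinuity is too large to be a coding artefact, which is what preserves genuine edges.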

Relevance:

10.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback-channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves, relative to the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.

Relevance:

10.00%

Publisher:

Abstract:

Exposure assessment is an important step of the risk assessment process and has evolved more quickly than perhaps any other aspect of the four-step risk paradigm (hazard identification, exposure assessment, dose-response analysis, and risk characterization). Nevertheless, some epidemiological studies have associated adverse health effects with chemical exposure despite inadequate or absent exposure quantification. In addition to the metric used, whether measurements truly represent exposure depends on the sampling strategy, the random collection of measurements, and the similarity between the measured and unmeasured exposure groups. Two environmental monitoring methodologies for occupational exposure to formaldehyde were used to assess the influence of metric selection on exposure assessment and, consequently, on the risk assessment process.

Relevance:

10.00%

Publisher:

Abstract:

This project aims to provide a service platform for managing and accounting remunerable time, through the recording of working hours, vacations, and absences (justified or not). It is intended to provide reports based on this information, together with automatic data analysis, such as detecting excessive absences and overlapping vacations among workers. The emphasis of the project is on providing an architecture that facilitates the inclusion of these features. The project is implemented on the Google App Engine (GAE) platform, in order to provide a solution under the Software as a Service paradigm, with guaranteed availability and data replication. The platform was chosen after analysing the main existing cloud platforms: Google App Engine, Windows Azure, and Amazon Web Services. The characteristics of each platform were analysed, namely the programming models, the data models provided, the existing services, and their respective costs. The platform choice was based on its characteristics at the start of this project. The solution is structured in layers, with the following components: platform interface, business logic, and data-access logic. The interface is designed following REST architectural principles and supports data in JSON and XML formats. An authorization component, supported by Spring Security, was added to this base architecture, with authentication delegated to the Google Accounts service. The Dependency Injection pattern was used to decouple the various layers; its use reduces dependence on the technologies adopted in each layer. A prototype was implemented to demonstrate the work carried out, allowing interaction with the implemented service features via AJAX requests. This prototype took advantage of several JavaScript libraries and patterns that simplified its development, such as model-view-viewmodel through data binding. To support development, an agile approach based on Scrum was adopted in order to implement the system requirements, expressed as user stories. To ensure the quality of the service implementation, unit tests were performed; functionality was analysed beforehand, and documentation was subsequently produced using UML diagrams.
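The decoupling via Dependency Injection described above can be sketched with constructor injection: the business-logic layer receives an abstract data-access interface instead of constructing a concrete datastore itself. All class and method names here are illustrative, not the project's actual API.

```python
# Constructor injection: the service depends on an abstract repository, so
# the storage backend can be swapped without touching the business logic.

from abc import ABC, abstractmethod

class TimeEntryRepository(ABC):
    @abstractmethod
    def add(self, worker, hours): ...
    @abstractmethod
    def total(self, worker): ...

class InMemoryRepository(TimeEntryRepository):
    """A fake backend, useful for unit tests."""
    def __init__(self):
        self._entries = {}
    def add(self, worker, hours):
        self._entries.setdefault(worker, []).append(hours)
    def total(self, worker):
        return sum(self._entries.get(worker, []))

class TimeTrackingService:
    def __init__(self, repository: TimeEntryRepository):
        self._repo = repository          # dependency injected, not constructed here
    def log_hours(self, worker, hours):
        if hours <= 0:
            raise ValueError("hours must be positive")
        self._repo.add(worker, hours)
    def billable_total(self, worker):
        return self._repo.total(worker)

service = TimeTrackingService(InMemoryRepository())
service.log_hours("alice", 8)
service.log_hours("alice", 7.5)
```

In production, the same service could receive a datastore-backed repository; in unit tests, the in-memory fake, with no change to the business logic.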

Relevance:

10.00%

Publisher:

Abstract:

Driven by the scarcity of fossil fuels and the desired control of harmful atmospheric emissions, the world is witnessing the development of a new paradigm: electric mobility. Despite variations in governments' political will and in the pace of technological development related to electric vehicles, electric mobility should no longer be regarded as a fad but as a direction for the future of mobility. Portugal, having shown that it intends to be at the forefront of this challenge, needs to consider and understand under what conditions a national infrastructure capable of making the electric vehicle succeed will exist. This work therefore analyses the impact of electric mobility on some of these infrastructures, namely multi-dwelling buildings and low-voltage distribution networks. Four EV charging profiles are created in this context: charging upon arrival at home; charging during off-peak hours with a start time programmed by the driver; charging during off-peak hours controlled by a network operator ("Smart Grid"); and a scenario contemplating the use of V2G. Given the legal obligation to install sockets for electric vehicles in new buildings, these scenarios are used to study whether electrical installations can continue to be designed, without changing certain legal provisions, under the existing regulations. The impact of this new consumption on the load diagram of the National Electricity System is also studied, using the created scenarios and the forecast of electric vehicle sales up to 2020. It is thus shown that the introduction of intelligent energy distribution systems (Smart Grid and vehicle-to-grid, V2G) should be regarded as the solution that will best contribute to making use of existing infrastructures while keeping electric vehicles affordable to use.
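The effect of the charging profiles described above on a load diagram can be illustrated with a toy hourly model: the same EV charging demand is added either at arrival time or shifted to off-peak hours. All figures below are invented for illustration, not the study's data or scenarios.

```python
# Toy 24-hour load model comparing two of the charging profiles: charging on
# arrival at home vs. charging shifted to off-peak hours.

def add_charging(base_load, ev_kw, hours):
    """Return base_load (kW per hour) with ev_kw added in the given hours."""
    load = list(base_load)
    for h in hours:
        load[h] += ev_kw
    return load

base = [30] * 24
base[18:22] = [50, 55, 52, 45]           # hypothetical evening residential peak

arrive_home = add_charging(base, ev_kw=10, hours=[19, 20, 21])
off_peak = add_charging(base, ev_kw=10, hours=[2, 3, 4])
```

The point the scenarios make is visible even in this toy model: charging on arrival stacks on top of the evening peak, while off-peak charging leaves the peak unchanged and fills the night-time valley of the load diagram.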

Relevance:

10.00%

Publisher:

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks, and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
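The core of motion-compensated frame interpolation can be sketched in one dimension: estimate a displacement between the two decoded reference frames, then average motion-compensated samples from both to predict the missing middle frame. A real WZ codec works on 2D blocks with a regularized motion field; this toy version, with invented sample values, shows only the principle.

```python
# 1D motion-compensated interpolation: find the shift aligning the two
# reference frames, then average half-shifted samples from each side.

def estimate_shift(prev, nxt, max_shift=3):
    """Integer displacement d minimizing the sum of absolute differences
    between nxt and prev shifted by d."""
    n = len(prev)
    best, best_cost = 0, float("inf")
    for d in range(-max_shift, max_shift + 1):
        cost = sum(abs(nxt[i] - prev[i - d])
                   for i in range(n) if 0 <= i - d < n)
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def interpolate_middle(prev, nxt):
    d = estimate_shift(prev, nxt)
    n = len(prev)
    half, rest = d // 2, d - d // 2       # split the motion across the gap
    return [(prev[i - half] + nxt[i + rest]) / 2
            if 0 <= i - half < n and 0 <= i + rest < n else 0
            for i in range(n)]

prev_frame = [0, 0, 5, 9, 5, 0, 0, 0]
next_frame = [0, 0, 0, 0, 5, 9, 5, 0]    # same pattern displaced by two samples
side_info = interpolate_middle(prev_frame, next_frame)
```

With a displacement of two samples between the references, the interpolated frame places the pattern halfway between them, which is exactly the role of side information for the missing WZ frame.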

Relevance:

10.00%

Publisher:

Abstract:

The aim of this longitudinal study was to investigate the effect of a set of factors from multiple levels of influence (infant temperament, infant regulatory behavior, and maternal sensitivity) on infants' attachment. Our sample consisted of 48 infants born prematurely and their mothers. At 1 and 3 months of age, mothers described their infants' behavior using the Escala de Temperamento do Bebé. At 3 months of age, infants' capacity to regulate stress was evaluated during Tronick's Face-to-Face Still-Face (FFSF) paradigm. At 9 months of age, mothers' sensitivity was evaluated during free play using the CARE-Index. At 12 months of age, infants' attachment security was assessed during Ainsworth's Strange Situation. A total of 16 infants were classified as securely attached, 17 as insecure-avoidant, and 15 as insecure-resistant. Mothers of securely attached infants were more likely than mothers of insecure infants to describe their infants as less difficult and to be more sensitive to their infants in free play. In turn, secure infants exhibited more positive responses during the Still-Face. Infants classified as insecure-avoidant were more likely to self-comfort during the Still-Face and had mothers who were more controlling during free play. Insecure-resistant infants exhibited higher levels of negative arousal during the Still-Face and had mothers who were more unresponsive in free play. These findings show that attachment quality is influenced by multiple factors, including infant temperament, coping behavior, and maternal sensitivity.

Relevance:

10.00%

Publisher:

Abstract:

At the centre of this theoretical reflection is the socio-political conjuncture that caused the 1st cycle of basic education to (re)emerge as an education policy "problem", that is, as a priority area for the State, through the Government, to formulate and implement an operational "model" for the "Full-Time School" (ETI) policy. We analyse this policy by reference to representations of a "new educational model" (educational dimension), a "new paradigm of public school" (political dimension), and a "new conception of educational administration" (administrative dimension). The heuristic framework of "public policy analysis" allows us to highlight governmental representations and action.

Relevance:

10.00%

Publisher:

Abstract:

Master's in Socio-Organisational Intervention in Health - Specialisation area: Health Services Administration and Management Policies.

Relevance:

10.00%

Publisher:

Abstract:

Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, more and more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include chemistry, computational biology, and bioinformatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application terminates abruptly because of a failure (whether the cause is a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost, and when the application is later re-initiated, it has to restart all of its work from scratch, wasting resources and time, while remaining prone to another failure and offering no deadline guarantees. Our proposed solution to these issues is to incorporate checkpointing and migration mechanisms in a JVM. These make applications more robust and flexible, as they become able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes Research Virtual Machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
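The checkpoint/restart idea can be approximated at the application level with periodic state serialization; the article implements it transparently inside the JVM, but the sketch below shows the same recover-and-resume pattern in user code. The file name and step counts are illustrative.

```python
# Periodically pickle loop state so a re-started run resumes from the last
# checkpoint instead of recomputing from scratch.

import os
import pickle

CHECKPOINT = "work.ckpt"

def long_computation(steps, checkpoint_every=100):
    # Resume from the last checkpoint if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            start, total = pickle.load(f)
    else:
        start, total = 0, 0
    for i in range(start, steps):
        total += i                       # stand-in for real work
        if (i + 1) % checkpoint_every == 0:
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((i + 1, total), f)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)            # completed: discard the checkpoint
    return total

result = long_computation(1000)
```

If the process dies mid-run, the next invocation picks up at the last saved step; the JVM-level mechanisms in the article achieve this without any such code in the application itself.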

Relevance:

10.00%

Publisher:

Abstract:

Plain radiography still accounts for the vast majority of imaging studies performed in many clinical settings. Digital detectors are now prominent in many imaging facilities and are the main driving force towards filmless environments. There has been a paradigm shift due to the functional separation of acquisition, visualization, and storage, with deep impact on imaging workflows. Moreover, with direct digital detectors, images are made available almost immediately. Digital radiology is now completely integrated in Picture Archiving and Communication System (PACS) environments governed by the Digital Imaging and Communications in Medicine (DICOM) standard. In this chapter, a brief overview of PACS architectures and components is presented, together with a necessarily brief account of the DICOM standard. Special focus is given to the DICOM digital radiology objects and to how specific attributes may now be used to improve and enrich the metadata repository associated with image data. Regular scrutiny of the metadata repository may serve as a valuable tool for improved, cost-effective, and multidimensional quality control procedures.

Relevance:

10.00%

Publisher:

Abstract:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings up to 8.0% are obtained for similar decoded quality.

Relevance:

10.00%

Publisher:

Abstract:

In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular, when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single objective optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts.
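The Pareto-dominance bookkeeping at the heart of DMS can be sketched as follows: under componentwise minimization, each newly evaluated point either is discarded as dominated or enters the list and evicts the points it dominates. The poll and search machinery of the full method is omitted; this shows only how the nondominated list is maintained.

```python
# Maintain a list of nondominated points under componentwise minimization.

def dominates(a, b):
    """a dominates b: a is no worse in every objective, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_front(front, point):
    """Insert point into the nondominated list, discarding dominated entries."""
    if any(dominates(p, point) for p in front):
        return front                       # point is dominated: front unchanged
    return [p for p in front if not dominates(point, p)] + [point]

front = []
for pt in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
    front = update_front(front, pt)
```

In DMS, this list also supplies the next iterates: poll centers are chosen among the current nondominated points, which is how the method spreads along the Pareto front.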