931 results for information value


Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Auditing, under the supervision of Doutora Susana Adelina Moreira Carvalho Bastos.

Relevance:

20.00%

Publisher:

Abstract:

Firstly, some brief remarks are made on the scientific, historical and cultural panorama of human vision, considered through three approaches: the eye (the human eye in its specific phylogenetic place, the basic anatomo-functional element of the visual system in interaction with the brain), the eyes (essential twin units of the face in their consensual and conjugated binocular activity), and the gaze (psychologically charged, a means to express oneself and to influence the observer, a guide to other people's behaviour, consolidated in works of art and in peoples' traditional superstitious beliefs and ways of thinking). A report is then made on a cross-sectional descriptive study whose goal is to contribute to the knowledge of the visual health of children in the Lisbon region and to identify factors that determine it. Between October 2005 and August 2006, 649 children under 10 years of age were observed at the pediatric ophthalmology consultation of the SAMS (Serviços de Assistência Médico-Social do Sindicato dos Bancários do Sul e Ilhas). Data were collected on more than 250 primary variables covering most items of the usual ophthalmological examination. Special attention was paid to the children's age, since it plays a crucial role in the main stages of visual system development; for children aged 6 to 7, SAMS and school results are put side by side. On account of the large volume of numerical data, the statistical significance of differences between subgroups was assessed frequently. Some of the study's results (mostly from the SAMS group): among children aged 6 to 7, 71.1% (SAMS) and 91.5% (schools) had not had an ophthalmological examination before the age of 4; the overall frequency of myopic disorders was 9.4% and of hypermetropic disorders 25.3%, both varying markedly with age; convergent strabismus 3.9%; amblyopia 2.6% (13/491 children aged 4 or over), more frequent in girls, in children whose first examination took place after the age of 4, and in children whose parents did not comply with the prescribed therapy. Specific objectives dealt with visual acuity and ocular refraction. The comparison of automated refractometry without and with cycloplegia showed that visual acuity testing alone is not enough for a correct diagnosis. Eye disorders in the family history proved to be very important information; analysis of the corresponding data disclosed, among others, the following relationships: children with a family history of myopic disorders more frequently have a diagnosis of myopic disorders and a negative refraction, and a higher rate of quantitative diagnosis/refraction matching for myopic disorders, while showing, in general, the inverse characteristics regarding hypermetropic disorders; children with a family history of hypermetropic disorders more frequently have a diagnosis of hypermetropic disorders; children with a family history of strabismus more frequently have a diagnosis of manifest convergent strabismus and of esodeviations as a whole; children with a family history of astigmatism more frequently have a diagnosis of astigmatism. Ophthalmological profiles of the children are drawn, allowing a set of visual health parameters to be appreciated synoptically.
Data on parents' compliance with the prescribed therapy and attitudes towards the child wearing glasses, as well as data on the child's classroom behaviour and learning difficulties, were as a rule too scarce to allow conclusions, although they point to issues for future investigation. In parallel, orthoptists and nurses carried out a school screening for visual acuity <0.8 and for disorders of extrinsic ocular motility covering 520 first-year pupils of the first cycle of basic education (2005/2006) in the public schools of the city of Lisbon. 101 of these children were examined by the author in her office, some referred from the screening and others taken as controls. Regarding visual acuity, the predictive value of a negative test was 91%, but the predictive value of a positive test was only 67% (33% false positives, and consequently a high rate of over-referral). The quality of the screening performed by orthoptists was inferior to that performed by nurses, and on the whole the screening did not reach acceptable quality. A survey of physicians' and nurses' knowledge, attitudes and practices related to pediatric ophthalmological care was carried out in health centres. The results are discussed, conclusions are drawn, and recommendations are made that may contribute to better visual health in children.

Relevance:

20.00%

Publisher:

Abstract:

This paper formulates a novel expression for entropy inspired by the properties of fractional calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real-world data series. The results reveal that tuning the fractional order allows high sensitivity to the signal evolution, which is useful for describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several data sets, confirming the merit of the generalization.
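As a concrete illustration of how the fractional order acts as a tuning parameter, the sketch below computes a fractional-order entropy of a discrete distribution. It uses one published (Machado-style) formulation that reduces to Shannon entropy when the order alpha is zero; the exact expression proposed in the paper summarized above may differ, so treat this as an assumption-laden sketch rather than the paper's definition.

```python
import numpy as np
from scipy.special import gamma, psi

def fractional_entropy(p, alpha):
    """Fractional-order entropy of a discrete distribution p.

    Uses one published (Machado-style) formulation, which reduces to
    Shannon entropy when alpha = 0.  The exact expression in the paper
    summarized above may differ; this is only an illustrative sketch.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # ignore zero-probability outcomes
    coef = -p ** (1.0 - alpha) / gamma(alpha + 1.0)
    return np.sum(coef * (np.log(p) + psi(1.0) - psi(1.0 - alpha)))

# Example: sensitivity of the entropy value to the fractional order alpha
p = np.array([0.5, 0.25, 0.125, 0.125])
for a in (0.0, 0.25, 0.5):
    print(f"alpha={a:.2f}  S_alpha={fractional_entropy(p, a):.4f}")
```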

Relevance:

20.00%

Publisher:

Abstract:

Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. These representations usually decrease the training time and yield higher classification accuracy, while allowing humans to better understand and visualize the data, as compared with the use of the original features. This paper proposes two new FD techniques. The first is based on the well-known Linde-Buzo-Gray quantization algorithm, coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second technique works in supervised mode and is based on the maximization of the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
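To make the second technique more concrete, here is a minimal sketch of mutual-information-driven supervised discretization: candidate cut points are tried greedily and a cut is kept when it increases the mutual information between the binned feature and the class label. The candidate-generation and stopping rules are assumptions made for illustration; this is not the algorithm proposed in the paper.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize_by_mi(x, y, n_bins=4, n_candidates=32):
    """Greedy supervised discretization of one continuous feature x.

    Candidate cut points are quantiles of x; at each step the cut that most
    increases the mutual information between the binned feature and the
    class labels y is added.  Illustrative sketch only.
    """
    candidates = np.unique(np.quantile(x, np.linspace(0.05, 0.95, n_candidates)))
    cuts = []
    for _ in range(n_bins - 1):
        base = mutual_info_score(y, np.digitize(x, sorted(cuts))) if cuts else 0.0
        best_gain, best_c = 0.0, None
        for c in candidates:
            if c in cuts:
                continue
            mi = mutual_info_score(y, np.digitize(x, sorted(cuts + [c])))
            if mi - base > best_gain:
                best_gain, best_c = mi - base, c
        if best_c is None:          # no cut improves MI any further
            break
        cuts.append(best_c)
    return np.digitize(x, sorted(cuts)), sorted(cuts)
```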

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip with which to retrieve location. In order to overcome this limitation and to provide location everywhere (even where no structured environment exists), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation lies in the information fusion and in the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors given by the sensors.
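A minimal sketch of the underlying idea (not the proposed system itself) is shown below: pedestrian dead reckoning advances the position one detected step at a time, and an occasional GPS fix pulls the drifting estimate back. The step length, heading and blending weight are hypothetical; the actual system uses probabilistic gait learning rather than this fixed complementary blend.

```python
import numpy as np

def pdr_update(pos, heading_rad, step_length):
    """Advance the dead-reckoned 2D position by one detected step."""
    return pos + step_length * np.array([np.cos(heading_rad), np.sin(heading_rad)])

def fuse_with_gps(pdr_pos, gps_pos, gps_weight=0.2):
    """Blend the drifting PDR estimate toward a GPS fix (complementary filter).

    A Kalman or particle filter would normally be used; this only illustrates
    the basic idea of correcting dead-reckoning drift with GPS.
    """
    return (1.0 - gps_weight) * pdr_pos + gps_weight * gps_pos

# Example: walk roughly north-east, correcting with an occasional GPS fix
pos = np.zeros(2)
for k in range(10):
    pos = pdr_update(pos, heading_rad=np.pi / 4, step_length=0.7)
    if k % 5 == 4:                          # GPS fix available every 5th step
        pos = fuse_with_gps(pos, gps_pos=np.array([3.0, 3.0]))
print(pos)
```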

Relevance:

20.00%

Publisher:

Abstract:

Nowadays there is an increasing number of location-aware mobile applications. However, these applications retrieve location only from a mobile device's GPS chip, which means that indoors, or in denser environments, they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error since, in order to keep the system wearable, they rely on low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are included and combined with the accelerometer data to achieve better detection of the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and from several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behavior and to correct, in real time, the inertial sensor errors, thus improving location estimation.
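The stance-phase detection described above can be illustrated with a simple rule that combines the two sensor modalities; the thresholds below are hypothetical, and a real system would tune them and feed the stance flag to zero-velocity updates in the INS filter.

```python
import numpy as np

def detect_stance(acc_norm, foot_force, acc_tol=0.3, force_thresh=20.0, g=9.81):
    """Flag stance-phase samples of the gait cycle.

    A sample is treated as stance when the accelerometer magnitude is close
    to gravity (foot nearly still) AND the foot force sensor reports ground
    contact.  Thresholds are illustrative assumptions; the stance flag would
    normally trigger zero-velocity updates (ZUPT) in the navigation filter.
    """
    acc_still = np.abs(np.asarray(acc_norm) - g) < acc_tol
    in_contact = np.asarray(foot_force) > force_thresh
    return acc_still & in_contact

# Example with synthetic samples: |acc| in m/s^2, force in N
print(detect_stance([9.8, 12.5, 9.7], [35.0, 5.0, 40.0]))
```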

Relevance:

20.00%

Publisher:

Abstract:

Biomaterials have been extensively developed and applied in medical devices. Among these materials, bioabsorbable polymers have attracted special attention for orthopedic applications, where the transient existence of an implant can provide better results than a permanent implant. Chitosan, a natural biopolymer, has generated enormous interest due to its various advantages, such as biocompatibility, biodegradability and osteoconductive properties. In this paper, an assessment of the potential of an innovative production process developed for 3D solid and dense chitosan-based products for biomedical applications is presented. It starts with a brief explanation of the technology, highlighting its main features. Then, several potential applications and their markets are identified and assessed. After choosing a primary application and market, its potential as well as its uncertainties and risks are identified. A business model suggesting how to capture the value from the application is sketched. After that, a brief description of the market is given, together with the identification of the main competitors and their distinctive features. The supply chain analysis and the go-to-market strategy are the following steps. In the end, a final recommendation based on the assessment of this information is presented.

Relevance:

20.00%

Publisher:

Abstract:

Lean Thinking is an important pillar in the success of any continuous improvement program. Its tools are useful means for the analysis, control and organization of the data needed for correct decision making in organizations. The main objective of this project was the design of a quality improvement program at Eurico Ferreira, S.A., based on the evaluation of customer satisfaction and the implementation of 5S. We then selected which business area of the company to address. After the selection, an initial diagnosis was carried out, identifying the various points for improvement, to which some Lean Thinking tools were applied, in particular Value Stream Mapping and the 5S methodology. With the first, we were able to map the current state of the process, in which all stakeholders were represented, as well as the flow of materials and information throughout the process. The 5S methodology made it possible to act on waste, identifying and implementing various process improvements.

Relevance:

20.00%

Publisher:

Abstract:

The choice of an information system is a critical success factor for an organization's performance; since it involves multiple decision-makers with often conflicting objectives and several aggressively marketed alternatives, reaching a consensus is particularly complex. The main objective of this work is the analysis and selection of an information system to support school management, in its pedagogical and administrative components, using MMASSITI (Multicriteria Methodology to Support the Selection of Information Systems/Information Technologies), a multicriteria decision aid system that integrates a multicriteria model seeking to provide a systematic approach to the process of choosing information systems, able to produce sustained recommendations concerning the decision scope. Its application to a case study identified the relevant factors in the selection process of a school educational and management information system and produced a solution that allows the decision maker to compare the quality of the various alternatives.
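For illustration only, a plain weighted-sum aggregation over criteria is sketched below. MMASSITI's actual multicriteria model is not described in the abstract, so the criteria, weights and scores here are hypothetical.

```python
# Illustrative weighted-sum scoring of candidate school information systems.
# Criteria, weights and scores are hypothetical; MMASSITI's real model may
# aggregate and elicit preferences differently.
criteria_weights = {"pedagogical fit": 0.35, "administrative fit": 0.25,
                    "cost": 0.20, "vendor support": 0.20}
alternatives = {
    "System A": {"pedagogical fit": 80, "administrative fit": 70,
                 "cost": 60, "vendor support": 75},
    "System B": {"pedagogical fit": 65, "administrative fit": 85,
                 "cost": 80, "vendor support": 60},
}
for name, scores in alternatives.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: {total:.1f}")
```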

Relevance:

20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].
Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.
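As a small illustration of the constrained least-squares route mentioned above, the sketch below estimates abundance fractions for a single pixel under the linear mixing model, enforcing nonnegativity with NNLS and imposing the sum-to-one constraint softly through a heavily weighted extra row. This is a common trick, used here only as an assumption-laden sketch, not the chapter's own method.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, sum_to_one_weight=1e3):
    """Estimate abundance fractions of one pixel under the linear mixing model.

    `endmembers` is an (L x p) matrix of p spectral signatures over L bands.
    Nonnegativity is enforced by NNLS; the sum-to-one constraint is imposed
    softly by appending a heavily weighted row of ones.
    """
    L, p = endmembers.shape
    A = np.vstack([endmembers, sum_to_one_weight * np.ones((1, p))])
    b = np.concatenate([pixel, [sum_to_one_weight]])
    abundances, _ = nnls(A, b)
    return abundances

# Example with synthetic data: 3 endmembers over 50 spectral bands
rng = np.random.default_rng(0)
M = rng.random((50, 3))                       # endmember signatures
a_true = np.array([0.6, 0.3, 0.1])            # true abundances (sum to 1)
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(unmix_fcls(y, M))
```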
In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.
Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.
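As a concrete example of the dimensionality-reduction step discussed above, a bare-bones PCA projection of a hyperspectral cube might look like the following; MNF or SVD could be substituted, and this sketch ignores the noise-whitening refinements a real pipeline would include.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project a hyperspectral cube (rows x cols x bands) onto its first
    principal components, a common dimensionality-reduction step before
    unmixing.  Illustrative sketch only."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (X_centered @ top).reshape(rows, cols, n_components)

# Example: reduce a synthetic 64 x 64 x 200 cube to 5 components
cube = np.random.default_rng(0).random((64, 64, 200))
print(pca_reduce(cube, 5).shape)
```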
This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.

Relevance:

20.00%

Publisher:

Abstract:

Internship report presented to the Escola Superior de Comunicação Social as part of the requirements for the master's degree in Audiovisual and Multimedia.

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Accounting and Finance, under the supervision of Dr.ª Mónica D'Orey.

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Chemical Engineering - branch of Energy Optimization in the Chemical Industry.

Relevance:

20.00%

Publisher:

Abstract:

This dissertation focused on a techno-economic study of two future scenarios for continuing the supply of thermal energy to an existing swimming-pool complex in the Vale do Tâmega region. At present, the existing cogeneration plant has exceeded its operating licence and needs to be replaced. The two scenarios under study are the purchase of a new natural-gas boiler to cover the thermal needs currently met by the existing fuel-oil boiler, or the use of a compact cogeneration system that may become available from a company of the group. In the first scenario the investment involved is about €456,640, with no revenue beyond meeting the thermal requirements; in the second scenario the results are quite different, even though an investment of €1,000,000 in the installation would have to be made. For this scenario, the national legislation on cogeneration was surveyed and building data were collected: operating hours, number of users, electricity, heat and water consumption, pool water temperature and air temperature in the pool hall, as well as the main characteristics of the compact cogeneration installation. With this information, mass and energy balances were carried out and a model of the new installation was built in process-modelling software (Aspen Plus® from AspenTech). The thermal and electrical efficiencies obtained for the new compact cogeneration plant were 38.1% and 39.8% respectively, with losses of 12.5%, giving an overall efficiency of 78%. The primary energy savings evaluated for this compact cogeneration installation were 19.6%, which allows it to be classified as high-efficiency. The model made it possible to understand the energy needs, determine some of the costs associated with the process, and simulate the operation of the unit at different ambient air temperatures (summer and winter scenarios with average temperatures of 20 °C and 5 °C). The results showed a decrease of €1.14/h in the cost of electricity and an increase in natural-gas consumption of €62.47/h during the coldest winter period, due to the increased losses caused by the lower outdoor temperature. With this new compact cogeneration unit the total annual savings can average €267,780, assuming a maintenance cost of €97,698/year. If so, the project pays back the investment after 5 years, with an NPV of €1,030,430 and an internal rate of return (IRR) of 14% (positive, considering a discount rate of 3% over a 15-year lifetime). Although the initial cost is high, the economic parameters show that the project is economically viable and will yield profit for about 9 years.
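The discounted-cash-flow arithmetic behind the quoted figures can be sketched as follows, using the investment, savings, maintenance, discount rate and lifetime stated in the abstract. Treating the annual saving as gross of maintenance is an assumption made here; under it, the computed NPV comes out close to the value reported above.

```python
# Rough discounted-cash-flow check of the cogeneration scenario, using the
# figures quoted in the abstract.  Deducting the maintenance cost from the
# stated annual saving is an assumption made for illustration only.
investment = 1_000_000          # EUR
annual_saving = 267_780         # EUR/year (average, assumed gross)
maintenance = 97_698            # EUR/year
rate = 0.03                     # discount rate
years = 15                      # assumed project lifetime

net_cash_flow = annual_saving - maintenance
npv = -investment + sum(net_cash_flow / (1 + rate) ** t for t in range(1, years + 1))
payback = investment / net_cash_flow

print(f"NPV over {years} years: {npv:,.0f} EUR")   # close to the quoted 1,030,430 EUR
print(f"Simple payback: {payback:.1f} years")
```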

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this work is to report on the development of a multi-criteria methodology to support the assessment and selection of an Information System (IS) framework in a business context. The objective is to select a technological partner providing the engine that will be the basis for the development of a customized application for shrinkage reduction in supply chain management. Furthermore, the proposed methodology differs from most of those previously proposed in that 1) it provides the decision makers with a set of pre-defined criteria, along with their description and suggestions on how to measure them, and 2) it uses a continuous scale with two reference levels, so no normalization of the valuations is required. The methodology proposed here has been designed to be easy to understand and use, without the specific support of a decision analysis expert.
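A minimal sketch of scoring on a continuous scale anchored by two reference levels, so that raw criterion values need no normalization, is given below; the reference values and the linear value function are hypothetical illustrations, not the methodology's actual elicitation procedure.

```python
# Illustrative sketch: a continuous value scale anchored by two reference
# levels ("neutral" = 0, "good" = 100).  Reference values are hypothetical.
def value_score(x, neutral, good):
    """Linearly interpolate/extrapolate a raw measurement onto the 0-100
    scale defined by the two reference levels (direction is handled by the
    sign of good - neutral)."""
    return 100.0 * (x - neutral) / (good - neutral)

# Example criterion: yearly licensing cost (lower is better), with
# hypothetical reference levels of 50 kEUR (neutral) and 20 kEUR (good).
for cost in (45_000, 30_000, 15_000):
    print(cost, round(value_score(cost, neutral=50_000, good=20_000), 1))
```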