27 results for Data-based Safety Evaluation


Relevance: 40.00%

Abstract:

The internship, carried out at the construction company Manuel da Graça Peixito, focused on site supervision and construction management (Direcção e Gestão de Obra) in the execution of an urban redevelopment project for AUGI 42, located in Casal do Sapo, Sesimbra. Urban areas of illegal genesis, known as AUGI, emerged in the early 1960s as a response to the housing shortage on the outskirts of the major metropolitan areas of the country. The urban environment created by the AUGI, often of considerable size, exhibits numerous social, economic, urban-planning, and legal shortcomings. Construction management is essential to the execution of the works and to the planning of every task with the best economic and financial outcome. Site supervision is chiefly responsible for selecting human resources, choosing and setting up the logistical support facilities, and negotiating and procuring materials on time. The construction manager and site supervisor operate within a cycle of resource and efficiency optimization in which the two roles are complementary; the interactive control of the works, in terms of production, economic and financial management, time management, compliance with occupational health and safety rules, and quality assurance, is clearly identified as the indispensable vehicle for fulfilling the construction contract. The urban redevelopment process applied to AUGI 42 was structured as follows: first, collecting data on AUGI 42 and defining a phased execution strategy for the works; second, specifying and characterizing the execution of the various infrastructures (domestic and storm-water sewerage, water supply network, telecommunications network, electrical network, gas network, road network, and landscaping of outdoor spaces). This process and the resulting proposal are a fundamental contribution to improving both the residents' quality of life and the functioning of the urban system that the AUGI comprise.

Relevance: 40.00%

Abstract:

In this work the identification and diagnosis of various stages of chronic liver disease is addressed. The classification results of a support vector machine, a decision tree, and a k-nearest neighbor classifier are compared. Ultrasound image intensity and textural features are used jointly with clinical and laboratory data in the staging process. The classifiers are trained on a population of 97 patients at six different stages of chronic liver disease, using a leave-one-out cross-validation strategy. The best results are obtained with the support vector machine with a radial-basis kernel, at 73.20% overall accuracy. The good performance of the method is a promising indicator that it can be used, in a non-invasive way, to provide reliable information about chronic liver disease staging.
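As a rough illustration of the evaluation protocol described in this abstract (not the authors' code; the features and labels below are synthetic placeholders), leave-one-out cross-validation of an RBF-kernel SVM can be sketched in Python with scikit-learn:

```python
# Minimal sketch of the protocol above: an RBF-kernel SVM scored with
# leave-one-out cross-validation. Data are random placeholders, not the
# authors' ultrasound/clinical feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(97, 10))    # 97 patients, placeholder feature columns
y = rng.integers(0, 6, size=97)  # six stages of chronic liver disease

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out patient per fold
print(f"Leave-one-out accuracy: {scores.mean():.2%}")
```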

Relevance: 40.00%

Abstract:

With the advent of wearable sensing and mobile technologies, biosignals have found a growing number of application areas, leading to the collection of large volumes of data. One of the difficulties in dealing with these data sets, and in developing automated machine learning systems that use them as input, is the lack of reliable ground-truth information. In this paper we present a new web-based platform for the visualization, retrieval, and annotation of biosignals by non-technical users, aimed at improving the process of ground-truth collection for biomedical applications. Moreover, a novel extensible and scalable data representation model and persistence framework is presented. The results of an experimental evaluation with prospective users further confirmed the potential of the presented framework.

Relevance: 40.00%

Abstract:

The rational use of medicines contributes to improving patient safety, particularly medication safety, and has become a priority for health organizations and institutions. Assessing the use of inappropriate medicines in the elderly is a measure that helps to avoid, prevent, or correct the adverse events associated with their use. Benzodiazepines are among the drug classes most often prescribed to the elderly. Despite their clinical usefulness, however, some benzodiazepines are considered inappropriate in this age group because they potentiate sedation and increase the incidence of falls and fractures. Their long-term effectiveness in promoting sleep quality is questionable: taking a benzodiazepine to solve a sleep problem, often a transient one, becomes a chronic problem requiring continuous use without sleep quality being restored, putting patient safety at risk. This study aims to characterize benzodiazepine use by institutionalized elderly people in a long-term care facility in the municipality of Sesimbra, its inappropriateness, and its relation to sleep quality. A descriptive, cross-sectional study grounded in the qualitative paradigm was carried out, with data collected in three stages: recording sociodemographic characteristics and benzodiazepine use on a purpose-built grid; applying the Katz index to determine the participants' functional status; and applying a questionnaire adapted from the Pittsburgh Sleep Quality Index to assess sleep quality. Inappropriateness was assessed with the Beers criteria. After applying the inclusion criteria (age over 65 and functional capacity) to the institution's 97 residents, the sample comprised 51 residents. Informed consent was obtained from all participants. The results show that 46% of the benzodiazepines consumed are of intermediate duration of action, with a considerable share of long-acting benzodiazepines as well (36%). These figures correspond to a high degree of inappropriateness, increasing the safety risks for patients in this age group. Lorazepam 2.5 mg is the benzodiazepine most used as a hypnotic. Yet despite this consumption, 81.6% of the elderly who take benzodiazepines do not have good sleep quality (PSQI > 5), whereas 77% of those who do not take benzodiazepines have good sleep quality (PSQI ≤ 5). Among benzodiazepine users, the mean time spent in bed before falling asleep was about 55 minutes, compared with a mean of 27 minutes in the non-user group. In this group of elderly people, benzodiazepine use is not only inappropriate but has contributed neither to better sleep quality nor to patient safety; indeed, it is the benzodiazepine users who show the worse sleep quality across its several dimensions.
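A minimal sketch of the kind of comparison reported above, using invented records and column names (the study's actual data are not reproduced here): classify sleep quality by the PSQI > 5 cutoff and compare benzodiazepine users with non-users:

```python
# Illustrative only: synthetic resident records; the real study used 51
# residents and an adapted PSQI questionnaire.
import pandas as pd

residents = pd.DataFrame({
    "uses_bzd":    [True, True, True, False, False, False],
    "psqi":        [9, 7, 11, 4, 5, 6],
    "latency_min": [60, 50, 55, 30, 25, 27],
})

residents["poor_sleep"] = residents["psqi"] > 5  # PSQI > 5 flags poor sleep quality
summary = residents.groupby("uses_bzd").agg(
    poor_sleep_rate=("poor_sleep", "mean"),
    mean_latency_min=("latency_min", "mean"),
)
print(summary)
```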

Relevance: 40.00%

Abstract:

Behavioral biometrics is one of the areas of growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown this to be a promising trait, with the potential to serve as a good complement to other existing, more established modalities, owing to its intrinsic characteristics. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work builds on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and involved pre-processing the ECG signals and evaluating two classification approaches targeted at real-time or near-real-time applications. Preliminary results show that this system achieves competitive results for both authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
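For illustration only (this is not the authors' pipeline; the template-matching scheme, beat length, and normalization are assumptions), a nearest-template identification step over per-subject mean heartbeats might look like:

```python
# Sketch of ECG identification by nearest template: enroll each subject as
# the mean of their normalized heartbeats, then match a probe beat to the
# closest template. Beats here are random stand-ins for segmented ECG.
import numpy as np

def mean_template(beats: np.ndarray) -> np.ndarray:
    """Average the enrolled, amplitude-normalized heartbeats of one subject."""
    beats = beats - beats.mean(axis=1, keepdims=True)
    beats /= np.linalg.norm(beats, axis=1, keepdims=True)
    return beats.mean(axis=0)

def identify(beat: np.ndarray, templates: dict) -> str:
    """Return the subject whose template is closest in Euclidean distance."""
    beat = (beat - beat.mean()) / np.linalg.norm(beat)
    return min(templates, key=lambda s: np.linalg.norm(templates[s] - beat))

rng = np.random.default_rng(1)
templates = {f"subject_{i}": mean_template(rng.normal(size=(30, 200)))
             for i in range(5)}          # 5 subjects, 30 beats of 200 samples each
probe = rng.normal(size=200)
print("Identified as:", identify(probe, templates))
```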

Relevance: 40.00%

Abstract:

Workflows have been successfully applied to express the decomposition of complex scientific applications. However the existing tools still lack adequate support to important aspects namely, decoupling the enactment engine from tasks specification, decentralizing the control of workflow activities allowing their tasks to run in distributed infrastructures, and supporting dynamic workflow reconfigurations. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on Process Networks, where the workflow activities (AWA) are autonomic processes with independent control that can run in parallel on distributed infrastructures. Each AWA executes a task developed as a Java class with a generic interface allowing end-users to code their applications without low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also enables dynamic workflow reconfiguration. For evaluation we describe experimental results of AWARD workflow executions in several application scenarios, mapped to the Amazon (Elastic Computing EC2) Cloud.
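A conceptual sketch of data-driven coordination through a shared tuple space (illustrative only: it uses a plain Python queue as the shared space, not the AWARD API, and AWARD tasks are actually Java classes):

```python
# Two decoupled activities coordinate only through the shared space: the
# consumer blocks on get() until a matching tuple is published, so neither
# activity needs a reference to the other.
import queue
import threading

tuple_space = queue.Queue()  # stands in for the shared tuple space

def producer_activity():
    for i in range(3):
        tuple_space.put(("result", i, i * i))  # publish a tagged tuple

def consumer_activity():
    for _ in range(3):
        tag, step, value = tuple_space.get()   # take() blocks until data exists
        print(f"{tag} step={step} value={value}")

threads = [threading.Thread(target=producer_activity),
           threading.Thread(target=consumer_activity)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```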

Relevance: 40.00%

Abstract:

In this paper we exploit the nonlinear property of SiC multilayer devices to design an optical processor for error detection that enables reliable delivery of four-wave-mixing spectral data over unreliable communication channels. The SiC optical processor is realized with a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. Visible pulsed signals with different bit sequences are transmitted together, and the combined optical signal is analyzed. The data show that the background acts as a selector that picks one or more states by splitting portions of the multiplexed input optical signals across the front and back photodiodes. Boolean operations such as EXOR and three-bit addition are demonstrated optically: when one or all of the inputs are present, the system behaves as an XOR gate representing the SUM; when two or three inputs are on, the system acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using four incoming pulsed communication channels that are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and then provide an experimental demonstration of the idea.
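The SUM/CARRY behavior the abstract describes is the logic of a binary full adder; a plain software rendering of the same truth table (purely illustrative, not a model of the device physics) is:

```python
# For three input channels, SUM is the XOR (odd parity) of the inputs and
# CARRY is the majority (two or more inputs on), matching the optical
# behavior described in the abstract.
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    total = a + b + c
    sum_bit = total % 2           # XOR of the three inputs (odd number on)
    carry_bit = int(total >= 2)   # AND-like majority: two or three inputs on
    print(f"inputs={a}{b}{c}  SUM={sum_bit}  CARRY={carry_bit}")
```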

Relevance: 40.00%

Abstract:

The SiC optical processor for error detection and correction is realized with a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. The data show that the background acts as a selector that picks one or more states by splitting portions of the multiplexed input optical signals across the front and back photodiodes. Boolean operations such as exclusive OR (EXOR) and three-bit addition are demonstrated optically with a combination of such switching devices: when one or all of the inputs are present, the output is amplified and the system behaves as an XOR gate representing the SUM; when two or three inputs are on, the system acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using the four incoming pulsed communication channels, which are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and correction and then provide an experimental demonstration of this fault-tolerant reversible system in emerging nanotechnology.
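The parity check performed over the four pulsed channels can be sketched in software as even parity over three data bits plus one parity bit (an illustrative reading of the scheme, not the authors' implementation):

```python
# Even parity over four channels: the XOR of all four received bits is 0
# for a clean word; a single flipped bit in any channel makes it 1.
from functools import reduce
from operator import xor

def add_parity(bits):
    """Append an even-parity bit so the XOR of all four bits is 0."""
    return bits + [reduce(xor, bits)]

def has_error(word):
    """A non-zero XOR over the received bits flags a transmission error."""
    return reduce(xor, word) != 0

word = add_parity([1, 0, 1])
print(has_error(word))   # False: clean transmission
word[1] ^= 1             # corrupt one channel
print(has_error(word))   # True: error detected
```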

Relevance: 40.00%

Abstract:

This paper discusses the results of applied research in the eco-driving domain, based on a very large data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set consists of events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to cope with the high volume of the data, determine the major factors that influence average fuel consumption, and classify the drivers involved according to their driving efficiency. We thereby identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal use of the clutch, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 L/100 km on average, a saving of 30 L per bus per day. These findings have been strongly taken into account in the drivers' training sessions.
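A toy sketch of the kind of aggregation involved (column names and numbers are invented, not the fleet's data): average each driver's fuel consumption and bin the distribution into efficiency classes:

```python
# Group raw trip events by driver, compute mean consumption, and label the
# tertiles of the distribution from most to least efficient.
import pandas as pd

events = pd.DataFrame({
    "driver":      ["a", "a", "b", "b", "c", "c"],
    "l_per_100km": [38.0, 41.0, 44.5, 46.0, 39.5, 40.0],
})

per_driver = events.groupby("driver")["l_per_100km"].mean()
labels = pd.qcut(per_driver, q=3, labels=["efficient", "average", "inefficient"])
print(pd.concat({"mean_l_per_100km": per_driver, "class": labels}, axis=1))
```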

Relevance: 40.00%

Abstract:

In this paper a new method for the self-localization of mobile robots in unstructured environments, based on a PCA positioning sensor, is proposed and experimentally validated. The proposed PCA extension is able to compute the eigenvectors from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards at the ceiling, providing depth images with missing data. The resulting positioning sensor is then integrated into a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study consisting of adding synthetic, randomly corrupted data to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The resulting self-localization system is assessed in unstructured environments, and the methodologies are validated even under varying illumination conditions.
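One common way to realize PCA over data with missing entries, shown here only as an illustrative stand-in for the authors' extension, is to alternate between a low-rank reconstruction and refilling the missing values:

```python
# Iterative PCA imputation: fill holes with column means, fit a low-rank
# model by SVD, refill the holes from the reconstruction, and repeat.
import numpy as np

def pca_fill(X, mask, rank=2, iters=50):
    """X: matrix with NaNs where mask is False; returns the filled matrix."""
    Xf = np.where(mask, X, np.nanmean(X, axis=0))    # init holes with column means
    for _ in range(iters):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
        Xf = np.where(mask, X, low_rank)             # keep observed, refill missing
    return Xf

rng = np.random.default_rng(2)
true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))  # rank-2 ground truth
mask = rng.random(true.shape) > 0.2                          # ~20% entries missing
X = np.where(mask, true, np.nan)
est = pca_fill(X, mask)
print("RMSE on missing entries:", np.sqrt(((est - true)[~mask] ** 2).mean()))
```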

Relevance: 40.00%

Abstract:

Dissertation submitted for the degree of Master in Electrical Engineering, Energy branch.

Relevance: 40.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a logarithmic law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) is below a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of the projection, and the algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
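To make the linear mixing model concrete, the sketch below (an illustration under the chapter's model, not the VCA algorithm itself; SciPy's nnls plus a standard sum-to-one augmentation stand in for a fully constrained solver) recovers the abundance fractions of a synthetic pixel:

```python
# Linear mixing model: each pixel is M @ a + noise, with abundances a that
# are nonnegative and sum to one. The fully constrained estimate appends a
# heavily weighted sum-to-one row to a nonnegative least-squares solve.
import numpy as np
from scipy.optimize import nnls

def fcls(M, x, delta=1e3):
    """Abundances for pixel x given endmember matrix M (bands x endmembers)."""
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])  # enforce sum-to-one
    x_aug = np.append(x, delta)
    a, _ = nnls(M_aug, x_aug)                            # enforce nonnegativity
    return a

rng = np.random.default_rng(3)
M = rng.random((50, 3))                 # 50 bands, 3 endmember signatures
a_true = np.array([0.6, 0.3, 0.1])
x = M @ a_true + 0.001 * rng.normal(size=50)
print(fcls(M, x))                       # approximately [0.6, 0.3, 0.1]
```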