920 results for supply chains and system supplier


Relevance:

100.00%

Publisher:

Abstract:

The current landscape of first-line emergency and rescue in Portugal has been characterized, over recent years, by a strong commitment to continuously improving the quality and efficiency of the services provided to local populations. In pursuit of this goal of continuous improvement, substantial investments have been made in technical resources and in the hiring and training of highly qualified human resources. Today, the institutions that provide first-line emergency and rescue are well equipped, in both physical and human terms, to handle the most diverse types of occurrences. However, regarding the information systems that support first-line emergency and rescue, there is an inadequacy (and at times a complete absence) of computer systems capable of properly supporting the demanding and complex context of present-day emergency response. While a strong investment was made in improving the physical and human resources responsible for first-line emergency response, the management and analysis of information about occurrences was neglected, as was the design of possible prevention strategies that a systematic analysis of occurrence information makes possible. In the first-line emergency and rescue institutions in Portugal (fire brigades, municipal civil protection, PSP, GNR, municipal police), computer systems are still used only to record occurrences after the fact, and systems for real-time information recording and decision support in resource allocation are entirely absent. Most of the computer systems currently in place in these institutions are back-office systems that do not exploit the full potential of the operational information they store. It was also found that computer-based geo-location of physical resources and of points of interest relevant in critical situations does not exist at this level. In this context, we consider it both possible and important to bring the computer systems of the institutions responsible for first-line emergency and rescue up to the level of the physical and human resources they already have. Since first-line emergency and rescue is a domain clearly suited to technologies from artificial intelligence (namely expert systems for decision support) and geo-location, we decided, within the scope of this thesis, to develop a computer system capable of filling many of the gaps we identified in the computer systems of these institutions, placing their computing platforms at a level similar to that of their physical and human resources. We identified two key areas in which computer systems suited to the real needs of these institutions can have a very positive impact, providing better management and optimization of physical and human resources: decision support in the allocation of physical resources, and the geo-location of physical resources, occurrences, and points of interest. Seeking to provide a valid and adequate answer to these two pressing needs, the CRITICAL DECISIONS system was developed within the scope of this thesis.
The CRITICAL DECISIONS system incorporates a set of functionalities typical of an expert system to support the decision of allocating physical resources to occurrences. The automatic inference of physical resources rests on a set of inference rules stored in a knowledge base that grows and is updated continuously, based on successful responses to past occurrences. To address the shortcomings in the geo-location of physical resources, occurrences, and points of interest, the CRITICAL DECISIONS system also incorporates a set of geo-location functionalities. These allow the geo-location of all of the institution's physical resources, of the locations and areas of the various occurrences, and of the various types of points of interest. The CRITICAL DECISIONS system also aims to address a number of other shortcomings we identified, in document management (emergency plans, building floor plans), communication, information sharing among local emergency and rescue institutions, and the accounting of service times, among others. The CRITICAL DECISIONS system is the culmination of a continuous, collaborative effort with several institutions responsible for first-line emergency and rescue at the local level. With the CRITICAL DECISIONS system we hope to provide these institutions with a modern, innovative, evolvable computing platform with low implementation and operating costs, capable of delivering continuous and significant improvements in the quality of the response to occurrences, in prevention capabilities, and in the optimization of all the types of resources at their disposal.
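The abstract gives no implementation details, so the following is only a hypothetical sketch of the rule-based idea it describes: inference rules in a knowledge base map occurrence attributes to suggested physical resources and are reinforced by successful past responses. All class names, methods, and example values are invented for illustration and are not taken from the CRITICAL DECISIONS system.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One inference rule: if an occurrence matches the conditions,
    suggest the listed physical resources."""
    conditions: dict       # e.g. {"type": "urban_fire", "severity": "high"}
    resources: list        # e.g. ["ladder_truck", "water_tanker", "ambulance"]
    successes: int = 0     # reinforced when a past response succeeded

class KnowledgeBase:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def suggest_resources(self, occurrence):
        """Return resources from the matching rule with the best record
        of successful past responses (empty list if nothing matches)."""
        matching = [r for r in self.rules
                    if all(occurrence.get(k) == v for k, v in r.conditions.items())]
        if not matching:
            return []
        best = max(matching, key=lambda r: r.successes)
        return best.resources

    def record_success(self, rule):
        """Grow/update the knowledge base from a successful response."""
        rule.successes += 1

# Hypothetical usage
kb = KnowledgeBase()
kb.add_rule(Rule({"type": "urban_fire", "severity": "high"},
                 ["ladder_truck", "water_tanker", "ambulance"]))
print(kb.suggest_resources({"type": "urban_fire", "severity": "high"}))
```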

Relevance:

100.00%

Publisher:

Abstract:

Final Master's project submitted to obtain the degree of Master in Civil Engineering

Relevance:

100.00%

Publisher:

Abstract:

Admission controllers are used to prevent overload in systems with dynamically arriving tasks. Typically, these admission controllers are based on sufficient (but not necessary) capacity bounds in order to maintain a low computational complexity. In this paper we present how exact admission control for aperiodic tasks can be efficiently obtained. Our first result is an admission controller for purely aperiodic task sets where the test has the same runtime complexity as utilization-based tests. Our second result is an extension of the previous controller for a baseload of periodic tasks. The runtime complexity of this test is lower than for any known exact admission controller. In addition to presenting our main algorithm and evaluating its performance, we also discuss some general issues concerning admission controllers and their implementation.
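For contrast with the exact test announced in the abstract, here is a minimal sketch of the kind of sufficient, utilization-based admission controller it refers to, assuming EDF scheduling so that a density bound of 1.0 applies. The class and method names are hypothetical and not from the paper.

```python
class UtilizationAdmissionController:
    """Sufficient (not exact) admission test: accept an aperiodic task
    only if the total density C/D of admitted tasks stays within the
    bound (1.0 under EDF)."""

    def __init__(self, bound=1.0):
        self.bound = bound
        self.density = 0.0   # sum of C_i / D_i over currently admitted tasks

    def try_admit(self, exec_time, rel_deadline):
        demand = exec_time / rel_deadline
        if self.density + demand <= self.bound:
            self.density += demand
            return True
        return False          # may reject tasks an exact test would accept

    def on_completion(self, exec_time, rel_deadline):
        self.density -= exec_time / rel_deadline

# Hypothetical usage
ac = UtilizationAdmissionController()
print(ac.try_admit(exec_time=2.0, rel_deadline=10.0))   # True, density 0.2
```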

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance:

100.00%

Publisher:

Abstract:

Buildings account for 40% of total energy consumption in the European Union. Reducing energy consumption in the buildings sector is therefore an important measure to lower the Union's energy dependency and greenhouse gas emissions. Portuguese legislation incorporates these principles in order to regulate the energy performance of buildings. This energy performance should be accompanied by good conditions for the occupants of the buildings. According to EN 15251 (2007), the four factors that affect occupant comfort in buildings are Indoor Air Quality (IAQ), thermal comfort, acoustics, and lighting. Ventilation directly affects all of them except lighting, so it is crucial to understand its performance. The ventilation efficiency concept therefore gains significance, because it is an attempt to quantify a parameter that can easily distinguish the different options for air diffusion in spaces. The two most internationally accepted indicators are the Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE). Nowadays, with the development of Computational Fluid Dynamics (CFD), the behaviour of ventilation can be predicted more easily. Thirteen air diffusion strategies were measured in a test chamber through the application of the tracer gas method, with the objective of validating the calculation of these two indicators by the MicroFlo module of the IES-VE software. The main conclusions of this work were: the values of the numerical simulations are in agreement with the experimental measurements; the value of the CRE depends more on the position of the contamination source than on the strategy used for air diffusion; the ACE indicator is more appropriate for quantifying the quality of the air diffusion; and, to maximize ventilation efficiency, the solutions to adopt should be schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.
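The abstract does not restate the definitions of the two indicators; in their commonly used forms (standard background, not necessarily the exact notation of this work) they are:

```latex
% Air Change Efficiency (ACE) and Contaminant Removal Effectiveness (CRE),
% in their commonly used forms:
\varepsilon_a = \frac{\tau_n}{2\,\langle \bar{\tau} \rangle} \times 100\%
\qquad
\varepsilon_c = \frac{c_e - c_s}{\langle c \rangle - c_s}
% \tau_n: nominal time constant (room volume divided by supply airflow rate)
% \langle \bar{\tau} \rangle: mean age of air in the room
% c_e, c_s, \langle c \rangle: contaminant concentration at the exhaust,
% at the supply, and the room mean, respectively
```

Under perfect mixing the ACE is 50%, and displacement-like schemes can exceed that value, which is consistent with the abstract's conclusion in favour of low supply speeds and small supply-to-room temperature differences.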

Relevance:

100.00%

Publisher:

Abstract:

The ventilation efficiency concept is an attempt to quantify a parameter that can easily distinguish the different options for air diffusion in building spaces. Thirteen air diffusion strategies were measured in a test chamber through the application of the tracer gas method, with the objective of validating the calculation by Computational Fluid Dynamics (CFD). The Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE), the two most internationally accepted indicators, were compared. The main results of this work show that the values of the numerical simulations are in good agreement with the experimental measurements and, also, that the solutions to be adopted for maximizing the ventilation efficiency should be the schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.

Relevance:

100.00%

Publisher:

Abstract:

A detailed knowledge of the 3-D arrangement and lateral facies relationships of the stacking patterns in coastal deposits is essential to approach many geological problems, such as the precise tracing of sea-level changes, particularly during small-scale fluctuations. Such data are useful for reconstructing the geodynamic evolution of basin margins and are valuable in oil exploration. Sediment supply, wave and tidal processes, coastal morphology, and the accommodation space generated by eustasy and tectonics govern the highly variable architecture of sedimentary bodies deposited in coastal settings. These parameters, however, change with time, and erosional surfaces may play a prominent role in landward areas. Moreover, lateral shifts of erosional or even depositional loci very often destroy large parts of the sediment record. Several case studies illustrate some commonly found arrangements of facies and their distinguishing features. The final aim is to get the best results from the sedimentological analysis of coastal units.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering

Relevance:

100.00%

Publisher:

Abstract:

During the last decades there has been a trend to build collaboration platforms as enablers for groups of enterprises to jointly provide integrated services and products. As a result, the notion of business ecosystem is gaining wider acceptance. However, a critical issue that is still open, despite some efforts in this area, is the identification of adequate performance indicators to measure and motivate sustainable collaboration. This work-in-progress addresses this concern, briefly presenting the state of the art of relevant contributing areas such as collaborative networks, business ecosystems, enterprise performance indicators, social network analysis, and supply chains. Complementarily, through an assessment of current gaps, the research challenges are identified and an approach for further development is proposed.

Relevance:

100.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended to three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
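The paragraph above mentions the constrained least-squares approach for the case where the endmember signatures are known. A minimal sketch of that idea follows, using the classic trick of appending a heavily weighted sum-to-one row to a nonnegative least-squares problem; all data are made up and the function name is an assumption, not code from the chapter.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, delta=1e3):
    """Fully constrained least-squares abundance estimate for one pixel.

    pixel:      (L,) observed spectrum.
    endmembers: (L, p) matrix whose columns are the endmember signatures.
    The sum-to-one constraint is enforced approximately by appending a
    heavily weighted row of ones (delta) to the nonnegative LS problem.
    """
    L, p = endmembers.shape
    M_aug = np.vstack([endmembers, delta * np.ones((1, p))])
    y_aug = np.append(pixel, delta)
    abundances, _ = nnls(M_aug, y_aug)
    return abundances

# Hypothetical example: 3 endmembers, 50 bands, true abundances [0.6, 0.3, 0.1]
rng = np.random.default_rng(1)
M = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(unmix_fcls(y, M))   # close to a_true
```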
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step, to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using an algorithm based on the minimum description length (MDL) criterion [55].
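Since the paragraph above notes that unmixing is very often preceded by a dimensionality reduction step such as PCA, a small illustrative sketch follows (synthetic cube, arbitrary number of components, scikit-learn; not code from the chapter).

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical cube: 100 x 100 pixels, 224 spectral bands.
rows, cols, bands = 100, 100, 224
cube = np.random.default_rng(2).random((rows, cols, bands))

# Reshape to (pixels, bands) and project onto a low-dimensional subspace;
# under the linear mixing model with p endmembers, the signal lives in a
# low-dimensional affine subspace, so few components suffice.
X = cube.reshape(-1, bands)
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)        # (pixels, 10): cheaper to unmix, higher SNR
print(pca.explained_variance_ratio_[:3])
```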
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
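The methodology sketched above models abundance fractions as Dirichlet sources precisely because a Dirichlet draw is nonnegative and sums to one by construction. A minimal forward-model illustration of that point follows; the dimensions, concentration parameters, and noise level are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

bands, endmembers, pixels = 50, 3, 1000
M = rng.random((bands, endmembers))       # endmember signatures (columns), made up

# Dirichlet abundances: nonnegative and summing to one for every pixel.
alpha = np.array([2.0, 5.0, 3.0])         # hypothetical concentration parameters
A = rng.dirichlet(alpha, size=pixels)     # (pixels, endmembers)

sigma = 0.01                              # assumed system noise level
Y = A @ M.T + sigma * rng.standard_normal((pixels, bands))   # observed spectra
print(A.sum(axis=1)[:5])                  # each row sums to 1 (full additivity)
```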

Relevance:

100.00%

Publisher:

Abstract:

Progress in Industrial Ecology, An International Journal, vol. 4, no. 5, pp. 363–381

Relevance:

100.00%

Publisher:

Abstract:

The implementation of competitive electricity markets has changed the position of consumers and of distributed generation in power systems operation. The use of distributed generation and the participation in demand response programs, namely in smart grids, bring several advantages for consumers, aggregators, and system operators. The present paper proposes a remuneration structure for aggregated distributed generation and demand response resources. A virtual power player aggregates all the resources. The resources are aggregated into a certain number of clusters, each corresponding to a distinct tariff group, according to the economic impact of the resulting remuneration tariff. The determined tariffs are intended to be used for several months; the aggregator can define the periodicity of the tariff definition. The case study in this paper includes 218 consumers and 66 distributed generation units.
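The abstract does not state which clustering technique forms the tariff groups; purely as an illustration of grouping aggregated resources by the economic impact of their remuneration, a k-means sketch (synthetic data, scikit-learn; one possible approach, not the paper's method) could look like this.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-resource remuneration figure (e.g., EUR/MWh) for the
# aggregated demand response and distributed generation resources.
rng = np.random.default_rng(0)
remuneration = rng.uniform(30, 120, size=(284, 1))   # 218 consumers + 66 DG units

# Group the resources into a chosen number of tariff groups; each cluster
# centre can then serve as that group's remuneration tariff.
n_tariff_groups = 4
km = KMeans(n_clusters=n_tariff_groups, n_init=10, random_state=0).fit(remuneration)

group_of_resource = km.labels_             # tariff group assigned to each resource
group_tariffs = km.cluster_centers_.ravel()
print(group_tariffs)
```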

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), which is usually favoured in other works. Afterwards, we extend these results to the assignment of parallel/distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
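The OPA step that DOPA relies on is Audsley's well-known algorithm; a compact sketch of OPA on its own is given below, with the schedulability test left as a caller-supplied callback, since DOPA's transactional analysis is not reproduced here.

```python
def audsley_opa(tasks, schedulable_at_lowest):
    """Optimal Priority Assignment (Audsley's algorithm).

    schedulable_at_lowest(task, higher_priority_tasks) -> bool
        True if `task` meets its deadline when given the lowest remaining
        priority while `higher_priority_tasks` all run at higher priorities
        (the supplied test must be OPA-compatible).

    Returns the tasks ordered from lowest to highest priority,
    or None if no feasible priority assignment exists.
    """
    unassigned = list(tasks)
    lowest_first = []
    while unassigned:
        for task in unassigned:
            higher = [t for t in unassigned if t is not task]
            if schedulable_at_lowest(task, higher):
                unassigned.remove(task)
                lowest_first.append(task)
                break
        else:
            return None   # no task can take this priority level: infeasible
    return lowest_first
```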

Relevance:

100.00%

Publisher:

Abstract:

Thesis submitted to the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia for the degree of Doctor of Philosophy in Environmental Engineering

Relevance:

100.00%

Publisher:

Abstract:

Dissertation to obtain the degree of Doctor of Philosophy in Electrical and Computer Engineering (Industrial Information Systems)