Abstract:
Ethernet is the most popular LAN technology. Its low price and robustness, resulting from its wide acceptance and deployment, have created an eagerness to expand its responsibilities to the factory floor, where real-time requirements must be fulfilled. However, it is difficult to build a real-time control network using Ethernet, because its MAC protocol, the 1-persistent CSMA/CD protocol with the BEB collision resolution algorithm, has unpredictable delay characteristics. Many anticipate that recent technological advances in Ethernet, such as the emerging Fast/Gigabit Ethernet, micro-segmentation and full-duplex operation using switches, will also enable it to support time-critical applications. This technical report provides a comprehensive look at the unpredictability inherent to Ethernet and at recent technological advances towards real-time operation.
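To make the source of that unpredictability concrete, the following is a minimal sketch (illustrative only, not code from the report) of the truncated binary exponential backoff rule standardized for 10 Mb/s Ethernet: the backoff window doubles after every collision, so the medium-access delay of a frame under contention is a random quantity with no useful worst-case bound.

```python
import random

# Sketch of truncated binary exponential backoff (BEB), the collision
# resolution rule referred to in the abstract.
SLOT_TIME_US = 51.2  # slot time for 10 Mb/s Ethernet, in microseconds

def beb_backoff(collisions: int, rng: random.Random = random.Random()) -> float:
    """Backoff delay (microseconds) chosen after the given number of collisions.

    After n collisions a station waits a uniformly random number of slot
    times drawn from [0, 2**min(n, 10) - 1]; after 16 failed attempts the
    frame is discarded. The doubling window is what makes the access delay
    unpredictable.
    """
    if collisions > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    window = 2 ** min(collisions, 10)  # window growth is truncated at 10 doublings
    return rng.randrange(window) * SLOT_TIME_US
```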
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Advertising and Marketing.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Each core treats its own double-ended queue (deque) as a stack, pushing and popping threads at the bottom, but, whenever it is idle, treats the deque of another randomly selected busy core as a queue, stealing threads only from the top. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may end up enqueued after lower-priority tasks, and it is the lower-priority tasks that become the candidates when a stealing operation occurs, possibly leading to deadline misses. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core but, unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque, as sketched below.
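The following is a hedged sketch of that idea as we read it from the abstract (class and function names are invented, not the authors' code): each core keeps its ready threads ordered by priority, and an idle core steals from whichever victim currently exposes the most urgent work rather than from a random one.

```python
import heapq

# Sketch of priority-aware work-stealing with per-core priority queues.
class Core:
    def __init__(self, cid: int):
        self.cid = cid
        self.ready = []  # min-heap of (priority, thread); lower number = more urgent

    def push(self, priority: int, thread) -> None:
        heapq.heappush(self.ready, (priority, thread))

    def pop_highest(self):
        return heapq.heappop(self.ready) if self.ready else None

    def top_priority(self):
        return self.ready[0][0] if self.ready else None

def steal(cores, thief: Core):
    """Steal the most urgent ready thread among all busy cores.

    Unlike classic work-stealing, the victim is not chosen at random: the
    thief inspects the best priority each queue exposes and steals from the
    queue whose top is globally the most urgent, avoiding the priority
    inversion that a single FIFO deque per core would cause.
    """
    victims = [c for c in cores if c is not thief and c.ready]
    if not victims:
        return None
    victim = min(victims, key=lambda c: c.top_priority())
    return victim.pop_highest()
```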
Abstract:
PROFIBUS is an international standard (IEC 61158, EN 50170) for factory-floor communications, with several thousand installations worldwide. Taking into account the increasing need for mobile devices in industrial environments, one obvious solution is to extend traditional wired PROFIBUS networks with wireless capabilities. In this paper, we outline the major aspects of a hybrid wired/wireless PROFIBUS-based architecture, where most of the design options were made in order to guarantee the real-time behaviour of the overall network. We also introduce the timing unpredictability problems resulting from the coexistence of heterogeneous physical media in the same network. However, the major focus of this paper is on how to guarantee real-time communications in such a hybrid network, where nodes (and whole segments) can move between different radio cells (inter-cell mobility). Assuming a simple mobility management mechanism based on mobile nodes performing periodic radio channel assessment and switching, we propose a methodology to compute values for specific parameters that enable an optimal (minimum) and bounded duration of the handoff procedure.
Abstract:
Consider a network where all nodes share a single broadcast domain, such as a wired broadcast network. Nodes take sensor readings, but individual sensor readings are not the most important pieces of data in the system. Instead, we are interested in aggregated quantities of the sensor readings, such as the minimum and maximum values, the number of nodes, and the median among a set of sensor readings on different nodes. In this paper we show that a prioritized medium access control (MAC) protocol may advantageously be exploited to efficiently compute aggregated quantities of sensor readings. In this context, we propose a distributed algorithm with very low time- and message-complexity for computing certain aggregated quantities. Importantly, we show that if every sensor node knows its geographical location, then sensor data can be interpolated with our novel distributed algorithm, and the message-complexity of the algorithm is independent of the number of nodes. Such an interpolation of sensor data can be used to compute any desired function; for example, the temperature gradient in a room (e.g., an industrial plant) densely populated with sensor nodes, or the gas concentration gradient within a pipeline or traffic tunnel.
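The mechanism can be illustrated with a small sketch (our own conventions and names, not the paper's code): with a dominance-based MAC such as CAN-style bitwise arbitration, every node contends with its sensor reading encoded in the priority field, and the value that survives arbitration is itself the aggregate, so MAX is computed in a single arbitration round regardless of the number of nodes.

```python
# Sketch of computing MAX over a shared bus via dominance-based bitwise
# arbitration. We use the convention that a 1-bit is dominant so the largest
# reading wins (real CAN uses dominant 0, which directly yields MIN; MAX then
# follows by contending with the bitwise complement of each reading).
def arbitrate_max(readings, bits: int = 8) -> int:
    contenders = set(readings)
    winner = 0
    for bit in reversed(range(bits)):  # priorities are transmitted MSB first
        dominant = any((r >> bit) & 1 for r in contenders)
        winner = (winner << 1) | int(dominant)
        # A node that sends a recessive bit while the bus carries a dominant
        # bit loses arbitration and withdraws from this round.
        contenders = {r for r in contenders if ((r >> bit) & 1) == dominant}
    return winner

print(arbitrate_max([17, 42, 3, 99, 56]))  # -> 99, in one arbitration round
```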
Abstract:
Computer simulation has grown rapidly since its inception and is now one of the most widely used techniques in management science and operational research. Its principle is the replication of the operation of processes or systems over periods of time, which makes it an indispensable methodology for solving a wide variety of real-world problems, regardless of their complexity. Among its countless application areas, across the most diverse fields, the most prominent is its use in production systems, where the range of available applications is very broad. Simulation has been applied to solve problems in production systems because it allows companies to adjust and plan their operations and systems quickly, effectively and deliberately, enabling them to adapt rapidly to the constantly changing needs of the global economy. Simulation applications and packages have followed technological trends, and the use of object-oriented technologies in their development is now evident. The first phase of this study consisted of gathering supporting information on modelling and simulation concepts, as well as their application to real-time production systems. The study then focused on the development of a prototype application for simulating manufacturing environments in real time. The tool was developed with potential pedagogical purposes and academic use in mind: it is capable of simulating a model of a production system and is also equipped with animation. Without ruling out the possibility of integrating other modules, or even other platforms, particular care was taken to base its implementation on object-oriented development methodologies.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Dissertation presented to obtain the degree of Master in Informatics, at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
In future power systems, under the smart grid and microgrid operation paradigms, consumers can be seen as an energy resource with decentralized and autonomous decisions in energy management. It is expected that each consumer will manage not only loads, but also small generation units, heating systems, storage systems, and electric vehicles. Each consumer can participate in different demand response events promoted by system operators or aggregation entities. This paper proposes an innovative method to manage the appliances in a house during a demand response event. The main contribution of this work is to include time constraints in resource management, together with context evaluation, in order to ensure the required comfort levels. The dynamic resource management methodology allows better management of resources during a demand response event, especially long-duration ones, by changing the priorities of loads during the event. A case study with two scenarios is presented, considering one demand response event of 30 min duration and another of 240 min (4 h). In both simulations, the demand response event requests a reduction in power consumption during the event. A total of 18 loads is used, including real and virtual ones, controlled by the presented house management system.
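As a toy illustration of priority-based load management during such an event (the load names, powers and priorities below are invented, not the 18 loads of the paper's case study): the controller sheds the least important loads first until the requested reduction is met, and the priorities themselves can be re-evaluated as the event progresses.

```python
# Toy sketch of priority-ordered load shedding during a demand response event.
def shed_loads(loads, reduction_needed_w: float):
    """loads: (name, power_w, priority) tuples; higher priority = shed later."""
    shed, achieved = [], 0.0
    for name, power_w, _prio in sorted(loads, key=lambda l: l[2]):
        if achieved >= reduction_needed_w:
            break  # requested reduction already reached
        shed.append(name)
        achieved += power_w
    return shed, achieved

loads = [("water heater", 1500, 1), ("HVAC", 2000, 3), ("lighting", 200, 5)]
print(shed_loads(loads, 1600.0))  # -> (['water heater', 'HVAC'], 3500.0)
```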
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
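The iterative projection at the heart of the extraction step can be sketched as follows (a simplified reading with invented names, not the chapter's code; the full VCA algorithm also includes the subspace identification and SNR-dependent projections omitted here):

```python
import numpy as np

# Simplified sketch of the iterative orthogonal-projection idea: repeatedly
# project the data onto a direction orthogonal to the subspace spanned by the
# endmembers found so far, and take the extreme of the projection as the next
# endmember.
def extract_endmembers(X: np.ndarray, p: int, seed: int = 0):
    """X: (bands, pixels) spectral data, pure pixels assumed present.
    Returns the indices of the p pixels selected as endmembers."""
    rng = np.random.default_rng(seed)
    bands, _ = X.shape
    E = np.zeros((bands, p))  # endmember signatures found so far
    picked = []
    for k in range(p):
        w = rng.standard_normal(bands)      # random direction
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])   # orthonormal basis of span(E)
            w -= Q @ (Q.T @ w)              # keep only the orthogonal component
        w /= np.linalg.norm(w)
        idx = int(np.argmax(np.abs(w @ X))) # extreme of the projection
        picked.append(idx)
        E[:, k] = X[:, idx]
    return picked
```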
Abstract:
This study aims to: (1) identify the practices developed in a Portuguese public higher education organisation; (2) characterise the typology of HRM practices, both traditional and strategic; (3) understand to what extent HRM practices are related to the qualification area of the heads of the HR department; (4) assess the degree of satisfaction that employees feel with the Human Resource Management practices in place and its relation to the qualification area of the heads of the HR department. A mixed methodology was used, which broadens the results obtainable in investigative approaches and brings relevant gains to the research. A first exploratory study uses a mixed quantitative and qualitative methodology, based on a semi-structured interview and a survey of the HR managers; its goals are to identify and characterise the HRM practices in force in the organisation and, consequently, to ascertain whether they approximate those described in the literature, as well as to determine the degree of intervention of the HR department in the development of HRM practices and to characterise the profile of the HR manager in the organisation, ascertaining whether a qualification in HR influences the HRM practices developed. In the second study, we used a quantitative methodology based on a questionnaire survey, applied to full-time employees, to assess their degree of satisfaction with the Human Resource Management practices. In combining the two studies, our objective was to answer the questions that guided our investigation. The final part of the dissertation discusses the main results obtained and presents the conclusions of the study carried out here.
The results suggest that: 1) the existing HRM practices are essentially traditional in nature, especially administrative management; 2) the predominant HRM practices are: Human Resource Planning, Job Analysis and Description, Recruitment and Selection, Training and Development, Administrative Management, Communication and Information Sharing, Ethics and Deontology, and the Disciplinary Statute; 3) there is little recourse to outsourcing of HRM practices; 4) the HR department's degree of intervention rests on activities of a more administrative nature; 5) traditional HR practices are those that demand the most time from the HR department; 6) there is no relation between the type of HRM practice and the qualification area of the head of the HR department; 7) HRM practices essentially follow the legal norms and rigid rules of HRM in Public Administration; 8) some HRM practices are not regarded as important by managers in the Public Administration context, although some procedures of those practices are already in place; 9) the training and development practice is not properly developed and does not comply with what the law stipulates; 10) career management and the compensation and rewards system are perceived as non-existent, because there have been no promotions or progressions since 2005; 11) performance appraisal is a bureaucratic and ritualistic system aimed at promotion and compensation, with no practical effect at present, which causes dissatisfaction and a feeling of injustice; 12) there are communication problems regarding the sharing and standardisation of procedures between organisational units; 13) employee satisfaction is higher with traditional HRM practices, namely administrative management, recruitment and selection, job analysis and description, and reception, integration and socialisation; 14) satisfaction is lower with career management, the compensation and rewards system, and performance appraisal; 15) regarding the relation between the degree of satisfaction and the socio-demographic and professional characteristics of the respondents, the statistically significant cases show that employees with ten or more years of seniority tend to be more satisfied with HRM practices; 16) employees are more satisfied in organisational units where the head of the HR department holds a qualification in HR.
Abstract:
In recent years there has been an increase in the number of video surveillance systems deployed in the most diverse environments, and these systems are becoming increasingly sophisticated. Casinos are a very popular example of the use of such sophisticated systems; several casinos nowadays use cameras for the automatic monitoring of their gaming operations. However, there are currently several types of games for which automatic monitoring is not yet available, one of them being the game Banca Francesa. This dissertation proposes a set of algorithms designed for a system to monitor and manage the casino game Banca Francesa with the aid of computer vision components, taking into account the most relevant existing contributions in the area, produced by researchers and related entities. The dissertation presents four distinct modules intended to help casinos prevent fraud during their operations, as well as to support the automatic collection of game results: Dice Sample Generator, a module proposed for creating large-scale test cases; Dice Sample Analyzer, a module proposed for detecting game results; Dice Calibration, a module proposed for automatic calibration of the system; and Motion Detection, a module proposed for detecting fraud in the game. Finally, for each module, a set of tests and analyses is presented in order to verify whether the concept of each of the presented proposals can be proven.
Abstract:
This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed-integer nonlinear programming, is developed in order to identify the adequate investments in distribution network components that increase the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the cost of non-supplied energy. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 33-bus distribution network.
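As a toy illustration of the Pareto-front idea underlying the multicriteria model (the candidate plans and their costs below are invented, not taken from the case study): among candidate investment plans evaluated on two objectives to be minimized, such as investment cost and non-supplied energy cost, only the non-dominated plans are kept.

```python
# Toy sketch of Pareto-dominance filtering over two minimization objectives.
def pareto_front(candidates):
    """candidates: list of (name, cost1, cost2); returns the non-dominated subset."""
    front = []
    for name, c1, c2 in candidates:
        dominated = any(
            (o1 <= c1 and o2 <= c2) and (o1 < c1 or o2 < c2)
            for _, o1, o2 in candidates
        )
        if not dominated:
            front.append((name, c1, c2))
    return front

plans = [("A", 10.0, 5.0), ("B", 8.0, 7.0), ("C", 12.0, 6.0), ("D", 9.0, 4.0)]
print(pareto_front(plans))  # -> [('B', 8.0, 7.0), ('D', 9.0, 4.0)]
```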
Abstract:
The development of power systems and the introduction of decentralized generation and Electric Vehicles (EVs), both connected to distribution networks, represent a major challenge for planning and operation. This new paradigm requires a new energy resource management approach which considers not only generation, but also the management of loads through demand response programs, energy storage units, EVs and other players in a liberalized electricity market environment. This paper proposes a methodology to be used by Virtual Power Players (VPPs) for energy resource scheduling in smart grids, considering day-ahead, hour-ahead and real-time scheduling. The case study considers a 33-bus distribution network with high penetration of distributed energy resources. The wind generation profile is based on a real Portuguese wind farm. Four scenarios are presented, taking into account 0, 1, 2 and 5 periods (hours or minutes) ahead of the scheduling period in the hour-ahead and real-time scheduling.