998 results for intra-domain routing
Abstract:
OBJECTIVE: To identify intra-urban differentials and risk factors in the prevalence of low birth weight. METHODS: Information from the live birth certificates of mothers residing in the municipality of São Paulo, obtained from the Live Birth Information System and the Seade Foundation for the period 2002-2003, was used, totaling 368,980 live births. Addresses were georeferenced to census tracts and classified into six vulnerability groups according to the Índice Paulista de Vulnerabilidade Social (São Paulo Social Vulnerability Index). Logistic regression analysis was used to identify possible risk factors. RESULTS: The prevalence of low birth weight tended to rise as vulnerability increased (from 6.8% to 8.1%). There were significant differences between groups in maternal characteristics, prenatal care, and the proportion of non-preterm low-weight births. In the non-preterm low birth weight group, a proxy for intrauterine growth restriction, living in vulnerable areas (1.29; 1.17-1.43) and unfavorable maternal socioeconomic characteristics, such as adolescent motherhood (1.13; 1.04-1.22), low schooling (1.26; 1.17-1.35) and high parity (1.10; 1.01-1.20), were risk factors, as were advanced maternal age (1.38; 1.30-1.47) and absence of a partner (1.15; 1.11-1.20). Absence of prenatal care carried the highest risk of low birth weight for both preterm (3.39; 2.86-4.02) and non-preterm births (2.12; 1.87-2.41). The risk of low birth weight decreased as the number of prenatal visits increased, for both preterm and non-preterm births. CONCLUSIONS: The prevalence of low birth weight differs across vulnerability groups. Prenatal care proved unequal across vulnerability groups, and its high associated risk of low birth weight indicates the importance of expanding access to, and the quality of, health services.
Abstract:
Modern real-time systems increasingly generate heavy, dynamic computational workloads, making it unlikely that they can still be implemented on uniprocessor systems. Indeed, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way to improve application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply annotating potentially parallel regions within their applications. The system treats all such annotations merely as hints, which the language itself may ignore and replace with equivalent sequential constructs. How the computation is actually subdivided and mapped onto the various processors is therefore the responsibility of the compiler and of the underlying computing system. By lifting this burden from the programmer, programming complexity is considerably reduced, which usually translates into increased productivity. However, unless the underlying scheduling mechanism is simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms that employ a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements.
However, these algorithms account neither for timing constraints nor for any other form of task prioritization, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have served it so well. Very briefly, the single conventional task-management queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this we apply the well-known dynamic scheduling algorithm G-EDF, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class, in order to evaluate in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internal functions and the strong interdependencies among its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept. A significant part of this document is therefore dedicated to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between different synchronization mechanisms.
The experimental results show that RTWS, compared with other practical work on dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system, and doing so inexpensively. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up on stealing work too easily, which causes idle periods on the CPU in question when overall system utilization is low. Although the work focused on keeping scheduling costs low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved quite robust, missing no deadline in any of the experiments. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals, and even helps reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. In general, though, we can conclude that RTWS is a promising solution for efficient scheduling of parallel tasks with timing constraints.
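The data-structure change at the heart of the approach described above is the replacement of the single conventional work-stealing deque with a queue of deques ordered by task priority. A minimal sketch of that idea, in Python rather than in kernel C; the class and method names (`PriorityWorkStealingQueue`, `pop_local`, `steal`) are hypothetical, chosen for illustration only:

```python
from collections import deque
import bisect

class PriorityWorkStealingQueue:
    """Sketch of a queue of deques ordered by increasing deadline (EDF priority).

    Each distinct deadline gets its own deque. The owner pushes and pops at
    the head of the earliest-deadline deque; thieves steal from its tail,
    which keeps owner and thief operating on opposite ends, as in classic
    work stealing.
    """
    def __init__(self):
        self._deadlines = []   # sorted list of deadlines (earliest first)
        self._deques = {}      # deadline -> deque of tasks

    def push(self, task, deadline):
        if deadline not in self._deques:
            bisect.insort(self._deadlines, deadline)
            self._deques[deadline] = deque()
        self._deques[deadline].appendleft(task)   # owner's end

    def _take(self, from_owner_end):
        if not self._deadlines:
            return None
        d = self._deadlines[0]                    # most urgent deadline
        dq = self._deques[d]
        task = dq.popleft() if from_owner_end else dq.pop()
        if not dq:                                # drop empty deque
            del self._deques[d]
            self._deadlines.pop(0)
        return task

    def pop_local(self):
        """Owner takes the most urgent task (LIFO within a deadline)."""
        return self._take(from_owner_end=True)

    def steal(self):
        """A thief takes from the opposite end of the most urgent deque."""
        return self._take(from_owner_end=False)
```

In a real kernel implementation each CPU would own one such structure and thieves would scan remote CPUs' queues; the sketch only shows the priority ordering that distinguishes this design from a plain deque.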
Abstract:
Three commonly consumed and commercially valuable fish species (sardine, chub mackerel and horse mackerel) were collected from the Northeast and Eastern Central Atlantic Ocean in Portuguese waters over one year. Mercury, cadmium, lead and arsenic amounts were determined in muscle using graphite furnace and cold vapour atomic absorption spectrometry. The maximum mean levels of mercury (0.1715 ± 0.0857 mg/kg, ww) and arsenic (1.139 ± 0.350 mg/kg, ww) were detected in horse mackerel. The highest mean amounts of cadmium (0.0084 ± 0.0036 mg/kg, ww) and lead (0.0379 ± 0.0303 mg/kg, ww) were determined in chub mackerel and in sardine, respectively. Intra- and inter-specific variability of metal bioaccumulation was statistically assessed, and species and length proved to be the major influencing biometric factors, particularly for mercury and arsenic. Muscle metal concentrations were below the tolerable limits set by European Commission Regulation and by the Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO). However, estimation of non-carcinogenic and carcinogenic health risks by the target hazard quotient and target carcinogenic risk, established by the US Environmental Protection Agency, suggests that these species should be eaten in moderation due to possible hazards and carcinogenic risks derived from arsenic (in all analyzed species) and mercury ingestion (in horse mackerel and chub mackerel).
Abstract:
In previous work we proposed a hybrid wired/wireless PROFIBUS solution in which the interconnection between the heterogeneous media was accomplished through bridge-like devices, with wireless stations able to move between different wireless cells. We also proposed a worst-case timing analysis assuming stationary stations. In this paper we advance that work by proposing a worst-case timing analysis for the system's message streams that considers the effect of inter-cell mobility.
Abstract:
Although power-line communication (PLC) is not a new technology, its use to support data communication with timing requirements is still the focus of ongoing research. A new infrastructure intended for communication over power lines from a central location to dispersed nodes using inexpensive devices was presented recently. This infrastructure uses a two-level hierarchical power-line system together with an IP-based network. Due to the master-slave behaviour of the PLC medium access, together with the inherently dynamic topology of power-line networks, a mechanism must be provided for end-to-end communication through the two levels of the power-line system. In this paper we introduce the architecture of the PLC protocol layer being implemented to this end.
Abstract:
The marriage of emerging information technologies with control technologies is a major driving force that, on the factory floor, is creating enormous eagerness to extend the capabilities of currently available fieldbus networks to cover functionalities not considered until recently. Providing wireless capabilities to this type of communication network is a big share of that effort. The RFieldbus European project is just one example, in which PROFIBUS was given suitable extensions for implementing hybrid wired/wireless communication systems. In RFieldbus, interoperability between wired and wireless components is achieved through the use of specific intermediate networking systems operating as repeaters, thus creating a single logical ring (SLR) network. The main advantage of the SLR approach is that the effort for protocol extensions is not significant. However, a multiple logical ring (MLR) approach provides traffic and error isolation between different network segments. This concept was introduced in previous work, where an approach for a bridge-based architecture was briefly outlined. This paper focuses on the details of the Inter-Domain Protocol (IDP), which is responsible for handling transactions between different network domains (wired or wireless) running the PROFIBUS protocol.
Abstract:
Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) represent an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices, and accommodating them on an individual many-core chip, offers various options for power savings and cost reductions and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform while considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Analysis assures that the derived solutions guarantee fulfilment of the posed time constraints on worst-case communication latencies, while also providing an environment in which to perform load balancing, e.g. for thermal, energy, fault-tolerance or performance reasons. We also propose several constraints on the topological structure of the application mapping, as well as on the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability in the analysis.
Abstract:
A large part of the power dissipated in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms, inter alia to enhance battery life. While I/O device scheduling for real-time systems has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Existing approaches were crafted assuming a very large overhead for device transitions; technology enhancements have since allowed hardware vendors to reduce device transition overhead and energy consumption. We propose an intra-task device-scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% compared with techniques proposed in the state of the art.
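The trade-off underlying any shutdown-based device scheduling scheme is the break-even time: a transition only pays off if the idle interval is long enough for the sleep-state savings to recover the energy spent transitioning. A hedged sketch of that decision; the function and its parameter names are illustrative, not the paper's actual algorithm:

```python
def worth_shutting_down(idle_interval, p_active_idle, p_sleep,
                        e_transition, t_transition):
    """Decide whether shutting a device down during an idle interval saves energy.

    idle_interval  : predicted idle time (s)
    p_active_idle  : power draw if the device stays powered but idle (W)
    p_sleep        : power draw in the sleep state (W)
    e_transition   : total energy to sleep and wake again (J)
    t_transition   : total time to sleep and wake again (s)
    """
    if idle_interval <= t_transition:
        return False   # interval too short to even complete the transition
    e_stay_up = p_active_idle * idle_interval
    e_shutdown = e_transition + p_sleep * (idle_interval - t_transition)
    return e_shutdown < e_stay_up
```

In an intra-task scheme the scheduler would evaluate this kind of predicate at points inside a task where the device is known not to be needed, subject to the schedulability analysis admitting the wake-up latency.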
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform. Each processor is either of type-1 or type-2, and each task has a different execution time on each processor type. Jobs can migrate between processors of the same type (referred to as intra-type migration) but cannot migrate between processors of different types. We present a new scheduling algorithm, LP-Relax(THR), which offers the following guarantee: if a task set can be scheduled to meet deadlines by an optimal task-assignment scheme that allows intra-type migration, then LP-Relax(THR) also meets deadlines with intra-type migration when given processors 1/THR times as fast (referred to as the speed competitive ratio), where THR <= 2/3.
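For intuition on the assignment problem described above: on identical processors with full migration, an implicit-deadline sporadic task set is feasible when its total utilization does not exceed the number of processors and no single task's utilization exceeds 1. A brute-force sketch of checking whether any type assignment satisfies these bounds; this is illustrative only, and is not the LP-Relax(THR) algorithm, whose point is precisely to avoid such enumeration:

```python
from itertools import product

def exists_feasible_assignment(u1, u2, m1, m2):
    """Brute-force check: can tasks be partitioned between processor types
    so that each type's workload fits?

    u1[i], u2[i] : utilization of task i on a type-1 / type-2 processor
    m1, m2       : number of type-1 / type-2 processors
    Within a type, jobs may migrate, so the classic bounds apply:
    total utilization <= processor count, each task's utilization <= 1.
    """
    n = len(u1)
    for assign in product((1, 2), repeat=n):     # 2^n type assignments
        t1 = [u1[i] for i in range(n) if assign[i] == 1]
        t2 = [u2[i] for i in range(n) if assign[i] == 2]
        if (sum(t1) <= m1 and sum(t2) <= m2
                and all(u <= 1 for u in t1)
                and all(u <= 1 for u in t2)):
            return True
    return False
```

The exponential search space is exactly what motivates LP-relaxation-based assignment with a provable speed bound instead of exact enumeration.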
Abstract:
WiDom is a previously proposed prioritized medium access control protocol for wireless channels. We present a modification to this protocol that improves its reliability. The modification has similarities with cooperative relaying schemes, but in our protocol all nodes can relay a carrier wave. A preliminary evaluation shows that, under transmission errors, a significant reduction in the number of failed tournaments can be achieved.
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
In this work a mixed integer linear programming (MILP) model was applied to mixed line rate (MLR) IP over WDM and IP over OTN over WDM (with and without OTN grooming) networks, with the aim of reducing network energy consumption. Energy-aware and energy-aware & shortest-path routing techniques were used. Simulations were based on a real network topology as well as on traffic-matrix forecasts built from statistical data from 2005 up to 2017. The energy-aware routing optimization model on the IPoWDM network showed the lowest energy consumption across all years and, compared with energy-aware & shortest-path routing, led to an overall reduction in energy consumption of up to 29%; it is expected to save even more relative to shortest-path routing. © 2014 IEEE.
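At its simplest, the contrast between energy-aware and shortest-path routing comes down to the link weight handed to the path computation. A small sketch using plain Dijkstra; the graph, the attribute names `km` and `w` (length and per-link energy), and the per-path cost model are assumptions for illustration, and the work above uses an MILP formulation rather than a per-path heuristic:

```python
import heapq

def dijkstra(adj, src, dst, weight):
    """Shortest path under a pluggable link-weight function.

    adj    : {node: [(neighbor, attrs), ...]}
    weight : attrs -> nonnegative cost (e.g. length for shortest-path
             routing, energy for energy-aware routing)
    Returns (total cost, path as a list of nodes).
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, attrs in adj[u]:
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst                  # walk predecessors back to src
    while u != src:
        u = prev[u]
        path.append(u)
    return dist[dst], path[::-1]
```

Swapping `weight` from a length-based to an energy-based function can select a physically longer but less energy-hungry route, which is the trade-off the energy-aware technique exploits.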
Abstract:
The process of resource-system selection plays an important role in Distributed/Agile/Virtual Enterprise (D/A/V E) integration. However, resource-system selection remains a difficult problem to solve in a D/A/V E, as this paper points out. Globally, the selection problem has been formulated from different perspectives, giving rise to different kinds of models and algorithms to solve it. To assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection-model activities and tools, and that can adapt to each D/A/V E project or instance (the major goal of our final project), in this paper we present a formulation of one kind of resource-selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the studied problem. The limitations detected create the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of the algorithms' limitations is very important in order to develop and select, based on problem features, the most suitable algorithm that guarantees good performance.
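The scaling behavior described above, where processing time depends on the number of tasks and on the number of pre-selected resources per task, shows up even in a toy version of the selection problem: assign each processing task exactly one of its pre-selected resources at minimum total cost. A sketch by exhaustive enumeration; this is illustrative only, since the work above uses simplex and branch-and-bound rather than brute force:

```python
from itertools import product

def select_resources(costs):
    """Exact resource selection by exhaustive enumeration.

    costs[t][r] : cost of assigning the r-th pre-selected resource to task t.
    Returns (best total cost, tuple of chosen resource indices, one per task).
    The search space has prod(len(costs[t])) points, which is why exact
    methods stop being practical as tasks and pre-selected resources grow.
    """
    best_cost, best = float("inf"), None
    for choice in product(*[range(len(row)) for row in costs]):
        c = sum(costs[t][r] for t, r in enumerate(choice))
        if c < best_cost:
            best_cost, best = c, choice
    return best_cost, best
```

With independent tasks the optimum is simply the per-task minimum; the enumeration becomes necessary once coupling constraints (shared resource capacity, compatibility between selections) are added, which is exactly when branch-and-bound, and eventually approximate algorithms, come into play.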
Abstract:
OBJECTIVE To evaluate the viability of including a professional specialist in intra-hospital committees for organ and tissue donation for transplantation. METHODS Epidemiological, retrospective and cross-sectional study (2003-2011 and 2008-2012), performed using organ donation for transplant data from the state of Sao Paulo, Southeastern Brazil. Nine hospitals were evaluated (hospitals 1 to 9). Logistic regression was used to evaluate differences in the number of brain death referrals and actual donors (dependent variables) after the professional specialist started work (independent variable) at the intra-hospital committee of organ and tissue donation for transplantation. To evaluate hospital invoicing, the hourly wages of the doctor and registered nurse, according to the legislation of the Consolidation of Labor Laws, were calculated, as were the return on investment and the time elapsed to achieve it. RESULTS Following the nursing specialist's commencement on the committee, brain death referrals and the number of actual donors increased at hospital 2 (4.17 and 1.52, respectively). At hospital 7, the number of actual donors also increased, from 0.005 to 1.54. In addition, after the nurse started working, hospital revenues increased by 190.0% (ranging from 40.0% to 1,955%). The monthly cost of a nurse working 20 hours was US$397.97, whereas a doctor would cost US$3,526.67. The return on investment was 275% over the short term (0.36 years). CONCLUSIONS This paper showed that including a professional specialist in intra-hospital committees for organ and tissue donation for transplantation is cost-effective. Further economic research in the area could contribute to the efficient public-policy implementation of this organ and tissue harvesting model.
Abstract:
Electronics Letters, Vol. 38, No. 19