949 results for "Time constraints"


Relevance: 60.00%

Abstract:

Current advanced cloud infrastructure management solutions allow actions to be scheduled that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose a dynamically generated scaling model for the VMs hosting the services of distributed applications, able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking across multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models through scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can successfully control the scaling of distributed services.
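As a rough illustration of the kind of rule inference this abstract describes, the sketch below fits a predictor from monitoring metrics to an SLA parameter and turns it into a scale-out test. All metric names, numbers, and the SLA limit are invented for illustration; the paper's actual mechanism combines benchmarking across VM configurations with a scaling-path heuristic.

```python
# Hypothetical sketch: learn which monitoring metrics predict an SLA
# parameter (here, response time), then turn the fitted model into a
# scale-out rule. Metric names and thresholds are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Benchmark data: rows = benchmark runs on a given VM configuration,
# columns = candidate monitoring metrics.
metrics = np.array([
    # cpu_util, queue_len, req_rate
    [0.35,  2,  50],
    [0.55,  5, 120],
    [0.70,  9, 200],
    [0.85, 17, 310],
])
response_ms = np.array([80, 120, 210, 480])  # measured SLA parameter

model = LinearRegression().fit(metrics, response_ms)

def needs_scale_out(current_metrics, sla_limit_ms=250.0):
    """Scale out when the predicted response time breaches the SLA."""
    predicted = model.predict(np.array([current_metrics]))[0]
    return predicted > sla_limit_ms

print(needs_scale_out([0.80, 14, 280]))  # likely True with these toy numbers
```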

Relevance: 60.00%

Abstract:

We examine the board overlap among firms listed in Switzerland. Collusion, managerial entrenchment, and financial participation cannot explain it. The overlap appears to be induced by banks and by the accumulation of seats by the most popular directors. We also document that seat accumulation is negatively related to firm value, possibly because of the conflicts of interest that multiple directorships induce and the time constraints directors face. Contrary to popular belief, however, the directors of traded firms do not generally hold more than one mandate in other traded firms; they do hold multiple seats in non-traded firms.

Relevance: 60.00%

Abstract:

We address under what conditions a magma generated by partial melting at 100 km depth in the mantle wedge above a subduction zone can reach the crust in dikes before stalling. We also address under what conditions primitive basaltic magma (Mg# > 60) can be delivered from this depth to the crust. We employ linear elastic fracture mechanics with magma solidification theory and perform a parametric sensitivity analysis. All dikes are initiated at a depth of 100 km in the thermal core of the wedge, and the Moho is fixed at 35 km depth. We consider a range of melt solidus temperatures (800-1100 °C), viscosities (10-100 Pa s), and densities (2400-2700 kg m^-3). We also consider a range of host rock fracture toughness values (50-300 MPa m^1/2) and dike lengths (2-5 km) and two thermal structures for the mantle wedge (1260 and 1400 °C at 100 km depth and 760 and 900 °C at 35 km depth). For the given parameter space, many dikes can reach the Moho in less than a few hundred hours, well within the time constraints provided by U-series isotope disequilibria studies. Increasing the temperature in the mantle wedge, or increasing the dike length, allows additional dikes to propagate to the Moho. We conclude that some dikes with vertical lengths near their critical lengths and relatively high solidus temperatures will stall in the mantle before reaching the Moho, and these may be returned by corner flow to depths where they can melt under hydrous conditions. Thus, a chemical signature in arc lavas suggesting partial melting of slab basalts may be partly influenced by these recycled dikes. Alternatively, dikes with lengths well above their critical lengths can easily deliver primitive magmas to the crust, particularly if the mantle wedge is relatively hot. Dike transport remains a viable primary mechanism of magma ascent in convergent tectonic settings, but the potential for less rapid mechanisms making an important contribution increases as the mantle temperature at the Moho approaches the solidus temperature of the magma.
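For a sense of the scales involved, the following back-of-envelope sketch uses the standard order-of-magnitude relation for a buoyancy-driven crack, K ~ Δρ g L^(3/2) (up to a geometric factor of order one), to estimate critical dike lengths from the paper's parameter ranges. This is an illustration, not the paper's fracture mechanics model, and the assumed host density of 3300 kg m^-3 is ours.

```python
# Back-of-envelope sketch (not from the paper): for a buoyancy-driven dike,
# tip stress intensity scales as K ~ drho * g * L**1.5 up to an O(1)
# geometric factor, so the critical length is L_c ~ (K_c / (drho * g))**(2/3).
g = 9.81                                   # m s^-2
rho_mantle = 3300.0                        # kg m^-3, assumed host density
for rho_melt in (2400.0, 2700.0):          # paper's melt density range
    for K_c in (50e6, 300e6):              # paper's toughness range, Pa m^0.5
        drho = rho_mantle - rho_melt
        L_c = (K_c / (drho * g)) ** (2.0 / 3.0)
        print(f"rho_melt={rho_melt:.0f} kg/m^3, K_c={K_c/1e6:.0f} MPa m^0.5 "
              f"-> L_c ~ {L_c/1000:.2f} km")
```

Under these toy assumptions the critical lengths come out near 0.3-1.4 km, broadly consistent with the paper's statement that 2-5 km dikes sit near or above their critical lengths.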

Relevance: 60.00%

Abstract:

Rapidly evolving technology and working time constraints call for changes in how trainees are educated. In reality, trainees spend fewer hours in the hospital and face more difficulties in acquiring the qualifications required to work independently as specialists. Simulation-based training is a potential solution. It offers the possibility to learn basic technical skills, repeatedly perform key steps of procedures, and simulate challenging scenarios in team training. Patients are not put at risk, and learning curves can be shortened. Advanced learners can practice managing rare complications. The presence of a senior faculty member is key to effective assessment and debriefing in simulation training. In the field of vascular access surgery, simulation models are available for open as well as endovascular procedures. In this narrative review, we describe the theory of simulation, present simulation models in vascular (access) surgery, and discuss the possible benefits for patient safety as well as the difficulties of implementing simulation in training.

Relevance: 60.00%

Abstract:

This study examines current perceptions of nurses' roles and responsibilities, specifically those of adolescent inpatient mental health nurses (MHNs). Psychiatrists and psychiatric advanced practice registered nurses (APRNs) who work with MHNs and have published scholarly psychiatric articles were contacted and asked to participate in an anonymous survey hosted on SurveyMonkey.com. The research examines the stereotypes of nurses that exist within the health care profession itself, compared with the pre-existing stereotypes conveyed by media portrayals of nurses. Due to investigator time constraints, only six subjects participated in the study. Analysis of the survey responses revealed four overarching themes. First, MHNs are a critical component of the health care team, emerging as rigorous, independent leaders, although still stereotyped as female and sociable. Second, MHNs complete a wide range of daily activities, many of which go unnoticed by observers, often resulting in mixed feelings about whether MHNs receive the respect and recognition they deserve. Third, MHNs treat each patient as a person with unique thoughts, feelings, and physical make-up. Fourth, MHNs act as coordinators of care between various health professionals to provide the patient with a holistic approach to healing.

Relevance: 60.00%

Abstract:

Although physician recommendation is significantly associated with colorectal cancer screening (CRCS), it does not motivate all patients to get screened. While improved physician recommendation has been shown to increase CRCS, questions remain about which elements of the discussion lead to screening. The objective of this study is to describe patients' perceptions and interpretations of their physician's recommendation for CRCS during their annual wellness exam. A subset of patients (n=51) participating in a supplement study of a behavioral intervention trial designed to increase CRCS completed a follow-up, open-ended interview two to four weeks after their annual wellness visit. Transcripts of these interviews were analyzed using qualitative methods. Findings suggest that most patients would follow their physician's recommendation for CRCS despite engaging in little discussion. Patients may refrain from CRCS discussion because of an existing commitment to CRCS, awareness of screening guidelines, and trust in their physician's honesty and beneficence. Yet many patients left their wellness exams with questions, refraining because of plans to consult their physicians later, perceived time constraints, or the lack of a patient-physician relationship. If patients are leaving their wellness exams with unanswered questions, interventions should prepare physicians for patient reticence, teaching them how to assure patients that CRCS is a primary care activity in which all questions and concerns, including cost and scheduling, can be resolved.

Relevance: 60.00%

Abstract:

Multiple guidelines recommend debriefing of actual resuscitations to improve clinical performance. We implemented a novel standardized debriefing program using a Debriefing In Situ Conversation after Emergent Resuscitations Now (DISCERN) tool. Following the development of the evidence-based DISCERN tool, we conducted an observational study of all resuscitations (intubation, CPR, and/or defibrillation) at a pediatric emergency department (ED) over one year. Resuscitation interventions, patient survival, and physician team leader characteristics were analyzed as predictors for debriefing. Each debriefing's participants, duration, and content were recorded. The thematic content of debriefings was categorized, using a framework approach, into Team Emergency Assessment Measure (TEAM) elements. There were 241 resuscitations and 63 (26%) debriefings. A higher proportion of debriefings occurred after CPR (p<0.001) or ED death (p<0.001). Debriefing participants always included an attending and a nurse; the median number of staff roles present was six. The median interval from the end of resuscitation to the start of debriefing was 33 minutes (IQR 15, 67), and the median debriefing duration was 10 minutes (IQR 5, 12). Common TEAM themes included cooperation/coordination (30%), communication (22%), and situational awareness (15%). Stated reasons for not debriefing included: unnecessary (78%), time constraints (19%), or other reasons (3%). Debriefings with the DISCERN tool usually involved higher-acuity resuscitations, involved most of the indicated personnel, and lasted less than 10 minutes. This qualitative tool could be adapted to other settings. Future studies are needed to evaluate potential impacts on education, quality improvement programming, and staff emotional well-being.

Relevance: 60.00%

Abstract:

Ocean Drilling Program Site 1002 in the Cariaco Basin was drilled in the final two days of Leg 165, with only a short transit remaining to the final port of San Juan, Puerto Rico. Because of severe time constraints, cores from only the first of the three long replicate holes (Hole 1002C) were opened at sea for visual description, and shipboard sampling was restricted to the biostratigraphic examination of core catchers. The limited sampling and the general scarcity of biostratigraphic datums within the late Quaternary interval covered by this greatly expanded hemipelagic sequence resulted in a very poorly defined age model for Site 1002, as reported in the Leg 165 Initial Reports volume of the Proceedings of the Ocean Drilling Program. Here we present a new integrated stratigraphy for Site 1002, based on the standard late Quaternary oxygen-isotope record linked to a suite of refined biostratigraphic datums. These new data show that the sediment sequence recovered by Leg 165 in the Cariaco Basin is continuous and spans the interval from 0 to ~580 ka, with a basal age roughly twice as old as initially suspected from the tentative shipboard identification of a single biostratigraphic datum. The lithologic subunits recognized at Site 1002 are here tied into this new stratigraphic framework, and temporal variations in major sediment components are reported. The biogenic carbonate, opal, and organic carbon contents of sediments in the Cariaco Basin tend to be high during interglacials, whereas the terrigenous contents of the sediments increase during glacials. Glacioeustatic variations in sea level likely exert a dominant control on these first-order variations in lithology, with glacial surface productivity and the nutrient content of waters in the Cariaco Basin affected by shoaling glacial sill depths, and glacial terrigenous inputs affected by narrowing of the inner shelf and the increased proximity of direct riverine sources during sea-level lowstands.

Relevance: 60.00%

Abstract:

The foraging distributions of 20 breeding emperor penguins were investigated at Pointe Géologie, Terre Adélie, Antarctica, using satellite telemetry in 2005 and 2006 during early and late winter as well as during late spring and summer, corresponding to incubation, early chick-brooding, late chick-rearing, and the adult pre-moult period, respectively. Dive depth records of three post-egg-laying females, two post-incubating males, and four late chick-rearing adults were examined, as well as the horizontal space use of these birds. The foraging ranges of chick-provisioning penguins extended over the Antarctic shelf and were constricted by winter pack ice. During the spring ice break-up, foraging ranges rarely exceeded the shelf slope, although access to seawater was apparently almost unlimited. Females in winter appeared constrained in their access to open water but used fissures in the sea ice and increased their prey search effort by expanding the horizontal search component underwater. Birds in spring, however, showed higher area-restricted search than birds in winter. Despite different seasonal foraging strategies, chick-rearing penguins exploited similar areas, as indicated by both a high area-restricted-search index and a high catch per unit effort. During pre-moult trips, emperor penguins ranged much farther offshore than breeding birds, which argues for particularly profitable oceanic feeding areas that can be exploited once the time constraints imposed by having to return to a central place to provision the chick no longer apply.

Relevance: 60.00%

Abstract:

In this paper we present a scalable software architecture for online multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular, and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach for easily parallelizing the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
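A minimal sketch of the described PU / Central Unit split, assuming an invented interface: one process per camera acts as a Processing Unit handling acquisition and detection, while a supervising Central Unit collects results. Stand-in strings replace real frame grabbing and 2D object detection.

```python
# Minimal sketch (names invented) of the PU / Central Unit split: each
# Processing Unit owns acquisition + processing for one camera, while the
# Central Unit supervises the pool and collects detection results.
import multiprocessing as mp

def processing_unit(camera_id: int, results: mp.Queue, frames: int = 3) -> None:
    """One PU: acquire frames from its camera and run 2D object detection."""
    for frame_idx in range(frames):
        frame = f"cam{camera_id}-frame{frame_idx}"   # stand-in for acquisition
        detections = [f"object@{frame}"]             # stand-in for detection
        results.put((camera_id, frame_idx, detections))

def central_unit(num_cameras: int) -> None:
    """Supervisor: spawn one PU per camera and gather their results."""
    results = mp.Queue()
    pus = [mp.Process(target=processing_unit, args=(cid, results))
           for cid in range(num_cameras)]
    for pu in pus:
        pu.start()
    for _ in range(num_cameras * 3):
        print(results.get())
    for pu in pus:
        pu.join()

if __name__ == "__main__":
    central_unit(num_cameras=2)
```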

Relevance: 60.00%

Abstract:

This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. These devices offer high logic capacity, incorporate elements dedicated to signal processing, and come at relatively low cost, which makes them ideal for signal processing applications that demand intensive computation and high performance. The development cost on these platforms, however, is high. While the growing logic capacity of FPGA devices allows complete systems to be built, high-performance requirements often force operators to be optimized at a very low level. Beyond the timing constraints these applications impose, there are also area constraints tied to the target device, forcing designers to evaluate and verify several implementation alternatives. The design and implementation cycle can stretch so long that new FPGA models, with greater capacity and higher speed, appear before the system is finished, rendering the constraints that guided its design obsolete. Several methods exist to improve productivity in developing these applications and thereby shorten their design cycle. This thesis focuses on the reuse of previously designed and verified hardware components. Although conventional HDLs allow already defined components to be reused, the specification can be improved to simplify the process of incorporating components into new designs. The first part of the thesis therefore addresses design specification based on predefined components. This specification not only seeks to improve and simplify the process of adding components to a description, but also to raise the quality of the specified design, offering richer configuration options and even the ability to report properties of the description itself. Reusing an already described component depends largely on the information supplied for its integration into a system. Conventional HDLs provide, alongside the component description, only the input/output interface and a set of configuration parameters; the remaining information normally travels as external documentation. The second part of the thesis proposes a set of encapsulations whose purpose is to attach, together with the component description itself, information useful for its integration into other designs: implementation details, support for configuring the component, and even information on how to configure and connect the component to perform a given function. Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a case study for both the proposed specification and the described encapsulations. The goal of this design is not only to exemplify the proposed specification, but also to obtain an implementation of quality comparable to results in the literature. To that end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level device-specific elements available. The resulting FFT specification is then used to show how its interface can carry information that assists in component selection and configuration from the earliest stages of the design cycle.
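For context on the chosen case study, here is a short software sketch of the radix-2 decimation-in-time decomposition that hardware FFT cores typically implement; it is purely illustrative and unrelated to the thesis's HDL specification and encapsulation proposals.

```python
# Illustrative radix-2 decimation-in-time FFT (the classic decomposition an
# FPGA FFT core typically implements); a software sketch, not the thesis's
# HDL component.
import cmath

def fft(x):
    """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly: upper output
        out[k + n // 2] = even[k] - twiddle   # butterfly: lower output
    return out

# Two unit impulses four samples apart -> magnitude 2 at even bins, 0 at odd.
print([round(abs(v), 3) for v in fft([1, 0, 0, 0, 1, 0, 0, 0])])
```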

Relevance: 60.00%

Abstract:

In the presence of a river flood, the operators in charge of control must take decisions based on imperfect and incomplete sources of information (e.g., data provided by a limited number of sensors) and partial knowledge about the structure and behavior of the river basin. This is a case of reasoning about a complex dynamic system with uncertainty and real-time constraints, where Bayesian networks can provide effective support. In this paper we describe a solution based on spatio-temporal Bayesian networks for use in emergencies produced by river floods. We first describe a set of types of causal relations for hydrologic processes, with spatial and temporal references, to represent the dynamics of the river basin. We then describe how this was included in a computer system called SAIDA to provide assistance to the operators in charge of control in a river basin. Finally, we present experimental results on the performance of the model.
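As a toy illustration of the kind of spatio-temporal causal reasoning described (not the SAIDA system or its actual network), the sketch below encodes "rain upstream at time t influences river level downstream at t+1" and updates the belief when a sensor alarm is observed; all probabilities are invented.

```python
# Toy spatio-temporal sketch (invented numbers): rain upstream at time t
# influences river level downstream at t+1; observing a sensor updates the
# belief by direct enumeration of the joint distribution.
P_rain = {True: 0.3, False: 0.7}             # P(rain_t)
P_high_given_rain = {True: 0.8, False: 0.1}  # P(level_t1 = high | rain_t)
P_alarm_given_high = {True: 0.9, False: 0.05}  # P(sensor_alarm | level_t1)

def posterior_high_level(alarm_observed: bool) -> float:
    """P(level_t1 = high | sensor_alarm = alarm_observed) by enumeration."""
    joint = {}
    for rain in (True, False):
        for high in (True, False):
            p = P_rain[rain]
            p *= P_high_given_rain[rain] if high else 1 - P_high_given_rain[rain]
            p *= P_alarm_given_high[high] if alarm_observed else 1 - P_alarm_given_high[high]
            joint[(rain, high)] = p
    num = sum(p for (_, high), p in joint.items() if high)
    return num / sum(joint.values())

print(f"P(high level at t+1 | alarm) = {posterior_high_level(True):.3f}")
```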

Relevance: 60.00%

Abstract:

Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of the applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of the less important applications, and, beyond that, maximize the total performance of the applications without breaking the lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as a first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism for restricting the battery discharge rate, systematically manages energy as a first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned shares. Proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand across all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and meeting time constraints can be flexibly traded off. A SystemC-based test bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and striking a proper trade-off between the two.
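A minimal sketch of the proportional-energy idea behind energy-based fair queuing, under invented names and a fixed per-quantum energy cost; the real algorithms' virtual-time machinery, discharge-rate restriction, and priority mechanism are omitted.

```python
# Minimal sketch (invented interface) of energy-based fair queuing: each
# task gets a power share, and the scheduler always serves the task whose
# energy use is furthest below its proportional entitlement.

class Task:
    def __init__(self, name, power_share):
        self.name = name
        self.power_share = power_share  # fraction of system power
        self.energy_used = 0.0          # joules consumed so far

def pick_next(tasks):
    # Serve the task with the smallest normalized energy consumption,
    # energy_used / power_share, mirroring virtual time in fair queuing.
    return min(tasks, key=lambda t: t.energy_used / t.power_share)

tasks = [Task("video", 0.5), Task("sync", 0.3), Task("mail", 0.2)]
for _ in range(6):
    t = pick_next(tasks)
    t.energy_used += 1.0   # pretend each quantum costs 1 J
    print(f"run {t.name:5s}  normalized={t.energy_used / t.power_share:.2f}")
```

Over the six toy quanta, the tasks end up served 3, 2, and 1 times, matching their 0.5 / 0.3 / 0.2 power shares, which is the proportional-energy property the abstract describes.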

Relevance: 60.00%

Abstract:

Machine and statistical learning techniques are used in almost all online advertisement systems. The problem of discovering which content is more demanded (e.g., receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information, or associative reinforcement learning) associate with each specific content several features that define the "context" in which it appears (e.g., user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes' theorem. However, for very large contextual information and/or real-time constraints, the exact calculation of Bayes' rule is computationally infeasible. In this article, we present a method able to handle large contextual information for learning in contextual-bandit problems. This method was tested on the Yahoo! dataset in the challenge at ICML 2012's workshop "New Challenges for Exploration & Exploitation 3", obtaining second place. Its basic exploration policy is deterministic in the sense that the same input data (as a time series) yields the same results. We address the deterministic exploration vs. exploitation issue, explaining how the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that rely on a random number generator.
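As a hedged illustration of a deterministic contextual-bandit policy in the spirit described, the sketch below implements standard LinUCB, which is deterministic given the input stream; this is not necessarily the authors' method, and all names and numbers are ours.

```python
# Standard LinUCB sketch: a per-arm linear reward model plus a confidence
# bonus; arm selection is deterministic for a fixed input stream.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # per-arm Gram matrix
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # per-arm reward vector

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))       # ties broken deterministically

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

policy = LinUCB(n_arms=3, n_features=2)
x = np.array([1.0, 0.5])                    # toy context (e.g., user features)
arm = policy.select(x)
policy.update(arm, x, reward=1.0)           # e.g., reward = click indicator
```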

Relevance: 60.00%

Abstract:

In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm that takes real-time operation constraints into account, establishing intelligent strategies to reduce algorithmic complexity as much as possible and to speed up the process. First, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways. On the one hand, a multi-resolution strategy is adopted for segmentation: global operations are applied only to low-resolution images, and the resolution is increased to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: tracking objects of interest allows the generation of inhibition areas, regions where no new traffic signs are expected to appear because a sign already exists in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
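A rough sketch of two of the described stages, color segmentation and per-candidate Kalman tracking, using OpenCV; thresholds, noise covariances, and the synthetic test frame are invented, and the multi-resolution and inhibition-area logic is omitted.

```python
# Rough sketch (parameter values invented) of two pipeline stages: HSV color
# segmentation for red-sign candidates, then a constant-velocity Kalman
# filter per tracked candidate for temporal coherence.
import cv2
import numpy as np

def red_candidates(bgr_frame):
    """Color segmentation stage: mask pixels in a red hue range."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

def make_tracker(cx, cy):
    """Constant-velocity Kalman filter: state (x, y, vx, vy), measure (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

frame = np.zeros((120, 160, 3), np.uint8)
cv2.circle(frame, (80, 60), 12, (0, 0, 255), -1)     # synthetic red "sign"
for (x, y, w, h) in red_candidates(frame):
    kf = make_tracker(x + w / 2, y + h / 2)
    prediction = kf.predict()    # predicted center, e.g. for an inhibition area
    print("candidate at", (x, y, w, h), "predicted center",
          prediction[:2].ravel())
```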