47 results for Minimal-complexity classifier
Abstract:
The main objective of this research was to find the optimal method for discriminating the most frequent commercial color quality classes of tobacco leaves (cv. "Virginia"). These color classes span a continuous color scale between "Pale Lemon" and "Oxidated Brown". The usual expert classification carries a significant level of uncertainty. Within this research, several data discrimination methods were tested in order to resolve this uncertainty. Classification errors below 5% were obtained with the proposed classifier across two different seasons (1994 and 1995).
Abstract:
In recent years, several moving object detection strategies based on non-parametric background-foreground modeling have been proposed. To combine both models and obtain the probability that a pixel belongs to the foreground, these strategies make use of Bayesian classifiers. However, these classifiers do not allow additional prior information to be exploited at different pixels. We therefore propose a novel and efficient alternative Bayesian classifier that is suitable for this kind of strategy and that allows the use of arbitrary prior information. Additionally, we present an effective method to dynamically estimate the prior probabilities from the result of a particle filter-based tracking strategy.
Abstract:
The influence of CP content and ingredient complexity, feed form, and duration of feeding of the Phase I diets on growth performance and total tract apparent digestibility (TTAD) of energy and nutrients was studied in Iberian pigs weaned at 28 d of age. There were 12 dietary treatments with 2 types of feed (high-quality, HQ; and low-quality, LQ), 2 feed forms (pellets vs. mash), and 3 durations (7, 14, and 21 d) of supply of the Phase I diets.
Abstract:
The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we will refer to as "snapshots": a company semantic snapshot, a data snapshot, and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to make early decisions, thus saving time and money, mitigating risks, and improving innovation, time to market, and productivity. SDM works by measuring, with Euclidean distances, the distance between each initial snapshot and its reference model (the company semantic snapshot reference model, the data snapshot reference model, and the semantic application snapshot reference model). The difficulty level is "not at all difficult" when the distance is small and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2000 simulated companies with different configurations and initial stages. The output is measured on five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation indicate, transforming a search application into one that integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
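The distance-to-linguistic-value mapping described above can be sketched as follows. This is a minimal illustration, not SDM itself: the snapshot vectors and the thresholds separating the five linguistic values are hypothetical.

```python
import math

# Linguistic difficulty values, from the abstract above.
LEVELS = ["not at all difficult", "slightly difficult", "averagely difficult",
          "very difficult", "extremely difficult"]

def euclidean(a, b):
    """Euclidean distance between a snapshot and its reference model."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def difficulty(snapshot, reference, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map the distance onto a linguistic value (thresholds are illustrative)."""
    d = euclidean(snapshot, reference)
    for level, t in zip(LEVELS, thresholds):
        if d <= t:
            return level
    return LEVELS[-1]

# A snapshot close to its reference model yields a low difficulty level.
level = difficulty([0.1, 0.2, 0.1], [0.0, 0.0, 0.0])
```

A snapshot far from all three reference models would land in the "extremely difficult" band instead.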
Abstract:
Electronic devices endowed with camera platforms require new and powerful machine vision applications, which commonly include moving object detection strategies. To obtain high-quality results, the most recent strategies nonparametrically estimate background and foreground models and combine them by means of a Bayesian classifier. However, typical classifiers are limited by the use of constant prior values and do not allow the inclusion of additional spatially dependent prior information. In this Letter, we propose an alternative Bayesian classifier that, unlike those reported before, allows the use of additional prior information obtained from any source and dependent on the spatial position of each pixel.
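The core idea, combining per-pixel likelihoods with a spatially varying prior through Bayes' rule, can be sketched as follows. This is a generic illustration of that combination, not the classifier proposed in the Letter; the function name and the tie-breaking value are assumptions.

```python
import numpy as np

def foreground_posterior(p_fg, p_bg, prior_fg):
    """Per-pixel Bayes rule combining model likelihoods with a spatial prior.

    p_fg, p_bg : likelihoods of the observed colour under the foreground
                 and background models (e.g. from nonparametric estimates)
    prior_fg   : per-pixel prior probability of foreground, which may come
                 from any source and vary with the pixel position
    """
    num = prior_fg * p_fg
    den = num + (1.0 - prior_fg) * p_bg
    # If both likelihoods are zero the posterior is undefined; return 0.5.
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.5)

# A pixel whose colour fits both models equally is decided by the prior:
post = foreground_posterior(np.array([0.3]), np.array([0.3]), np.array([0.9]))
```

With a constant prior of 0.5 the same call would reduce to the usual likelihood-ratio decision.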
Abstract:
Mode switches are used to partition a system's behavior into different modes in order to reduce the complexity of large embedded systems. Such systems operate in multiple modes, each corresponding to a specific application scenario; they are called Multi-Mode Systems (MMS). A different piece of software is normally executed in each mode. At any given time, the system is in one of the predefined modes and can be switched to another as a result of a certain condition. A mode switch mechanism (or mode change protocol) is used to shift the system from one mode to another at run-time. In this thesis we have used a hierarchical scheduling framework to implement a multi-mode system, called the Multi-Mode Hierarchical Scheduling Framework (MMHSF). A two-level Hierarchical Scheduling Framework (HSF) has already been implemented in an open-source real-time operating system, FreeRTOS, to support temporal isolation among real-time components. The main contribution of this thesis is the extension of the HSF with a multi-mode feature, with an emphasis on making minimal changes to the underlying operating system (FreeRTOS) and its HSF implementation. Our implementation uses fixed-priority preemptive scheduling at both local and global scheduling levels and idling periodic servers. It also supports different system modes that can be switched at run-time. Each subsystem and task exhibits different timing attributes according to the mode, and upon a Mode Change Request (MCR) the task-set and timing interfaces of the entire system (including subsystems and tasks) undergo a change. A mode change protocol specifies precisely how the system mode will be changed. However, an application may need to change not only the mode but also the mode change protocol semantics.
For example, a mode change from normal to shutdown can allow all tasks to complete before the mode itself is changed, while a change from normal to emergency may require aborting all tasks instantly. In our work, both the system mode and the mode change protocol can be changed at run-time. We have implemented three different mode change protocols to switch from one mode to another: the Suspend/Resume protocol, the Abort protocol, and the Complete protocol. These protocols increase the flexibility of the system, allowing users to select the way they want to switch to a new mode. The implementation of MMHSF is tested and evaluated on an AVR-based 32-bit board, the EVK1100, with an AVR32UC3A0512 microcontroller. We have tested the behavior of each system mode and of each mode change protocol, and we provide results for the performance measures of all mode change protocols in the thesis.
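The differing semantics of the three mode change protocols named above (Suspend/Resume, Abort, Complete) can be sketched as follows. This is a hypothetical illustration of the protocol semantics only, not the FreeRTOS/MMHSF implementation; the task model is invented for the example.

```python
# Hypothetical task model: each task carries a state that the mode change
# protocol decides how to transition on a Mode Change Request.
class Task:
    def __init__(self, name):
        self.name = name
        self.state = "running"

def mode_change(tasks, protocol):
    """Apply one of the three protocol semantics to the old-mode task set."""
    for t in tasks:
        if protocol == "suspend/resume":
            t.state = "suspended"   # old-mode tasks resume if the mode returns
        elif protocol == "abort":
            t.state = "aborted"     # e.g. normal -> emergency: stop instantly
        elif protocol == "complete":
            t.state = "completed"   # e.g. normal -> shutdown: run to completion first
        else:
            raise ValueError("unknown protocol: " + protocol)
    return tasks

tasks = mode_change([Task("sensor"), Task("logger")], "abort")
```

Selecting the protocol at run-time, as the thesis does, then amounts to passing a different protocol identifier with the Mode Change Request.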
Abstract:
Multi-label classification (MLC) is the supervised learning problem where an instance may be associated with multiple labels. Modeling dependencies between labels allows MLC methods to improve their performance at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies. On the one hand, the original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors down the chain. On the other hand, a recent Bayes-optimal method improves the performance, but is computationally intractable in practice. Here we present a novel double-Monte Carlo scheme (M2CC), both for finding a good chain sequence and performing efficient inference. The M2CC algorithm remains tractable for high-dimensional data sets and obtains the best overall accuracy, as shown on several real data sets with input dimension as high as 1449 and up to 103 labels.
Design of electronic warfare and radar algorithms for implementation in real-time systems
Abstract:
This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radar, radio and navigation systems in the military sphere led to the development of technologies to counter them. The objective of EW systems is therefore the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions; its mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem devoted to radar signals.
A real-time system is one whose correctness depends not only on the result provided but also on the time at which that result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, so they can be classified as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation. The real-time constraints are two: the latency and the area of the implementation. All the algorithms in this thesis have been implemented on field-programmable gate array (FPGA) platforms, which present a good trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis studies different key subsystems of an ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals, and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise. The introduction of fast algorithms to calculate the DFT, known as the fast Fourier transform (FFT), reduces the complexity and processing time of the DFT computation; these properties have made the FFT one of the most common methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW), and estimating these basic parameters is a fundamental task of an ELINT equipment. A basic pulse parameter extractor (PPE) able to estimate all these parameters is designed and implemented. The PPE may be used to perform generic radar recognition, to support an emitter location technique, or as the preprocessing part of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms extract different characteristics, or features, of the input signals; once these features are extracted, classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Emitter passive location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although only TDOA and FDOA estimation from time-of-arrival and frequency differences has been implemented, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two different radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operative radar with no hardware changes in order to minimize costs. Wideband operation of phased arrays with phase shifters is unfeasible because time delays cannot be approximated by phase shifts; the presented solution substitutes the phase shifters with FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width. Any hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse, decoupling range and resolution.
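The pulse compression principle mentioned above can be sketched with a linear FM (chirp) pulse and its matched filter: the long pulse is compressed on receive into a narrow correlation peak, so range resolution no longer depends on the transmitted pulse width. This is a generic textbook illustration, not the thesis implementation, and the parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters: sample rate, pulse width, chirp sweep bandwidth.
fs, T, B = 1e6, 100e-6, 200e3
t = np.arange(int(fs * T)) / fs

# Linear FM pulse (internal modulation within the pulse).
chirp = np.exp(1j * np.pi * (B / T) * t**2)

# Matched filter = time-reversed complex conjugate of the transmitted pulse.
matched = np.conj(chirp[::-1])

# A delayed, noiseless echo of the pulse.
echo = np.concatenate([np.zeros(50), chirp, np.zeros(50)])

# Pulse compression: correlation concentrates the pulse energy into a peak.
out = np.abs(np.convolve(echo, matched))
peak = int(np.argmax(out))   # peak position encodes the target delay
```

The compressed mainlobe width scales with 1/B rather than with T, which is precisely the decoupling of range and resolution the abstract describes.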
Abstract:
Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance – at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.
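The Monte Carlo inference idea behind such schemes can be sketched as follows: each link of the chain supplies P(y_j = 1 | x, y_1..y_{j-1}), and instead of the greedy label-by-label approximation we sample complete label vectors from the chain and return the most frequent one. This is a toy illustration of the general idea, not the authors' algorithm; the conditional model below is hand-made, not trained.

```python
import random

def p_label(j, x, prev):
    """Toy chain conditional: a label tends to agree with the label before it."""
    return 0.8 if (prev and prev[-1] == 1) else 0.2

def mc_inference(x, n_labels, n_samples=2000, seed=0):
    """Sample label vectors from the chain and return the empirical mode."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        y = []
        for j in range(n_labels):
            y.append(1 if rng.random() < p_label(j, x, y) else 0)
        counts[tuple(y)] = counts.get(tuple(y), 0) + 1
    return max(counts, key=counts.get)

best = mc_inference(None, 3)
```

Unlike the greedy pass, which commits to each label in turn and propagates early mistakes, the sampled mode reflects the joint distribution over whole label vectors while staying tractable for long chains.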
Abstract:
Assessment of the temporal transferability of species distribution models for present-day application using palaeobotanical data for Corylus avellana and Alnus glutinosa.
Abstract:
This research brings together a cluster of interests around a very specific way of generating architecture: the production of objects without an a priori underlying form. The knowledge presented builds on conditions of recent thought that encourage feeding the creative source of architecture with other fields of knowledge. Sensible animist knowledge and objective scientific knowledge have been correlative in history but have rarely been synchronous. This research is also an attempt to combine both types of knowledge, regaining an inertia already sensed in the early twentieth century. It is therefore an essay on annulling the opposition between these two worlds in order to move towards a complementarity of both within a single shared vision. The ultimate goal of this research is the development of a critical system of analysis for architectural objects that allows a differentiation between those that respond to their problems completely and sincerely and those that hide, under an agreed appearance, the lack of a method for resolving the complexity of the creative present. The research observes three distinct groups of knowledge, contained in their respective chapters. The first chapter deals with the Creative Impulse. It defines the need to create a framework for the creative individual who, regardless of the current social forces, senses that there is something beyond that remains unresolved. We define the "rebel creator" as a kind of figure, recognizable throughout history, who is able to recognize the changes operating in his present and to use them to discover the new and come closer to the origin of creation. At present, this type of figure is the one who intuits the existence of a growing complexity in society and thought that can no longer be ignored. The second chapter presents some systems, and their properties, for creative action. It develops a framework of current scientific knowledge that architecture has not yet absorbed or reflected directly in its procedures. These are issues of almost mundane presence in society, yet they are still reluctant to be included in creative processes even though they already belong to the collective consciousness. Most of them concern precision, invisible orders, and properties of matter and energy, always treated from an objective and apolitical perspective. The ultimate goal pursues the approach and incorporation of these concepts and properties into the sensible world, inextricably unifying all under a single point of view. The last chapter deals with complexity and the ability to reduce it to the essential. Here, by way of conclusion, several concepts are introduced for developing a critical approach to analysing the architecture of our time. Among them is Essential Complexity, defined as the complexity that inevitably arises when architecture responds to the increasing demands it faces today. The thesis maintains the importance of reporting, in the present state of things, the impossibility of responding sincerely with simplistic solutions and, therefore, the need for solutions of a complex character. In this sense, the concept of the Underlying Form is defined as a critical tool for evaluating the response of each architecture and as a critical system for clarifying what a consistent object is in the face of a given situation. The underlying form is then defined as a way of understanding, jointly and synchronously, what we perceive sensibly as inseparable from the hidden creative, technological, material and energetic forces that support the definition and understanding of any constructed object.
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and then the error between the predictions and the actual values is logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bits-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
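The logarithmic quantisation of the prediction error can be sketched in the spirit of the Weber–Fechner law the abstract invokes: quantisation steps ("hops") grow geometrically, so small errors are coded finely and large errors coarsely. The hop values and ratio below are illustrative, not LHE's actual tables.

```python
def hops(first_hop=4.0, ratio=2.0, n=4):
    """Build a symmetric table of geometrically growing hops around zero."""
    pos = [first_hop * ratio**i for i in range(n)]   # 4, 8, 16, 32
    return [-h for h in reversed(pos)] + [0.0] + pos

def quantise(error, table):
    """Snap a prediction error to the nearest hop in the table."""
    return min(table, key=lambda h: abs(h - error))

table = hops()
code = quantise(5.0, table)   # a small error snaps to a nearby small hop
```

Because the hops widen geometrically, the relative quantisation error stays roughly constant across magnitudes, which is the perceptual property the Weber–Fechner law describes.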
Abstract:
This paper presents a novel robust visual tracking framework, based on a discriminative method, for Unmanned Aerial Vehicles (UAVs) to track an arbitrary 2D/3D target at real-time frame rates: the Adaptive Multi-Classifier Multi-Resolution (AMCMR) framework. In this framework, adaptive Multiple Classifiers (MC) are updated in the (k-1)th frame-based Multiple Resolutions (MR) structure with compressed positive and negative samples, and then applied in the kth frame-based MR structure to detect the current target. Sample importance has been integrated into the framework to improve tracking stability and accuracy. The performance of the framework was first evaluated against the Ground Truth (GT) on different types of public image databases and on real flight-based aerial image datasets; the framework was then applied in a UAV to inspect an Offshore Floating Platform (OFP). The evaluation and application results show that this framework is more robust, efficient and accurate than existing state-of-the-art trackers, overcoming the problems generated by challenging situations such as marked appearance change, varying illumination, partial or full target occlusion, motion blur, rapid pose variation and onboard mechanical vibration, among others. To the best of our knowledge, this is the first work to present such a framework for solving the problem of online learning and tracking of arbitrary 2D/3D targets, and to apply it in UAVs.
Resumo:
PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Recurrently, decision makers face the dichotomous question of whether to follow a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process affect different players in the organization, and the choice of decision-making approach remains difficult, even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions, with the purpose of using the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts can help to solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented as a contribution to the body of knowledge of management science. The first models focus on the manufacturing industry and the second set of models on the health care industry. Although these models are case specific, they serve to exemplify how different approaches to the same problems can yield interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; moreover, the same model may deliver good results with certain data and bad results with other data. A framework for analysing the data before selecting the model to be used is therefore presented and tested on the models developed to exemplify these ideas.
METHODOLOGY As the first step of the research, a systematic literature review on joint decision-making is presented, together with the opinions and suggestions of different scholars. In the next stage of the thesis, the decision-making process of more than 50 companies from different sectors was analysed in the production planning area at the job shop level; the data were obtained through surveys and face-to-face interviews. The following part of the research into the decision-making process was conducted in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, in which the vehicle routing problem and the inventory problem were combined. The next step was to add the scheduling of car production (car sequencing) decision and to use metaheuristics such as ant colony and genetic algorithms to test whether the behaviour holds for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to carry out the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was carried out in a teaching hospital, and in the second part the effect of uncertainty was added. Once the previous problems had been analysed, a general framework to characterize the instances was built. In the final chapter a general conclusion is presented. FINDINGS AND PRACTICAL IMPLICATIONS The first contribution is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process.
The survey results then reveal a lack of consistency between what managers believe and the actual degree of integration of their decisions. In the next stage of the thesis, a contribution is made to the body of knowledge of operations research with the joint solution of the replenishment, sequencing and inventory problem in the assembly line, together with parallel work on operating room scheduling where different solution approaches are presented. Beyond the contribution of the solving methods, using different techniques, the main contribution is the framework proposed to pre-evaluate a problem before thinking about the techniques to solve it. There is, however, no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, and evaluating factors such as the flexibility of the answer, the number of actors and the tightness of the data, gives important hints as to the most suitable direction for tackling the problem. RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very difficult to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits; a further issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room problem, there was a lack of historical data with which to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework, more case studies need to be carried out so that the results can be generalized and made more evident and less ambiguous.
The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, much room for improvement remains. Another big difference from the automotive industry is that the latest improvements are not shared among all the actors. Therefore, in the future this research will focus more on collaboration between academia and the health care sector.
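The thesis's central dichotomy, sequential versus joint decision-making, can be made concrete with a toy numerical example. This is purely illustrative: the cost function, parameters and numbers below are invented for the sketch, not taken from the thesis's models.

```python
from itertools import product

# Two coupled decisions: batch size b (inventory) and number of
# delivery trips t (routing). The cost couples them: holding cost
# grows with b, transport with t, and a penalty applies when the
# delivered quantity b * t falls short of demand.
DEMAND = 100

def cost(b, t):
    holding = 2 * b                          # inventory holding cost
    transport = 15 * t                       # routing / trip cost
    shortfall = max(0, DEMAND - b * t)       # unmet demand
    return holding + transport + 5 * shortfall

B_RANGE = range(1, 51)
T_RANGE = range(1, 11)

# Sequential: pick b first (assuming a single trip), then pick t
# given that b -- each stage's output feeds the next stage.
b_seq = min(B_RANGE, key=lambda b: cost(b, 1))
t_seq = min(T_RANGE, key=lambda t: cost(b_seq, t))

# Joint: search both decisions simultaneously.
b_j, t_j = min(product(B_RANGE, T_RANGE), key=lambda bt: cost(*bt))

print(cost(b_seq, t_seq), cost(b_j, t_j))  # 130 110 -> joint is cheaper here
```

The joint search can never do worse than the sequential one on the same cost function, but it explores a larger space; with many coupled decisions that space explodes, which is exactly why the thesis turns to metaheuristics and to a framework for deciding when the joint effort pays off.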
Resumo:
The enormous urban expansion experienced by the main cities of developing countries is the greatest challenge facing habitability worldwide. Within the general theory of Basic Habitability (HaB-ETSAM-UPM, 1995), spatial planning and urban planning are the decisive tools for coherently guiding urbanization processes, as is also recognized by the main technical bodies at the international level. But such tools must be aimed at an efficient construction of the territory, acting through a multidisciplinary, flexible and direct approach that addresses the specific priorities of each context. To do so, it is essential to understand in depth the specific realities of these areas. The city is in essence a complex phenomenon. The built fabric, in a constant process of change, is the visible shell that houses a marvellous interwoven mixture of spaces, functions, flows, people... Every city, different and unique, is integrated into its environment, adapts to different geographies, contexts and climates, and evolves according to its own dynamics, in (almost) incomprehensible evolutionary processes. Studying the city always means simplifying it. However detailed our analysis, urban reality will always contain indecipherable relationships that escape us. Even so, we need analytical methods that help us understand something of that complexity. Advancing in that analysis is an essential first step towards formulating responses. This work is situated at that level: progress in the understanding of the urban fact. The emphasis is placed on the quantitative approach, delving into specific basic data, always accepting from the outset that this information is a minimal, but we hope substantive, component of an otherwise unmanageable phenomenon.
This search for a material and quantitative understanding of the city is the essential objective of the research. The aim is to provide a detailed basis of those fundamental aspects that can be measured in urban environments and that give us useful information for diagnosis and proposals. To this end, desirable ranges and references are provided through a tool for understanding and assessing each context: the Indicator Matrix. This tool is conceived to move from reflection to practical application, to direct usefulness, to a concrete contribution for whomever it may serve. This is the firm resolve with which this work is undertaken, centred on the urban environments where the technical contribution is most needed: the Informal City. The Informal City is understood here as the city that develops without the sufficient means (technical, economic and institutional) that planning provides, the city across which precarious habitability spreads. It is the city that predominates in developing countries, in low-resource contexts, precisely where the main deficits and needs at the global level are concentrated. The approach stems from the theory of Basic Habitability: from the definition of possible minimums, to build from there the space necessary for human development. This is the generic scope of the work, which in turn draws, very significantly, on direct experience in the city of Makeni, Sierra Leone. This city serves as an experimental prototype in a double sense. On the one hand, it is an empirical space in which to test the quantitative assessment methodology; on the other, the knowledge of this medium-sized African city, acquired over the last five years, is a direct basis for the theoretical development of the methodology itself, helping to glimpse what is essential in similar contexts.
All of this has been articulated through an academic experience that, as a lecturer, I have coordinated directly and intensively. It has been a very enriching experience, which has brought together many hands and much learning over this time. Theory and practice in urban planning alternate throughout the work, each nourishing the other and vice versa. This work is born of a passion for the city and for urbanism; of the search to understand and of the vocation to act, to try to improve urban environments and make them more habitable. Especially where difficulties pile up and the road stretches on, covered in dust. Accumulating questions at every step. More and more questions. The answers, if they exist, appear interwoven in indecipherable dynamics of which we want to form part. To merge for moments into the same search, to accompany it. To feel close to those who start from scratch almost every day. And once again, to set off. And to share, from knowledge, if indeed that is possible. And the city. Brutal, imposing, suffocating, marvellous, impossible. An unsurpassable collective creation, of combined energies sewn together with no apparent pattern. Or with no reason other than the very pulse of life itself. That is how Makeni feels.