922 results for multi-column process


Relevance:

30.00%

Publisher:

Abstract:

This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are collected during diagnosis. To achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. The approach has been evaluated in a P2P video streaming scenario, and the conclusions highlight improvements in computation and diagnosis time.
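The abstract does not spell out how the influence of observations is computed. As a rough illustration, the sketch below (Python; all names and inputs are hypothetical, not the authors' method) picks the next observation by expected entropy reduction over the hypothesis distribution per unit cost, a standard value-of-information criterion that plays the role described above.

    import math

    def entropy(dist):
        """Shannon entropy of a hypothesis distribution {hypothesis: probability}."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def expected_entropy_after(dist, action, likelihood):
        """Expected posterior entropy after performing an observation `action`.

        likelihood[action][outcome][h] is P(outcome | h) for that observation.
        """
        exp_h = 0.0
        for outcome, cond in likelihood[action].items():
            p_out = sum(cond[h] * dist[h] for h in dist)   # P(outcome)
            if p_out == 0:
                continue
            posterior = {h: cond[h] * dist[h] / p_out for h in dist}
            exp_h += p_out * entropy(posterior)
        return exp_h

    def best_action(dist, actions, likelihood, cost):
        """Pick the observation with the best information gain per unit cost."""
        h0 = entropy(dist)
        return max(actions, key=lambda a:
                   (h0 - expected_entropy_after(dist, a, likelihood)) / cost[a])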

Relevance:

30.00%

Publisher:

Abstract:

We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both 3D mesh and texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether, since they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
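As an illustration of the refinement-image idea, here is a minimal sketch (not the authors' codec) assuming power-of-two texture LODs: the encoder stores the coarsest texture plus one residual image per level, and the decoder rebuilds any LOD by repeatedly upsampling and adding the residuals it has received.

    import numpy as np

    def upsample2x(img):
        """Nearest-neighbour 2x upsampling; a real codec would use a better filter."""
        return img.repeat(2, axis=0).repeat(2, axis=1)

    def encode_lods(textures):
        """Given texture LODs from coarsest to finest, return (coarsest, residuals).

        Each residual is what the decoder must add to the upsampled previous
        LOD to recover the next one, so only residuals need to be transmitted.
        """
        residuals = [nxt.astype(np.int16) - upsample2x(prev).astype(np.int16)
                     for prev, nxt in zip(textures, textures[1:])]
        return textures[0], residuals

    def decode_lods(coarsest, residuals, levels):
        """Progressively rebuild the texture up to `levels` refinements."""
        tex = coarsest.astype(np.int16)
        for res in residuals[:levels]:
            tex = upsample2x(tex) + res
        return tex.clip(0, 255).astype(np.uint8)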

Relevance:

30.00%

Publisher:

Abstract:

One of the most used methods in rapid prototyping is Fused Deposition Modeling (FDM), which provides components with reasonable strength in plastic materials such as ABS and has a low environmental impact. However, the FDM process exhibits poor surface finish, difficulty in producing complex and/or small geometries, and low consistency in “slim” elements of the parts. Furthermore, “cantilever” elements need large support structures. Overcoming these deficiencies requires a comprehensive review of three-dimensional part design to enhance the advantages and performance of FDM and reduce its constraints. As a key feature of this redesign, a novel method of construction by assembling parts with structural adhesive joints is proposed. These adhesive joints should be designed specifically to fit the plastic substrate and the FDM manufacturing technology. To achieve this, the most suitable structural adhesive must first be selected. Therefore, the present work analyzes five different families of adhesives (cyanoacrylate, polyurethane, epoxy, acrylic, and silicone) and applies a multi-criteria decision analysis based on the analytic hierarchy process (AHP) to select the structural adhesive that best combines mechanical performance and adaptation to the FDM manufacturing process.
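For readers unfamiliar with AHP, the sketch below shows its core computation: priority weights are the principal eigenvector of a pairwise comparison matrix, accepted only if the consistency ratio (CR) stays below 0.1. The matrix entries are illustrative placeholders, not the paper's judgments.

    import numpy as np

    # Hypothetical pairwise comparisons of the five adhesive families on
    # Saaty's 1-9 scale; entry [i, j] says how strongly i is preferred to j.
    A = np.array([[1,   3,   5,   4,   7],
                  [1/3, 1,   3,   2,   5],
                  [1/5, 1/3, 1,   1/2, 3],
                  [1/4, 1/2, 2,   1,   4],
                  [1/7, 1/5, 1/3, 1/4, 1]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # priority weights, sum to 1

    n = A.shape[0]
    CI = (eigvals[k].real - n) / (n - 1)     # consistency index
    RI = 1.12                                # Saaty's random index for n = 5
    print("weights:", w.round(3), "CR:", round(CI / RI, 3))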

Relevance:

30.00%

Publisher:

Abstract:

These slides present several 3-D reconstruction methods for obtaining the geometric structure of a scene viewed by multiple cameras. We focus on combining geometric modeling of the image formation process with standard optimization tools to estimate the characteristic parameters that describe the geometry of the 3-D scene. In particular, linear, non-linear, and robust methods to estimate the monocular and epipolar geometry are introduced as cornerstones to generate 3-D reconstructions with multiple cameras. Examples of systems that use this constructive strategy are Bundler, PhotoSynth, VideoSurfing, etc., which are able to obtain 3-D reconstructions with several hundred or even thousands of cameras.
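As a concrete instance of the linear estimation methods mentioned in the slides, here is a minimal (unnormalized) eight-point estimate of the fundamental matrix that encodes the epipolar geometry; production pipelines such as Bundler add Hartley normalization and robust estimation (RANSAC) on top of this.

    import numpy as np

    def fundamental_eight_point(x1, x2):
        """Linear estimate of the fundamental matrix F from >= 8 matches.

        x1, x2: (N, 2) arrays of matched pixel coordinates in two images.
        """
        A = np.zeros((x1.shape[0], 9))
        for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
            # One row per correspondence from the constraint x2^T F x1 = 0.
            A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)             # least-squares null vector
        U, S, Vt = np.linalg.svd(F)          # enforce rank 2, the defining
        S[2] = 0                             # property of a fundamental matrix
        return U @ np.diag(S) @ Vt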

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually and requires adequate machine learning and data mining techniques. Classification is one of the most important such techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data; second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). Besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or as stationary or streaming depending on the characteristics of the data and the rate of change underlying them.

Throughout this thesis, we deal with the classification problem under three different settings: supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we mainly use Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data, proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems: the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem: multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy, as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables.

The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect: the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any drift occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general in that it can be applied to several classification models. Using two different models, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective in detecting different kinds of drift from partially labeled data streams while maintaining good classification performance.

Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods: Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
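As an illustration of the drift-monitoring step used by LA-MB-MBC and GA-MB-MBC, the sketch below implements a generic Page-Hinkley detector over a stream of average log-likelihood scores. Parameter values are illustrative; the thesis' exact settings are not reproduced here.

    class PageHinkley:
        """Page-Hinkley test for a sustained drop in a stream of scores."""

        def __init__(self, delta=0.005, threshold=50.0):
            self.delta = delta          # change magnitude tolerated as noise
            self.threshold = threshold  # detection threshold (lambda)
            self.mean = 0.0             # running mean of the scores
            self.n = 0
            self.cum = 0.0              # cumulative deviation m_t
            self.min_cum = 0.0          # running minimum of m_t

        def update(self, score):
            """Feed one score (e.g. a batch's average log-likelihood);
            return True when a drift is detected."""
            self.n += 1
            self.mean += (score - self.mean) / self.n
            self.cum += self.mean - score - self.delta  # grows as scores drop
            self.min_cum = min(self.min_cum, self.cum)
            return self.cum - self.min_cum > self.threshold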

Relevance:

30.00%

Publisher:

Abstract:

There is an increasing awareness among all kinds of organisations (in business, government, and civil society) of the benefits of working jointly with stakeholders to satisfy both their goals and the social demands placed upon them. This is particularly the case within corporate social responsibility (CSR) frameworks. In this regard, multi-criteria decision-making tools like the analytic hierarchy process (AHP) described in the paper can be useful for building relationships with stakeholders. Since these tools can reveal decision-makers' preferences, integrating the opinions of various stakeholders in the decision-making process may result in better and more innovative solutions with significant shared value. This paper is based on ongoing research to assess the feasibility of an AHP-based model to support CSR decisions in large infrastructure projects carried out by Red Electrica de España, the sole transmission agent and operator of the Spanish electricity system.
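A sketch of one standard way to fold several stakeholders' judgments into a single AHP model: the aggregation-of-individual-judgments scheme with a weighted geometric mean. This scheme is an assumption here, not necessarily the choice made in the ongoing research.

    import numpy as np

    def aggregate_judgments(matrices, weights=None):
        """Combine stakeholders' pairwise comparison matrices into one.

        The element-wise weighted geometric mean preserves the reciprocal
        property a_ij = 1 / a_ji that AHP matrices must satisfy.
        """
        matrices = np.asarray(matrices, dtype=float)
        if weights is None:
            weights = np.full(len(matrices), 1.0 / len(matrices))
        return np.exp(np.tensordot(weights, np.log(matrices), axes=1))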

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a flexible Multi-Agent Architecture, together with a methodology for indoor location, which allows us to locate any mobile station (MS) such as a laptop, smartphone, tablet, or robotic system in an indoor environment using wireless technology. Our technology is complementary to GPS location finding, as it allows us to locate an MS in a specific room on a specific floor using Wi-Fi networks. The idea is that any MS will have at its disposal an agent known as a Fuzzy Location Software Agent (FLSA) with minimal processing capacity, which collects the power received from different access points distributed around the floor and establishes its location on a plan of the floor of the building. To do so it communicates with the Fuzzy Location Manager Software Agent (FLMSA). The FLMSAs are local agents that form part of the management infrastructure of the organization's Wi-Fi network. The FLMSA implements a location estimation methodology divided into three phases (measurement, calibration, and estimation) for locating mobile stations. Our solution is a fingerprint-based positioning system that overcomes the problem of the relative effect of doors and walls on signal strength and is independent of the network device manufacturer. In the measurement phase, our system collects received signal strength indicator (RSSI) measurements from multiple access points. In the calibration phase, it uses these measurements in a normalization process to create a radio map, a database of RSS patterns. Unlike traditional radio-map-based methods, our methodology normalizes RSS measurements collected at different locations on a floor. In the third phase, we use fuzzy controllers to locate an MS on the plan of the floor of a building. Experimental results demonstrate the accuracy of the proposed method and show that the system is highly likely to be able to locate an MS in a room or an adjacent room.
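To make the three-phase pipeline concrete, here is a minimal sketch with hypothetical data: the calibration phase builds a radio map of normalized RSSI fingerprints, and the estimation phase matches a new measurement by nearest neighbour, a common baseline standing in for the paper's fuzzy controllers.

    import numpy as np

    def normalize(rssi):
        """Zero-mean normalization of an RSSI vector (the paper's scheme is
        more elaborate; this only shows where normalization fits)."""
        v = np.asarray(rssi, dtype=float)
        return v - v.mean()

    # Calibration: hypothetical radio map, one fingerprint per surveyed
    # location, built from RSSI readings (dBm) at four access points.
    radio_map = {
        "room_101": normalize([-45, -60, -72, -80]),
        "room_102": normalize([-58, -49, -70, -83]),
        "corridor": normalize([-52, -55, -61, -77]),
    }

    def locate(measured):
        """Estimation: return the location whose fingerprint is closest."""
        m = normalize(measured)
        return min(radio_map, key=lambda loc: np.linalg.norm(radio_map[loc] - m))

    print(locate([-47, -62, -74, -79]))   # -> room_101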

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. The technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on the experimental data to obtain the multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The proposed technique can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and more adequate final compromise solution or food process. It can be useful to food scientists in research and education, as well as to engineers involved in improving a variety of food engineering processes.
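The set of non-dominated solutions mentioned above can be obtained by simple pairwise domination filtering, as in the sketch below; the objective values are illustrative placeholders, with signs flipped so that every objective is maximized.

    def dominates(a, b):
        """True if a is at least as good as b on every objective and strictly
        better on at least one (all objectives to be maximized)."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_front(solutions):
        """Keep only the non-dominated (Pareto-optimal) objective vectors."""
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]

    # Hypothetical (water_loss, -solute_gain, sensory_score) triples.
    candidates = [(0.52, -0.11, 7.8), (0.48, -0.09, 8.1),
                  (0.55, -0.14, 6.9), (0.50, -0.12, 7.5)]
    print(pareto_front(candidates))       # the last triple is dominated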

Relevance:

30.00%

Publisher:

Abstract:

An Eulerian multifluid model is used to describe the evolution of an electrospray plume and the flow induced in the surrounding gas by the drag of the electrically charged spray droplets in the space between an injection electrode containing the electrospray source and a collector electrode. The spray is driven by the voltage applied between the two electrodes. Numerical computations and order-of-magnitude estimates for a quiescent gas show that the droplets begin to fly back toward the injection electrode at a certain critical value of the flux of droplets in the spray, which depends very much on the electrical conditions at the injection electrode. As the flux is increased toward its critical value, the electric field induced by the charge of the droplets partially balances the field due to the applied voltage in the vicinity of the injection electrode, leading to a spray that rapidly broadens at a distance from its origin of the order of the stopping distance at which the droplets lose their initial momentum and the effect of their inertia becomes negligible. The axial component of the electric field first changes sign in this region, causing the fly back. The flow induced in the gas significantly changes this picture in the conditions of typical experiments. A gas plume is induced by the drag of the droplets whose entrainment makes the radius of the spray away from the injection electrode smaller than in a quiescent gas, and convects the droplets across the region of negative axial electric field that appears around the origin of the spray when the flux of droplets is increased. This suppresses fly back and allows much higher fluxes to be reached than are possible in a quiescent gas. The limit of large droplet-to-gas mass ratio is discussed. Migration of satellite droplets to the shroud of the spray is reproduced by the Eulerian model, but this process is also affected by the motion of the gas. The gas flow preferentially pushes satellite droplets from the shroud to the core of the spray when the effect of the inertia of the droplets becomes negligible, and thus opposes the well-established electrostatic/inertial mechanism of segregation and may end up concentrating satellite droplets in an intermediate radial region of the spray.
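The fly-back estimates above hinge on the droplet stopping distance. Under a simple Stokes-drag assumption (a sketch; the paper's own order-of-magnitude analysis may include additional electrostatic terms), the droplet relaxation time and stopping distance are

    \tau_d = \frac{\rho_d d^2}{18\,\mu_g}, \qquad
    \ell_s \sim v_0\,\tau_d = \frac{\rho_d d^2 v_0}{18\,\mu_g},

where \rho_d is the droplet density, d the droplet diameter, \mu_g the gas viscosity, and v_0 the injection velocity; beyond \ell_s the droplets' inertia is negligible and they drift with the local electric field and gas flow.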

Relevance:

30.00%

Publisher:

Abstract:

Remote reprogramming capability is one of the major concerns in WSN platforms due to the limitations and constraints that low-power wireless nodes pose, especially when energy efficiency during the reprogramming process is critical for extending the battery life of the devices. Moreover, WSNs are based on low-rate protocols, in which the more data is sent, the higher the probability of losing packets during transmission. To overcome these limitations, in this work a novel on-the-fly reprogramming technique for modifying and updating the application running on the wireless sensor nodes is designed and implemented, based on a partial reprogramming mechanism that significantly reduces the size of the files to be downloaded to the nodes, thereby diminishing their power and time consumption. This mechanism also enables multi-experiment capabilities, because it makes it possible to download, manage, test, and debug multiple applications on the wireless nodes, based on a memory-map segmentation of the core. Being an on-the-fly reprogramming process, no additional resources are needed to store and download the configuration file.
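As a sketch of how a partial update can shrink the download (the paper's actual mechanism is tied to the segmented memory map of the core, which is not reproduced here), a block-wise diff transmits only the firmware blocks that changed:

    def diff_blocks(old_image, new_image, block=64):
        """Compare two firmware images block by block and return a list of
        (offset, bytes) patches covering only the blocks that differ."""
        patches = []
        for off in range(0, max(len(old_image), len(new_image)), block):
            old_b = old_image[off:off + block]
            new_b = new_image[off:off + block]
            if old_b != new_b:
                patches.append((off, new_b))
        return patches

    def apply_patches(old_image, patches, new_size):
        """Rebuild the new image on the node from the old one plus patches."""
        img = bytearray(old_image.ljust(new_size, b"\x00")[:new_size])
        for off, data in patches:
            img[off:off + len(data)] = data
        return bytes(img)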