902 results for k-Means


Relevance:

60.00%

Publisher:

Abstract:

In 1998-2001, Finland suffered the most severe insect outbreak it had ever recorded, covering over 500,000 hectares. The outbreak was caused by the common pine sawfly (Diprion pini L.) and has continued in the study area, Palokangas, ever since. To find a good method for monitoring such outbreaks, this study examined the efficacy of multi-temporal ERS-2 and ENVISAT SAR imagery for estimating Scots pine (Pinus sylvestris L.) defoliation. Three methods were tested: unsupervised k-means clustering, supervised linear discriminant analysis (LDA) and logistic regression. In addition, I assessed whether harvested areas could be differentiated from defoliated forest using the same methods. Two different speckle filters were used to determine the effect of filtering on the SAR imagery and the subsequent results. Logistic regression performed best, producing a classification accuracy of 81.6% (kappa 0.62) with two classes (no defoliation, >20% defoliation). With two classes, the best LDA accuracy was 77.7% (kappa 0.54) and the best k-means accuracy 72.8% (kappa 0.46). In general, the largest speckle filter, a 5 x 5 image window, performed best. Adding further classes usually degraded the accuracy step by step. The results were good, but because of the study's limitations they should be confirmed with independent data before firm conclusions about their reliability can be drawn. The limitations include the small field dataset and the resulting problems with accuracy assessment (no separate testing data), as well as the lack of meteorological data from the imaging dates.
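A minimal sketch of the comparison the abstract describes, assuming synthetic two-class "backscatter" features and scikit-learn (none of this is the study's code); it scores both unsupervised k-means and logistic regression with overall accuracy and Cohen's kappa:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
# Hypothetical multi-temporal backscatter features for two stand types:
# no defoliation vs. >20% defoliation.
X = np.vstack([rng.normal(-8.0, 1.0, (200, 4)),   # healthy stands
               rng.normal(-6.5, 1.0, (200, 4))])  # defoliated stands
y = np.repeat([0, 1], 200)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
if accuracy_score(y, labels) < 0.5:               # align cluster ids with classes
    labels = 1 - labels
print("k-means  acc=%.3f kappa=%.3f" % (accuracy_score(y, labels),
                                        cohen_kappa_score(y, labels)))

lr = LogisticRegression().fit(X, y)
pred = lr.predict(X)
print("logistic acc=%.3f kappa=%.3f" % (accuracy_score(y, pred),
                                        cohen_kappa_score(y, pred)))
```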

Relevance:

60.00%

Publisher:

Abstract:

A non-hierarchical K-means algorithm is used to cluster 47 years (1960–2006) of 10-day HYSPLIT backward trajectories to the Pico Mountain (PM) observatory on a seasonal basis. The resulting cluster centers identify the major transport pathways and collectively comprise a long-term climatology of transport to the observatory. This transport climatology improves our ability to interpret the observations made there and our understanding of the pollution source regions affecting the station and the central North Atlantic region. I determine which pathways dominate transport to the observatory and examine the impacts of these transport patterns on the O3, NOy, NOx and CO measurements made there during 2001–2006. Transport from the U.S., Canada and the Atlantic reaches the station most frequently, but Europe, east Africa and the Pacific can also contribute significantly depending on the season. Transport from Canada was correlated with the North Atlantic Oscillation (NAO) in spring and winter, whereas transport from the Pacific was uncorrelated with the NAO. The highest CO and O3 are observed during spring. Summer is also characterized by high CO and O3, and by the highest NOy and NOx of any season. Previous studies at the station attributed the summertime high CO and O3 to transport of boreal wildfire emissions (for 2002–2004), and boreal fires continued to affect the station during 2005 and 2006. The particle dispersion model FLEXPART was used to calculate anthropogenic and biomass-burning CO tracer values at the station in an attempt to identify the regions responsible for the high CO and O3 observations during spring and the biomass-burning impacts in summer.
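A sketch of how trajectory clustering of this kind is commonly set up (synthetic tracks and scikit-learn here, not the thesis code): each back-trajectory is flattened into one feature vector of its latitude/longitude sequence, and the k-means centers become the mean transport pathways.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_traj, n_steps = 500, 40                 # e.g. 6-hourly points over 10 days
# Hypothetical tracks drifting from random upwind directions toward a
# fixed receptor in the central North Atlantic.
bearings = rng.uniform(0, 2 * np.pi, n_traj)
t = np.linspace(1.0, 0.0, n_steps)
lat = 38.5 + 30 * t[None, :] * np.sin(bearings)[:, None] + rng.normal(0, 1, (n_traj, n_steps))
lon = -28.4 + 30 * t[None, :] * np.cos(bearings)[:, None] + rng.normal(0, 1, (n_traj, n_steps))
X = np.hstack([lat, lon])                 # one row per trajectory

km = KMeans(n_clusters=6, n_init=20, random_state=0).fit(X)
# Each cluster center, reshaped back into a (lat, lon) track, is one
# mean transport pathway of the climatology.
pathways = km.cluster_centers_.reshape(6, 2, n_steps)
print("trajectories per pathway:", np.bincount(km.labels_))
```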

Relevance:

60.00%

Publisher:

Abstract:

The primary goal of this project is to demonstrate the practical use of data-mining algorithms to cluster a solved steady-state computational fluid dynamics (CFD) flow domain into a simplified lumped-parameter network. A commercial-quality code, “cfdMine”, was created using a volume-weighted k-means clustering that can cluster a 20-million-cell CFD domain on a single CPU in several hours or less. Additionally, agglomeration and k-means Mahalanobis were added as optional post-processing steps to further enhance the separation of the clusters. The resulting nodal network is considered a reduced-order model and can be solved transiently at very minimal computational cost. The reduced-order network is then instantiated in the commercial thermal solver MuSES to perform transient conjugate heat transfer, using convection predicted by the lumped network (based on steady-state CFD). When inserting the lumped nodal network into a MuSES model, the potential for developing a “localized heat transfer coefficient” is shown to be an improvement over existing techniques. The clustering was also found to yield a new flow-visualization technique. Finally, fixing clusters near equipment demonstrates a new capability to track temperatures near specific objects (such as equipment in vehicles).
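One way to realize volume-weighted k-means is scikit-learn's sample_weight argument, so that large cells pull the centroids in proportion to their volume. A minimal sketch under that assumption (this is not cfdMine itself; the per-cell state vector is invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_cells = 50_000                        # stand-in for a large CFD domain
# Hypothetical per-cell state: position (x, y, z) and velocity (u, v, w).
X = rng.normal(size=(n_cells, 6))
cell_volume = rng.lognormal(mean=0.0, sigma=0.5, size=n_cells)

km = KMeans(n_clusters=200, n_init=3, random_state=0)
km.fit(X, sample_weight=cell_volume)    # volume-weighted centroids

# Each cluster becomes one node of the reduced-order lumped network;
# its volume is the summed volume of its member cells.
node_volume = np.bincount(km.labels_, weights=cell_volume)
print("nodes:", len(node_volume), "total volume:", node_volume.sum())
```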

Relevance:

60.00%

Publisher:

Abstract:

In young, first-episode, productive, medication-naive patients with schizophrenia, EEG microstates (building blocks of mentation) tend to be shortened. Koenig et al. [Koenig, T., Lehmann, D., Merlo, M., Kochi, K., Hell, D., Koukkou, M., 1999. A deviant EEG brain microstate in acute, neuroleptic-naïve schizophrenics at rest. European Archives of Psychiatry and Clinical Neuroscience 249, 205–211] suggested that this shortening concerned specific microstate classes. Sequence rules (microstate concatenations, syntax) might conceivably be affected as well. In 27 patients of the above type and 27 controls, from three centers, multichannel resting EEG was analyzed into microstates using k-means clustering of momentary potential topographies into four microstate classes (A–D). In patients, microstates were shortened in classes B and D (from 80 to 70 ms and from 94 to 82 ms, respectively), occurred more frequently in classes A and C, and covered more time in A and less in B. Topography differed only in class B, where LORETA tomography predominantly showed stronger left and anterior activity in patients. Microstate concatenation (syntax) was generally disturbed in patients; specifically, the class sequence A→C→D→A predominated in controls but was reversed in patients (A→D→C→A). In schizophrenia, information processing in certain classes of mental operations might deviate because of precocious termination. The intermittent occurrence might account for Bleuler's “double bookkeeping.” The disturbed microstate syntax opens a novel physiological comparison of mental operations between patients and controls.
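A toy sketch of the clustering step (synthetic EEG, scikit-learn; it ignores the polarity-invariance and GFP-peak refinements that microstate studies normally use): momentary topographies are clustered into four classes and mean durations per class are read off the label sequence.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_channels, n_samples, fs = 32, 5000, 250   # hypothetical montage and rate
eeg = rng.normal(size=(n_samples, n_channels))
eeg -= eeg.mean(axis=1, keepdims=True)      # average reference

# Cluster momentary potential topographies into four microstate classes.
labels = KMeans(n_clusters=4, n_init=20, random_state=0).fit_predict(eeg)

# Mean microstate duration per class, in milliseconds.
cuts = np.flatnonzero(np.diff(labels)) + 1
segments = np.split(labels, cuts)
for c in range(4):
    durs = [len(s) for s in segments if s[0] == c]
    print("class %d: %.0f ms mean duration" % (c, 1000 * np.mean(durs) / fs))
```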

Relevance:

60.00%

Publisher:

Abstract:

Magnetic resonance temperature imaging (MRTI) is recognized as a noninvasive means of providing temperature imaging for guidance in thermal therapies. The most common method of estimating temperature changes in the body using MR is to measure the water proton resonant frequency (PRF) shift. Calculation of the complex phase difference (CPD) is the method of choice for measuring the PRF indirectly, since it facilitates temperature mapping with high spatiotemporal resolution. Chemical shift imaging (CSI) techniques can provide the PRF directly with high sensitivity to temperature changes while minimizing artifacts commonly seen in CPD techniques. However, CSI techniques are currently limited by poor spatiotemporal resolution. This research develops and validates a CSI-based MRTI technique with intentional spectral undersampling, which allows relaxed acquisition parameters that improve spatiotemporal resolution. An algorithm based on autoregressive moving average (ARMA) modeling is developed and validated to overcome the limitations of Fourier-based analysis, allowing highly accurate and precise PRF estimates. From the determined acquisition parameters and ARMA modeling, robust temperature maps are generated using the k-means algorithm and validated in laser treatments in ex vivo tissue. The use of the non-PRF-based measurements provided by the technique is also investigated to aid in validating the thermal damage predicted by an Arrhenius rate dose model.
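For context, the baseline CPD temperature calculation that CSI/ARMA methods aim to improve on is a short formula. A sketch with the standard PRF constants and synthetic complex images (field strength and echo time here are illustrative, not the study's):

```python
import numpy as np

GAMMA = 42.58e6      # proton gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6     # PRF thermal coefficient, -0.01 ppm/degC
B0 = 1.5             # field strength, T (assumed)
TE = 20e-3           # echo time, s (assumed)

def cpd_temperature_change(img_hot, img_cold):
    """Temperature-change map from two complex MR images."""
    dphi = np.angle(img_hot * np.conj(img_cold))   # phase difference, rad
    return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

rng = np.random.default_rng(4)
base = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
# Simulate a uniform 10 degC rise and recover it.
heated = base * np.exp(1j * 2 * np.pi * GAMMA * ALPHA * B0 * TE * 10.0)
print(cpd_temperature_change(heated, base).mean())  # ~10.0
```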

Relevance:

60.00%

Publisher:

Abstract:

Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best-performing components of the BoF architecture and to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy of about 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
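A sketch of a BoF pipeline in the spirit of the abstract, with stated substitutions: OpenCV SIFT computed per HSV channel on a dense grid, a flat MiniBatchKMeans vocabulary as a stand-in for hierarchical k-means, and a linear SVM.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def dense_descriptors(img_bgr, step=8):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(0, hsv.shape[0], step)
           for x in range(0, hsv.shape[1], step)]
    # Per-channel SIFT on HSV, concatenated, approximates "SIFT on HSV".
    descs = [sift.compute(hsv[:, :, c], kps)[1] for c in range(3)]
    return np.hstack(descs)

def bof_histogram(img_bgr, vocab):
    words = vocab.predict(dense_descriptors(img_bgr).astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

# Usage outline: fit the vocabulary on descriptors pooled from training
# images, encode every image as a word histogram, then train the SVM:
#   vocab = MiniBatchKMeans(n_clusters=10_000).fit(all_train_descriptors)
#   clf = LinearSVC().fit([bof_histogram(i, vocab) for i in imgs], labels)
```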

Relevance:

60.00%

Publisher:

Abstract:

Improvements in the analysis of microarray images are critical for accurately quantifying gene expression levels. The acquisition of accurate spot intensities directly influences the results and interpretation of statistical analyses. This dissertation discusses the implementation of a novel approach to the analysis of cDNA microarray images. We use a stellar photometric model, the Moffat function, to quantify microarray spots from nylon microarray images. The inherent flexibility of the Moffat shape model makes it ideal for quantifying microarray spots. We apply our novel approach to a Wilms' tumor microarray study and compare our results with a fixed-circle segmentation approach for spot quantification. Our results suggest that different spot feature extraction methods can affect the ability of statistical methods to identify differentially expressed genes. We also used the Moffat function to simulate a series of microarray images under various experimental conditions. These simulations were used to validate the performance of various statistical methods for identifying differentially expressed genes. Our simulation results indicate that tests taking into account the dependency between mean spot intensity and variance estimation, such as the smoothened t-test, can better identify differentially expressed genes, especially when the number of replicates and the mean fold change are low. The analysis of the simulations also showed that, overall, a rank sum test (Mann-Whitney) performed well at identifying differentially expressed genes; previous work has likewise suggested the strengths of nonparametric approaches. We also show that multivariate approaches, such as hierarchical and k-means cluster analysis along with principal components analysis, are only effective at classifying samples when replicate numbers and mean fold change are high. Finally, we show how our stellar shape model approach can be extended to the analysis of 2D-gel images by adapting the Moffat function to account for the elliptical nature of spots in such images. Our results indicate that stellar shape models offer a previously unexplored approach to the quantification of 2D-gel spots.
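For reference, the circular Moffat profile in its usual astronomical parameterization is shown below (symbol names follow the standard convention, not necessarily the dissertation's); the 2D-gel extension mentioned in the abstract replaces the radial term with an elliptical quadratic form.

```latex
f(r) = A \left(1 + \frac{r^2}{\alpha^2}\right)^{-\beta},
\qquad r^2 = (x - x_0)^2 + (y - y_0)^2
```

Here A is the peak amplitude, (x_0, y_0) the spot center, alpha a width scale, and beta controls how quickly the profile's wings fall off.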

Relevance:

60.00%

Publisher:

Abstract:

Air pollution is a serious threat with a direct impact on human health; in addition, changes in the chemical composition of the atmosphere can alter the climate and cause acid rain or ozone destruction, phenomena of global importance. The World Health Organization (WHO) considers air pollution one of the most important global priorities. Salamanca, Gto., Mexico has been ranked among the most polluted cities in the country. Local industry drove major economic development and rapid population growth in the second half of the twentieth century. The impact on air quality is significant, and substantial efforts have been made to measure pollutant concentrations. The main pollution sources are local plants in the chemical and power-generation sectors. The pollutants of concern are sulphur dioxide (SO2) and particulate matter of ~10 micrometres or less (PM10). Predicting the concentrations of these pollutants can be a powerful tool for taking preventive measures, such as reducing emissions and alerting the affected population. In this PhD thesis we propose a model to predict SO2 and PM10 concentrations for each monitoring booth in the Atmospheric Monitoring Network of Salamanca (REDMAS, for its Spanish acronym). The proposed models use meteorological variables as factors influencing pollutant concentration. The data used throughout this work are real measurements from the REDMAS. The proposed model combines Artificial Neural Networks (ANN) with clustering algorithms. The ANN used is a Multilayer Perceptron with one hidden layer, with a separate structure for the prediction of each pollutant. The meteorological variables used for prediction were wind direction (WD), wind speed (WS), temperature (T) and relative humidity (RH). Two clustering algorithms, K-means and Fuzzy C-means, are used to find relationships between the air pollutants and the meteorological variables under consideration; these relationships are added as inputs to the ANN to obtain the prediction. The results of the proposed model are compared with those of a multivariate linear regression and a plain multilayer perceptron neural network. Prediction quality is evaluated with the mean absolute error, the root mean square error, the correlation coefficient and the index of agreement. The results show the importance of the meteorological variables in predicting SO2 and PM10 concentrations in Salamanca, Gto., Mexico, and show that the proposed model performs better than the multivariate linear regression and the multilayer perceptron. The models implemented for each monitoring booth can produce air-quality predictions usable in a real-time forecasting system and for human-health impact analysis. The main results of this thesis include: a model based on an artificial neural network combined with clustering algorithms for one-hour-ahead prediction of the concentration of each pollutant (SO2 and PM10), with a separate model designed for each pollutant and for each of the three REDMAS monitoring booths; and a model, built the same way, to predict the average SO2 and PM10 concentration over the next 24 hours, again designed separately for each booth and each pollutant.
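A minimal sketch of the idea (not the thesis code): K-means cluster membership of the meteorological state is added as extra inputs to an MLP that predicts a pollutant concentration. All data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 2000
met = np.column_stack([rng.uniform(0, 360, n),   # wind direction (WD)
                       rng.gamma(2.0, 1.5, n),   # wind speed (WS)
                       rng.normal(20, 5, n),     # temperature (T)
                       rng.uniform(20, 90, n)])  # relative humidity (RH)
so2 = 10 + 0.5 * met[:, 1] + 0.1 * met[:, 3] + rng.normal(0, 1, n)

met_std = StandardScaler().fit_transform(met)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(met_std)
onehot = np.eye(4)[km.labels_]                   # cluster membership as input
X = np.hstack([met_std, onehot])

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X, so2)
print("R^2 on training data:", mlp.score(X, so2))
```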

Relevance:

60.00%

Publisher:

Abstract:

Industrial applications of computer vision sometimes require the detection of atypical objects that occur as small groups of pixels in digital images. These objects are difficult to single out because they are small and randomly distributed. In this work we propose an image segmentation method using a novel Ant System-based Clustering Algorithm (ASCA). ASCA models the foraging behaviour of ants, which move through the data space searching for high-data-density regions and leave pheromone trails along their path. The pheromone map is used to identify the exact number of clusters and to assign the pixels to these clusters using the pheromone gradient. We applied ASCA to the detection of microcalcifications in digital mammograms and compared its performance with state-of-the-art clustering algorithms such as the 1D Self-Organizing Map, k-Means, Fuzzy c-Means and Possibilistic Fuzzy c-Means. The main advantage of ASCA is that the number of clusters need not be known a priori. The experimental results show that ASCA is more effective than the other algorithms at detecting small clusters of atypical data.
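A toy illustration of the ant-clustering idea described above, emphatically not the authors' ASCA: ants random-walk toward locally denser data, deposit pheromone on a grid, high-pheromone cells indicate clusters, and points are assigned to the nearest such cell. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
data = np.vstack([rng.normal(loc, 0.05, (100, 2))
                  for loc in ([0.25, 0.25], [0.75, 0.7])])  # two blobs in [0,1]^2

G = 32                                    # pheromone grid resolution
pher = np.zeros((G, G))
ants = rng.uniform(0, 1, (50, 2))
for _ in range(200):
    for i, pos in enumerate(ants):
        cand = np.clip(pos + rng.normal(0, 0.05, 2), 0, 1)
        # Local data density within a small radius, before and after the step.
        d_old = np.sum(np.linalg.norm(data - pos, axis=1) < 0.1)
        d_new = np.sum(np.linalg.norm(data - cand, axis=1) < 0.1)
        if d_new >= d_old:                # move only toward denser regions
            ants[i] = cand
        gx, gy = np.minimum((ants[i] * G).astype(int), G - 1)
        pher[gx, gy] += 1.0               # deposit pheromone
    pher *= 0.95                          # evaporation

# High-pheromone cells act as cluster centres (a real implementation
# would merge adjacent cells to obtain the exact cluster count).
peaks = np.argwhere(pher > 0.5 * pher.max()) / G + 0.5 / G
labels = np.argmin(np.linalg.norm(data[:, None] - peaks[None], axis=2), axis=1)
print("high-pheromone cells found near the 2 true blobs:", len(peaks))
```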

Relevance:

60.00%

Publisher:

Abstract:

In this work we propose an image acquisition and processing methodology (framework) for in-field detection and quantification of grapes and leaves, based on six steps: 1) image segmentation through Fuzzy C-Means with Gustafson-Kessel (FCM-GK) clustering; 2) use of the FCM-GK outputs (centroids) as seeds for K-Means clustering; 3) identification of the clusters generated by K-Means using a Support Vector Machine (SVM) classifier; 4) morphological operations over the grape and leaf clusters in order to fill holes and eliminate small pixel clusters; 5) creation of a mosaic image using the Scale-Invariant Feature Transform (SIFT) in order to avoid overlap between images; and 6) calculation of the leaf and grape areas and location of the centroids of the grape bunches. Image data are collected using a colour camera fixed to a mobile platform. This platform was developed to provide a stabilized surface, guaranteeing that the images are acquired parallel to the vineyard rows; in this way the platform avoids the image distortions that lead to poor area estimates. Our preliminary results are promising, although they show that a camera stabilization system is still needed to avoid undesired camera movements, as well as a parallel processing procedure to speed up the mosaicking process.
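A sketch of step 2 of the pipeline under stated substitutions: a plain Euclidean fuzzy c-means (instead of the Gustafson-Kessel variant) is implemented in a few lines, and its centroids are passed to scikit-learn's KMeans as init seeds. The pixel values are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain FCM (stand-in for FCM-GK); returns the cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers

rng = np.random.default_rng(7)
pixels = np.vstack([rng.normal(mu, 5.0, (500, 2))         # e.g. chromatic values
                    for mu in ([120, 140], [135, 160], [110, 170], [128, 128])])

seeds = fuzzy_cmeans(pixels, c=4)
km = KMeans(n_clusters=4, init=seeds, n_init=1).fit(pixels)  # FCM-seeded K-Means
print(km.cluster_centers_)
```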

Relevance:

60.00%

Publisher:

Abstract:

This work explores the automatic recognition of physical-activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total of 179.80 h of data divided into sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning algorithms comprise the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both the k-means and the GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to be concentrated around transients, which limits their practical impact. Hence, we consider our proposal suitable for 24 h monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
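A sketch of the feature -> PCA -> unsupervised-clustering stage on synthetic epoch features (not the study's code); the HMM temporal-smoothing stage is only indicated here by a crude majority filter.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Hypothetical time-domain features per epoch: three accelerometry axes
# plus mean heart rate, for three intensity levels.
feats = np.vstack([rng.normal(m, 1.0, (300, 4))
                   for m in (0.0, 2.0, 5.0)])   # sedentary / light / vigorous

X = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(feats))
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

def majority_filter(labels, w=5):
    """Crude stand-in for the HMM: smooth transient misclassifications."""
    out = labels.copy()
    for i in range(len(labels)):
        win = labels[max(0, i - w):i + w + 1]
        out[i] = np.bincount(win).argmax()
    return out

print("fraction unchanged by smoothing:",
      np.mean(majority_filter(km_labels) == km_labels))
```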

Relevance:

60.00%

Publisher:

Abstract:

Advances in hardware make it possible to collect huge volumes of data, giving rise to applications that must provide information in near-real time: patient monitoring, health monitoring of water pipes, etc. The data streaming model emerges to serve these applications, overcoming the traditional store-then-process model. Under store-then-process, data are stored before being queried; in streaming, data are processed on the fly, producing continuous responses without ever being stored in full. Processing data on the fly imposes three challenges: 1) responses must be produced continuously whenever new data arrive in the system; 2) data are accessed only once and are generally not kept in their entirety; and 3) the per-item processing time needed to produce a response must be low. Two models exist for computing continuous responses: the evolving model and the sliding-window model; the latter fits certain applications better because it considers only the most recently received data rather than the whole history. In recent years, research on data stream mining has focused mainly on the evolving model; less work exists for the sliding-window model, since those algorithms must not only be incremental but must also discard the information that expires as the window slides, while still meeting the three challenges above. Clustering is one of the fundamental tasks of data mining: given a data set, the goal is to find representative groups that provide a concise description of it. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Because of the massive amounts of data such applications must process (up to millions of events per second), centralized solutions may be unable to meet the processing-time constraints and resort to discarding data during load peaks. To avoid this data loss, stream processing must be distributed; in particular, clustering algorithms must be adapted to environments where the data are distributed. In streaming, research focuses not only on designs for general tasks, such as clustering, but also on new approaches that fit particular scenarios better; for example, an ad-hoc grouping mechanism turns out to be more suitable for defense against Distributed Denial of Service (DDoS) attacks than the traditional k-means problem. This thesis contributes to the streaming clustering problem in both centralized and distributed environments. We designed a centralized clustering algorithm and, in an extensive evaluation, show its ability to discover high-quality clusters in low time compared with state-of-the-art solutions. We have also worked on a data structure that significantly reduces the memory required while keeping the error of the computed statistics under control at all times. Our work additionally provides two protocols for distributing the clustering computation. We analyze two key aspects: the impact of distributed computation on clustering quality and the conditions required to reduce processing time relative to the centralized solution. Finally, we developed a clustering-based framework for DDoS attack detection; we characterize the attacks detected and evaluate the efficiency and effectiveness of mitigating the attack's impact.
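A minimal sketch of sliding-window clustering, assuming a naive scheme (store the window, re-cluster warm-started from the previous centroids); real streaming algorithms, including the thesis's, use incremental summaries instead of storing the window, so this only illustrates the window semantics.

```python
from collections import deque
import numpy as np
from sklearn.cluster import KMeans

class SlidingWindowKMeans:
    def __init__(self, k, window):
        self.k = k
        self.window = deque(maxlen=window)   # expired points drop out
        self.centers = None

    def update(self, point):
        self.window.append(point)
        if len(self.window) >= self.k:
            X = np.asarray(self.window)
            init = self.centers if self.centers is not None else "k-means++"
            km = KMeans(n_clusters=self.k, init=init, n_init=1).fit(X)
            self.centers = km.cluster_centers_   # warm start for next update
        return self.centers

rng = np.random.default_rng(9)
swkm = SlidingWindowKMeans(k=2, window=500)
for t in range(3000):
    drift = t / 3000.0                       # the stream's clusters drift
    c = rng.choice([0.0 + drift, 4.0 - drift])
    swkm.update(rng.normal(c, 0.3, size=2))
print(swkm.centers)
```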

Relevance:

60.00%

Publisher:

Abstract:

Quality of service cannot be neglected in public transport planning and policy making, since it strongly influences the modal shift from the car to more sustainable modes. The concept also applies to public transport interchanges, the nodes of the transport system where the different sub-networks of public passenger transport and private vehicles meet. Although much progress has been made in measuring and assessing quality in the public transport sector, relatively little research has addressed quality at public transport interchanges. This research focuses on the quality of service of modal transfer at interurban interchanges, from the travellers' perspective. It aims to identify the key quality factors and the traveller profiles at interurban interchanges. The research is exploratory and offers insight into intermodal travellers' perception of quality aspects, providing new elements and inputs for more definitive investigation. The methodology combines multivariate statistical techniques to analyse customer satisfaction survey data and comprises three steps. First, multiple correspondence analysis was performed to explore latent constructs in the satisfaction ratings of quality attributes at interurban interchanges, thereby identifying the key quality factors. Second, k-means cluster analysis was applied to the key quality factors to classify travellers into homogeneous groups of transport users according to their perceived satisfaction, thereby identifying the traveller profiles. Finally, hints and recommendations on quality were formulated to support policy making, setting priorities for interurban interchanges. The methodology was applied at four interurban interchanges (railway or bus stations) in Madrid, Zaragoza, Gothenburg and Lyon, analysing data collected through a customer satisfaction survey carried out in 2011 at the four case studies, where different modes of public and private transport, covering both short and long trips, are interconnected. Data on travellers' satisfaction with 26 quality attributes were collected, together with socio-economic information and travel patterns. Multiple correspondence analysis identified 4-5 key quality factors per interchange, mainly related to ticketing, comfort and connectivity, while classical issues such as information are not perceived as important by travellers. Cluster analysis identified 2-5 traveller profiles per interchange; two groups appear in almost all case studies: commuter/business travellers and holiday travellers. As regards the priorities to support stakeholders in policy making, ticketing is the key issue for the Spanish interurban interchanges, while connectivity and temporal issues emerge in the French and Swedish case studies.
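A sketch of the profiling step, assuming synthetic factor scores standing in for the MCA output (the MCA itself is not reproduced here): k-means groups travellers on their key-quality-factor scores, with within-cluster inertia as a rough guide to the number of profiles.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)
# Hypothetical key-quality-factor scores for 400 surveyed travellers
# (e.g. ticketing, comfort, connectivity, temporal factors).
scores = np.vstack([rng.normal([1.0, 0.5, -0.5, 0.0], 0.4, (250, 4)),   # commuters
                    rng.normal([-0.8, 0.2, 1.0, 0.6], 0.4, (150, 4))])  # tourists

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
                 .fit(scores).inertia_ for k in range(2, 6)}
print("within-cluster inertia by k:", inertias)

profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
for c in range(2):
    print("profile %d mean factor scores:" % c,
          scores[profiles.labels_ == c].mean(axis=0).round(2))
```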

Relevance:

60.00%

Publisher:

Abstract:

This work addresses the problem of characterizing the canopy of fruit trees for the site-specific application of plant-protection products. The proposal uses a depth map and an RGB image combined (RGB-D), provided by Microsoft's Kinect sensor, to apply pesticides in a localized way. From the depth map the canopy density can be estimated, and from this information the nozzles to open at each moment can be determined. Algorithms were developed in Matlab that, besides acquiring the RGB-D images, allow pesticides to be applied only to leaves and/or fruit, as desired. These algorithms were implemented in software that communicates with the "Kinect Windows SDK" development environment, which extracts the images from the Kinect sensor. To identify leaves, classification and identification algorithms were implemented. The classification algorithms used were Fuzzy C-Means with Gustafson-Kessel (FCM-GK) and K-Means; the centroids (prototypes) of each class generated by FCM-GK were used as seeds for K-Means, to accelerate the algorithm's convergence and to maintain temporal coherence between the groups generated by K-Means. The classification algorithms were applied to images transformed to the L*a*b* colour space; specifically, the a* and b* (chromatic) channels were used in order to reduce the effect of illumination on the colours. The classifiers were configured to look for four groups: leaves, porosity, fruit and trunk. Once the classifier generates the group prototypes, a Support Vector Machine classifier with a Gaussian radial basis function kernel identifies the class of interest (leaves). The combination of these algorithms has shown low classification errors, with a 4% error rate in leaf identification. Moreover, these algorithms process up to 8.4 images per second, which allows real-time application. The results demonstrate the feasibility of using the Kinect sensor to determine where and when to apply pesticides. They also show that there are limitations on its use, imposed by lighting conditions: Kinect can be used outdoors, but on cloudy days, early in the morning, or at night with artificial lighting, or by adding a sunshade under intense light.
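A sketch of the colour-classification step, assuming OpenCV for the L*a*b* conversion and plain k-means on the chromatic a*, b* channels (the FCM-GK seeding and the SVM leaf-selection stages of the pipeline are omitted here).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def classify_canopy(img_bgr, n_groups=4):
    """Cluster pixels into leaf/porosity/fruit/trunk candidate groups."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)   # a*, b* only
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(ab)
    return km.labels_.reshape(img_bgr.shape[:2]), km.cluster_centers_

# Usage on a synthetic frame; with real data the SVM stage would then
# pick out which of the four prototypes corresponds to leaves.
frame = np.random.default_rng(11).integers(0, 255, (120, 160, 3), dtype=np.uint8)
label_map, prototypes = classify_canopy(frame)
print(prototypes)
```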