927 results for Graph DBMS, Benchmarking, OLAP, NoSQL
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as "stoquastic" or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, or the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I will provide an instance of an optimization problem that is easy to solve classically, but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical and we focus on bounding the spectral gap. Our primary tool for doing this is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I will derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used will also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing "Fundamental Gap Conjecture".
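As a concrete illustration of the spectral-gap quantity discussed in this abstract (a sketch of ours, not code from the dissertation), the Dirichlet eigenvalues of a one-dimensional path subgraph can be computed numerically and checked against their known closed form:

```python
import numpy as np

def dirichlet_gap(n):
    """Spectral gap (difference between the two lowest Dirichlet
    eigenvalues) of the path-graph Laplacian on n interior vertices."""
    # Dirichlet Laplacian of a path: tridiagonal, 2 on the diagonal,
    # -1 on the off-diagonals (boundary vertices removed).
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    evals = np.linalg.eigvalsh(L)  # ascending order
    return evals[1] - evals[0]

def exact_gap(n):
    """Closed form for the same gap: lambda_k = 4 sin^2(k*pi/(2(n+1)))."""
    lam = lambda k: 4 * np.sin(k * np.pi / (2 * (n + 1))) ** 2
    return lam(2) - lam(1)
```

The numerical gap matches the closed form, and it shrinks as the path grows, the qualitative behavior that makes gap bounds relevant to adiabatic runtimes.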
Abstract:
Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated in this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test by cognitively healthy elderly (NC), patients with Mild Cognitive Impairment, subtypes amnestic (aMCI) and amnestic multiple domain (a+mdMCI), and patients with Alzheimer's disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC, MCI, AD) and four (NC, aMCI, a+mdMCI, AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, the groups differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path were significantly different between NC and AD, and between aMCI and AD. SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task.
These findings support a new methodological framework for assessing the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
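The speech-graph construction described above can be sketched in a few lines (the function name and the restriction to three of the attributes are ours; the study computes 13 SGAs, and this sketch ignores repeated-word self-loops):

```python
def speech_graph_attributes(words):
    """Map a word sequence to a directed speech graph: each distinct
    word is a node, each temporal transition a directed edge.
    Returns node count, edge count, and network density."""
    nodes = set(words)
    # consecutive-word transitions; immediate repetitions are dropped
    edges = {(a, b) for a, b in zip(words, words[1:]) if a != b}
    n, e = len(nodes), len(edges)
    density = e / (n * (n - 1)) if n > 1 else 0.0  # directed, no loops
    return {"nodes": n, "edges": e, "density": density}
```

For example, the sequence "dog, cat, horse, cat, dog" yields 3 nodes and 4 distinct directed edges, giving density 4/6.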
Abstract:
A weighted Bethe graph $B$ is obtained from a weighted generalized Bethe tree by identifying each set of children with the vertices of a graph belonging to a family $F$ of graphs. The operation of identifying the root vertex of each of $r$ weighted Bethe graphs to the vertices of a connected graph $\mathcal{R}$ of order $r$ is introduced as the $\mathcal{R}$-concatenation of a family of $r$ weighted Bethe graphs. It is shown that the Laplacian eigenvalues (when $F$ has arbitrary graphs) as well as the signless Laplacian and adjacency eigenvalues (when the graphs in $F$ are all regular) of the $\mathcal{R}$-concatenation of a family of weighted Bethe graphs can be computed (in a unified way) using the stable and low computational cost methods available for the determination of the eigenvalues of symmetric tridiagonal matrices. Unlike the previous results already obtained on this topic, the more general context of families of distinct weighted Bethe graphs is herein considered.
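A minimal sketch of the computational point made above: once the eigenvalue problem reduces to symmetric tridiagonal matrices, stable low-cost solvers apply. The helper below forms the matrix explicitly with NumPy for clarity; in practice one would call a dedicated tridiagonal routine such as scipy.linalg.eigh_tridiagonal.

```python
import numpy as np

def tridiag_eigenvalues(diag, off):
    """Eigenvalues of a symmetric tridiagonal matrix given its diagonal
    and off-diagonal entries, in ascending order."""
    d = np.asarray(diag, dtype=float)
    e = np.asarray(off, dtype=float)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    return np.linalg.eigvalsh(T)
```

A quick sanity check: the 3x3 matrix with diagonal (2, 2, 2) and off-diagonal (-1, -1) has eigenvalues 2 - sqrt(2), 2, and 2 + sqrt(2).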
Abstract:
Kinematic structure of planar mechanisms addresses the study of attributes determined exclusively by the joining pattern among the links forming a mechanism. The system group classification is central to the kinematic structure and consists of determining a sequence of kinematically and statically independent simple chains which represent a modular basis for the kinematics and force analysis of the mechanism. This article presents a novel graph-based algorithm for structural analysis of planar mechanisms with closed-loop kinematic structure which determines a sequence of modules (Assur groups) representing the topology of the mechanism. The computational complexity analysis and proof of correctness of the implemented algorithm are provided. A case study is presented to illustrate the results of the devised method.
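For background (this is the standard mobility formula, not the paper's decomposition algorithm), the degree of freedom of a planar mechanism determined purely by its joining pattern follows the Gruebler/Kutzbach criterion:

```python
def gruebler_dof(n_links, n_joints):
    """Planar mobility by the Gruebler/Kutzbach criterion for
    single-degree-of-freedom joints (e.g. revolute):
    F = 3*(n - 1) - 2*j, where n counts links (including the frame)
    and j counts joints."""
    return 3 * (n_links - 1) - 2 * n_joints
```

A four-bar linkage (4 links, 4 revolute joints) has mobility 1, as expected.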
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that can greatly reduce the reconfiguration latencies. For an embedded processor all these computations represent a heavy computational load that can significantly reduce the system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that obtain results as good as those of previous, complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider that the HW cost of the system (in our experiments, 3% of a Virtex-II PRO xc2vp30 FPGA) is affordable given the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
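The reuse-and-replacement idea above can be sketched in software terms (the SlotManager interface is hypothetical, and an LRU policy stands in for the paper's replacement heuristic): loaded configurations occupy a fixed number of reconfigurable slots, so repeated tasks skip reconfiguration.

```python
from collections import OrderedDict

class SlotManager:
    """Minimal sketch of configuration reuse with LRU replacement
    for a fixed number of reconfigurable slots."""

    def __init__(self, n_slots):
        self.slots = OrderedDict()  # task_id -> loaded flag, LRU order
        self.n_slots = n_slots
        self.reconfigurations = 0

    def load(self, task_id):
        """Return True if a reconfiguration was needed, False on reuse."""
        if task_id in self.slots:            # reuse: already loaded
            self.slots.move_to_end(task_id)
            return False
        if len(self.slots) >= self.n_slots:  # evict least recently used
            self.slots.popitem(last=False)
        self.slots[task_id] = True
        self.reconfigurations += 1
        return True
```

With 2 slots and the request sequence a, b, a, c, b, only the second "a" is a reuse; the other four requests trigger reconfigurations.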
Abstract:
Reconfigurable hardware can be used to build multitasking systems that dynamically adapt themselves to the requirements of the running applications. This is especially useful in embedded systems, since the available resources are very limited and the reconfigurable hardware can be reused for different applications. In these systems computations are frequently represented as task graphs that are executed taking into account their internal dependencies and the task schedule. The management of the task graph execution is critical for the system performance. In this regard, we have developed two different versions, a software module and a hardware architecture, of a generic task-graph execution manager for reconfigurable multitasking systems. The second version reduces the run-time management overheads by almost two orders of magnitude. Hence it is especially suitable for systems with strict timing constraints. Both versions include specific support to optimize the reconfiguration process.
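The core job of such a manager can be sketched as a dependency-honoring traversal of the task graph (Kahn's topological sort; the dictionary interface is hypothetical, and a real manager would launch each task rather than collect an order):

```python
from collections import deque

def execute_task_graph(deps):
    """Run a task graph in an order that honors internal dependencies.
    deps maps each task to the list of tasks it depends on."""
    indeg = {t: len(d) for t, d in deps.items()}
    users = {t: [] for t in deps}
    for t, d in deps.items():
        for p in d:
            users[p].append(t)          # p must finish before t starts
    ready = deque(sorted(t for t, k in indeg.items() if k == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)                 # here the manager would launch t
        for u in users[t]:
            indeg[u] -= 1
            if indeg[u] == 0:           # all dependencies satisfied
                ready.append(u)
    return order
```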
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
Chapter I establishes the origins of the pharmaceutical chemical industry, as well as its weight in the national economy, showing that commercial opportunities exist for this industry at the national level. Among the parameters for identifying a medium-sized industry, the following stand out: annual sales between ¢6 million and ¢30 million colones, fixed assets between ¢218,750.00 and ¢2,187,500.00, and between 50 and 199 employees. From the theoretical framework, Benchmarking is understood as a systematic and continuous process for evaluating the products, services, and work processes of organizations recognized as representing best practices, with the purpose of making organizational improvements. The types of Benchmarking include functional, competitive, and internal; the process applied in this study consists of five stages: determining what Benchmarking will be applied to, forming the Benchmarking team, identifying the Benchmarking partners, gathering and analyzing the Benchmarking information, and acting. The study of the marketing mix covers product, price, promotion, and place, including their concepts, characteristics, etc. Chapter II contains the objectives of the research, which essentially consist of providing the medium-sized pharmaceutical chemical industry of the San Salvador Metropolitan Area with a Benchmarking model applicable to marketing operations, drawing on the experience of other entities that have demonstrated success in the area of study, in aspects related to product, pricing, promotion, and distribution.
The information was gathered through a 22-question questionnaire administered to managers, medical representatives, and sales agents, who constitute the information source. The universe comprises 31 laboratories, from which a representative sample of 24 laboratories was selected at random. These were divided into two groups based on the publications of the Pharmaceutical Chemical Laboratories Measurement Index, which reports sales levels and market share: TYPE A companies are those whose sales levels and market share exceed those of the entity under study, and TYPE B companies are those below it in both respects. The survey results were tabulated statistically, and the practices of the TYPE A companies were extracted as best in class, with the recommendation that the company under study apply these practices to its marketing process. Chapter III describes the model company, including its organization and sales levels, and then applies the Benchmarking process following the stages described in Chapter I. The results show that with the application of Benchmarking, sales levels tend to increase, leading to the conclusion that Benchmarking produces positive changes in the organization. It is further recommended that this management tool be applied to the organization as a whole, which would yield better results.
Abstract:
Origins of the research: Service companies are currently changing continuously as a consequence of the opening of new markets, the impact of technology, and the emergence of new services. This forces companies to develop marketing strategies that allow them to grow the demand for the services they offer and to increase their number of clients by providing quality services, thereby creating a competitive advantage over other organizations. The research originates from the need to face these changes and challenges: in El Salvador there are companies that stand out from others for their organizational success, their number of clients, their infrastructure, the quality of their services, and the geographic expansion they have achieved. It is therefore necessary to identify the factors that make these organizations more profitable and successful than others, which is accomplished through the Benchmarking technique used in this research. Objectives: The main objective of this research is to apply the Benchmarking technique to identify the successful marketing strategies that leading service companies use to attract and retain their clients. This makes it possible to present service companies with a proposal of strategies that strengthen service quality and the performance of customer-facing staff, so that they can compete more efficiently and effectively, adapting to the demands and constant changes of the market.
Methodology: The scientific method was used, since it allowed the elements of the research to be analyzed objectively and systematically. A field study was also carried out to collect information directly from the object of study, using instruments such as surveys, interviews, and checklists, complemented with bibliographic sources. Results: The marketing strategies used by leading service companies that have proven successful and effective in attracting and retaining clients were identified, allowing a proposal of marketing strategies to be presented to service companies. The strategies address factors such as adequate infrastructure (layout and operation), the diversity of services offered, geographic expansion achieved through the creation and opening of new branches or through strategic alliances, the appropriate use of the different communication media to publicize the company's services and activities, and the attention provided by each employee who deals directly with clients. It was concluded that the success and development of each organization depends on the strategies it implements regarding infrastructure, services, geographic expansion, communication media, and employee performance. One of the resulting recommendations is therefore that organizations should strive daily to satisfy the needs of each of their clients, taking into account the above factors, so that clients feel comfortable and satisfied and remain loyal to the company.
Abstract:
Nowadays, photovoltaic (PV) technology is consolidated as a source of renewable energy, and maximizing the energy efficiency of PV plants is a major research challenge. The main requirement for this purpose is to know, in real time, the performance of each of the PV modules that make up the PV field. In this respect, a Smart Monitoring and Communications Module based on PLC communications, able to monitor the operating parameters of each PV panel, has been developed at the University of Malaga. With this device it is possible to check whether any of the panels is underperforming, due to a malfunction or to partial shading of its surface. Since fluctuations in the electricity production of a single panel affect the overall output of the string it belongs to, it is necessary to isolate the problem and route the energy through alternative paths in the case of PV panel array configurations.
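A toy version of the per-panel check described above (the function name and the 80% threshold are illustrative, not taken from the actual monitoring module): flag modules whose output falls well below the string median.

```python
def flag_underperformers(module_powers, threshold=0.8):
    """Flag PV modules whose measured power falls below a fraction of
    the string median, a simple stand-in for malfunction or partial
    shading detection. module_powers maps module id -> power (W)."""
    powers = sorted(module_powers.values())
    n = len(powers)
    median = (powers[n // 2] if n % 2
              else (powers[n // 2 - 1] + powers[n // 2]) / 2)
    return [m for m, p in module_powers.items() if p < threshold * median]
```

A string reading 250, 248, 120, and 251 W flags only the 120 W module; a healthy string flags nothing.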
Abstract:
Most organizations store their historical business information in data warehouses, which are queried to make strategic decisions using online analytical processing (OLAP) tools. This information has to be properly secured against unauthorized access; nevertheless, a great number of legacy OLAP applications have been developed without considering security aspects, or have had security incorporated only once the system was implemented. This work defines a reverse engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and also analyses and represents the security aspects that may have been established. This process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can easily be modified and improved with security, and automatically transformed to generate the new implementation.
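A toy sketch of the extraction step, under an assumed relational-metadata format (the paper's actual approach uses model-driven transformations, not this simple heuristic): tables holding foreign keys become facts of the conceptual model, and the referenced tables become dimensions.

```python
def extract_conceptual_model(tables):
    """Derive a rough multidimensional conceptual model from relational
    metadata. tables maps table name -> {"foreign_keys": [...target
    tables...], "measures": [...numeric columns...]}."""
    referenced = {fk for t in tables.values() for fk in t["foreign_keys"]}
    facts, dims = {}, set()
    for name, t in tables.items():
        if t["foreign_keys"]:            # table pointing at others: fact
            facts[name] = {"dimensions": t["foreign_keys"],
                           "measures": t["measures"]}
        if name in referenced:           # table pointed at: dimension
            dims.add(name)
    return facts, dims
```

On a star schema with a sales table referencing time and product, sales is recovered as the fact and time/product as dimensions.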
Abstract:
Given the recent advent of NGS technologies, capable of sequencing entire human genomes at reduced time and cost, the ability to extract information from the data plays a fundamental role in the advancement of research. The computational problems associated with these analyses currently fall within the Big Data topic, with databases containing various types of experimental data of ever-increasing size. This thesis deals with the implementation and benchmarking of the QDANet PRO algorithm, developed by the Biophysics group of the University of Bologna: the method processes high-dimensional data to extract a low-dimensional Signature of features with high classification performance, through an analysis pipeline that includes dimensionality-reduction algorithms. The method also generalizes to the analysis of non-biological data characterized by high volume and complexity, typical traits of Big Data. The QDANet PRO algorithm evaluates the performance of all possible pairs of features, estimating their discriminative power with a Naive Bayes Quadratic Classifier and then ranking them. Once a performance threshold is selected, a network of features is built, from which the connected components are determined. Each subgraph is analyzed separately and reduced with methods based on network theory until the final Signature is extracted. The method, previously tested with positive results on datasets available to the research group, was compared against results obtained on omics databases available in the literature, which constitute a reference in the field, and against existing algorithms that perform similar tasks. To reduce computation time, the algorithm was implemented in C++ on HPC, with OpenMP parallelization of the most critical parts.
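The network stage of the pipeline described above can be sketched as follows (pair scoring is assumed to have been done already, e.g. by the quadratic Bayes classifier; all names are illustrative): keep feature pairs above the performance threshold and extract the connected components of the resulting feature network.

```python
def feature_network_components(pair_scores, threshold):
    """Build a feature network from pairwise classification scores and
    return its connected components. pair_scores maps (feat_a, feat_b)
    -> score; pairs below threshold are discarded."""
    adj = {}
    for (a, b), score in pair_scores.items():
        if score >= threshold:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, components = set(), []
    for start in adj:                    # depth-first component sweep
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return components
```

Each returned component is the subgraph that the pipeline would then reduce separately toward the final Signature.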