973 results for "Dynamic programming language"
Abstract:
Fuel consumption is a vehicle characteristic that is continuously being improved, driven by fuel prices and growing environmental awareness. This doctoral dissertation presents a consumption-optimization algorithm that takes into account the technical specifications of the vehicle, the terrain profile of the road and the traffic conditions along it. The algorithm computes the optimal speed profile the vehicle should follow to complete a trip within a specified travel time, considering both the road slope and the traffic conditions expected during the time window of the trip. It also reacts to changing traffic conditions, continuously adapting the optimal speed profile so that the vehicle reaches the destination at the scheduled arrival time. The optimization is applied to a conventional internal-combustion vehicle and to a Series Hybrid Electric Vehicle (SHEV); the consumption data used by the algorithm are obtained from quasi-static simulation models of the vehicles. The minimization technique is Dynamic Programming, and the algorithm splits the optimization into two clearly separated parts, applying Dynamic Programming to each. The first part optimizes consumption according to foreseeable traffic conditions: it computes an average speed profile that avoids, whenever possible, traffic jams on the road, since time lost in a jam must later be recovered through a higher average speed, which increases consumption. The second part computes the optimal speed according to the road slope and the remaining travel time. Because fuel consumption rises as the time available to finish the trip shrinks, this optimization uses weighting (penalty) factors to tune the influence of each of these two variables on the minimization. Although the penalty factors and the road profile condition the achievable savings, the computed optimal speed profiles save fuel with respect to a constant-speed profile with the same travel time: simulations indicate savings of up to 8.9% in fuel for the conventional vehicle and 2.8% in electrical energy for the series hybrid. During real-time execution the algorithm merges the traffic-based and slope-based optimizations: the average speed profile obtained from the traffic optimization sets the values of the penalty factors used by the slope-and-time optimization. Although the savings of this joint optimization depend on the traffic conditions, the terrain, the travel time and the characteristics of the vehicle, simulations indicate consumption savings of more than 6% for both vehicle types with respect to optimizations that cannot avoid traffic jams on the road.
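The abstract above outlines a two-stage Dynamic Programming scheme. As a minimal, hedged sketch of the slope-and-time stage only (not the thesis implementation), the toy Python below discretizes the road into segments and picks, for each segment, the speed that minimizes fuel plus a weighted time penalty; fuel_rate, SEGMENT_M, SPEEDS, beta and gamma are all illustrative assumptions.

```python
# Toy dynamic-programming sketch of a slope/time speed-profile optimization.
# Not the thesis code: the fuel model and all constants are made up.
import math

SEGMENT_M = 100.0                                 # spatial discretization step [m]
SPEEDS = [v / 3.6 for v in range(40, 121, 10)]    # admissible speeds, 40-120 km/h in m/s

def fuel_rate(v, slope):
    """Toy quasi-static fuel rate [g/s]: grows with speed and uphill slope."""
    return 0.5 + 0.02 * v ** 2 + 400.0 * max(slope, 0.0) * v

def optimal_profile(slopes, beta, gamma=0.5):
    """Backward DP over road segments; beta weights travel time against fuel,
    gamma penalizes speed changes as a crude stand-in for acceleration losses."""
    n = len(slopes)
    cost = {v: 0.0 for v in SPEEDS}          # cost-to-go beyond the last segment
    policy = [dict() for _ in range(n)]
    for k in range(n - 1, -1, -1):           # sweep segments back to the start
        new_cost = {}
        for v in SPEEDS:                     # speed entering segment k
            best, best_u = math.inf, None
            for u in SPEEDS:                 # speed chosen for segment k
                dt = SEGMENT_M / u
                step = (fuel_rate(u, slopes[k]) * dt   # fuel burned on the segment
                        + beta * dt                    # travel-time penalty
                        + gamma * abs(u - v))          # speed-change penalty
                if step + cost[u] < best:
                    best, best_u = step + cost[u], u
            new_cost[v], policy[k][v] = best, best_u
        cost = new_cost
    v, profile = SPEEDS[0], []               # roll the policy forward from 40 km/h
    for k in range(n):
        v = policy[k][v]
        profile.append(round(v * 3.6))       # back to km/h for readability
    return profile

print(optimal_profile([0.0, 0.03, -0.02, 0.0], beta=3.0))
```

A larger beta makes lost time more expensive and pushes the profile toward higher speeds, mirroring the trade-off the penalty factors control in the abstract.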
Abstract:
The main objective of this project has been to introduce machine learning into the FleSe application. FleSe is a web application that runs fuzzy queries over crisp databases, using a set of criteria to define the fuzzy concepts employed in those queries; it also lets each user customize these criteria. This is where machine learning comes in: the default criteria change and are learned from the customizations that users make. Secondary objectives were to become familiar with web design and development, and to refresh and extend knowledge of fuzzy logic and the logic programming language Ciao-Prolog. The work carried out, and above all the study of the results, shows that clustering the users is what sets this version apart from the previous one. The idea is the following: a machine learning algorithm could be run over the criteria customizations of all users, but the wide diversity of user opinions may lead it to infer incorrect or unrepresentative criteria. To solve this, users are first grouped so that each group shares the same opinion (criterion) about a concept, and the learning algorithm is then applied to refine the default criterion of each group. Possible improvements for future versions of FleSe include better control and handling of the plserver executable, the component that lets the web application use Ciao-Prolog to evaluate the fuzzy logic behind the queries; one of its most serious problems is that it blocks the execution thread when it tries to load a file containing errors and, if this happens repeatedly, it ends up blocking all subsequent requests and with them the whole application. With users and potential customers in mind, it would also be important to let FleSe work with SQL databases instead of storing the data in Prolog files. Another possible improvement is to cluster users on different features depending on the fuzzy concepts used in the queries, so that each fuzzy concept produces its own groups of users with their own opinions about that concept; this would yield more precise default criteria for each user and each fuzzy concept.
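As a hedged illustration of the grouping-then-learning step described above (not the FleSe code; the data, concept and feature names are made up), one could cluster the users' customized thresholds and take each cluster's mean as that group's learned default:

```python
# Illustrative sketch: cluster users by their customized thresholds for one
# fuzzy concept, then use each cluster's mean as the group's default criterion.
import numpy as np
from sklearn.cluster import KMeans

# Each row: one user's customization of a fuzzy concept such as "cheap",
# e.g. (price considered fully cheap, price not cheap at all) -- made-up data.
user_criteria = np.array([
    [10.0, 25.0],
    [12.0, 30.0],
    [ 8.0, 20.0],
    [30.0, 60.0],
    [35.0, 70.0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_criteria)

for group in range(kmeans.n_clusters):
    members = user_criteria[kmeans.labels_ == group]
    default = members.mean(axis=0)     # learned default criterion for this group
    print(f"group {group}: default fuzzy boundaries = {default}")
```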
Abstract:
This project studies the feasibility of three possible options for capturing the position and posture of people in real environments, together with the design and implementation of a capture prototype for each of them and a comparison highlighting the pros and cons of each solution. The first alternative is a high-quality, high-precision optical infrared tracking system, Optitrack; the second is a low-cost solution, Microsoft's Kinect peripheral; and the third combines both devices to strike a balance between precision and cost, using the strengths of each to offset the weaknesses of the other. An important aspect of the work is that the capture prototypes are aimed at real working environments (specifically, capturing the movements of the personnel working in an operating room). This required tests to minimize the effect of light sources on the infrared systems, a study of how many people each device can capture simultaneously without degrading its performance, an assessment of how invasive the devices are for the workers (tracking markers), and suitable mechanisms to reduce the impact of occlusions using interpolation methods together with knowledge of the context, the motion constraints of the human body and the evolution over time. The work also built up expertise in the operation and configuration of devices such as Natural Point's Optitrack capture system and the Kinect motion-sensing device developed by Microsoft, in the Unity cross-platform game engine and development environment and the C# programming language used for its control scripts, and in the communication protocols, VRPN and NatNet, that link the different systems making up the prototypes.
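As a hedged illustration of the occlusion handling mentioned above (only the interpolation step; the project also exploits context knowledge and body constraints), the sketch below fills short gaps in a marker track by linear interpolation between the last and next observed positions; the data and function name are made up:

```python
# Fill short occlusion gaps in a 3-D marker track by linear interpolation.
# Illustrative only; frames with None mark moments when the marker was occluded.
import numpy as np

def fill_occlusions(frames):
    """frames: list of (x, y, z) tuples, with None where the marker was lost."""
    filled = list(frames)
    known = [i for i, p in enumerate(frames) if p is not None]
    for a, b in zip(known, known[1:]):
        pa, pb = np.asarray(frames[a], float), np.asarray(frames[b], float)
        for i in range(a + 1, b):                 # frames lost between two detections
            t = (i - a) / (b - a)
            filled[i] = tuple((1 - t) * pa + t * pb)
    return filled

track = [(0.0, 0.0, 1.0), None, None, (0.3, 0.0, 1.0), (0.4, 0.0, 1.1)]
print(fill_occlusions(track))
```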
Abstract:
Emerging technologies such as cloud computing and mobile devices are creating an unprecedented opportunity to enhance the educational system, letting educators customize and improve the learning experience and letting students acquire knowledge regardless of where they are. Moreover, gamification techniques make it possible to encourage and motivate students to learn arduous subjects by making the experience more engaging, and mobile games can be the right vehicle to support this enhanced learning experience. This project covers the design and development of a highly scalable, high-performance cloud architecture, as well as the iOS client that uses it, to support a new version of Temporis, a multiplayer mobile game about ordering historical events (e.g. history, art, sports, entertainment and literature) on a timeline; Temporis is currently available on Google Play. This work describes the development of the new version, Temporis v2.0, detailing how the original Temporis was improved and adapted. In particular, it describes the new backend, written in Go on Google App Engine and built to support thousands of users, along with other features such as how to send push notifications from that platform. Finally, the Temporis v2.0 iOS client was developed with the latest and most relevant technologies, paying special attention to Swift (Apple's new programming language, which is safe and fast), the Functional Reactive paradigm (which helps build highly interactive apps while minimizing bugs) and the VIPER architecture (which follows the SOLID principles, focuses on separation of concerns and favours code reuse on other platforms).
Abstract:
The need to solve the large linear systems that arise from discretizing the partial differential equations used to model different physical phenomena leads to the search for scalable numerical techniques, and multigrid methods are classified as scalable algorithms. An error estimator must also be associated with the numerical solution of the discrete problem so that the solution obtained by the approximation process can be properly assessed. In this context, this thesis proposes reusing the hierarchical matrix structures of the transfer and restriction operators of algebraic multigrid methods to accelerate the solution of the linear systems associated with the equation for contaminant transport in saturated porous media. In addition, it implements residual error estimates for problems with constant or non-constant data and for small- or large-advection regimes, and proposes using the residual estimates associated with the source term and the initial condition to build adaptive procedures for the problem data. The finite element, residual estimator and adaptive-procedure codes were built on the FEniCS project, written in the Python programming language and developed on the Eclipse platform, while the implementation of the algebraic multigrid methods with reuse relies on the PyAMG library. Based on the reuse of the hierarchical structures, multigrid methods with fixed-parameter and automatic reuse are proposed, and these concepts are extended to non-stationary iterative methods such as GMRES and BiCGSTAB. The numerical results show that the residual estimator captures the behaviour of the true error of the numerical solution and yields data-adaptive algorithms whose returned meshes produce numerical solutions similar to those of uniform meshes with more elements. Moreover, the methods with reuse are faster than the methods that do not reuse the structures, and their efficiency is also visible in the solution of the auxiliary problem required to obtain the residual estimates in the large-advection regime. These results cover both SA-type algebraic multigrid methods and methods preconditioned by SA algebraic multigrid, and involve contaminant transport with small and large advection, structured and unstructured meshes, two- and three-dimensional problems, and domains at different scales.
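As a hedged sketch of the reuse idea with the PyAMG library mentioned above (not the thesis code; the Poisson test matrix and tolerance are illustrative stand-ins for the transport systems), the hierarchy is built once and then reused as a preconditioner for a sequence of right-hand sides:

```python
# Build a smoothed-aggregation (SA) hierarchy once and reuse it as a
# preconditioner for several solves (e.g. successive time steps).
import numpy as np
import pyamg
from scipy.sparse.linalg import gmres

A = pyamg.gallery.poisson((200, 200), format='csr')   # stand-in system matrix
ml = pyamg.smoothed_aggregation_solver(A)              # hierarchy built once
M = ml.aspreconditioner()                              # reused across all solves

for step in range(5):                                  # e.g. five time steps
    b = np.random.rand(A.shape[0])                     # new right-hand side
    x, info = gmres(A, b, M=M, atol=1e-8)
    print(f"step {step}: converged = {info == 0}")
```

Rebuilding `ml` at every step would repeat the most expensive setup work; reusing it is the essence of the acceleration the thesis studies.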
Abstract:
The operation of autonomous vehicles requires means of avoiding collisions when previously unknown obstacles appear in their path; obstacle-avoidance algorithms and sensors suited to detecting such obstacles are essential to operating these vehicles. This dissertation presents studies of four obstacle-avoidance algorithms and of the technology of three types of sensors applicable to the operation of autonomous vehicles. After the theoretical studies, one of the algorithms was tested to verify its applicability to the test vehicle. The experimental stage consisted of a program, written in the Java programming language, that applied the Inseto 2 algorithm to obstacle avoidance on a robotic platform (Robodeck) using the ultrasonic sensors on board that platform. The experiments were conducted indoors, in a two-dimensional, horizontal (planar) environment, with the Robodeck executing a trajectory. For the tests, obstacles were placed so as to simulate varied situations and evaluate the effectiveness of the algorithm in these path configurations. The algorithm avoided the obstacles successfully and, whenever a solution for the trajectory existed, it was found; when there was no solution, the algorithm detected this situation and stopped the vehicle.
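As a hedged, much simplified illustration of the kind of sense-and-act loop involved (not the dissertation's Java implementation of Inseto 2; the sonar reading and motion primitives below are simulated placeholders for the Robodeck interfaces):

```python
# Simplified reactive avoidance loop: advance toward the goal while the front
# ultrasonic reading is clear, otherwise skirt the obstacle. Illustrative only.
import random

SAFE_CM = 40                       # minimum allowed clearance ahead

def sonar_cm():
    """Simulated ultrasonic reading; on the real platform this comes from the robot."""
    return random.uniform(10, 200)

def step_toward_goal():
    print("advance toward goal")

def skirt_obstacle():
    print("turn and follow obstacle boundary")

for _ in range(10):                # main control loop (10 iterations for the demo)
    if sonar_cm() < SAFE_CM:
        skirt_obstacle()
    else:
        step_toward_goal()
```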
Abstract:
This thesis proposes a model for regenerating energy in metro systems, based on controlling the stops and departures of the train along its journey so as to exploit the energy recovered through regenerative braking in the traction system. The objective is to optimize energy consumption and achieve greater efficiency from the perspective of sustainable management. Applying a Genetic Algorithm (GA) to obtain the best train-traffic configuration, the research develops and tests the Traction Control Algorithm for Metro Energy Regeneration (ACTREM), written in the C++ programming language. To analyse the performance of the ACTREM control algorithm in increasing energy efficiency, fifteen simulations of its application to Line 4 (Yellow) of the São Paulo metro were carried out. These simulations demonstrated ACTREM's ability to automatically generate timetable diagrams optimized for energy savings in metro systems, taking into account the system's operational constraints such as the maximum capacity of each train, total waiting time, total travel time and the headway between trains. The results show that the proposed algorithm can save 9.5% of the energy without causing relevant impacts on the system's passenger-carrying capacity, and they also suggest possible follow-up studies.
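As a hedged sketch of the Genetic Algorithm idea only (not ACTREM; the representation, fitness proxy and all numbers are made up), the toy GA below evolves a vector of station dwell times under an elitist selection scheme:

```python
# Toy genetic algorithm: each individual is a vector of station dwell times;
# the fitness is a crude proxy for a timetable that favours energy recovery.
import random

N_STATIONS, POP, GENERATIONS = 8, 30, 40
MIN_DWELL, MAX_DWELL = 20, 60            # seconds

def fitness(dwells):
    # Made-up objective: prefer smooth, mid-range dwell times as a stand-in for
    # synchronizing braking and traction phases between consecutive trains.
    smooth = -sum(abs(a - b) for a, b in zip(dwells, dwells[1:]))
    target = -sum(abs(d - 40) for d in dwells)
    return smooth + 0.5 * target

def mutate(dwells):
    i = random.randrange(N_STATIONS)
    child = list(dwells)
    child[i] = min(MAX_DWELL, max(MIN_DWELL, child[i] + random.randint(-5, 5)))
    return child

def crossover(a, b):
    cut = random.randrange(1, N_STATIONS)
    return a[:cut] + b[cut:]

population = [[random.randint(MIN_DWELL, MAX_DWELL) for _ in range(N_STATIONS)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                     # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best dwell-time schedule:", max(population, key=fitness))
```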
Abstract:
Transportation Department, Secretary of Transportation, Washington, D.C.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The paper presents a computational system based upon formal principles to run spatial models for environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels, but are still integrated. An example of such a situation commonly occurs in watershed hydrology where overland flow and stream channel flow have very different flow rates but are highly related as they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically with a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches for numerical solvers, (ii) considers the constraints that need to be enforced to use more adaptive time steps in discrete time specified simulations, and (iii) considers the scaling of transfer rates in equations that use different time bases for time-space derivatives. A multistep scheme is proposed for SimuMap. This is presented along with a description of its visual programming interface, its modelling formalisms and future plans. (C) 2003 Elsevier Ltd. All rights reserved.
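As a hedged toy illustration of coupling sub-models with different time bases (not SimuMap code; the stores, rates and step sizes are made up), a fast overland-flow store can be sub-stepped inside each step of a slower channel store, with the transfer scaled to the sub-step length:

```python
# Sub-step a fast process (overland flow) inside the time step of a slow one
# (channel routing); the transfer rate is scaled to the sub-step time base.
DT_SLOW = 3600.0          # channel-flow step [s]
SUBSTEPS = 60             # overland flow runs on a 60 s step inside each hour
DT_FAST = DT_SLOW / SUBSTEPS

overland_store, channel_store = 50.0, 10.0      # water stores [mm], made up

for hour in range(24):
    inflow_to_channel = 0.0
    for _ in range(SUBSTEPS):                   # fast process, small time base
        runoff = 0.002 * overland_store * DT_FAST   # transfer scaled by the sub-step
        overland_store -= runoff
        inflow_to_channel += runoff
    channel_store += inflow_to_channel          # slow process, large time base
    channel_store *= 0.95                       # simple channel outflow each hour

print(round(overland_store, 2), round(channel_store, 2))
```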
Abstract:
There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time, or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted and an assessment made of the general applicability of each of the models, their limitations and the sources of error in their model predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form. (C) 2004 Elsevier Ltd. All rights reserved.
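The paper itself gives the Nageswararao and Plitt model forms ready for implementation. Purely to illustrate putting a classification model into a few lines of code (and explicitly not those published models), the sketch below evaluates a generic corrected-efficiency partition curve of the Whiten type with made-up parameters:

```python
# Generic corrected-efficiency (partition) curve of the Whiten form, often used
# in classification modelling. Illustrative parameters; not the Nageswararao or
# Plitt model from the paper.
import math

def efficiency_to_underflow(d, d50c, alpha):
    """Corrected fraction of particles of size d reporting to the underflow."""
    x = d / d50c
    return (math.exp(alpha * x) - 1.0) / (math.exp(alpha * x) + math.exp(alpha) - 2.0)

for d in (10, 50, 100, 200):          # particle sizes in micrometres
    print(d, round(efficiency_to_underflow(d, d50c=100.0, alpha=2.5), 3))
```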
Abstract:
Wurst is a protein threading program with an emphasis on high quality sequence to structure alignments (http://www.zbh.uni-hamburg.de/wurst). Submitted sequences are aligned to each of about 3000 templates with a conventional dynamic programming algorithm, but using a score function with sophisticated structure and sequence terms. The structure terms are a log-odds probability of sequence to structure fragment compatibility, obtained from a Bayesian classification procedure. A simplex optimization was used to optimize the sequence-based terms for the goal of alignment and model quality and to balance the sequence and structural contributions against each other. Both sequence and structural terms operate with sequence profiles.
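As a hedged reminder of the conventional dynamic-programming alignment step mentioned above (a plain global alignment with a linear gap penalty; the toy score() merely stands in for Wurst's structure and sequence log-odds terms):

```python
# Minimal Needleman-Wunsch-style global alignment score by dynamic programming.
def score(a, b):
    return 2.0 if a == b else -1.0          # placeholder compatibility score

def align(seq, template, gap=-2.0):
    n, m = len(seq), len(template)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + score(seq[i - 1], template[j - 1]),
                          F[i - 1][j] + gap,       # residue aligned to a gap
                          F[i][j - 1] + gap)       # template position skipped
    return F[n][m]

print(align("GATTACA", "GCATGCA"))
```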
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or on setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution using stochastic dynamic programming. Application of the approach to the eradication programme of Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter.
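As a hedged sketch of the stop-when-costs-outweigh-benefits logic described above (not the paper's stochastic dynamic program; the detection probability, prior and costs are illustrative), one can update the probability of persistence after each absent survey and stop once the expected escape cost drops below the cost of surveying another year:

```python
# Stop surveying when the expected cost of escape (declaring eradication while
# the species persists) falls below the cost of one more year of surveys.
P_PRESENT0 = 0.5        # prior probability the species persists (e.g. seed bank)
P_DETECT = 0.6          # probability a survey detects the species if present
SURVEY_COST = 1_000.0   # cost of one more year of surveying
DAMAGE_COST = 50_000.0  # cost if eradication is declared while it persists

p, year = P_PRESENT0, 0
while p * DAMAGE_COST > SURVEY_COST:        # expected escape cost still too high
    year += 1
    # Bayes update after one more survey with no detection
    p = p * (1 - P_DETECT) / (p * (1 - P_DETECT) + (1 - p))

print(f"declare eradication after {year} years of absent surveys "
      f"(residual presence probability {p:.3f})")
```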
Abstract:
One of the most pressing issues facing the global conservation community is how to distribute limited resources between regions identified as priorities for biodiversity conservation(1-3). Approaches such as biodiversity hotspots(4), endemic bird areas(5) and ecoregions(6) are used by international organizations to prioritize conservation efforts globally(7). Although identifying priority regions is an important first step in solving this problem, it does not indicate how limited resources should be allocated between regions. Here we formulate how to optimally allocate conservation resources between regions identified as priorities for conservation - the 'conservation resource allocation problem'. Stochastic dynamic programming is used to find the optimal schedule of resource allocation for small problems but is intractable for large problems owing to the curse of dimensionality(8). We identify two easy-to-use and easy-to-interpret heuristics that closely approximate the optimal solution. We also show the importance of both correctly formulating the problem and using information on how investment returns change through time. Our conservation resource allocation approach can be applied at any spatial scale. We demonstrate the approach with an example of optimal resource allocation among five priority regions in Wallacea and Sundaland, the transition zone between Asia and Australasia.
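As a hedged toy version of the kind of easy-to-use heuristic referred to above (not the paper's rules; the region names, return rates and diminishing-returns factor are invented), a myopic allocator sends each budget unit to the region with the highest current marginal return:

```python
# Myopic allocation heuristic: give each budget unit to the region with the
# greatest expected marginal return, with diminishing returns per region.
regions = {                    # expected return per $1M invested (made up)
    "Wallacea-A": 4.0,
    "Wallacea-B": 2.5,
    "Sundaland-A": 5.5,
    "Sundaland-B": 3.0,
    "Sundaland-C": 1.5,
}
DIMINISHING = 0.7              # each extra $1M in a region returns 70% of the last

budget_units, plan = 10, []    # ten $1M units to allocate over time
returns = dict(regions)
for _ in range(budget_units):
    best = max(returns, key=returns.get)       # highest current marginal return
    plan.append(best)
    returns[best] *= DIMINISHING               # diminishing returns on investment

print(plan)
```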
Abstract:
Summary form only given. The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent nondeterminism and a number of specific concurrency problems such as interference and deadlock. In previous work, we proposed a method for verifying concurrent Java components based on a mix of code inspection, static analysis tools, and the ConAn testing tool. The method was derived from an analysis of concurrency failures in Java components, but was not applied in practice. In this paper, we explore the method by applying it to an implementation of the well-known readers-writers problem and a number of mutants of that implementation. We only apply it to a single, well-known example, and so we do not attempt to draw any general conclusions about the applicability or effectiveness of the method. However, the exploration does point out several strengths and weaknesses in the method, which enable us to fine-tune the method before we carry out a more formal evaluation on other, more realistic components.
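For readers unfamiliar with the component being verified, the sketch below is a minimal readers-writers lock in Python (readers may share the lock, a writer needs exclusive access); it is illustrative only and is not the Java implementation or the mutants examined in the paper:

```python
# Classic "first readers-writers" lock: many readers may hold the lock at once,
# a writer requires exclusive access. This simple variant can starve writers.
import threading

class ReadWriteLock:
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()     # protects the reader count
        self._writer = threading.Lock()    # held by a writer, or by the reader group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._writer.acquire()     # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._writer.release()     # last reader lets writers in

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()
```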