869 results for traffic modelling and simulation. video processing


Relevance: 100.00%

Abstract:

The objective of this article is to apply the Design of Experiments technique together with the Discrete Event Simulation technique in an automotive process. The benefits of design of experiments in simulation include improved performance of the simulation process, avoiding trial-and-error searches for solutions. A methodology for the joint use of Design of Experiments and computer simulation is presented to assess the effects of the process variables and their interactions. In this paper, the efficacy of using process mapping and design of experiments in the conception and analysis phases is confirmed.
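The DoE-plus-simulation workflow described above can be sketched in a few lines. The following is a minimal illustration, not the article's actual model: the "process" is a toy single-server queue, the two factors and their levels are invented, and the main-effect estimate is the textbook high-minus-low contrast.

```python
import itertools
import random

def simulate_queue(arrival_rate, service_rate, n_jobs=2000, seed=0):
    """Toy discrete-event simulation of a single-server queue.
    Returns the mean waiting time over n_jobs arrivals."""
    rng = random.Random(seed)
    t, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # next arrival instant
        start = max(t, server_free_at)       # wait if the server is busy
        total_wait += start - t
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_jobs

# 2^2 full-factorial design: (low, high) level for each factor
levels = {"arrival_rate": (0.5, 0.9), "service_rate": (1.0, 1.5)}
results = {}
for design_point in itertools.product(*levels.values()):
    results[design_point] = simulate_queue(*design_point)

def main_effect(factor_index):
    """High-minus-low contrast: mean response at the high level minus
    the mean response at the low level of the chosen factor."""
    lo_level, hi_level = list(levels.values())[factor_index]
    hi = [r for p, r in results.items() if p[factor_index] == hi_level]
    lo = [r for p, r in results.items() if p[factor_index] == lo_level]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(0))  # effect of arrival rate on mean waiting time
```

Running all design points once and reading off the effects replaces the trial-and-error search the abstract mentions with a structured experiment.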

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Centralized and distributed methods are two connection-management schemes in wavelength-convertible optical networks. Earlier work reported that the centralized scheme has a lower network blocking probability than the distributed one, so much of the previous work on connection management has compared algorithms only within the distributed scheme or only within the centralized scheme. We believe, however, that the network blocking probability of these two connection-management schemes depends to a great extent on the network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of average connection interarrival time to reservation time. Once that ratio increases beyond a threshold, the two schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the trade-off between them, compare the two methods under different network traffic patterns via simulation, and draw conclusions based on the simulation data.
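For a single link with Poisson connection arrivals, the blocking probability discussed above has a classical closed-form benchmark, the Erlang B formula. The sketch below is illustrative only (channel counts and times are invented, not the paper's scenarios); it shows blocking falling as the interarrival-to-holding-time ratio grows, i.e. as offered load drops.

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability via the numerically stable recurrence
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Offered load A = mean holding (reservation) time / mean interarrival time:
# a larger interarrival-to-holding ratio means a smaller A and less blocking.
holding_time = 10.0  # illustrative units
for interarrival in (1.0, 2.0, 5.0):
    load = holding_time / interarrival
    print(interarrival, round(erlang_b(load, 8), 4))
```

The monotone decrease with the interarrival time mirrors the trend the simulations in the abstract report for both management schemes.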

Relevance: 100.00%

Abstract:

The multi-scale synoptic circulation system in the southeastern Brazil (SEBRA) region is presented using a feature-oriented approach. Prevalent synoptic circulation structures, or "features," are identified from previous observational studies. These features include the southward-flowing Brazil Current (BC), the eddies off Cabo Sao Tome (CST, 22 degrees S) and off Cabo Frio (CF, 23 degrees S), and the upwelling region off CF and CST. Their synoptic water-mass structures are characterized and parameterized to develop temperature-salinity (T-S) feature models. Following the methodology of [Gangopadhyay, A., Robinson, A.R., Haley, P.J., Leslie, W.J., Lozano, C.J., Bisagni, J., Yu, Z., 2003. Feature-oriented regional modeling and simulation (FORMS) in the Gulf of Maine and Georges Bank. Cont. Shelf Res. 23 (3-4), 317-353], a synoptic initialization scheme for feature-oriented regional modeling and simulation (FORMS) of the circulation in this region is then developed. First, the temperature and salinity feature-model profiles are placed on a regional circulation template and objectively analyzed with the available background climatology in the deep region. These initialization fields are then used for dynamical simulations via the Princeton Ocean Model (POM). First applications of this methodology are presented in this paper, including the BC meandering, the BC-eddy interaction, and the meander-eddy-upwelling system (MEUS) simulations. Preliminary validation results include realistic wave growth, eddy formation, and sustained upwelling. Our future plans include the application of these feature models with satellite and in-situ data and advanced data-assimilation schemes for nowcasting and forecasting the SEBRA region. (c) 2008 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

CONTEXT AND OBJECTIVE: Children and adolescents who live in situations of social vulnerability present a series of health problems. Nonetheless, affirmations that sensory and cognitive abnormalities are present remain a matter of controversy. The aim of this study was to investigate aspects of auditory processing by applying the brainstem auditory evoked potential (BAEP) test and behavioral auditory processing tests to children living on the streets, and to compare the results with a control group. DESIGN AND SETTING: Cross-sectional study in the Laboratory of Auditory Processing, School of Medicine, Universidade de São Paulo. METHODS: The auditory processing tests were applied to a group of 27 individuals, subdivided into 11 children (7 to 10 years old) and 16 adolescents (11 to 16 years old), of both sexes, in situations of social vulnerability, and compared with an age-matched control group of 10 children and 11 adolescents without complaints. The BAEP test was also applied to investigate the integrity of the auditory pathway. RESULTS: For both children and adolescents, there were significant differences between the study and control groups in most of the tests applied, with significantly worse performance in the study group, except in the pediatric speech intelligibility test. Only one child had an abnormal result in the BAEP test. CONCLUSIONS: The results showed that the study group (children and adolescents) presented poor performance in the behavioral auditory processing tests, despite their unaltered auditory brainstem pathways, as shown by their normal results in the BAEP test.

Relevance: 100.00%

Abstract:

Parallel kinematic structures are considered very suitable architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional robotics methods for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer-aided. The advantages of this approach are discussed in the modelling of a 3-DOF parallel asymmetric mechanism.
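Kane's equations mentioned above take, in their standard textbook form, the shape below; the symbols follow the usual convention (generalized speeds $u_r$, partial velocities $\partial \mathbf{v}_i/\partial u_r$) rather than any notation specific to this paper.

```latex
\[
F_r + F_r^{*} = 0, \qquad r = 1,\dots,n,
\]
where the generalized active and inertia forces for $N$ rigid bodies are
\[
F_r = \sum_{i=1}^{N}\left(
  \frac{\partial \mathbf{v}_i}{\partial u_r}\cdot\mathbf{R}_i
  + \frac{\partial \boldsymbol{\omega}_i}{\partial u_r}\cdot\mathbf{T}_i\right),
\qquad
F_r^{*} = \sum_{i=1}^{N}\left(
  \frac{\partial \mathbf{v}_i}{\partial u_r}\cdot(-m_i\mathbf{a}_i)
  + \frac{\partial \boldsymbol{\omega}_i}{\partial u_r}\cdot
    \bigl(-\mathbf{I}_i\boldsymbol{\alpha}_i
    - \boldsymbol{\omega}_i\times\mathbf{I}_i\boldsymbol{\omega}_i\bigr)\right).
\]
```

Because the partial velocities are taken with respect to an independent set of generalized speeds, constraint forces drop out automatically, which is what makes the formulation convenient for constrained parallel structures.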

Relevance: 100.00%

Abstract:

The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which have now reached all levels of society. With such pressure for access to communication there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The topic raises a huge number of problems, from the physical layer up to the transport layer; here it is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been fully explored in these years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are intended to be implemented in a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion, and node architecture. Characteristics that a network must assure, such as quality of service and resilience, are also explored at both the node and network level. Results are mainly evaluated via simulation and through analysis.

Relevance: 100.00%

Abstract:

The knee joint is a key structure of the human locomotor system. Knowledge of how each anatomical structure of the knee contributes to its physiological function is of fundamental importance for the development of new prostheses and of novel clinical, surgical, and rehabilitative procedures. In this context, a modelling approach is necessary to estimate the biomechanical function of each anatomical structure during daily living activities. The main aim of this study was to obtain a subject-specific model of the knee joint of a selected healthy subject. In particular, 3D models of the cruciate ligaments and of the tibio-femoral articular contact were proposed and developed using accurate bony geometries and kinematics reliably recorded from the selected subject by means of nuclear magnetic resonance and 3D video-fluoroscopy. In the cruciate ligament model, each ligament was modelled with 25 linear-elastic elements, paying particular attention to the anatomical twisting of the fibres. The devised model was as subject-specific as possible: the geometrical parameters were estimated directly from the experimental measurements, whereas the only mechanical parameter of the model, the elastic modulus, had to be taken from the literature because of the invasiveness of the necessary measurements. The developed model was then employed in simulations of stability tests and of daily living activities, and physiologically meaningful results were always obtained. Nevertheless, the lack of subject-specific mechanical characterization motivated the design and partial development of a novel experimental method to characterize the mechanics of the human cruciate ligaments in living healthy subjects.
Moreover, using the same subject-specific data, the tibio-femoral articular interaction was modelled by investigating the location of the contact point during the execution of daily motor tasks, and the contact area at full extension with and without the subject's whole body weight. Two different approaches were implemented and their efficiency was evaluated; the pros and cons of each approach were discussed in order to suggest future improvements of these methodologies. The final results of this study will contribute useful methodologies for the investigation of the in-vivo function and pathology of the knee joint during the execution of daily living activities. The developed methodologies will thus be useful tools for the development of new prostheses, tools, and procedures, both in research and in diagnostic, surgical, and rehabilitative fields.
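The 25-element, tension-only ligament representation described above can be sketched generically. Everything numeric here is invented for illustration (geometry, rest length, stiffness); the study's subject-specific values came from MR and fluoroscopy data.

```python
import math

def fibre_force(p_origin, p_insertion, rest_length, stiffness):
    """Tension-only linear-elastic element between two attachment points:
    force = k * stretch when taut, zero when slack."""
    length = math.dist(p_origin, p_insertion)
    stretch = length - rest_length
    return stiffness * stretch if stretch > 0 else 0.0  # slack fibres carry no load

# Illustrative 25-fibre bundle (units: metres, N/m) -- not the study's geometry.
n_fibres = 25
forces = [fibre_force((0.0, 0.0, 0.0),         # femoral attachment (made up)
                      (0.001 * i, 0.0, 0.03),  # tibial attachment, fanned out
                      rest_length=0.029,
                      stiffness=5000.0)
          for i in range(n_fibres)]
total = sum(forces)
print(total)
```

Summing the element forces over a kinematic trajectory is what lets such a model report ligament loads during simulated stability tests.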

Relevance: 100.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:

• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power-reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power-management decisions on QoS. Even though operating-system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.

A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more important, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible, and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much-simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time.

Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructure required by developers to deploy applications on the target MPSoC platforms.

Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel-matrix driving circuits, and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.

To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image-control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating the perceived luminance reduction using pixel-by-pixel image-processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image-processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis Overview

The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support. We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling, and voltage/frequency-selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
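As a contrast to the exact, complete-search mapping methods this dissertation develops, the heuristic baseline it discusses can be as simple as greedy list scheduling. The sketch below (task set and durations invented) allocates independent tasks to identical processors; it is fast, but gives no bound on its distance from the optimum, which is exactly the "optimality gap" criticised above.

```python
import heapq

def list_schedule(task_durations, n_processors):
    """Longest-processing-time list scheduling: sort tasks by decreasing
    duration, then assign each to the processor that frees up earliest.
    Returns (makespan, {task: processor})."""
    ready = [(0.0, p) for p in range(n_processors)]  # (free-at time, processor id)
    heapq.heapify(ready)
    assignment = {}
    for task, dur in sorted(task_durations.items(), key=lambda kv: -kv[1]):
        free_at, proc = heapq.heappop(ready)   # earliest-available processor
        assignment[task] = proc
        heapq.heappush(ready, (free_at + dur, proc))
    makespan = max(t for t, _ in ready)
    return makespan, assignment

tasks = {"t1": 4.0, "t2": 3.0, "t3": 3.0, "t4": 2.0}
print(list_schedule(tasks, 2))
```

An exact approach (e.g. the decomposition-based search described above) would instead explore the assignment space completely and certify optimality, at higher computational cost.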

Relevance: 100.00%

Abstract:

Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of a simulation is especially influenced by the domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, which call for a statistical representation of a single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches for handling this increasing problem dimensionality from a numerical-simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal-processing algorithms are exploited to increase grid resolution in the domain regions where the relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RDF).
The impact of these phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping-profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, which boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
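The wavelet-driven refinement criterion behind a method like WAM can be illustrated in one dimension with Haar detail coefficients: large coefficients flag regions where the solution varies rapidly, which is where the mesh should be refined. This is a toy sketch (unnormalised Haar differences, invented signal and threshold), not the 2D/3D method of the thesis.

```python
def haar_details(samples):
    """One level of (unnormalised) Haar detail coefficients: half the
    difference within each adjacent sample pair of an even-length signal."""
    return [(samples[2 * i] - samples[2 * i + 1]) / 2.0
            for i in range(len(samples) // 2)]

def refine_flags(samples, threshold):
    """Mark the sample pairs whose detail coefficient exceeds the threshold,
    i.e. the regions where grid resolution should be increased."""
    return [abs(d) > threshold for d in haar_details(samples)]

# Smooth on the left, a sharp junction-like step on the right:
# only the step region is flagged for refinement.
signal = [1.0, 1.0, 1.0, 1.0, 1.0, 9.0, 9.0, 9.0]
print(refine_flags(signal, 0.5))
```

Applying the same idea per coordinate direction, and re-evaluating the flags after each bias step, gives the dynamic adaptation the abstract describes.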

Relevance: 100.00%

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis reviews the different approaches already developed for modelling problems in developmental biology, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities for tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. It is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of the application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap-gene expression. The correctness of the models is shown by comparing the simulation results with real gene-expression data with spatial and temporal resolution, acquired from free on-line sources.
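Gillespie's direct method, which the simulation engine above builds on, is easy to state for a single reaction channel: draw an exponential waiting time from the total propensity, fire a reaction, repeat. A minimal sketch for the decay reaction A -> 0 (rate constant and counts invented; a real engine handles many species and channels):

```python
import math
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie's direct method for the single reaction A -> 0 with
    propensity k*n. Returns the molecule count remaining at t_end."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        propensity = k * n
        t += rng.expovariate(propensity)  # exponential time to next reaction
        if t > t_end:
            break
        n -= 1                            # fire the (only) reaction channel
    return n

# The run average approaches the deterministic solution n0 * exp(-k * t).
runs = [gillespie_decay(100, 1.0, 1.0, seed=s) for s in range(200)]
mean = sum(runs) / len(runs)
print(mean, 100 * math.exp(-1.0))
```

With several channels, the direct method additionally picks which reaction fires in proportion to its share of the total propensity; the "many-species/many-channels" optimisation mentioned above concerns making that selection and the propensity updates efficient.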

Relevance: 100.00%

Abstract:

Traffic volumes are forecast to increase in the future, while space and funding for building additional roads are scarce. Existing capacity must therefore be used more sensibly through better traffic control, e.g. via traffic management systems. This requires spatially resolved data, i.e. data reflecting the areal distribution of traffic, which are currently lacking. Until now, traffic data could only be collected where fixed measuring installations exist, and these cannot supply the missing data. Remote sensing systems offer the possibility of capturing such data over wide areas with a view from above. After decades of experience with remote sensing methods for detecting and studying a wide variety of phenomena on the Earth's surface, this methodology is now being applied, within a pilot project, to the field of traffic. Since the late 1990s, traffic has been observed with airborne optical and infrared imaging systems. However, under poor weather conditions, and particularly under cloud cover, no usable imagery can be acquired. With an imaging radar method, data can be collected independently of weather, daylight, and cloud conditions. This thesis investigates to what extent traffic data can be acquired, processed, and meaningfully applied using airborne synthetic aperture radar (SAR). Not only are the new technique of along-track interferometry (ATI) and the processing of the acquired traffic data presented in detail; a dataset produced with this methodology is also compared with a traffic simulation and evaluated. Finally, an outlook on future developments in radar remote sensing for traffic data acquisition is given.

Relevance: 100.00%

Abstract:

BACKGROUND: Many studies showing effects of traffic-related air pollution on health rely on self-reported exposure, which may be inaccurate. We estimated the association between self-reported exposure to road traffic and respiratory symptoms in preschool children, and investigated whether the effect could have been caused by reporting bias. METHODS: In a random sample of 8700 preschool children in Leicestershire, UK, exposure to road traffic and respiratory symptoms were assessed by a postal questionnaire (response rate 80%). The association between traffic exposure and respiratory outcomes was assessed using unconditional logistic regression and conditional regression models (matching by postcode). RESULTS: Prevalence odds ratios (95% confidence intervals) for self-reported road traffic exposure, comparing the categories 'moderate' and 'dense', respectively, with 'little or no' were for current wheezing: 1.26 (1.13-1.42) and 1.30 (1.09-1.55); chronic rhinitis: 1.18 (1.05-1.31) and 1.31 (1.11-1.56); night cough: 1.17 (1.04-1.32) and 1.36 (1.14-1.62); and bronchodilator use: 1.20 (1.04-1.38) and 1.18 (0.95-1.46). Matched analysis only comparing symptomatic and asymptomatic children living at the same postcode (thus exposed to similar road traffic) showed similar ORs, suggesting that parents of children with respiratory symptoms reported more road traffic than parents of asymptomatic children. CONCLUSIONS: Our study suggests that reporting bias could explain some or even all the association between reported exposure to road traffic and disease. Over-reporting of exposure by only 10% of parents of symptomatic children would be sufficient to produce the effect sizes shown in this study. Future research should be based only on objective measurements of traffic exposure.
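The prevalence odds ratios and confidence intervals quoted in the results follow directly from 2x2 table counts, with a Wald interval on the log-odds scale. A minimal sketch (the counts below are invented, not the Leicestershire data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Prevalence odds ratio for a 2x2 table
    (a = exposed cases, b = exposed non-cases,
     c = unexposed cases, d = unexposed non-cases)
    with a Wald 95% confidence interval computed on the log scale."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lower, upper

# Illustrative counts only: wheezing vs exposure to dense road traffic.
print(odds_ratio_ci(240, 760, 200, 800))
```

An interval whose lower bound exceeds 1 is what the paper reports as a significant association; the matched analysis then asks whether that association survives when exposure is held constant within postcodes.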

Relevance: 100.00%

Abstract:

This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine-cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat-transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters of the thermal storage system.
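The stored-energy bookkeeping for such a PCM system is the sum of the sensible heat in each phase plus the latent heat of melting. A minimal sketch with invented, paraffin-like property values (not the thesis's materials or its melt-front-tracking solver):

```python
def stored_energy(mass, cp_solid, cp_liquid, latent_heat, t_init, t_melt, t_final):
    """Total thermal energy absorbed by a PCM heated from t_init (solid),
    through complete melting at t_melt, up to t_final (liquid).
    Result in joules when inputs are in SI units (kg, J/(kg K), J/kg, K)."""
    assert t_init <= t_melt <= t_final
    sensible_solid = mass * cp_solid * (t_melt - t_init)    # heat the solid
    latent = mass * latent_heat                             # melt at t_melt
    sensible_liquid = mass * cp_liquid * (t_final - t_melt) # heat the liquid
    return sensible_solid + latent + sensible_liquid

# 10 kg of a paraffin-like PCM heated from 20 C through melting at 60 C to 80 C.
print(stored_energy(10.0, 2000.0, 2200.0, 200_000.0, 20.0, 60.0, 80.0))
```

The latent term dominating the two sensible terms is what makes PCM storage attractive; the thesis's contribution is resolving *when* that energy flows, by tracking the melt front in time.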

Relevance: 100.00%

Abstract:

Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernable at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic situations are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments nor have different approaches been compared in non-ideal settings. A new approach for image-processing/synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. 
Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
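The coincidence-raster synthesis described above reduces to a cell-wise vote across the binary lineament interpretations, thresholded at the agreement level (at least 4 of the 12 in the study). A toy sketch with three invented 2x3 interpretations:

```python
def coincidence_raster(interpretations):
    """Cell-wise count of how many binary lineament interpretations agree
    (all rasters must share the same dimensions)."""
    rows, cols = len(interpretations[0]), len(interpretations[0][0])
    return [[sum(interp[r][c] for interp in interpretations) for c in range(cols)]
            for r in range(rows)]

def composite(coincidence, min_agree=4):
    """Keep only the cells where at least min_agree interpretations coincide."""
    return [[1 if v >= min_agree else 0 for v in row] for row in coincidence]

# Three toy interpretations; the study combined 12 and used min_agree=4.
interps = [[[1, 0, 1], [0, 1, 0]],
           [[1, 0, 0], [0, 1, 0]],
           [[1, 1, 0], [0, 1, 1]]]
print(composite(coincidence_raster(interps), min_agree=3))
```

Restricting the composite map to high-agreement cells is what filters out lineaments that only one processing chain "sees", which matters most in exactly the non-ideal, cluttered terrain the abstract describes.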