886 results for Search-based technique
Abstract:
We describe a technique for interactive rendering of diffraction effects produced by biological nanostructures, such as snake-skin surface gratings. Our approach uses imagery from atomic force microscopy that accurately captures the geometry of the nanostructures responsible for structural colouration, that is, colouration due to wave interference, in a variety of animals. We develop a rendering technique that constructs bidirectional reflection distribution functions (BRDFs) directly from the measured data and leverages pre-computation to achieve interactive performance. We demonstrate results of our approach on various shapes of surface-grating nanostructures. Finally, we evaluate the accuracy of our pre-computation-based technique and compare it to a reference BRDF construction technique.
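The sketch below illustrates one plausible form of the pre-computation step: building a table of far-field diffraction lobes from a measured height map using scalar Fourier-optics diffraction. The function names, the synthetic grating and the per-wavelength lookup are illustrative assumptions, not the paper's actual BRDF construction.

```python
import numpy as np

def diffraction_lobes(height_map, wavelength, pixel_pitch):
    """Far-field diffraction pattern of a reflective surface grating.

    height_map  : 2-D array of surface heights (metres), e.g. from an AFM scan
    wavelength  : light wavelength (metres)
    pixel_pitch : lateral sample spacing of the scan (metres)

    Returns (intensity, fx, fy): normalised far-field power and the spatial
    frequencies that map to outgoing directions via sin(theta) = wavelength * f.
    """
    # Reflection doubles the optical path, hence the factor of 2 in the phase term.
    phase = np.exp(1j * (4.0 * np.pi / wavelength) * height_map)
    field = np.fft.fftshift(np.fft.fft2(phase))
    intensity = np.abs(field) ** 2
    intensity /= intensity.sum()                      # normalise total energy
    fy = np.fft.fftshift(np.fft.fftfreq(height_map.shape[0], d=pixel_pitch))
    fx = np.fft.fftshift(np.fft.fftfreq(height_map.shape[1], d=pixel_pitch))
    return intensity, fx, fy

if __name__ == "__main__":
    x = np.linspace(0, 10e-6, 256)                    # 10 micron scan, 256 samples
    h = 50e-9 * np.sin(2 * np.pi * x / 1e-6)          # synthetic 1 micron grating
    height_map = np.tile(h, (256, 1))
    # Pre-computation idea: tabulate lobes per wavelength once, offline.
    table = {lam: diffraction_lobes(height_map, lam, x[1] - x[0])[0]
             for lam in (450e-9, 550e-9, 650e-9)}
```

At render time a BRDF query would then only interpolate the stored per-wavelength tables, which is the kind of pre-computation that makes interactive frame rates plausible.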
Abstract:
Asynchronous level-crossing sampling analog-to-digital converters (ADCs) are known to be more energy efficient and to produce fewer samples than their equidistantly sampling counterparts. However, as the required threshold voltage is lowered, the number of samples and, in turn, the data rate and the energy consumed by the overall system increase. In this paper, we present a cubic Hermitian vector-based technique for online compression of asynchronously sampled electrocardiogram signals. The proposed method provides computationally efficient data compression: the algorithm has O(n) complexity and is thus well suited to asynchronous ADCs. It requires no data buffering, maintaining the energy advantage of asynchronous ADCs. The method achieves a compression ratio of up to 90%, with achievable percentage root-mean-square difference ratios as low as 0.97, and preserves the superior feature-to-feature timing accuracy of asynchronously sampled signals. These advantages are achieved in a computationally efficient manner because the algorithm's boundary parameters for the signals are extracted a priori.
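As a rough illustration of cubic-Hermite-style compression of (time, value) samples, the sketch below grows a segment greedily and stores only its endpoints while the Hermite fit stays within an error bound. The segmentation rule, tolerance and slope estimates are simplifying assumptions, not the paper's exact parameterisation.

```python
import numpy as np

def hermite(t, t0, t1, v0, v1, m0, m1):
    """Evaluate a cubic Hermite segment at times t (standard basis functions)."""
    s = (t - t0) / (t1 - t0)
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * v0 + h10 * (t1 - t0) * m0 + h01 * v1 + h11 * (t1 - t0) * m1

def compress(times, values, tol):
    """Greedy compression: extend a segment until the Hermite fit through its
    endpoints (with finite-difference slopes) deviates from the data by > tol."""
    segments, start = [], 0
    for end in range(2, len(times)):
        t0, t1 = times[start], times[end]
        v0, v1 = values[start], values[end]
        m0 = (values[start + 1] - v0) / (times[start + 1] - t0)
        m1 = (v1 - values[end - 1]) / (t1 - times[end - 1])
        fit = hermite(times[start:end + 1], t0, t1, v0, v1, m0, m1)
        if np.max(np.abs(fit - values[start:end + 1])) > tol:
            segments.append((t0, times[end - 1], v0, values[end - 1]))
            start = end - 1
    segments.append((times[start], times[-1], values[start], values[-1]))
    return segments   # each tuple replaces many raw samples
```

Reconstruction would simply re-evaluate the Hermite polynomial between stored endpoints, so feature timing (the stored endpoint times) is preserved exactly.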
Abstract:
The municipality is regarded as a space whose inhabitants share not only the territory but also its problems and resources. The municipal institution, as local government, is the arena in which decisions about the territory that affect its inhabitants are made. The actors may be officials, employees and the community (individuals and organised NGOs); all contribute their knowledge and values, but they have different interests and different timescales. Linked to these decisions, we find that the way territorial information is managed is decisive if the aim is to pursue actions with positive impact that are environmentally sustainable and durable over time. This work examines three municipalities: San Salvador de Jujuy, the provincial capital located in the Valles Templados; San Pedro de Jujuy, the main municipality of the Yungas region; and Tilcara, in the Quebrada de Humahuaca. The contribution of Territorial Intelligence, through the OIDTe observatory, makes it possible to analyse how information is managed, especially through the use of information and communication technologies (the municipal web page, computer equipment in the offices, strategies for communication and engagement with the population) and through the organisation of the administrative structures (organisational chart) through which municipal information circulates, with the enriching participation of multidisciplinary teams at the different stages. Starting from a diagnosis, the aim is to generate strategies for introducing innovations together with the municipal actors themselves, based on the situations and cultural practices of each place, and incorporating the conceptual frameworks of Territorial Intelligence. In this sense, by promoting understanding between institutional actors and society, the OIDTe facilitates the coordination of different interests and fosters decision-making by agreement. Likewise, the Portulano method can guide the introduction of innovations in the coordination of cartographic information, so that the different offices can complement each other's contributions and improve communication beyond the institution. In the diagnostic phase, interviews with key informants were conducted and a workshop was held with permanent technical staff and officials from the areas that handle territorial and planning information. Given the importance of the installed human-resource capacity, the level of education and training of the permanent staff of each area was also analysed.
Abstract:
The main contribution of this thesis is the proposal and evaluation of an automatic translation system for improving communication between hearing and deaf people. The system comprises two subsystems: a translator from spoken Spanish into written Spanish Sign Language (LSE, Lengua de Signos Española), whose output is then represented by an animated avatar, and a Spanish speech generator driven by a sequence of signs written as glosses (capitalised words that represent the signs). The first subsystem consists of a speech recogniser, a language translation module and an avatar that renders the sign sequence in LSE. The second is made up of a graphical interface for specifying the sign sequence as glosses, a language translation module and a text-to-speech converter. For the development of the translation system, a parallel corpus of 7,696 Spanish sentences with their LSE translations was first generated. These sentences belong to four application domains: renewal of the National Identity Document, renewal of the driver's licence, an urban bus information service and a hotel reception. In addition, a database of more than 1,000 signs, stored in four different sign-writing systems, was generated. Secondly, an automatic translation module was developed that integrates two translation techniques in a hierarchical structure: the first memory-based and the second statistical. A pre-processing module for the Spanish sentences was also implemented; incorporated into the statistical translation module, it significantly improves the translation rate. The thesis also improves the LSE-to-speech translation interface: new features enhance its usability, and an SMS (Short Message Service) language-to-Spanish translator has been integrated so that the sequence to translate can be specified in SMS language as well as by a sequence of glosses. The proposed translation system was evaluated with real users in two application domains: a bus information service of the Empresa Municipal de Transportes de Madrid and the reception of the Hotel Intur Palacio San Martín in Madrid. Deaf people and employees of both services took part in the evaluation, from which objective measurements (obtained automatically by the system) and subjective measurements (from user questionnaires) were extracted. The results were very positive: user feedback validated the performance of the translation system and provided valuable information for future work. Finally, drawing on the integration of the modules of the two translation systems (speech-to-LSE and LSE-to-speech), the evaluation results and the experience gained throughout the process, another important contribution of this thesis is a methodology for developing speech-to-sign-language translation systems in both directions of communication. The methodology details the steps to follow when developing the translation system for a new application domain and describes how to design each system module to improve its flexibility, so that adapting the developed system to a new domain becomes easier. The thesis also analyses techniques for selecting sentences from an out-of-domain parallel corpus to train the translation model when sentences from a new application domain must be translated, as well as techniques for selecting which sentences of the new domain are most worth translating by LSE experts for training the translation model. The goal is to achieve a good translation rate with as few sentences as possible.
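The hierarchical combination of a memory-based pass with a statistical fallback can be pictured with the toy sketch below. The translation-memory entries, the word-level "phrase table" and the gloss outputs are invented for illustration and do not come from the thesis corpus.

```python
# Hierarchical translation sketch: exact translation-memory match first,
# then a (very) simplified statistical phrase-by-phrase fallback.
from typing import Dict, List

translation_memory: Dict[str, str] = {          # toy example data
    "buenos dias": "BUENOS-DIAS",
    "quiero renovar el carnet": "YO CARNET RENOVAR QUERER",
}
phrase_table: Dict[str, str] = {                # toy model: best gloss per word
    "quiero": "QUERER", "renovar": "RENOVAR", "el": "", "carnet": "CARNET",
    "autobus": "AUTOBUS", "cuando": "CUANDO", "llega": "LLEGAR",
}

def translate(sentence: str) -> List[str]:
    normalised = sentence.lower().strip()
    if normalised in translation_memory:                 # memory-based pass
        return translation_memory[normalised].split()
    glosses = [phrase_table.get(w, w.upper())            # statistical fallback
               for w in normalised.split()]
    return [g for g in glosses if g]                     # drop empty glosses

print(translate("Quiero renovar el carnet"))   # memory hit
print(translate("Cuando llega el autobus"))    # fallback path
```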
Abstract:
Integrity assurance of configuration data has a significant impact on the reliability of microcontroller-based systems. This is especially true when running event-driven applications whose behavior is tightly coupled to this kind of data. This work proposes a new hybrid technique that combines hardware and software resources for detecting and recovering from soft errors in system configuration data. Our approach is based on a common built-in microcontroller resource (a timer) working jointly with a software-based technique that periodically refreshes the configuration data. The experiments demonstrate that non-destructive single-event effects can be effectively mitigated with reduced overheads. Results show an important increase in fault coverage for SEUs and SETs, of about one order of magnitude.
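A host-side simulation of the periodic-refresh idea is sketched below: a timer tick scrubs the configuration registers against a protected golden copy. The register names, the CRC choice and the injection model are illustrative assumptions; the actual technique runs on-chip with a hardware timer.

```python
import random
import zlib

GOLDEN = {"TMR0_CTRL": 0x81, "UART_BAUD": 0x67, "ADC_CFG": 0x1C}
config = dict(GOLDEN)                       # "live" registers exposed to upsets
golden_crc = zlib.crc32(bytes(GOLDEN.values()))

def inject_seu():
    """Flip one random bit in one register (a non-destructive single-event upset)."""
    reg = random.choice(list(config))
    config[reg] ^= 1 << random.randrange(8)

def timer_isr():
    """Periodic scrub: detect corruption via CRC, then refresh from the golden copy."""
    if zlib.crc32(bytes(config.values())) != golden_crc:
        config.update(GOLDEN)               # recovery: rewrite all configuration data
        return True                         # error detected and corrected
    return False

inject_seu()
print("corrected:", timer_isr(), "consistent:", config == GOLDEN)
```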
Abstract:
Adult neural progenitors have been isolated from diverse regions of the CNS using methods that primarily involve the enzymatic digestion of tissue pieces; however, interpretation of these experiments can be complicated by the loss of anatomical resolution during the isolation procedures. We have developed a novel, explant-based technique for the isolation of neural progenitors. Living CNS regions were sectioned using a vibratome, and small, well-defined discs of tissue were punched out. When cultured, explants from the cortex, hippocampus, cerebellum, spinal cord, hypothalamus and caudate nucleus all robustly gave rise to proliferating progenitors. These progenitors were similar in behaviour and morphology to previously characterised multipotent hippocampal progenitor lines. Clones from all regions examined could proliferate from single cells and give rise to secondary neurospheres at a low but consistent frequency. Immunostaining demonstrated that clonal cortical progenitors were able to differentiate into both neurons and glial cells, indicating their multipotent characteristics. These results demonstrate that it is possible to isolate anatomically resolved adult neural progenitors from small amounts of tissue throughout the CNS, thus providing a tool for investigating the frequency and characteristics of progenitor cells from different regions. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives of the individual elements in the RF resonator. The first is an iterative optimization technique whose kernel evaluates the RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of the resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the iterative optimization method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for determining multi-element RF drives in a variety of applications.
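A minimal sketch of the regularization-based step follows, assuming a pre-computed complex sensitivity matrix S (one column per rung, one row per voxel of the imaging plane) and a uniform target field; the random stand-in data and the Tikhonov weight are illustrative, not values from the paper.

```python
import numpy as np

def regularized_drives(S, b_target, lam):
    """Solve min ||S w - b_target||^2 + lam ||w||^2 for the complex rung drives w.

    S        : (n_voxels, n_rungs) complex sensitivity matrix (B1 per unit drive of each rung)
    b_target : (n_voxels,) desired complex B1 field, e.g. uniform magnitude
    lam      : Tikhonov weight trading drive power against homogeneity
    """
    A = S.conj().T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(A, S.conj().T @ b_target)

# Illustrative use with random data standing in for FDTD-derived sensitivities.
rng = np.random.default_rng(0)
S = rng.standard_normal((500, 16)) + 1j * rng.standard_normal((500, 16))
w = regularized_drives(S, np.ones(500, dtype=complex), lam=1e-2)
b1 = np.abs(S @ w)
print(f"coefficient of variation of |B1|: {np.std(b1) / np.mean(b1):.3f}")
```

Because the system is linear in the drives, the same S can be reused for different targets, which is what makes this approach efficient relative to the iterative scheme.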
Abstract:
Nonlinear, non-stationary signals are commonly found in a variety of disciplines such as biology, medicine, geology and financial modeling. The complexity (e.g. nonlinearity and non-stationarity) of such signals and their low signal-to-noise ratios often make it challenging to use them in critical applications. In this paper we propose a new neural network-based technique to address those problems. We show that a feed-forward, multi-layered neural network can conveniently capture the states of a nonlinear system in its connection weight space after a process of supervised training. The performance of the proposed method is investigated via computer simulations.
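The general idea can be sketched as follows: a small feed-forward network is trained, under supervision, to predict the next sample of a noisy nonlinear signal from a short window of past samples, so that the system dynamics end up encoded in the connection weights. The synthetic signal, window length and scikit-learn model are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 2000)
# Nonlinear, non-stationary signal with additive noise (low SNR toy example).
signal = np.sin(t) * np.sin(3.1 * t**1.2) + 0.1 * rng.standard_normal(t.size)

window = 10
X = np.array([signal[i:i + window] for i in range(signal.size - window)])
y = signal[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])          # supervised training; the "state" lives in the weights
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))
```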
Abstract:
ProxiMAX randomisation achieves saturation mutagenesis of contiguous codons without degeneracy or bias. Offering an alternative to trinucleotide phosphoramidite chemistry, it uses nothing more sophisticated than unmodified oligonucleotides and standard molecular biology reagents, and as such requires no specialised chemistry, reagents or equipment. When particular residues are known to affect protein activity or specificity, their combinatorial replacement with all 20 amino acids, or a subset thereof, can provide a rapid route to generating proteins with desirable characteristics. Conventionally, saturation mutagenesis replaced key codons with degenerate ones. Although simple to perform, that procedure resulted in unnecessarily large libraries, termination codons and inherently uneven amino acid representation. ProxiMAX randomisation is an enzyme-based technique that can encode unbiased representation of all or selected amino acids, or can provide required codons in pre-defined ratios. Each saturated position can be defined independently of the others. ProxiMAX randomisation is achieved via saturation cycling: an iterative process comprising blunt-end ligation, amplification and digestion with a Type IIS restriction enzyme. We demonstrate both unbiased saturation of a short 6-mer peptide and saturation of a hypervariable region of an scFv antibody fragment, in which 11 contiguous codons are saturated with selected codons in pre-defined ratios. As such, ProxiMAX randomisation is particularly relevant to antibody engineering. The development of ProxiMAX randomisation from concept to reality is described.
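To make "required codons in pre-defined ratios" concrete, the small sketch below simulates the expected amino-acid composition at one saturated position when codons are supplied in designed proportions. The ratio table and codon choices are toy values, and this is a composition simulation only, not part of the wet-lab saturation-cycling protocol.

```python
import random
from collections import Counter

codon_ratios = {"GCT": 0.25, "TGC": 0.25, "GAA": 0.30, "TGG": 0.20}  # designed ratios (toy)
codon_to_aa = {"GCT": "A", "TGC": "C", "GAA": "E", "TGG": "W"}

# Sample a large library at this position and tally amino-acid frequencies.
library = random.choices(list(codon_ratios), weights=list(codon_ratios.values()), k=10_000)
composition = Counter(codon_to_aa[c] for c in library)
for aa, n in sorted(composition.items()):
    print(f"{aa}: {n / len(library):.2%}")   # approaches the designed ratios, no stop codons
```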
Abstract:
This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. The algorithm further identified stems, the shortest text strings found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base. The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
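The Edit Distance step can be sketched as standard Levenshtein matching of a (possibly misspelled) mind-map term against a candidate vocabulary; the vocabulary below is illustrative, not GRiST data, and the thesis's stem-identification refinements are not shown.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def best_correction(term: str, vocabulary: list[str]) -> str:
    """Pick the vocabulary word most similar to a mind-map term."""
    return min(vocabulary, key=lambda w: edit_distance(term, w))

print(best_correction("anxeity", ["anxiety", "agitation", "anger"]))   # -> "anxiety"
```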
Abstract:
Five models delineating the person-situation fit controversy were developed and tested. Hypotheses were tested to determine the linkages between vision congruence, empowerment, locus of control, job satisfaction, organizational commitment, and employee performance. Vision was defined as a mental image of a possible and desirable future state of the organization. Data were collected from 213 employees in a major flower import company. Participants were from various organizational levels and ethnic backgrounds. The data collection procedure consisted of three parts. First, a profile analysis instrument, developed using a Q-sort-based technique, was used to measure the vision congruence between the CEO and each employee. Second, employees completed a survey instrument that included scales measuring empowerment, locus of control, job satisfaction, organizational commitment, and social desirability. Third, supervisor performance ratings were gathered from employee files. Data analysis used Kendall's tau to measure the correlation between the CEO's and each employee's vision. Path analyses were conducted using the EQS structural equation program to test the five theoretical models for goodness of fit. Regression analysis was employed to test whether locus of control acted as a moderator variable. The results showed that vision congruence is significantly related to job satisfaction and employee commitment, and that perceived empowerment acts as an intervening variable affecting employee outcomes. The study also found that people with an internal locus of control were more likely to feel empowered than those with external beliefs. Implications of these findings for both researchers and practitioners are discussed, and suggestions for future research directions are provided.
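The congruence measurement itself reduces to a rank correlation between two Q-sorts, as in the small sketch below; the example rankings are made up, and SciPy's kendalltau is used as a stand-in for whatever software the study employed.

```python
# Vision congruence as rank correlation between the CEO's Q-sort and an employee's.
from scipy.stats import kendalltau

ceo_ranking      = [1, 2, 3, 4, 5, 6, 7, 8]   # illustrative Q-sort of vision statements
employee_ranking = [2, 1, 3, 5, 4, 6, 8, 7]

tau, p_value = kendalltau(ceo_ranking, employee_ranking)
print(f"vision congruence (Kendall's tau) = {tau:.2f}, p = {p_value:.3f}")
```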
Abstract:
Distributed applications are exposed as reusable components that are dynamically discovered and integrated to create new applications. These new applications, in the form of aggregate services, are vulnerable to failure due to the autonomous and distributed nature of their integrated components. This vulnerability creates the need for adaptability in aggregate services. The need for adaptation is accentuated for complex long-running applications such as those found in scientific Grid computing, where distributed computing nodes may participate to solve computation- and data-intensive problems. Such applications integrate services for coordinated problem solving in areas such as Bioinformatics. In such applications, when a constituent service fails, the application fails, even though other nodes could substitute for the failed service. This concern is not addressed in the specification of high-level composition languages such as the Business Process Execution Language (BPEL). We propose an approach to transparently autonomizing existing BPEL processes in order to make them modifiable at runtime and more resilient to failures in their execution environment. Because the adaptive behavior is introduced transparently, adaptation preserves the original business logic of the aggregate service and does not tangle the code for adaptive behavior with that of the aggregate service. The major contributions of this dissertation are as follows. First, we assessed the effectiveness of BPEL language support in developing adaptive mechanisms; as a result, we identified the strengths and limitations of BPEL and devised strategies to address those limitations. Second, we developed a technique to enhance existing BPEL processes transparently in order to support dynamic adaptation, proposing a framework that uses transparent shaping and generative programming to make BPEL processes adaptive. Third, we developed a technique to dynamically discover and bind to substitute services; our evaluation showed that dynamic utilization of components improves the flexibility of adaptive BPEL processes. Fourth, we developed an extensible policy-based technique to specify how to handle exceptional behavior, along with a generic component that introduces adaptive behavior for multiple BPEL processes. Fifth, we identify ways to apply our work to facilitate adaptability in composite Grid services.
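The dynamic-substitution idea can be pictured with the sketch below, in which partner services are plain callables and a policy maps exception types to recovery actions. This is an illustrative stand-in, not the dissertation's BPEL/transparent-shaping implementation; the service names and policy vocabulary are assumptions.

```python
from typing import Callable, Dict, List

class AdaptiveProxy:
    """Invocation proxy: on failure, consult the policy and, if it says
    "substitute", rebind to the next discovered equivalent service."""

    def __init__(self, primary: Callable, substitutes: List[Callable],
                 policy: Dict[type, str]):
        self.service = primary
        self.substitutes = list(substitutes)     # discovered alternatives, in preference order
        self.policy = policy                     # exception type -> "substitute" | "fail"

    def invoke(self, *args):
        while True:
            try:
                return self.service(*args)
            except Exception as exc:
                if self.policy.get(type(exc)) == "substitute" and self.substitutes:
                    self.service = self.substitutes.pop(0)   # dynamic rebinding
                else:
                    raise                                    # unhandled: propagate the fault

def flaky_blast(seq):        # stands in for a failing Bioinformatics partner service
    raise ConnectionError("node down")

def backup_blast(seq):
    return f"alignment({seq})"

proxy = AdaptiveProxy(flaky_blast, [backup_blast], {ConnectionError: "substitute"})
print(proxy.invoke("ACGT"))   # transparently served by the substitute
```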
Abstract:
The promise of Wireless Sensor Networks (WSNs) is the autonomous collaboration of a collection of sensors to accomplish goals that a single sensor cannot achieve. Basically, sensor networking serves a range of applications by providing raw data as the foundation for further analysis and action. Imprecision in the collected data can severely mislead the decision-making process of sensor-based applications, resulting in ineffectiveness or failure of the application objectives. Because inherent WSN characteristics commonly corrupt the raw sensor readings, many research efforts attempt to improve the accuracy of corrupted, or "dirty", sensor data; the dirty data need to be cleaned or corrected. However, existing data-cleaning solutions restrict themselves to static WSNs, where deployed sensors rarely move during operation. Nowadays, many emerging applications relying on WSNs need sensor mobility to enhance application efficiency and usage flexibility: the locations of deployed sensors need to be dynamic, and each sensor functions independently and contributes its own resources. Sensors mounted on vehicles for monitoring traffic conditions are one prospective example. Sensor mobility causes transient changes in network topology and in the correlation among sensor streams. Because they rely on static relationships among sensors, existing methods for cleaning sensor data in static WSNs are invalid in such mobile scenarios; a data-cleaning solution that accounts for sensor movement is therefore actively needed. This dissertation aims to improve the quality of sensor data by considering the consequences of the various trajectory relationships of autonomous mobile sensors in the system. First, we address the dynamic network topology caused by sensor mobility: the concept of a virtual sensor is presented and used for spatio-temporal selection of neighboring sensors to help clean sensor data streams, one of the first methods to clean data in mobile sensor environments. We also study the mobility pattern of moving sensors relative to the boundaries of sub-areas of interest, and develop a belief-based analysis to determine reliable sets of neighboring sensors that improve cleaning performance, especially when node density is relatively low. Finally, we design a novel sketch-based technique to clean data from internal sensors where spatio-temporal relationships among sensors cannot lead to data correlations among sensor streams.
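A small sketch of neighbour-based cleaning under mobility follows, assuming each reading carries a position and a timestamp: neighbours within a spatio-temporal window contribute a proximity-weighted estimate that replaces a reading only when it deviates strongly. The window sizes, weighting and threshold are illustrative choices, not the dissertation's virtual-sensor, belief-based or sketch-based algorithms.

```python
import math
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: int
    x: float       # position at sampling time (mobile sensor)
    y: float
    t: float       # timestamp
    value: float

def clean(target: Reading, others: list[Reading],
          max_dist: float = 50.0, max_dt: float = 5.0, outlier_tol: float = 3.0) -> float:
    """Correct a possibly dirty reading using spatio-temporally close neighbours."""
    weights, weighted_sum = 0.0, 0.0
    for r in others:
        d = math.hypot(r.x - target.x, r.y - target.y)
        dt = abs(r.t - target.t)
        if d <= max_dist and dt <= max_dt:            # spatio-temporal neighbour selection
            w = 1.0 / (1.0 + d + dt)                  # closer in space/time -> more trusted
            weights += w
            weighted_sum += w * r.value
    if weights == 0.0:
        return target.value                           # no neighbours: keep the raw reading
    estimate = weighted_sum / weights
    # Replace the reading only if it deviates strongly from the neighbourhood estimate.
    return estimate if abs(target.value - estimate) > outlier_tol else target.value

dirty = Reading(1, 0.0, 0.0, 100.0, 42.0)
neighbours = [Reading(2, 10.0, 5.0, 101.0, 21.3), Reading(3, 20.0, -8.0, 99.5, 20.8)]
print(clean(dirty, neighbours))    # the outlying 42.0 is replaced by roughly 21
```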