932 results for Interoperability of Applications
Abstract:
Android is becoming ubiquitous and currently has the largest share of the mobile OS market, with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware, which is becoming more sophisticated in order to evade state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques to avoid detection, and this may defeat static-analysis-based approaches. Dynamic analysis, on the other hand, can be used to overcome this limitation. Hence, in this paper we propose DynaLog, a dynamic-analysis-based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open source tools to extract and log high-level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications, demonstrating its capabilities for effective analysis and detection of malicious applications.
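As an illustration of the kind of feature extraction such a framework performs, the sketch below counts logged behaviours from dynamic-analysis output. The line format and tag names are hypothetical; the abstract does not specify DynaLog's actual log layout.

```python
from collections import Counter

def extract_features(log_lines):
    """Count occurrences of each logged behaviour tag.

    The "TAG; detail" line format used here is hypothetical -- DynaLog's
    real output format is not specified in the abstract.
    """
    counts = Counter()
    for line in log_lines:
        tag, _, detail = line.partition(";")
        if tag in {"API_CALL", "INTENT", "PERMISSION"}:
            counts[f"{tag}:{detail.strip()}"] += 1
    return dict(counts)

sample_log = [
    "API_CALL; android.telephony.SmsManager.sendTextMessage",
    "API_CALL; android.telephony.SmsManager.sendTextMessage",
    "INTENT; android.intent.action.BOOT_COMPLETED",
]
features = extract_features(sample_log)
# A high count of sendTextMessage calls is a typical SMS-malware indicator.
print(features["API_CALL:android.telephony.SmsManager.sendTextMessage"])  # 2
```

Feature vectors of this kind can then be fed to a classifier or inspected manually to isolate suspicious apps.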
Abstract:
Boron-doped diamond is a promising electrode material for a number of applications, offering efficient carrier transport, high stability of the electrolytic performance over time, the possibility of dye-sensitization with photosensitive molecules, etc. It can be functionalized with electron-donor molecules, like phthalocyanines or porphyrins, for the development of light-energy conversion systems. For effective attachment of such molecules, the diamond surface has to be modified by plasma- or photo-chemical processes in order to achieve the desired surface termination. In the present work, the surface modifications of undoped and boron-doped nanocrystalline diamond (NCD) films and their functionalization with various phthalocyanines (Pcs) were investigated. The NCD films were prepared by hot filament chemical vapor deposition (HFCVD) on silicon substrates and were thereafter subjected to modifications with O2 or NH3 plasmas or UV/O3 treatments to exchange the H-termination of the as-grown surface. The effectiveness of the modifications and their stability over time during storage under different ambient conditions were studied by contact angle measurements and X-ray photoelectron spectroscopy (XPS). Furthermore, the surface roughness after the modifications was investigated with atomic force microscopy (AFM) and compared to that of as-grown samples in order to establish whether etching of the surface occurred during the treatment. The as-grown and the modified NCD surfaces were exposed to phthalocyanines with different metal centers (Ti, Cu, Mn) or with different side chains. The results of the Pc grafting were investigated by XPS and Raman spectroscopy. XPS revealed the presence of nitrogen stemming from the Pc molecules and traces of the respective metal atoms, with ratios close to those in the applied Pc.
In a next step, Raman spectra of Ti-Pc, Cu-Pc and Mn-Pc were obtained with two different excitation wavelengths (488 and 785 nm) from droplet samples on Si after evaporation of the solvent, in order to establish their Raman fingerprints. The major differences in the spectra were assigned to the effect of the size of the metal ion on the structure of the phthalocyanine ring. The spectra obtained were used as references for the Raman spectra of NCD surfaces grafted with Pc. Finally, selected boron-doped NCD samples were used after their surface modification and functionalization with Pc for the preparation of electrodes, which were tested in a photoelectrochemical cell with a Pt counter electrode and an Ag/AgCl reference electrode. The light sources and electrolytes were varied to establish their influence on the performance of the dye-sensitized diamond electrodes. Cyclic voltammetry measurements revealed a broad electrochemical potential window and high stability of the electrodes after several cycles. The open circuit potential (OCP) measurements, performed in the dark and under illumination, showed fast responses of the electrodes to the illumination, resulting in photocurrent generation.
Abstract:
In recent years, nanoscience and nanotechnology have emerged as one of the most important and exciting frontier areas of research interest in almost all fields of science and technology. This technology paves the way for many breakthrough changes in the near future in many areas of advanced technological applications. Nanotechnology is an interdisciplinary area of research and development. The advent of nanotechnology in modern times, and the beginning of its systematic study, can be considered to have begun with a lecture by the famous physicist Richard Feynman. In December 1959 he presented a visionary and prophetic lecture at the meeting of the American Physical Society, entitled “There's Plenty of Room at the Bottom”, where he speculated on the possibility and potential of nanosized materials. Synthesis of nanomaterials and nanostructures is an essential aspect of nanotechnology. Studies on new physical properties and applications of nanomaterials are possible only when materials are made available with the desired size, morphology, crystal structure and chemical composition. Cerium oxide (ceria) is one of the important functional materials, with high mechanical strength, thermal stability, excellent optical properties, appreciable oxygen ion conductivity and oxygen storage capacity. Ceria finds a variety of applications in mechanical polishing of microelectronic devices, as a catalyst in three-way automotive exhaust systems, and as an additive in ceramics and phosphors. Doped ceria usually has enhanced catalytic and electrical properties, which depend on a series of factors such as particle size, structural characteristics, morphology, etc. Ceria-based solid solutions have been widely identified as promising electrolytes for intermediate-temperature solid oxide fuel cells (SOFC). The success of many promising device technologies depends on suitable powder synthesis techniques.
The challenge in introducing new nanopowder synthesis techniques is to preserve high material quality while attaining the desired composition. The method adopted should give reproducible powder properties and high yield, and must be time- and energy-efficient. The use of a variety of new materials in many technological applications has been realized through the use of thin films of these materials. Thus, the development of any new material will have good application potential if it can be deposited in thin film form with the same properties. The advantageous properties of thin films include the possibility of tailoring the properties according to film thickness, the small mass of the materials involved and the high surface-to-volume ratio. The synthesis of polymer nanocomposites is an integral aspect of polymer nanotechnology. By inserting nanometric inorganic compounds, the properties of polymers can be improved, and this has many applications depending upon the inorganic filler material present in the polymer.
Abstract:
Gold nanoparticles functionalized with thiolated oligonucleotides (Au-nanoprobes) have been used in a range of applications for the detection of bioanalytes of interest, from ions to proteins and DNA targets. These detection strategies are based on the unique optical properties of gold nanoparticles, in particular the intense color that is subject to modulation by modification of the medium dielectric. Au-nanoprobes have been applied for the detection and characterization of specific DNA sequences of interest, namely pathogens and disease biomarkers. Nevertheless, despite their relevance, only a few reports exist on the detection of RNA targets. Among these strategies, the colorimetric detection of DNA has been proven to work for several different targets in controlled samples, but demonstration in real clinical bioanalysis has been elusive. Here, we used a colorimetric method based on Au-nanoprobes for the direct detection of the e14a2 BCR-ABL fusion transcript in myeloid leukemia patient samples without the need for reverse transcription. Au-nanoprobes directly assessed total RNA from 38 clinical samples, and results were validated against reverse transcription-nested polymerase chain reaction (RT-nested PCR) and reverse transcription-quantitative polymerase chain reaction (RT-qPCR). The colorimetric Au-nanoprobe assay is a simple yet reliable strategy to scrutinize myeloid leukemia patients at diagnosis and evaluate progression, with obvious advantages in terms of time and cost, particularly in low- to medium-income countries where molecular screening is not routinely feasible. Graphical abstract: Gold nanoprobe for colorimetric detection of BCR-ABL1 fusion transcripts originating from the Philadelphia chromosome.
Abstract:
Carbon materials are versatile and find use in a wide range of applications. In recent years, research on carbon materials has focused on the search for environmentally friendly, sustainable, renewable and low-cost starting materials, as well as simple, cost-efficient synthesis techniques. Hydrothermal carbonization (HTC) has shown great potential as an alternative technique for the production of carbon materials. Depending on the application, HTC can be used on its own or as a pretreatment step. The technique produces carbon materials, i.e. hydrochars, in a closed vessel in the presence of water and self-generated pressure at relatively low temperatures (180-250 °C). In many applications, well-developed porosity and heteroatom distribution play a key role. Therefore, in this study, different techniques, e.g. varying the feedstock, templating and post-treatment, were applied to introduce these properties into the hydrochar structure. Simple monosaccharides, i.e. fructose or glucose, and more complex compounds such as cellulose and sludge were used as starting materials. Addition of a secondary precursor, e.g. thiophenecarboxaldehyde or ovalbumin, was successfully exploited to alter the heteroatom content. It was shown that well-developed porosity (SBET 550 m2/g) can be achieved via a one-pot approach (i.e. exploitation of a salt mixture) without the conventionally used post-carbonization step. Nitrogen-enriched hydrochars showed significant Pb(II) and Cr(VI) removal capacities of 240 mg/g and 68 mg/g, respectively. Sulphur addition into the carbon network was not found to enhance the adsorption of methylene blue or to change the acidity of the carbon material. However, these hydrochars were found to remove 99.9 % of methylene blue, and their adsorption efficiency remained over 90 % even after regeneration.
In addition to water treatment applications, N-rich, high-temperature-treated carbon materials proved applicable as electrocatalysts and electrocatalyst supports. Hydrothermal carbonization was shown to be a workable technique for the production of carbon materials with variable physico-chemical properties, and hydrochars could therefore be applied in several different applications, e.g. as alternative low-cost adsorbents for pollutant removal from water.
Abstract:
During the past five years, consumer electronics has been evolving rapidly. Many products have started to include “smart home” capabilities, enabling communication and interoperability of various smart devices. Ever more devices and sensors can be remotely controlled and monitored through cloud services. While smart home systems have become very affordable to the average consumer compared to the early solutions of decades ago, there are still many issues that need to be fixed or improved: energy efficiency, connectivity with other devices and applications, security and privacy concerns, reliability, and response time. This paper focuses on designing Internet of Things (IoT) node and platform architectures that take these issues into account, reviews other currently used solutions, and selects technologies in order to provide a better solution. The node architecture aims for energy efficiency and modularity, while the platform architecture targets scalability, portability, maintainability, performance, and modularity. Moreover, the platform architecture attempts to improve user experience by providing higher reliability and lower response time compared to the alternative platforms. The architectures were developed iteratively using a development process involving research, planning, design, implementation, testing, and analysis. Additionally, they were documented using Kruchten’s 4+1 view model, which is used to describe the use cases and different views of the architectures. The node architecture consisted of energy-efficient hardware (an FC3180 microprocessor and a CC2520 RF transceiver), a modular operating system (Contiki), and a communication protocol (AllJoyn) used to provide better interoperability with other IoT devices and applications.
The platform architecture provided reliable, low-response-time control, monitoring, and initial setup capabilities by utilizing web technologies on various devices such as smartphones, tablets, and computers. Furthermore, an optional cloud service was provided in order to control devices and monitor sensors remotely, utilizing scalable, high-performance technologies in the backend to enable low response time and high reliability.
Abstract:
In this study the relationship between heterogeneous nucleate boiling surfaces and the deposition of suspended metallic colloidal particles, popularly known as crud or corrosion products in the process industries, on those heterogeneous sites is investigated. Various researchers have reported that hematite is a major constituent of crud, which makes it the primary material of interest; however, the models developed in this work are independent of the material choice. Qualitative hypotheses on the deposition process under boiling proposed by previous researchers have been tested and fail to provide explanations for several physical mechanisms observed and analyzed. In this study a quantitative model of the deposition rate has been developed on the basis of bubble dynamics and the colloid-surface interaction potential. Boiling from a heating surface aids the aggregation of metallic particulates, viz. nano-particles, crud particulate, etc., suspended in a liquid, which helps transport them to heating surfaces. Consequently, clusters of particles deposit onto the heating surfaces due to various interactive forces, resulting in the formation of porous or impervious layers. The deposit layer grows or recedes depending upon variations in interparticle and surface forces, fluid shear, fluid chemistry, etc. This deposit layer in turn affects the rate of bubble generation, the formation of porous chimneys, the critical heat flux (CHF) of surfaces, and the activation and deactivation of nucleation sites on the heating surfaces. The effect of boiling on colloidal deposition poses several problems, ranging from research initiatives involving nano-fluids as a heat transfer medium to industrial applications such as light water nuclear reactors. This study attempts to integrate colloid and surface science with vapor bubble dynamics, boiling heat transfer and evaporation rate.
Pool boiling experiments with dilute metallic colloids have been conducted to investigate several parameters impacting the system. The experimental data available in the literature are obtained from flow experiments, which do not help in correlating the boiling mechanism with the deposition amount or structure. With the help of experimental evidence and analysis, the previously proposed hypothesis of particle transport to the contact line due to hydrophobicity has been challenged. The experimental observations suggest that deposition occurs around the bubble surface contact line and extends to the area underneath the bubble microlayer as well. During evaporation, a concentration gradient of the non-volatile species is created, which induces osmotic pressure. The osmotic pressure developed inside the microlayer draws more particles into the microlayer region or towards the contact line. The colloidal escape time is longer than the evaporation time, which leads to the aggregation of particles in the evaporating microlayer. These aggregated particles deposit onto or are removed from the heating surface, depending upon their total interaction potential. The interaction potential has been computed with the help of the surface charge and van der Waals potential for the materials in aqueous solutions. Based upon the interaction-force boundary layer thickness, which is governed by the Debye radius (or ionic concentration and pH), a simplified quantitative model for the attachment kinetics is proposed. This attachment kinetics model gives reasonable results in predicting the attachment rate against data reported by previous researchers. The attachment kinetics study has been done for different pH levels and particle sizes for hematite particles. Quantification of colloidal transport under boiling scenarios is done with the help of overall average evaporation rates, because waiting times for bubbles at the same position are generally much larger than growth times.
In other words, from a larger, measurable-scale perspective, the frequency of bubbles dictates the rate of collection of particles rather than the evaporation rate during the micro-layer evaporation of a single bubble. The combination of attachment kinetics and colloidal transport kinetics has been used to build a consolidated model for predicting the amount of deposition, which is validated with the help of high-fidelity experimental data. In an attempt to understand and explain boiling characteristics, high speed visualization of bubble dynamics from a single artificial large cavity and from multiple naturally occurring cavities is conducted. A bubble growth and departure dynamics model is developed for artificial active sites and is validated with the experimental data. The variation of bubble departure diameter with wall temperature is analyzed with experimental results and shows coherence with earlier studies. However, deposit traces after boiling experiments show that the bubble contact diameter, which has previously been ignored by various researchers, is essential to predict bubble departure dynamics. The relationship between the porosity of colloid deposits and bubbles under the influence of Jakob number, sub-cooling and particle size has been developed. This can be further utilized to vary the wettability of the surface. Designed porous surfaces can have a vast range of applications, from high wettability, such as high critical heat flux boilers, to low wettability, such as efficient condensers.
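The interaction-force boundary layer discussed above is governed by the Debye radius (ionic concentration and pH). As a minimal sketch of that dependence, the standard Debye screening length for a 1:1 aqueous electrolyte can be computed from textbook constants; this is generic electrolyte theory, not the study's own attachment model.

```python
import math

def debye_length(ionic_strength_molar, temp_k=298.15, eps_r=78.4):
    """Debye screening length (m) for a 1:1 electrolyte in water.

    kappa^-1 = sqrt(eps_r*eps0*kB*T / (2*NA*e^2*I)), with I in mol/m^3.
    """
    e = 1.602176634e-19       # elementary charge, C
    kB = 1.380649e-23         # Boltzmann constant, J/K
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    I = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    return math.sqrt(eps_r * eps0 * kB * temp_k / (2 * NA * e**2 * I))

# At 1 mM ionic strength the screening length is roughly 9.6 nm; at 0.1 M it
# shrinks to about 1 nm, thinning the interaction-force boundary layer.
print(round(debye_length(1e-3) * 1e9, 1))  # 9.6
```

Raising the ionic concentration (or shifting pH, which changes surface charge) thus directly compresses the layer within which attachment kinetics operate.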
Abstract:
Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area-efficient, low-power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and proposes a candidate FPU design that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
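As background for the FPU component analysis, the sketch below splits an IEEE-754 single-precision value into the sign, exponent and fraction fields that an FPU datapath operates on. This is generic IEEE-754 structure, not Rigel-specific design detail.

```python
import struct

def decompose_float32(x):
    """Split an IEEE-754 single-precision value into (sign, exponent, fraction) bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31             # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF    # 23 bits of the significand
    return sign, exponent, fraction

# 1.0  = (-1)^0 * 1.0  * 2^(127-127)
print(decompose_float32(1.0))   # (0, 127, 0)
# -2.5 = (-1)^1 * 1.25 * 2^(128-127); 0.25 * 2^23 = 2097152
print(decompose_float32(-2.5))  # (1, 128, 2097152)
```

An FPU adder must align significands by the exponent difference, while a multiplier adds exponents and multiplies significands; the area/power trade-offs of those units are what the thesis evaluates.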
Abstract:
Background: Nanotechnologies are developing very rapidly and nanomaterials (NMs) are increasingly being used in a wide range of applications in science, industry and biomedicine.
Abstract:
In the past decades, the growing application of nanomaterials (NMs) in diverse consumer products has raised various concerns in the field of toxicology. They have been extensively used in a broad range of applications and cover most industrial sectors as well as the medical and environmental areas. The most common scenarios for human exposure to NMs are occupational, environmental and as consumers, and inhalation is the most frequent route of exposure, especially in occupational settings. Cerium dioxide NMs (nano-CeO2) are widely used in a number of applications such as cosmetics, outdoor paints, wood care products and fuel catalysts. For this reason, nano-CeO2 is one of the NMs selected for priority testing within the sponsorship program of the OECD Working Party on Manufactured Nanomaterials. In this context, the aim of this study is to assess the safety of nano-CeO2 (NM-212, Joint Research Center Repository) through the characterization of its cytotoxicity and genotoxicity in a human alveolar epithelial cell line. A dispersion of the NM in water plus 0.05% BSA was prepared and sonicated for 16 minutes, according to a standardized protocol. DLS analysis was used to characterize the quality of the NM dispersion in the culture medium. To evaluate the cytotoxicity of nano-CeO2 in the A549 cell line, the colorimetric MTT assay was performed; the capacity of cells to proliferate when exposed to CeO2 was also assessed with the clonogenic assay. The genotoxicity of this NM was evaluated by the comet assay (3 and 24 h of exposure) to quantify DNA breaks, and by the FPG-modified comet assay to assess oxidative DNA damage. The cytokinesis-block micronucleus (CBMN) assay was used to further detect chromosome breaks or loss. The nano-CeO2 particles are spherical, displaying a diameter of 33 nm and a surface area of 28 m2/g.
The results of the MTT assay did not show any decrease in cell viability following treatment with a dose range of nano-CeO2 for 24 h. Nevertheless, the highest concentrations of this NM were able to significantly reduce the colony-forming ability of A549 cells, suggesting that a prolonged exposure may be cytotoxic to these cells. Data from both genotoxicity assays revealed that nano-CeO2 induced neither DNA breaks nor oxidative DNA damage. Likewise, no significant micronucleus induction was observed. Taken together, the present results indicate that this nano-CeO2 is not genotoxic in this alveolar cell line under the tested conditions, although further studies, e.g., gene mutation in somatic cells and in vivo chromosome damage (rodent micronucleus assay), should be performed to ensure its safety to human health.
Abstract:
My thesis consists of three essays that investigate strategic interactions between individuals engaging in risky collective action in uncertain environments. The first essay analyzes a broad class of incomplete-information coordination games with a wide range of applications in economics and politics. The second essay draws on the general model developed in the first essay to study decisions by individuals of whether to engage in protest/revolution/coup/strike. The final essay explicitly integrates state response into the analysis. The first essay, Coordination Games with Strategic Delegation of Pivotality, exhaustively analyzes a class of binary-action, two-player coordination games in which players receive stochastic payoffs only if both players take a “stochastic-coordination action”. Players receive conditionally independent noisy private signals about the normally distributed stochastic payoffs. With this structure, each player can exploit the information contained in the other player's action only when he takes the “pivotalizing action”. This feature has two consequences: (1) when the fear of miscoordination is not too large, in order to utilize the other player's information, each player takes the “pivotalizing action” more often than he would based solely on his private information, and (2) best responses feature both strategic complementarities and strategic substitutes, implying that the game is neither supermodular nor a typical global game. This class of games has applications to a wide range of economic and political phenomena, including war and peace, protest/revolution/coup/strike, interest-group lobbying, international trade, and the adoption of a new technology. My second essay, Collective Action with Uncertain Payoffs, studies the decision problem of citizens who must decide whether to submit to the status quo or mount a revolution. If they coordinate, they can overthrow the status quo.
Otherwise, the status quo is preserved and participants in a failed revolution are punished. Citizens face two types of uncertainty: (a) non-strategic: they are uncertain about the relative payoffs of the status quo and revolution; (b) strategic: they are uncertain about each other's assessments of the relative payoff. I draw on the existing literature and historical evidence to argue that uncertainty in the payoffs of the status quo and revolution is intrinsic in politics. Several counter-intuitive findings emerge: (1) Better communication between citizens can lower the likelihood of revolution. In fact, when the punishment for failed protest is not too harsh and citizens' private knowledge is accurate, further communication reduces incentives to revolt. (2) Increasing strategic uncertainty can increase the likelihood of revolution attempts, and even the likelihood of successful revolution. In particular, revolt may be more likely when citizens privately obtain information than when they receive information from a common media source. (3) Two dilemmas arise concerning the intensity and frequency of punishment (repression) and the frequency of protest. Punishment Dilemma 1: harsher punishments may increase the probability that punishment materializes. That is, as the state increases the punishment for dissent, it might also have to punish more dissidents. Only when the punishment is sufficiently harsh does harsher punishment reduce the frequency of its application. Punishment Dilemma 1 leads to Punishment Dilemma 2: the frequencies of repression and protest can be positively or negatively correlated depending on the intensity of repression. My third essay, The Repression Puzzle, investigates the relationship between the intensity of grievances and the likelihood of repression. First, I make the observation that the occurrence of state repression is a puzzle. If repression is to succeed, dissidents should not rebel.
If it is to fail, the state should concede in order to save the costs of unsuccessful repression. I then propose an explanation for the “repression puzzle” that hinges on information asymmetries between the state and dissidents about the costs of repression to the state, and hence the likelihood of its application by the state. I present a formal model that combines the insights of grievance-based and political process theories to investigate the consequences of this information asymmetry for the dissidents' contentious actions and for the relationship between the magnitude of grievances (formulated here as the extent of inequality) and the likelihood of repression. The main contribution of the paper is to show that this relationship is non-monotone. That is, as the magnitude of grievances increases, the likelihood of repression might decrease. I investigate the relationship between inequality and the likelihood of repression in all country-years from 1981 to 1999. To mitigate specification problems, I estimate the probability of repression using a generalized additive model with thin-plate splines (GAM-TPS). This technique allows for a flexible relationship between inequality, the proxy for the costs of repression and revolutions (income per capita), and the likelihood of repression. The empirical evidence supports my prediction that the relationship between the magnitude of grievances and the likelihood of repression is non-monotone.
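GAM-TPS fits of the kind described above are typically done with R's mgcv; as a rough, hedged illustration of thin-plate-spline smoothing capturing a non-monotone relationship, one can use SciPy's `RBFInterpolator` on synthetic data. All variable names and the data-generating process below are invented for the example; this is not the essay's actual estimation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Synthetic stand-ins for (inequality, income proxy) -> repression likelihood.
X = rng.uniform(0, 1, size=(200, 2))
# Non-monotone true relationship in the first covariate, as the essay predicts.
p_true = 0.5 + 0.4 * np.sin(np.pi * X[:, 0]) - 0.3 * X[:, 1]
y = p_true + rng.normal(0, 0.05, size=200)

# smoothing > 0 penalizes roughness, mimicking a penalized thin-plate spline fit.
smoother = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=0.1)

# Evaluate along the "inequality" axis with the second covariate held at 0.5.
grid = np.column_stack([np.linspace(0.05, 0.95, 5), np.full(5, 0.5)])
fitted = smoother(grid)
# The fitted curve rises then falls in the first covariate: non-monotone.
print(np.round(fitted, 2))
```

The smoothing penalty plays the role of the curvature penalty in a GAM; unlike mgcv, this sketch does not use a logit link, so fitted values are not constrained to [0, 1].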
Abstract:
Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The field of computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. This problem is approached in two ways: unsupervised and supervised learning methods. The first seeks to group images via automatic feature-extraction learning, using convolution techniques, evolutionary computing and clustering. In this approach, the images employed contain faces and people. The second approach employs convolutional neural networks, which have the ability to operate on raw images, learning both the feature extraction and classification processes. Here, images are classified according to gender and clothes, divided between the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces vs. non-faces and 70% for persons vs. non-persons. The second, tested using images and videos, obtained an accuracy of about 70% for gender, 80% for upper clothes and 90% for lower clothes.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level image annotation. This opens possibilities for the development of applications in diverse areas such as content-based image and video search and automatic video surveillance, reducing human effort in the tasks of manual annotation and monitoring.
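The low-level edge-feature extraction mentioned in the abstract can be illustrated with a minimal Sobel filter in plain NumPy; this is a generic sketch of what the first layer of a convolutional pipeline learns to do, not the thesis's actual method.

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D sliding-window filter (no padding), enough for a demo."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Sobel kernel responding to vertical edges (sharp horizontal intensity change).
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = convolve2d(img, sobel_x)
# The response concentrates at the boundary columns and is zero in flat regions.
print(edges[3])  # [0. 0. 4. 4. 0. 0.]
```

A CNN stacks many such learned filters and nonlinearities, so edge-like responses in early layers compose into the higher-level features used for gender and clothing classification.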
Abstract:
In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modeled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using the usual nonlinear least squares (nls) methods. The best local and generalized models selected were also fitted as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be used for all maritime pine stands in the Lomba ZIF; thus, a generalized model including covariates from the stand, in addition to d, was necessary to obtain an adequate predictive performance. No evident superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h if variables from the stand are not available. Moreover, it was observed that an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements in quantile (or random) trees from the distribution of d in the plot (stand).
Balancing sampling effort, accuracy and straightforwardness in practical applications, the generalized model from nls fit is recommended. Examples of applications of the selected generalized equation to the forest management are presented, namely how to use it to complete missing information from forest inventory and also showing how such an equation can be incorporated in a stand-level decision support system that aims to optimize the forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
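A minimal sketch of the nls fitting step described above, assuming (as an illustration only, since the abstract does not name the equations selected) a two-parameter Michaelis-Menten-type local h-d model and invented sample data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hd_model(d, a, b):
    """Two-parameter local height-diameter model, h = 1.3 + a*d/(b + d).

    The 1.3 m intercept is breast height, which h approaches as d -> 0.
    This specific form is an illustrative assumption, not necessarily the
    equation selected in the study.
    """
    return 1.3 + a * d / (b + d)

# Hypothetical (d, h) pairs: d in cm, h in m, as in forest inventory data.
d_obs = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
h_obs = np.array([8.8, 11.3, 13.3, 14.9, 16.3, 17.5, 18.4])

params, _ = curve_fit(hd_model, d_obs, h_obs, p0=[25.0, 20.0])
a_hat, b_hat = params
pred = hd_model(d_obs, a_hat, b_hat)
rmse = float(np.sqrt(np.mean((pred - h_obs) ** 2)))
print(round(a_hat, 1), round(b_hat, 1), round(rmse, 2))
```

A generalized variant would add stand covariates (e.g. dominant height) to `a` and `b`; the mixed-model extension then lets those parameters vary randomly by plot and be calibrated from 2 to 5 extra h-d measurements.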
Abstract:
Transcription activator-like effectors (TALEs) are virulence factors, produced by the bacterial plant pathogen Xanthomonas, that function as gene activators inside plant cells. Although the contribution of individual TALEs to infectivity has been shown, the specific roles of most TALEs and the overall TALE diversity in Xanthomonas spp. are not known. TALEs possess a highly repetitive DNA-binding domain, which is notoriously difficult to sequence. Here, we describe an improved method for characterizing TALE genes by the use of PacBio sequencing. We present 'AnnoTALE', a suite of applications for the analysis and annotation of TALE genes from Xanthomonas genomes, and for grouping similar TALEs into classes. Based on these classes, we propose a unified nomenclature for Xanthomonas TALEs that reveals similarities pointing to related functionalities. This new classification enables us to compare related TALEs and to identify base substitutions responsible for the evolution of TALE specificities. © 2016, Nature Publishing Group. All rights reserved.
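The repeat-variable diresidues (RVDs) in a TALE's DNA-binding domain map to target nucleotides via the canonical RVD-nucleotide code; the sketch below shows a simplified target prediction that ignores the degeneracy (e.g. NN also binding A, promiscuous RVDs) that dedicated tools like AnnoTALE handle properly. The example repeat array is hypothetical.

```python
# Canonical RVD-to-nucleotide associations (simplified): real TALE repeats
# show degeneracy, so unknown or promiscuous RVDs are rendered as "N" here.
RVD_CODE = {"NI": "A", "HD": "C", "NN": "G", "NG": "T"}

def predict_target(rvds):
    """Predict the DNA target of a TALE from its repeat-variable diresidues."""
    return "".join(RVD_CODE.get(rvd, "N") for rvd in rvds)

# Hypothetical repeat array read off a sequenced TALE gene.
rvds = ["NI", "HD", "HD", "NG", "NN", "NI", "NG"]
print(predict_target(rvds))  # ACCTGAT
```

Grouping TALEs into classes, as AnnoTALE does, amounts to comparing such RVD sequences across genomes, so single-RVD substitutions show up as single-base changes in predicted target specificity.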