834 results for test-process features
Abstract:
Purpose: Stereopsis is the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, and local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests, TNO®, StereoTAB®, and Fly Stereo Acuity Test®, and to study the sensitivity of and correlation between them, using the TNO® as the gold standard. Other variables, such as near point of convergence, vergences, symptoms, and optical correction, were correlated with the three tests. Materials and Methods: Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. Results: The mean (standard deviation, SD) stereopsis values in each test were: TNO® = 87.04" ±84.09"; FlyTest® = 38.18" ±34.59"; StereoTAB® = 124.89" ±137.38". The coefficients of determination were R² = 0.6 between TNO® and StereoTAB®, and R² = 0.2 between TNO® and FlyTest®. The Pearson correlation coefficient shows a positive correlation between TNO® and StereoTAB® (r = 0.784 at α = 0.01). The Phi coefficient shows a strong, positive association between TNO® and StereoTAB® (Φ = 0.848 at α = 0.01). In the ROC curve, the StereoTAB® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, meaning the test is sensitive and has good discriminative power. Conclusion: We conclude that stereopsis tests assessing global stereopsis are an asset for clinical use. This type of test is more sensitive, revealing changes in stereopsis when it has actually changed, unlike local stereopsis tests, which often indicate normal stereopsis and camouflage a change. We also noted that the StereoTAB®, despite being a digital application, is very sensitive and correlates well with the TNO®.
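The statistics reported above follow from standard library calls on paired stereoacuity data. Below is a minimal sketch, not the study's code: the measurements are invented, and the 60" abnormality cut-off used for the ROC analysis is an assumption for illustration only.

```python
# Minimal sketch with invented paired stereoacuity values (arcseconds).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

tno      = np.array([60, 120, 240, 60, 480, 30, 60, 120])   # gold standard
stereota = np.array([60, 240, 480, 60, 480, 60, 120, 120])  # candidate test

r, p = pearsonr(tno, stereota)                # Pearson correlation
print(f"r = {r:.3f}, R^2 = {r**2:.3f}, p = {p:.4f}")

# ROC analysis: TNO-defined abnormality (assumed cut-off of 60") is the
# ground truth; the candidate test's score is the classifier output.
abnormal = (tno > 60).astype(int)
print(f"AUC = {roc_auc_score(abnormal, stereota):.3f}")
```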
Abstract:
Stereopsis is defined as the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, and local stereopsis depends on contour perception. The aim of this study is to correlate three stereopsis tests, TNO®, StereoTAB®, and Fly Stereo Acuity Test®, and to verify the sensitivity of and correlation between them, with the TNO® as the gold standard. Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18 to 26, were included. The variables near point of convergence (NPC), vergences, symptoms, and optical correction were correlated with the three tests. The mean (standard deviation) stereopsis values were: TNO® = 87.04" ±84.09"; FlyTest® = 38.18" ±34.59"; StereoTAB® = 124.89" ±137.38". Coefficients of determination: R² = 0.6 between TNO® and StereoTAB®, and R² = 0.2 between TNO® and FlyTest®. The Pearson correlation coefficient shows a positive correlation between the TNO® and the StereoTAB® (r = 0.784 at α = 0.01). The Phi association coefficient showed a strong, positive relationship between the TNO® and the StereoTAB® (Φ = 0.848 at α = 0.01). In the ROC curve, the StereoTAB® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, making it a sensitive test with good discriminative power.
Abstract:
Previous research found personality test scores to be inflated on average among individuals who were motivated to present themselves in a desirable fashion in high-stakes situations, such as during the employee selection process. One apparently effective way to reduce this undesirable test score inflation was to warn participants against faking. This research set out to investigate whether warning against faking would indeed affect personality test scores in the theoretically expected fashion. Contrary to expectations, the results did not support the hypothesized causal chain. Results across three studies show that while a warning may lower test scores in participants motivated to respond desirably (i.e., to fake), the effect of the warning on test scores was not fully mediated by a reduction in the motivation to do well or by self-reports of exaggerated responding on the personality test. Theoretical and practical implications are discussed.
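For readers unfamiliar with the mediation logic being tested, the sketch below illustrates it on simulated data (none of it from these studies; variable names and effect sizes are invented). Full mediation would require the direct effect of the warning on scores to vanish once the mediator is controlled for, which is what the studies above failed to find.

```python
# Simulated mediation check: warning -> motivation -> test score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
warning = rng.integers(0, 2, n)                       # 0 = none, 1 = warned
motivation = 5 - 1.0 * warning + rng.normal(0, 1, n)  # hypothesized mediator
score = 50 + 2.0 * motivation + rng.normal(0, 5, n)   # personality test score

# Path c: total effect of the warning on scores.
c = sm.OLS(score, sm.add_constant(warning)).fit().params[1]
# Path a: warning -> mediator.
a = sm.OLS(motivation, sm.add_constant(warning)).fit().params[1]
# Paths c' (direct) and b (mediator), estimated jointly.
X = sm.add_constant(np.column_stack([warning, motivation]))
c_prime, b = sm.OLS(score, X).fit().params[1:3]

print(f"total c = {c:.2f}, direct c' = {c_prime:.2f}, indirect a*b = {a*b:.2f}")
```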
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
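As a concrete illustration of this setup (not the dissertation's pipeline; test cases, event IDs, and labels below are invented), one can encode each test case as a bag of its event IDs and fit a logistic regression on a labeled training suite:

```python
# Feasibility classification sketch: bag-of-event-IDs + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Each test case is a sequence of GUI event IDs; label 1 = feasible.
test_cases = ["e12 e7 e33", "e12 e9 e33", "e5 e7 e40", "e5 e9 e40",
              "e12 e7 e40", "e5 e7 e33", "e12 e9 e40", "e5 e9 e33"]
feasible = [1, 0, 1, 0, 1, 1, 0, 0]

# Unigram counts over event IDs; event n-grams would be another feature type.
X = CountVectorizer(token_pattern=r"\S+").fit_transform(test_cases)
X_tr, X_te, y_tr, y_te = train_test_split(X, feasible, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))  # counts of FP/FN errors
```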
Abstract:
Nonlinear thermo-mechanical properties of advanced polymers are crucial to accurate prediction of the process-induced warpage and residual stress of electronics packages. A Fiber Bragg grating (FBG) sensor-based method is advanced and implemented to determine temperature- and time-dependent nonlinear properties. The FBG sensor is embedded in the center of a cylindrical specimen and deforms together with the specimen. The strains of the specimen at different loading conditions are monitored by the FBG sensor. Two main sources of warpage are considered: curing-induced warpage and warpage induced by coefficient of thermal expansion (CTE) mismatch. The effective chemical shrinkage and the equilibrium modulus are needed for the curing-induced warpage prediction. Considering the various polymeric materials used in microelectronic packages, unique curing setups and procedures are developed for elastomers (extremely low modulus, medium viscosity, room-temperature curing), underfill materials (medium modulus, low viscosity, high-temperature curing), and epoxy molding compound (EMC: high modulus, high viscosity, high-temperature and high-pressure curing), most notably (1) a zero-constraint mold for elastomers, (2) a two-stage curing procedure for underfill materials, and (3) a novel air-cylinder-based setup for the EMC. For the CTE-mismatch-induced warpage, the temperature-dependent CTE and the comprehensive viscoelastic properties are measured. The cured cylindrical specimen with an FBG sensor embedded in the center is further used for viscoelastic property measurements. A uniaxial compressive loading is applied to the specimen to measure the time-dependent Young's modulus. The test is repeated from room temperature to the reflow temperature to capture the time- and temperature-dependent Young's modulus. A separate high-pressure system is developed for the bulk modulus measurement. The time- and temperature-dependent bulk modulus is measured at the same temperatures as the Young's modulus. Master curves of the Young's modulus and bulk modulus of the EMC are created, and a single set of shift factors is determined from time-temperature superposition. Supplementary experiments are conducted to verify the validity of the assumptions associated with linear viscoelasticity. The measured time-temperature-dependent properties are further verified by shadow moiré and Twyman/Green tests.
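The master-curve step mentioned above rests on time-temperature superposition: each isothermal modulus curve is shifted along the log-time axis by a factor a_T. The sketch below uses the standard WLF form with assumed "universal" constants and an arbitrary reference temperature, purely to show the mechanics; the actual shift factors are determined from the measurements themselves.

```python
# Time-temperature superposition sketch with assumed WLF constants.
import numpy as np

C1, C2, T_REF = 17.4, 51.6, 150.0   # assumed values; temperatures in deg C

def log_shift_factor(T):
    """WLF equation: log10 a_T = -C1*(T - T_ref) / (C2 + T - T_ref)."""
    return -C1 * (T - T_REF) / (C2 + (T - T_REF))

t = np.logspace(0, 3, 50)           # lab time window, seconds
for T in (130.0, 150.0, 170.0):
    a_T = 10.0 ** log_shift_factor(T)
    reduced = t / a_T               # plotting E(t) vs t/a_T builds the master curve
    print(f"T = {T:.0f} C: log10 a_T = {log_shift_factor(T):+.2f}, "
          f"reduced time spans {reduced[0]:.2e}..{reduced[-1]:.2e} s")
```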
Abstract:
SMEs appear to take varied approaches to the sales process: how they go about locating target customers, interfacing with prospects and new customers, presenting the benefits and features of their products and services, closing sales deals and building relationships, and understanding what the buyer's needs are in the seller-buyer process. Recent research has revealed that while entrepreneurs and small business owners rely upon networking as an important source of sales, they lack marketing competencies, including personal selling skills and knowledge of what is involved in the sales process to close sales deals and build relationships. Small companies and start-ups with innovative products and services often find it difficult to persuade potential buyers of the merits of their offerings because, while the products and services may be excellent, the sellers have not sufficiently developed the selling skills necessary to persuade their target customers.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency, and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not require extending any superclass, implementing an interface, or carrying a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, because the framework will produce a new version for it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, hence keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on the programmer's productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate prototype and meta-model robustness. To perform these tests, we used a small-size OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible. However, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms. Using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience showed that applications remain oblivious to persistence and database evolution when using our framework. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
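A toy sketch of the class-versioning and instance-adaptation idea follows, written in Python for brevity (the actual framework is aspect-oriented and targets the Java Virtual Machine; all names and conversion rules here are invented). The metadata layer keeps every version of a class structure, and stored objects are reshaped to whichever version the running application expects, in either direction.

```python
# Class-versioning sketch: schema metadata plus instance adaptation.
from dataclasses import dataclass

@dataclass
class ClassVersion:
    name: str
    version: int
    fields: dict          # field name -> type name

class VersionedSchema:
    """Database-side metadata layer: one entry per class-structure version."""
    def __init__(self):
        self.versions = {}                     # (name, version) -> ClassVersion

    def register(self, cv):
        self.versions[(cv.name, cv.version)] = cv

    def adapt(self, obj, name, dst):
        """Reshape a stored object to the version the application expects:
        unknown fields are dropped, missing fields default to None."""
        target = self.versions[(name, dst)]
        return {f: obj.get(f) for f in target.fields}

schema = VersionedSchema()
schema.register(ClassVersion("Customer", 1, {"name": "str"}))
schema.register(ClassVersion("Customer", 2, {"name": "str", "email": "str"}))

old = {"name": "Ada"}                          # written by a v1 application
print(schema.adapt(old, "Customer", 2))        # {'name': 'Ada', 'email': None}
print(schema.adapt({"name": "Bob", "email": "b@x"}, "Customer", 1))  # back to v1
```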
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech datasets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy, so the method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples; this text feature need not change, because the auxiliary dataset likely contains a similar picture, and while the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. As more and more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). This training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier, and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck for processing large-scale datasets. This dissertation applies the approach to train classifiers for Flickr groups with many training examples per group. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity. The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm to use this category-dependent similarity regularization. Experiments on hundreds of categories show that our method can significantly improve performance for categories with few or even no positive examples.
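Among the components above, the k-nearest-neighbour text feature is the simplest to make concrete. In the sketch below (toy vectors only; the real auxiliary collection is Internet images with noisy tags), an unannotated image borrows the averaged tags of its visually nearest neighbours:

```python
# k-NN text feature sketch: borrow tags from visually similar images.
import numpy as np

# Auxiliary collection: visual feature vectors and binary tag vectors
# (columns might mean e.g. "cat", "dog", "indoor" - invented here).
aux_visual = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
aux_tags   = np.array([[1, 0, 1],  [1, 0, 0],  [0, 1, 0],  [0, 1, 1]])

def text_feature(query_visual, k=2):
    dists = np.linalg.norm(aux_visual - query_visual, axis=1)
    nearest = np.argsort(dists)[:k]
    return aux_tags[nearest].mean(axis=0)    # averaged neighbour tags

print(text_feature(np.array([0.85, 0.15])))  # -> [1.0, 0.0, 0.5]
```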
Abstract:
One of the most significant research topics in computer vision is object detection. Most reported object detection results localise the detected object within a bounding box, but do not explicitly label the edge contours of the object. Since object contours provide a fundamental diagnostic of object shape, some researchers have initiated work on linear contour feature representations for object detection and localisation. However, linear contour feature-based localisation is highly dependent on the performance of linear contour detection within natural images, and this can be perturbed significantly by a cluttered background. In addition, the conventional approach to achieving rotation-invariant features is to rotate the feature receptive field to align with the local dominant orientation before computing the feature representation. Grid resampling after rotation adds extra computational cost and increases the total time required to compute the feature descriptor. Though this is not an expensive process on current computers, it is valuable for each step of the implementation to be fast to compute, especially as the number of local features grows and when the application must run in real time on resource-limited "smart devices" such as mobile phones. Motivated by the above issues, this thesis proposes a 2D object localisation system that matches features of edge contour points, an alternative method that takes advantage of shape information for object localisation. This is inspired by the fact that edge contour points comprise the basic components of shape contours; in addition, edge point detection is usually simpler to achieve than linear edge contour detection. The proposed localisation system can therefore avoid the need for linear contour detection and reduce pathological disruption from the image background. Moreover, since natural images usually comprise many more edge contour points than interest points (i.e. corner points), we also propose new methods to generate rotation-invariant local feature descriptors without pre-rotating the feature receptive field, improving the computational efficiency of the whole system. In detail, the 2D object localisation system is achieved by matching edge contour point features in a constrained search area based on the initial pose estimate produced by a prior object detection process. The local feature descriptor obtains rotation invariance by making use of the rotational symmetry of the hexagonal structure; accordingly, a set of local feature descriptors is proposed based on a hierarchically hexagonal grouping structure. Ultimately, the 2D object localisation system achieves very promising performance based on matching the proposed features of edge contour points, with a mean correct labelling rate of 0.8654 and a mean false labelling rate of 0.0314 for edge contour points on data from the Amsterdam Library of Object Images (ALOI). Furthermore, the proposed descriptors are evaluated against state-of-the-art descriptors and achieve competitive performance in terms of pose estimation, with around half-pixel pose error.
Abstract:
Universities are institutions that generate and manipulate large amounts of data as a result of the multiple functions they perform, the number of professionals involved, and the students they serve. Information gathered from these data is used, for example, for operational activities and to support decision-making by managers. To assist managers in accomplishing their tasks, Information Systems (IS) are presented as tools that offer features aiming to improve the performance of their users, assist with routine tasks, and provide support to decision-making. The purpose of this research is to evaluate the influence of user characteristics and task characteristics on IS success. The study is of a descriptive-exploratory nature; therefore, the constructs used to define the conceptual model of the research are known and previously validated. Individual characteristics of users and of the task are treated as antecedents of IS success. In order to test the influence of these antecedents, a decision-support IS was developed using the Multicriteria Decision Aid Constructivist (MCDA-C) methodology, with the participation and involvement of users. The sample consisted of managers and former managers of UTFPR Campus Pato Branco who work or have worked in teaching, research, extension, and management activities. For data collection, an experiment was conducted in the computer lab of the Campus Pato Branco in order to verify the hypotheses of the research. The experiment consisted of performing a task of distributing teaching positions between the academic departments using the IS developed. The task involved decision-making related to management activities. The data that fed the system were real, from the Campus itself. A questionnaire was answered by the participants of the experiment in order to obtain data to verify the research hypotheses. The results obtained from the data analysis partially confirmed the influence of individual characteristics on IS success and fully confirmed the influence of task characteristics. The data collected failed to support a significant relationship between individual characteristics and individual impact. For many of the participants, the first contact with the IS was during the experiment, which indicates a lack of experience with the system. Regarding the success of the IS, the data revealed no significant relationship between Information Quality (IQ) and Individual Impact (II). It is noteworthy that the IS used in the experiment supports decision-making and the information provided by this system is strictly quantitative, which may have caused some conflict in the analysis of the criteria involved in the decision-making process. This is because the criteria of teaching, research, extension, and management are interconnected, such that one reflects on another. Thus, the opinion of the managers does not depend exclusively on quantitative data, but also on the knowledge and value judgment that each manager has about the problem to be solved.
Abstract:
In recent years the technological world has grown by incorporating billions of small sensing devices, collecting and sharing real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources: there is no uniform way to share, process, and understand context information. In previous publications we discussed efficient ways to organize context information that are independent of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue, and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
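The evaluation is easy to sketch: score each Miller-Charles word pair with the similarity measure and correlate the scores with the human ratings. In the sketch below, a cosine over toy distributional profiles stands in for our actual measure; only the three human ratings shown are real Miller-Charles values, everything else is invented.

```python
# Miller-Charles style evaluation sketch with toy distributional profiles.
import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

profiles = {                      # toy co-occurrence profiles
    "car":   np.array([5, 1, 0, 2]), "automobile": np.array([4, 1, 1, 2]),
    "coast": np.array([0, 3, 4, 1]), "shore":      np.array([0, 2, 5, 1]),
    "noon":  np.array([1, 0, 0, 5]), "string":     np.array([0, 4, 1, 0]),
}
pairs = [("car", "automobile"), ("coast", "shore"), ("noon", "string")]
human = [3.92, 3.70, 0.08]        # Miller-Charles ratings (0-4 scale)

scores = [cosine(profiles[a], profiles[b]) for a, b in pairs]
r, _ = pearsonr(scores, human)
print(f"correlation with human judgements: {r:.2f}")
```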
Abstract:
Open-cell metal foams show promise as an emerging novel material for heat exchanger applications. Their high surface-area-to-volume ratio suggests increased compactness and decreased weight in heat exchanger designs. However, the metal foam structure appears conducive to condensate retention, which would degrade heat transfer performance. This research investigates the condensate retention behavior of aluminum open-cell metal foams through static dip tests and geometrical classification via X-ray micro-computed tomography. Aluminum open-cell metal foam samples of 5, 10, 20, and 40 pores per inch (PPI), all having a void fraction greater than 90%, were included in this investigation. In order to model the condensate retention behavior of metal foams, a clearer understanding of the geometry was required. After exploring the ideal geometries presented in the open literature, X-ray micro-computed tomography was employed to classify the actual geometry of the metal foam samples. The images obtained were analyzed using specialized software, from which geometric information including strut length and pore shapes was extracted. The results revealed high variability in ligament length, as well as features supporting the ideal geometry known as the Weaire-Phelan unit cell. The static dip tests consisted of submerging the metal foam samples in a liquid, then allowing gravity-induced drainage until steady state was reached, at which point the liquid remaining in the sample was measured. Three different liquids were employed: water, ethylene glycol, and 91% isopropyl alcohol. The behaviors of untreated samples were compared to samples subjected to a boehmite surface treatment process, and no significant differences in retention behavior were discovered. The dip test results revealed two distinct regions of condensate retention, each holding approximately half of the total liquid retained by the sample. As expected, condensate retention increased as pore size decreased. A model based on surface tension was developed to predict condensate retention in the metal foam samples and was verified using a regular mesh. Applying the model to both the ideal and actual metal foam geometries showed good agreement with the dip test results in this study.
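While the study's retention model is not reproduced here, the underlying surface-tension-versus-gravity balance can be sketched with the classical capillary-rise relation; the pore radius, contact angle, and fluid properties below are assumptions, and the sketch only shows why retention grows as pores shrink, consistent with the trend reported above.

```python
# Capillary retention sketch: surface tension vs. gravity in a small pore.
import math

def max_retained_column(sigma, rho, r_pore, theta_deg=0.0, g=9.81):
    """Capillary rise: h = 2*sigma*cos(theta) / (rho * g * r)."""
    return 2 * sigma * math.cos(math.radians(theta_deg)) / (rho * g * r_pore)

# Water vs. isopropyl alcohol in an assumed ~0.3 mm pore radius.
for name, sigma, rho in [("water", 0.072, 998.0), ("isopropyl", 0.021, 786.0)]:
    h = max_retained_column(sigma, rho, 0.3e-3)
    print(f"{name}: retained liquid column on the order of {h*1000:.0f} mm")
```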
Abstract:
The present study was done in collaboration with the J. Faria e Filhos company, a Madeira wine producer, and its main goal was to fully characterize three wines produced during the 2014 harvest and identify possible points of improvement in the winemaking process. The winemaking process was followed for four weeks, recording the amounts of grapes received, the fermentation temperatures, the time at which fermentation was stopped, and the evolution of must densities until the fortification time. The musts and wines were characterized in terms of density, total and volatile acidity, alcohol content, pH, total polyphenols, organic acid composition, sugar concentration, and volatile profile. An analytical methodology to quantify volatile fatty acids, using SPME-GC-MS, was also developed and validated. Briefly, the following key features were obtained for the latter methodology: linearity (R² = 0.999), high sensitivity (LOD = 0.026-0.068 mg/L), suitable precision (repeatability and reproducibility lower than 8.5%), and good recoveries (103.11-119.46%). The results reveal that fermentation temperatures should be controlled more strictly, in order to ensure a better balance in the proportion of some volatile compounds, namely the esters and higher alcohols, and to minimize the concentration of some volatiles, namely hexanoic, octanoic, and decanoic acids, which, above their odour thresholds, are not positive for the wine aroma. Also, regarding the moment at which to stop fermentation, it was verified that changes can be introduced which may also help guarantee the typicity of the Madeira wine bouquet.
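The validation figures quoted (linearity, LOD, precision, recovery) follow from standard calibration-curve arithmetic. Below is a minimal sketch with invented peak areas; the LOD = 3.3·s/m convention is an assumption, as the abstract does not state which estimator was used.

```python
# Calibration-curve sketch: linearity, LOD, and recovery from toy data.
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])     # standards, mg/L
area = np.array([0.11, 0.22, 1.02, 2.05, 4.01]) # invented peak areas

m, b = np.polyfit(conc, area, 1)                # slope and intercept
resid = area - (m * conc + b)
s = resid.std(ddof=2)                           # residual standard deviation
r2 = np.corrcoef(conc, area)[0, 1] ** 2

print(f"R^2 = {r2:.4f}, LOD = {3.3 * s / m:.3f} mg/L")

spiked, found = 1.0, 1.07                       # toy spike-recovery check
print(f"recovery = {found / spiked * 100:.1f} %")
```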
Abstract:
The germination period and seedling establishment are among the most important factors for species survival, especially in places where water availability is limited, such as the Caatinga region. Accordingly, the objective of this work was to evaluate the effect of water stress on the germination of Piptadenia moniliformis Benth. seeds. Three seed lots (L1, L2, and L3) were used, corresponding to the production years 2006, 2007, and 2008, respectively. Before the germination test, the seeds were scarified with concentrated sulfuric acid for 30 minutes. To induce water deficit, polyethylene glycol (PEG 6000) was used at the following osmotic potentials: -0.3, -0.6, -0.9, -1.2, and -1.5 MPa, plus water (0 MPa), at temperatures of 25 and 30°C. The characteristics evaluated were: germination percentage, percentage of normal seedlings, germination speed index, and seedling dry mass. Germination of Piptadenia moniliformis Benth. seeds is compromised at water potentials below -0.6 MPa at both 25 and 30°C; water potentials at or below -1.2 MPa inhibit the formation of normal seedlings at both temperatures; and tolerance to water stress simulated with PEG 6000 varies among seed lots and germination temperatures.
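Of the evaluated characteristics, the germination speed index is worth making concrete: it is conventionally computed as Maguire's (1962) sum of daily germination counts divided by the days elapsed. The sketch below assumes invented daily counts and a 50-seed lot.

```python
# Germination speed index (GSI) sketch, Maguire (1962): GSI = sum(n_i / t_i).
daily_counts = {1: 0, 2: 5, 3: 12, 4: 8, 5: 3}   # day -> newly germinated seeds

gsi = sum(n / day for day, n in daily_counts.items())
germination_pct = 100 * sum(daily_counts.values()) / 50   # assumed 50 seeds

print(f"GSI = {gsi:.2f}, germination = {germination_pct:.0f} %")
```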
Abstract:
The textile industry generates a large volume of effluent with high organic loading, whose intense color arises from residual dyes. Due to the environmental implications caused by this category of contaminant, there is a permanent search for methods to remove these compounds from industrial wastewaters. Adsorption is one of the most efficient approaches to such sequestering/remediation, particularly when using inexpensive materials such as agricultural residues (e.g., sugarcane bagasse) and cotton dust waste (CDW) from weaving, in their natural or chemically modified forms. The inclusion of quaternary amino groups (DEAE+) and carboxymethyl groups (CM-) in the CDW cellulosic structure generates an ion exchange capacity in this formerly inert matrix and, consequently, consolidates its ability for electrovalent adsorption of residual textile dyes. The obtained ionic matrices were evaluated for pHpcz and for retention efficiency for various textile dyes under different experimental conditions, such as initial concentration, temperature, and contact time, in order to determine the kinetic and thermodynamic parameters of batch adsorption and thereby understand how the process occurs, as interpreted from the respective isotherms. A change in the pHpcz was observed for CM--CDW (6.07) and DEAE+-CDW (9.66) as compared to native CDW (6.46), confirming changes in the total surface charge. The ionized matrices were effective in removing all evaluated pure or residual textile dyes under the various experimental conditions tested. The kinetic data were best fitted by a pseudo-second-order model, and an intraparticle diffusion model suggested that the process takes place in more than one step. The time required for the system to reach equilibrium varied with the initial concentration of dye, being shorter in diluted solutions. The Langmuir isotherm model best fitted the experimental data. The maximum adsorption capacity varied for each tested dye and is closely related to the adsorbent/adsorbate interaction and the dye's chemical structure. Only a few dyes showed a linear variation of the equilibrium constant (Ka) with the inverse of temperature, which may influence their thermodynamic behavior. Of the dyes that could be evaluated, BR 18:1 and AzL showed features of an endothermic adsorption process (positive ΔH°), while the VmL dye showed characteristics of an exothermic process (negative ΔH°). ΔG° values suggested that adsorption occurred spontaneously, except for the BY 28 dye, and the ΔH° values indicated that adsorption occurred by chemisorption. The 31-51% reduction in the biodegradability of the matrices after dye adsorption means that they must undergo a cleaning process before being discarded or recycled, and the regeneration test indicates that the matrices can be reused up to five times without loss of performance. The DEAE+-CDW matrix was efficient at removing color from a real textile effluent, reaching a 93% decrease in UV-Visible spectral area when applied at a proportion of 15 g of ion-exchange matrix per litre of colored wastewater, even in the presence of 50 g L-1 of mordant salts in the wastewater. Colored-matter removal by the synthesized matrices ranged widely, from 40.27 to 98.65 mg g-1 of ionized matrix, depending on the particular chemical structure of each dye adsorbed.
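The kinetic and isotherm fits named above have standard linearised forms that take only a few lines to sketch. All data below are invented, and the linearisations chosen (t/qt versus t for pseudo-second order; Ce/qe versus Ce for Langmuir) are the common conventions, not necessarily those used in this work.

```python
# Pseudo-second-order kinetics and Langmuir isotherm fits on toy data.
import numpy as np

# Kinetics: uptake qt (mg/g) over time t (min).
t  = np.array([5, 10, 20, 40, 60, 120.0])
qt = np.array([12, 20, 28, 34, 36, 38.0])
slope, intercept = np.polyfit(t, t / qt, 1)     # t/qt = 1/(k2*qe^2) + t/qe
qe = 1 / slope
k2 = 1 / (intercept * qe**2)
print(f"pseudo-second-order: qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg.min)")

# Isotherm: equilibrium concentration Ce (mg/L) vs. uptake qe_i (mg/g).
Ce   = np.array([5, 10, 25, 50, 100.0])
qe_i = np.array([30, 48, 70, 85, 95.0])
s, i = np.polyfit(Ce, Ce / qe_i, 1)             # Ce/qe = Ce/qmax + 1/(KL*qmax)
qmax = 1 / s
KL = s / i
print(f"Langmuir: qmax = {qmax:.1f} mg/g, KL = {KL:.4f} L/mg")
```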