925 results for CNPQ::CIENCIAS EXATAS E DA TERRA::PROBABILIDADE E ESTATISTICA::ESTATISTICA


Relevance:

100.00%

Publisher:

Abstract:

Many studies on the quality of environmental ecosystems with respect to polycyclic aromatic hydrocarbons (PAH) have been carried out routinely due to their ubiquitous presence worldwide and to their potential toxicity after biotransformation. PAH may be introduced into the environment by natural and anthropogenic processes, through direct runoff and discharges and indirect atmospheric deposition. Sources of naturally occurring PAH include natural fires, natural oil seepage and recent biological or diagenetic processes. Anthropogenic sources of PAH, acute or chronic, are the combustion of organic matter (petroleum, coal, wood), waste, and releases/spills of petroleum and derivatives (river runoff, sewage outfalls, maritime transport, pipelines). Besides the coexistence of multiple PAH sources in environmental samples, these compounds are subject to many processes that determine their geochemical fate (physical-chemical transformation, biodegradation and photo-oxidation) and alter their composition. All these facts make the identification of hydrocarbon sources, whether petrogenic, pyrolytic or natural, a challenge. One of the objectives of this study is to establish tools to identify the origin of hydrocarbons in environmental samples. PAH diagnostic ratios and principal component analysis (PCA) of PAH were tested on a critical area: the sediments of Guanabara Bay. Guanabara Bay is located in a complex urban area of Rio de Janeiro with a high anthropogenic influence, being an endpoint of chronic pollution from Greater Rio, and it was the scenario of an acute oil release event in January 2000. Thirty-eight compounds, parental and alkylated PAH, were quantified in 21 sediment samples collected in two surveys: 2000 and 2003. PAH levels varied from 400 to 58,439 ng g-1. Both techniques tested for hydrocarbon origin identification proved applicable, being able to discriminate the PAH sources for the majority of the samples analysed.
The bay sediments were separated into two large clusters: sediments with a clear pattern of petrogenic hydrocarbon introduction (from the intertidal area) and sediments with combustion characteristics (from the subtidal region). Only a minority of the samples did not display a clear petrogenic or pyrolytic contribution. The diagnostic ratios that exhibited a high ability to distinguish combustion- and petroleum-derived PAH inputs for Guanabara Bay sediments were (Phenanthrene+Anthracene)/(Phenanthrene+Anthracene+C1-Phenanthrene); Fluoranthene/(Fluoranthene+Pyrene); and Σ(other 3-6 ring PAH)/Σ(5 alkylated PAH series). The PCA results proved to be a useful tool for PAH source identification in the environment, corroborating the diagnostic indexes. Regarding the temporal evaluation carried out in this study, no significant changes were verified in the class of predominant source of the samples. This result indicates that the hydrocarbons present in Guanabara Bay sediments are mainly related to long-term anthropogenic input and not directly related to acute events such as the oil spill of January 2000. These findings are similar to those from various international estuarine sites. Finally, this work had the complementary objective of evaluating the level of hydrocarbon exposure of the aquatic organisms of Guanabara Bay. It was a preliminary study in which 12 individual biliary PAH metabolites were quantified in four demersal fish representing three different families. The analysed metabolites were 1-hydroxynaphthalene, 2-hydroxynaphthalene, 1-hydroxyphenanthrene, 9-hydroxyphenanthrene, 2-hydroxyphenanthrene, 1-hydroxypyrene, 3-hydroxybiphenyl, 3-hydroxyphenanthrene, 1-hydroxychrysene, 9-hydroxyfluorene, 4-hydroxyphenanthrene and 3-hydroxybenz(a)pyrene. The metabolite concentrations were found to be high, ranging from 13 to 177 µg g-1; however, they were similar to those of worldwide regions under high anthropogenic input.
Besides the metabolites established by the protocol used, it was possible to verify high concentrations of three other compounds not yet reported in the literature. They were related to the pyrolytic PAH contribution to the Guanabara Bay aquatic biota: 1-hydroxypyrene and 3-hydroxybenz(a)pyrene isomers.
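The diagnostic-ratio and PCA approach described above can be sketched in a few lines. The concentrations, the Fl/(Fl+Py) cut-offs of 0.4 and 0.5, and the sample labels below are illustrative assumptions, not the thesis's data.

```python
import numpy as np

# Hypothetical PAH concentrations (ng/g) for three sediment samples; columns:
# fluoranthene, pyrene, phenanthrene, anthracene, C1-phenanthrene.
samples = np.array([
    [ 60.0, 140.0, 200.0, 15.0, 400.0],
    [900.0, 700.0, 300.0, 90.0, 120.0],
    [430.0, 470.0, 250.0, 40.0, 260.0],
])

fl, py = samples[:, 0], samples[:, 1]
ratio = fl / (fl + py)                    # Fl/(Fl+Py) diagnostic ratio

# Commonly cited cut-offs: < 0.4 petroleum-derived, > 0.5 combustion-derived.
labels = np.where(ratio < 0.4, "petrogenic",
                  np.where(ratio > 0.5, "pyrolytic", "mixed"))

# PCA on standardized concentrations via SVD; sample scores separate sources.
X = (samples - samples.mean(axis=0)) / samples.std(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                            # projections onto principal components
print(list(labels))
```

In practice the score plot of the first two components is inspected for clusters of petrogenic- and pyrolytic-dominated samples, which is how PCA corroborates the ratio-based classification.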

Relevance:

100.00%

Publisher:

Abstract:

Dispersions composed of polyelectrolyte complexes based on chitosan and poly(methacrylic acid), PMAA, were obtained by the dropping method and by template polymerization. The effect of the molecular weight of PMAA and of the ionic strength on the formation of chitosan/poly(methacrylic acid), CS/PMAA, complexes was evaluated using the dropping method. The increase in the molecular weight of PMAA inhibited the formation of insoluble complexes, while the increase in ionic strength first favored complex formation and then inhibited it at higher concentrations. The polyelectrolyte complexation was strongly dependent on macromolecular dimensions, both in terms of molecular weight and of coil expansion/contraction driven by the polyelectrolyte effect. The particles resulting from the dropping method and from template polymerization were characterized as having regions with different charge densities, with chitosan predominating in the core and poly(methacrylic acid) at the surface, the particles being negatively charged as a consequence. Albumin was adsorbed on template-polymerized CS/PMAA complexes (after crosslinking with glutaraldehyde), and the pH was controlled in order to obtain two conditions: (i) adsorption of positively charged albumin, and (ii) adsorption of albumin at its isoelectric point. Adsorption isotherms and zeta potential measurements showed that albumin adsorption was controlled by hydrogen bonding/van der Waals interactions and that brush-like structures may enhance the adsorption of albumin on these particles.
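As a rough illustration of how adsorption isotherm data of this kind can be analysed, the sketch below fits a Langmuir isotherm by linearization. The model choice and every number are assumptions made for the example, not the study's measurements.

```python
# Synthetic adsorption data generated from a Langmuir isotherm,
# q = qmax*K*C/(1 + K*C), then re-fitted via the linearization
# C/q = C/qmax + 1/(K*qmax). All numbers are assumptions for the example.
qmax_true, K_true = 95.0, 0.8            # mg/g and L/mg (hypothetical)
C = [0.5, 1.0, 2.0, 4.0, 8.0]            # equilibrium concentrations (mg/L)
q = [qmax_true * K_true * c / (1 + K_true * c) for c in C]

# Ordinary least squares through the points (C, C/q).
y = [c / qi for c, qi in zip(C, q)]
n = len(C)
sx, sy = sum(C), sum(y)
sxx = sum(c * c for c in C)
sxy = sum(c * yi for c, yi in zip(C, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
qmax, K = 1.0 / slope, slope / intercept  # recover the isotherm parameters
print(round(qmax, 1), round(K, 2))
```

With real data, the quality of the linear fit (and of alternatives such as Freundlich) is what supports mechanistic claims about the adsorption behaviour.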

Relevance:

100.00%

Publisher:

Abstract:

The production of oil and gas is usually accompanied by the production of water, known as produced water. Studies were conducted on platforms operated by Petrobras that discharged produced water into the Atlantic Ocean from 1996 to 2006, in the following basins: Santos (Brazilian south region), Campos (Brazilian southeast region) and Ceará (Brazilian northeast region). This study encompasses chemical composition, toxicological effects, discharge volumes and produced water behaviour after release into the ocean, including dispersion plume modelling and monitoring data of the marine environment. The median concentrations for a set of 50 samples were: ammonia (70 mg L-1), boron (1.3 mg L-1), iron (7.4 mg L-1), BTEX (4.6 mg L-1), PAH (0.53 mg L-1), TPH (28 mg L-1), phenols (1.3 mg L-1) and radioisotopes (0.15 Bq L-1 for 226Ra and 0.09 Bq L-1 for 228Ra). The concentrations of the organic and inorganic parameters observed for the Brazilian platforms were similar to the international reference data for produced water in the North Sea and in other regions of the world. Significant differences were found in the concentrations of the following parameters: BTEX (p<0.0001), phenols (p=0.0212), boron (p<0.0001), iron (p<0.0001) and toxicological response in the sea urchin Lytechinus variegatus (p<0.0001) when considering two distinct groups, platforms from the southeast region and from the northeast region (PCR-1). Significant differences were not observed for the other parameters. In platforms with large gas production, the concentrations of monoaromatics (BTEX from 15.8 to 21.6 mg L-1) and phenols (from 2 to 83 mg L-1) were higher than in oil platforms (median concentrations of 4.6 mg L-1 for BTEX, n=53, and 1.3 mg L-1 for phenols, n=46).
A study was also conducted on the influence of the dispersion plumes of produced water in the vicinity of six oil and gas production platforms (P-26, PPG-1, PCR-1, P-32, SS-06), and in a hypothetical critical scenario using the chemical characteristics of each effluent. Through this study, using the CORMIX and CHEMMAP models to simulate the dispersion plumes of the produced water discharges, it was possible to estimate the dilution achieved in the ocean after those discharges. The modelling of produced water dispersion plumes in the near field showed dilutions of 700 to 900 times within the first 30-40 meters from the discharge point of platform PCR-1; 100 times for platform P-32, at 30 meters; 150 times for platform P-26, at 40 meters; 100 times for platform PPG-1, at 130 meters; 280 to 350 times for platform SS-06, at 130 meters; and 100 times for the hypothetical critical scenario, at 130 meters. The dilutions continue in the far field and, with the results of the simulations, it was possible to verify that all parameters presented concentrations below the maximum values established by Brazilian legislation for seawater (CONAMA 357/05 - Class 1) within 500 meters of the discharge point. These results were in agreement with the field measurements. Although the Brazilian produced water generally presented toxicological effects on marine organisms, it was verified that dilutions of 100 times were sufficient to avoid toxicological responses. Field monitoring data of the seawater around the Pargo, Pampo and PCR-1 platforms did not show toxicity in the seawater close to these platforms.
The results of environmental monitoring of seawater and sediments showed that no alterations in environmental quality were detected in areas under the direct influence of oil production activities in the Campos and Ceará basins, in agreement with the results obtained in the dispersion plume modelling for the produced water discharge.
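The group comparisons reported above (e.g. BTEX, p<0.0001) can be illustrated with a simple two-sample permutation test. The concentration values below are invented for the sketch and only loosely echo the reported medians.

```python
import random

# Invented BTEX concentrations (mg/L) for two platform groups; values loosely
# echo the reported medians but are NOT the study's data.
southeast = [3.1, 4.6, 5.0, 4.2, 6.1, 3.8, 5.5, 4.9]
northeast = [15.8, 18.2, 21.6, 17.0, 19.4, 16.5]

def perm_test(a, b, n_iter=5000, seed=42):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

p = perm_test(southeast, northeast)
print(p < 0.01)  # the two groups differ significantly
```

A rank-based test such as Mann-Whitney U is the more usual choice for skewed concentration data; the permutation test is shown here only because it is self-contained.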

Relevance:

100.00%

Publisher:

Abstract:

Bifunctional catalysts based on zirconium oxide modified by tungsten oxide (W = 10, 15 and 20%) or by molybdenum oxide (Mo = 10, 15 and 20%) and containing platinum (Pt = 1%) were prepared by the polymeric precursor method. For comparison, tungsten-based catalysts were also prepared by the impregnation method. After calcination at 600, 700 and 800 °C, the catalysts were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, thermogravimetric and differential thermal analysis, nitrogen adsorption and scanning electron microscopy. The metal reduction profile was determined by temperature-programmed reduction. The synthesized catalysts were tested in n-heptane isomerization. X-ray diffractograms of the Pt/WOx-ZrO2 and Pt/MoOx-ZrO2 catalysts revealed the presence of tetragonal ZrO2 and metallic platinum phases in all calcined samples. Diffraction peaks due to WO3 and monoclinic ZrO2 were also observed in some samples of the Pt/WOx-ZrO2 catalysts. In the Pt/MoOx-ZrO2 catalysts, diffraction peaks due to monoclinic ZrO2 and the Zr(MoO4)2 oxide were also observed. The phases present in the Pt/WOx-ZrO2 and Pt/MoOx-ZrO2 catalysts varied according to the W or Mo loading and to the calcination temperature. The infrared spectra showed absorption bands due to O-W-O and W=O bonds in the Pt/WOx-ZrO2 catalysts and to O-Mo-O, Mo=O and Mo-O bonds in the Pt/MoOx-ZrO2 catalysts. The specific surface area varied from 30-160 m2 g-1 for the Pt/WOx-ZrO2 catalysts and from 10-120 m2 g-1 for the Pt/MoOx-ZrO2 catalysts. The metal loading (W or Mo) and the calcination temperature directly influence the specific surface area of the samples. The reduction profile of the Pt/WOx-ZrO2 catalysts showed two peaks at lower temperatures, which are attributed to platinum reduction. The reduction of WOx species was evidenced by two reduction peaks at high temperatures.
In the case of the Pt/MoOx-ZrO2 catalysts, the reduction profile showed three reduction events, which are attributed to the reduction of MoOx species deposited on the support; in some samples, one of the peaks is related to the reduction of the Zr(MoO4)2 oxide. The Pt/WOx-ZrO2 catalysts were active in n-heptane isomerization, with high selectivity to 3-methylhexane, 2,3-dimethylpentane and 2-methylhexane, among other branched hydrocarbons. The Pt/MoOx-ZrO2 catalysts showed practically no activity for n-heptane isomerization, generating mainly products originating from catalytic cracking.

Relevance:

100.00%

Publisher:

Abstract:

One of the current challenges of Ubiquitous Computing is the development of complex applications, which are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware, and it is necessary to know the peculiarities of each of them, mainly the communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications via the definition of semantic workflows containing the abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance containing activities automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform supports execution adaptation in case of service failures, user mobility and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications for the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, comparing it with the benefits provided, as well as the efficiency of OpenCOPI's selection and adaptation mechanisms.
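The automatic service selection step can be sketched as a utility ranking over candidate services. The service names, QoS attributes and weights below are hypothetical; OpenCOPI's actual selection mechanism is richer than this.

```python
# Minimal sketch of QoS-aware service selection over hypothetical candidates.
services = [
    {"name": "WeatherA", "response_ms": 120, "availability": 0.99},
    {"name": "WeatherB", "response_ms": 60,  "availability": 0.95},
    {"name": "WeatherC", "response_ms": 300, "availability": 0.999},
]

def utility(s, w_time=0.5, w_avail=0.5):
    """Weighted utility: faster and more available services score higher."""
    # Normalize response time against the slowest candidate (lower is better).
    worst = max(x["response_ms"] for x in services)
    return w_time * (1 - s["response_ms"] / worst) + w_avail * s["availability"]

best = max(services, key=utility)
print(best["name"])
```

Adaptation on failure would re-run the same ranking with the failed candidate removed, which is one simple way to realize the substitution behaviour described above.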

Relevance:

100.00%

Publisher:

Abstract:

The Car Rental Salesman Problem (CaRS) is a variant of the classical Traveling Salesman Problem, not previously described in the literature, in which the tour of visits can be decomposed into contiguous paths that may be performed in different rental cars. The aim is to determine the Hamiltonian cycle that results in the minimum final cost, considering the cost of the route added to the cost of an expected penalty paid for each exchange of vehicles on the route. This penalty is due to the return of the dropped car to its base. This work introduces the general problem, illustrates some examples and presents some of its associated variants. An overview of the complexity of this combinatorial problem is also outlined to justify its classification in the NP-hard class. A database of instances for the problem is presented, along with the methodology of its construction. The problem is also the subject of an experimental algorithmic study based on the implementation of six metaheuristic solutions, representing adaptations of the best state-of-the-art heuristic programming. New neighborhoods, construction procedures, search operators, evolutionary agents and multi-pheromone cooperation are created for this problem. Furthermore, computational experiments and comparative performance tests are conducted on a sample of 60 instances of the created database, aiming to offer an algorithm with an efficient solution for this problem. The results illustrate the best performance reached by the transgenetic algorithm in all instances of the dataset.
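A toy version of CaRS can be solved exhaustively to make the cost structure concrete. The instance below, with a flat exchange penalty standing in for the car-return cost (real instances price the return by car and drop-off city), is invented for illustration.

```python
from itertools import permutations, product

# Toy CaRS instance: 4 cities (0 = base) and 2 rental cars, each with its own
# symmetric distance matrix; a flat penalty is charged per car exchange.
INF = float("inf")
dist = {
    0: [[0, 3, 9, 7], [3, 0, 4, 8], [9, 4, 0, 3], [7, 8, 3, 0]],  # car 0
    1: [[0, 6, 2, 9], [6, 0, 7, 2], [2, 7, 0, 5], [9, 2, 5, 0]],  # car 1
}
SWAP_PENALTY = 2

def best_tour(n=4, cars=(0, 1)):
    """Enumerate every Hamiltonian cycle and every car-per-leg assignment."""
    best_cost, best_plan = INF, None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        legs = list(zip(tour, tour[1:]))
        for assign in product(cars, repeat=len(legs)):
            cost = sum(dist[c][a][b] for c, (a, b) in zip(assign, legs))
            cost += SWAP_PENALTY * sum(1 for x, y in zip(assign, assign[1:]) if x != y)
            if cost < best_cost:
                best_cost, best_plan = cost, (tour, assign)
    return best_cost, best_plan

cost, (tour, assign) = best_tour()
print(cost, tour, assign)
```

Even this 4-city instance has 3! tours times 2^4 car assignments; the combinatorial blow-up of the joint routing/assignment decision is what motivates the metaheuristics studied in the thesis.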

Relevance:

100.00%

Publisher:

Abstract:

Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and reduce the computational cost of the model. This is done through the removal of redundant, irrelevant or noisy data, finding a representative subset of data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the verification that this type of model performs better than individual models when the base classifiers are diverse, a new field of application for feature selection research emerges. In this new field, the goal is to find diverse subsets of features for the construction of the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of ensembles by selecting subsets of features using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
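The idea of selecting relevant, non-redundant feature subsets with a bio-inspired metaheuristic and a filter criterion can be sketched with a tiny genetic algorithm. The relevance and redundancy scores below are hypothetical filter values, not computed from real data, and the GA is a generic stand-in for the metaheuristics in the thesis.

```python
import random

# Hypothetical precomputed filter scores: relevance of each feature to the class
# and pairwise redundancy between features (invented values, not real data).
relevance = [0.9, 0.1, 0.8, 0.2, 0.7, 0.15]
redundancy = {(0, 2): 0.85, (0, 4): 0.1, (2, 4): 0.2}

def fitness(subset):
    """CFS-style merit: prefer relevant features with little mutual redundancy."""
    if not subset:
        return 0.0
    rel = sum(relevance[i] for i in subset) / len(subset)
    pairs = [(a, b) for a in subset for b in subset if a < b]
    red = sum(redundancy.get(p, 0.0) for p in pairs) / max(len(pairs), 1)
    return rel - red

def ga(n=6, pop_size=16, gens=30, seed=7):
    """Tiny elitist genetic algorithm over feature bit-masks."""
    rng = random.Random(seed)
    subs = lambda ind: [i for i, bit in enumerate(ind) if bit]
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    pop[0] = [True] * n                       # baseline: the full feature set
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(subs(ind)), reverse=True)
        elite = pop[:pop_size // 2]           # elitism: keep the top half
        children = []
        for _ in range(pop_size - len(elite)):
            p1, p2 = rng.sample(elite, 2)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            j = rng.randrange(n)
            child[j] = not child[j]           # bit-flip mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda ind: fitness(subs(ind)))
    return subs(best), fitness(subs(best))

best_subset, best_score = ga()
print(best_subset, round(best_score, 3))
```

To build a diverse ensemble, this search would be run once per base classifier with a diversity term penalizing overlap with the subsets already chosen.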

Relevance:

100.00%

Publisher:

Abstract:

The next generation of computers is expected to be based on architectures with multiple processors and/or multicore processors. In this context there are challenges related to interconnection, operating frequency, on-chip area, power dissipation, performance and programmability. The interconnection and communication mechanism considered ideal for this type of architecture is the network-on-chip, due to its scalability, reusability and intrinsic parallelism. Communication in networks-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. The transmission of packets proceeds as in a pipeline between the routers in the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. Based on this fact, it is proposed to transform the entire communication infrastructure of the network-on-chip, with its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, the packets are formed by instructions and data representing the applications, which are executed by the routers as the packets are transmitted, exploiting the pipeline and the parallel communication transmissions. In contrast, traditional processors are not used; there are only simple cores that control the access to memory. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation situations. This architecture provides mechanisms for input and output, interrupts and operating system support.
As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing the configuration of various parameters and the collection of several results to evaluate it.
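The notion of a packet being executed while it is routed can be caricatured in a few lines: each router consumes one instruction from the packet as it passes. The instruction set, router names and packet format below are illustrative and do not reproduce the actual IPNoSys design.

```python
# Toy sketch of the IPNoSys idea: a packet carries data and instructions and is
# progressively executed by the routers it traverses (illustrative, not the
# actual IPNoSys ISA or routing algorithm).
packet = {"data": [4, 5, 6], "program": [("ADD",), ("MUL",)]}  # computes (4+5)*6

def router_step(packet):
    """Each router consumes one instruction, operating on the packet's data."""
    if not packet["program"]:
        return packet                     # nothing left: just forward
    op, = packet["program"][0]
    a, b, *rest = packet["data"]
    if op == "ADD":
        packet["data"] = [a + b] + rest
    elif op == "MUL":
        packet["data"] = [a * b] + rest
    packet["program"] = packet["program"][1:]
    return packet

path = ["R00", "R01", "R11"]              # hypothetical source-to-destination route
for router in path:
    packet = router_step(packet)

print(packet["data"])
```

In the real architecture many such packets flow through the mesh at once, so the per-hop execution overlaps with communication, which is the source of the parallelism the thesis exploits.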

Relevance:

100.00%

Publisher:

Abstract:

Clustering data is a very important task in data mining, image processing and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers in the FCM procedure, called ckMeans, which can also be applied to some FCM variants, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, or even to improve the number of correct classifications in some cases. In addition, an algorithm based on ckMeans was developed to manipulate interval data, considering interval membership degrees. This algorithm allows the representation of data without converting interval data into punctual data, as happens with other extensions of FCM that deal with interval data. In order to validate the proposed methodologies, a comparison was made between the ckMeans, K-Means and FCM clustering algorithms (since the center calculation proposed here is similar to that of K-Means), considering three different distances and several well-known databases. The results of Interval ckMeans were compared with those of other clustering algorithms when applied to an interval database with the minimum and maximum monthly temperatures for a given year, referring to 37 cities distributed across the continents.
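A minimal fuzzy c-means loop helps locate where ckMeans intervenes: only the center-update line changes. The sketch below implements the standard FCM updates (the exact ckMeans formula is given in the thesis and is not reproduced here).

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=30, seed=0):
    """Standard FCM; the marked center update is the step ckMeans replaces."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)     # memberships of each point sum to 1
    for _ in range(iters):
        W = U ** m                        # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # <-- swappable center update
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return centers, U

# Two well-separated synthetic blobs.
X = np.array([[0.0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])
centers, U = fcm(X)
print(np.round(centers[np.argsort(centers[:, 0])], 2))
```

Because the rest of the loop is untouched, swapping the center computation also carries over directly to FCM variants that only change the distance `d`, which is the setting the thesis explores.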

Relevance:

100.00%

Publisher:

Abstract:

Model-oriented strategies have been used to facilitate product customization in the software product line (SPL) context and to generate the source code of the derived products through variability management. Most of these strategies use a UML (Unified Modeling Language)-based model specification. Despite its wide application, UML-based model specification has some limitations: it is essentially graphical, it is deficient in precisely describing the semantic representation of the system architecture, and it generates large models, thus hampering the visualization and comprehension of the system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process with systematic activities that enable customized source code to be automatically generated from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD) and SPL, enabling the explicit modeling as well as the modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga Digital TV middleware as a case study. In order to evaluate the efficiency, applicability, expressiveness and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out to compare the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process of the GingaForAll UML-based strategy. Both tools were used to configure the products of the Ginga SPL and to generate the product source code.
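The model-to-text generation step can be caricatured as template selection driven by a product configuration. The feature names and templates below are invented, and LightPL-ACME models are far richer than this dictionary.

```python
# Toy sketch of variability resolution in model-driven product derivation:
# a product configuration enables features, and a model-to-text step emits
# code only for the enabled ones (all names invented for illustration).
features = {"base": True, "remote_control": True, "hd_video": False}

templates = {
    "base": "class Player:\n    def play(self): ...",
    "remote_control": "class RemoteControl:\n    def pair(self): ...",
    "hd_video": "class HDDecoder:\n    def decode(self): ...",
}

def derive_product(config, templates):
    """Model-to-text step: concatenate the templates of the selected features."""
    return "\n\n".join(templates[f] for f, on in config.items() if on)

source = derive_product(features, templates)
print(source)
```

In a real SPL toolchain the configuration is validated against the feature model's constraints before generation, so invalid products are rejected rather than silently derived.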

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents πSOD-M (Policy-based Service Oriented Development Methodology), a methodology for modeling reliable service-based applications using policies. It proposes a model-driven method with: (i) a set of meta-models for representing non-functional constraints associated with service-based applications, from a use case model to a service composition model; (ii) a platform providing guidelines for expressing the composition and the policies; (iii) model-to-model and model-to-text transformation rules for semi-automating the implementation of reliable service-based applications; and (iv) an environment that implements these meta-models and rules and enables the application of πSOD-M. This thesis also presents a classification and nomenclature of non-functional requirements for the development of service-oriented applications. Our approach is intended to add value to the development of service-oriented applications that have quality requirements. This work uses concepts from service-oriented development, non-functional requirements design and model-driven development to propose a solution that minimizes the problem of modeling reliable services. Some examples are developed as proofs of concept.
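Attaching a non-functional policy to a service invocation can be sketched as a wrapper. The retry-on-failure policy below is a simple stand-in for πSOD-M's richer policy models, and all names (including the assumed failure exception) are illustrative.

```python
# Sketch: a reliability policy (retry on transient failure) wrapped around a
# service call; names and the RuntimeError failure type are assumptions.
def with_retry_policy(call, max_attempts=3):
    """Wrap a service invocation with a retry-on-failure reliability policy."""
    def wrapped(*args):
        last = None
        for _ in range(max_attempts):
            try:
                return call(*args)
            except RuntimeError as exc:   # assumed transient-failure type
                last = exc
        raise last                        # policy exhausted: propagate
    return wrapped

attempts = {"n": 0}
def flaky_service(x):
    """Hypothetical service that fails twice before succeeding."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return x * 2

reliable = with_retry_policy(flaky_service)
print(reliable(21))  # succeeds on the third attempt
```

In the methodology itself such behaviour is not hand-coded: it is declared in the policy models and produced by the model-to-text transformations.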

Relevance:

100.00%

Publisher:

Abstract:

In this dissertation we present some generalizations of the concept of distance using more general value spaces, such as fuzzy metrics, probabilistic metrics and generalized metrics. We show how such generalizations may be useful, given the possibility that the distance between two objects may carry more information about them than a distance represented by a single real number. We also propose another generalization of distance, which encompasses the notion of interval metric and generates a topology in a natural way. Several properties of this generalization are investigated, as well as its links with other existing generalizations.
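One standard way to formalize such a value-space generalization is a metric taking values in an ordered monoid. The axioms below are a common textbook formulation and may differ in detail from the definition proposed in the dissertation.

```latex
% A V-valued generalized metric (common formulation; details may differ
% from the dissertation's definition).
% Let $(V, \preceq, \oplus, 0_V)$ be a partially ordered monoid and let
% $d \colon X \times X \to V$. Then $d$ is a $V$-metric if, for all $x,y,z \in X$:
\begin{align*}
  &\text{(i)}   & 0_V &\preceq d(x,y), \qquad d(x,y) = 0_V \iff x = y,\\
  &\text{(ii)}  & d(x,y) &= d(y,x),\\
  &\text{(iii)} & d(x,z) &\preceq d(x,y) \oplus d(y,z).
\end{align*}
```

Taking $V$ to be the set of closed real intervals with interval addition as $\oplus$ and a suitable interval order as $\preceq$ recovers an interval metric of the kind discussed above, while the one-dimensional case $V = \mathbb{R}_{\ge 0}$ gives back the ordinary metric axioms.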

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

100.00%

Publisher:

Abstract:

The development of interactive systems involves several professionals, and the integration between them normally relies on common artifacts, such as models, that drive the development process. In the model-driven development approach, the interaction model is an artifact that includes most of the aspects related to what the user can do, and how, while interacting with the system. Furthermore, the interaction model may be used to identify usability problems at design time. The central problem addressed by this thesis is therefore twofold. The first concern is interaction modeling, from a perspective that helps the designer make explicit to the developer, who will implement the interface, the aspects related to the interaction process. The second is the early identification of usability problems, which aims to reduce the final costs of the application. To achieve these goals, this work presents (i) the ALaDIM language, which aims to help the designer in the conception, representation and validation of interactive message models; (ii) the ALaDIM editor, built using the EMF (Eclipse Modeling Framework) and its technologies standardized by the OMG (Object Management Group); and (iii) the ALaDIM inspection method, which allows the early identification of usability problems using ALaDIM models. The ALaDIM language and editor were respectively specified and implemented using the OMG standards and can be used in MDA (Model Driven Architecture) activities. Beyond that, we evaluated both the ALaDIM language and editor using a CDN (Cognitive Dimensions of Notations) analysis. Finally, this work reports an experiment that validated the ALaDIM inspection method.

Relevance:

100.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design. In general, AQMST arises as a more suitable model for real problems. Although linear and quadratic costs are added in the literature, in real applications they may be conflicting, in which case it may be interesting to consider the costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers regarding these problems under a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As a theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II and a hybridization of the two last-mentioned proposals called NSTA. The proposed algorithms are compared to each other through performance analysis on computational experiments with instances adapted from the QMST literature. With regard to the exact algorithms, the analysis considers, in particular, the execution time.
In the case of the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated, with quality indicators used to assess this information. Appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted, as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results, with the transgenetic algorithm ranking second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms took the last positions.
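The cost structure of the (adjacent-only) quadratic spanning tree can be made concrete with a small evaluation routine: linear edge costs plus interaction costs for pairs of edges sharing a vertex. The instance below is invented for illustration.

```python
# Toy AQMST cost evaluation: linear edge costs plus quadratic interaction
# costs for adjacent edge pairs (all values invented for illustration).
edges = [(0, 1), (1, 2), (1, 3)]                  # a spanning tree on 4 nodes
linear = {(0, 1): 4, (1, 2): 3, (1, 3): 5}
quad = {((0, 1), (1, 2)): 2, ((0, 1), (1, 3)): 1, ((1, 2), (1, 3)): 6}

def tree_cost(tree):
    """Linear cost of the edges plus interaction cost of adjacent edge pairs."""
    cost = sum(linear[e] for e in tree)
    for i, e in enumerate(tree):
        for f in tree[i + 1:]:
            if set(e) & set(f):                   # adjacent: edges share a vertex
                cost += quad.get((e, f), quad.get((f, e), 0))
    return cost

print(tree_cost(edges))  # 4+3+5 linear, plus 2+1+6 quadratic = 21
```

In the biobjective formulation studied in the thesis, the two sums are kept as separate objectives instead of being added, and algorithms return an approximation of the Pareto set rather than a single tree.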