11 results for falling initial distributions of deuterio-isomers
at Instituto Politécnico do Porto, Portugal
Abstract:
A Box–Behnken factorial design coupled with response surface methodology was used to evaluate the effects of temperature, pH and initial concentration on the Cu(II) sorption process onto the marine macroalga Ascophyllum nodosum. The effect of the operating variables on metal uptake capacity was studied in a batch system, and a mathematical model showing the influence of each variable and their interactions was obtained. Study ranges were 10–40 °C for temperature, 3.0–5.0 for pH and 50–150 mg L−1 for initial Cu(II) concentration. Within these ranges, the biosorption capacity depends only slightly on temperature but increases markedly with pH and initial Cu(II) concentration. The uptake capacities predicted by the model are in good agreement with the experimental values. The maximum biosorption capacity of Cu(II) by A. nodosum is 70 mg g−1, obtained at temperature = 40 °C, pH = 5.0 and initial Cu(II) concentration = 150 mg L−1.
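As an illustration of the design-and-model workflow described above, the Python sketch below fits a full quadratic response-surface model to the 15 runs of a three-factor Box–Behnken design. All responses and coefficients are synthetic placeholders, not the paper's data.

```python
# Sketch: fitting a second-order response-surface model q = f(T, pH, C0)
# to Box-Behnken design points. All numbers are hypothetical illustrations.
import numpy as np

# Box-Behnken coded levels (-1, 0, +1) for T, pH, C0; three centre points.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)

def design_matrix(X):
    """Full quadratic model: intercept, linear, interaction, square terms."""
    t, p, c = X.T
    return np.column_stack([np.ones(len(X)), t, p, c,
                            t * p, t * c, p * c, t**2, p**2, c**2])

# Hypothetical uptake data (mg/g): weak T effect, strong pH and C0 effects,
# mimicking the trends reported in the abstract.
rng = np.random.default_rng(0)
coef_true = np.array([45, 1.5, 8.0, 10.0, 0.5, 0.3, 2.0, -0.5, -3.0, -2.0])
q = design_matrix(X) @ coef_true + rng.normal(0, 1.0, len(X))

# Least-squares fit of the response-surface coefficients.
beta, *_ = np.linalg.lstsq(design_matrix(X), q, rcond=None)
print(np.round(beta, 2))  # fitted model coefficients
```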
Abstract:
Zero-valent iron (ZVI) has been extensively used as a reactive medium for the reduction of Cr(VI) to Cr(III) in permeable reactive barriers. The kinetic rate depends strongly on the surface oxidation of the iron particles used, and preliminary washing of the ZVI increases the rate. The reaction has previously been modelled using pseudo-first-order kinetics, which is inappropriate for a heterogeneous reaction. We assumed a shrinking-particle-type model in which the kinetic rate is proportional to the available iron surface area, to the initial volume of solution and to the chromium concentration raised to a power α, the order of the chemical reaction occurring at the surface. We assumed α = 2/3 by analogy with shrinking-particle models with spherical symmetry. Kinetic studies were performed to evaluate the suitability of this approach. The influence of the following parameters was studied experimentally: initial available surface area, chromium concentration, temperature and pH. The assumed reaction order was confirmed. In addition, the rate constant was calculated from data obtained under different operating conditions. Digital pictures of the iron balls were taken periodically, and image processing allowed the time evolution of their size distribution to be established.
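A minimal sketch of this kinetic model, assuming hypothetical parameter values: the Cr(VI) balance dC/dt = −(k A(t)/V) C^(2/3) is integrated together with the shrinkage of the spherical iron particles.

```python
# Sketch: shrinking-particle kinetics for Cr(VI) reduction on ZVI spheres.
# Rate is proportional to the available iron surface area and to C**(2/3);
# the radius shrinks as iron is consumed. All parameter values are
# hypothetical placeholders, not fitted constants from the paper.
import numpy as np
from scipy.integrate import solve_ivp

k = 1e-6      # surface rate constant (hypothetical units)
n = 100       # number of iron spheres
V = 1.0e-3    # solution volume, m^3
b = 1.0e-7    # radius-shrinkage factor (stoichiometry * M_Fe / rho_Fe, assumed)

def rhs(t, y):
    C, r = y                       # Cr(VI) concentration, particle radius
    if C <= 0 or r <= 0:
        return [0.0, 0.0]          # reaction stops when either is exhausted
    A = n * 4 * np.pi * r**2       # available iron surface area
    rate = k * A * C**(2 / 3)      # alpha = 2/3, heterogeneous surface reaction
    return [-rate / V,             # solution-phase mass balance
            -b * C**(2 / 3)]       # particle shrinkage from iron consumption

sol = solve_ivp(rhs, (0, 3600), [1.0, 1e-3])
print(sol.y[0, -1], sol.y[1, -1])  # final concentration and radius
```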
Abstract:
Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is used to generate visualization maps, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps, so the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
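For the first method, the Gutenberg-Richter law log10 N(≥M) = a − bM is fitted per region; a standard way to estimate the b-value is Aki's maximum-likelihood formula, sketched below on synthetic magnitudes (the completeness magnitude is an assumed input, not a value from the paper).

```python
# Sketch: Aki's maximum-likelihood estimate of the Gutenberg-Richter
# b-value for one region, log10 N(>=M) = a - b*M. Magnitudes are synthetic.
import numpy as np

rng = np.random.default_rng(1)
Mc = 4.5        # completeness magnitude (assumed)
b_true = 1.0
# Under G-R, magnitudes above Mc follow an exponential distribution.
mags = Mc + rng.exponential(np.log10(np.e) / b_true, size=5000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)  # Aki (1965) estimator
print(round(b_hat, 3))                       # should recover b ~ 1.0
```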
Abstract:
Catastrophic events, such as wars and terrorist attacks, tornadoes and hurricanes, earthquakes, tsunamis, floods and landslides, are always accompanied by a large number of casualties. The size distributions of these casualties have separately been shown to follow approximate power law (PL) distributions. In this paper, we analyze the statistical distributions of the number of victims of catastrophic phenomena, in particular terrorism, and find double PL behavior, meaning that the data sets are better approximated by two PLs than by a single one. We plot the PL parameters corresponding to several events and observe an interesting pattern in the charts, where the lines that connect each pair of points defining the double PLs are almost parallel to each other. A complementary analysis is performed by computing the entropy of the data. The results reveal relationships hidden in the data that may trigger a future comprehensive explanation of this type of phenomena.
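As a sketch of the double-PL fitting idea (the paper's exact estimation procedure is not reproduced here), the code below splits the empirical complementary cumulative distribution at an assumed crossover size and fits a straight line to each branch on log-log axes; the data are synthetic.

```python
# Sketch: fitting a "double power law" to a casualty-size tail by splitting
# the log-log CCDF at a crossover point and fitting each branch separately.
# All data and the crossover location are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic sizes with a heavier tail beyond the crossover.
x = np.sort(np.concatenate([
    (rng.pareto(2.5, 4000) + 1),        # branch 1, tail exponent ~ 2.5
    50 * (rng.pareto(1.2, 400) + 1),    # branch 2, tail exponent ~ 1.2
]))
ccdf = 1.0 - np.arange(len(x)) / len(x)  # empirical P(X >= x)

split = 50.0                             # assumed crossover size
for lo, hi in [(x.min(), split), (split, x.max())]:
    m = (x >= lo) & (x < hi)
    slope, _ = np.polyfit(np.log10(x[m]), np.log10(ccdf[m]), 1)
    print(f"branch [{lo:.0f}, {hi:.0f}): PL exponent ~ {-slope:.2f}")
```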
Abstract:
Gallinaceous feathers are an abundant solid waste from the poultry-processing industry and pose disposal problems. A kinetic study of the adsorption of the wool reactive dye Yellow Lanasol 4G (CI Reactive Yellow 39) on gallinaceous (Gallus gallus, Cobb 500) feathers was carried out. The main research goals of this work were to evaluate the viability of using this waste as an adsorbent and to study the kinetics of the adsorption process, using a synthetic effluent. The feathers were characterized by scanning electron microscopy, mercury porosimetry and the B.E.T. method. The study of several factors (stirring, particle size, initial dye concentration and temperature) showed their influence on the adsorption process. An adapted version of Schmuckler and Goldstein's unreacted core model fitted the experimental data. The best fit was obtained when the rate-limiting step was diffusion through the reacted layer, as expected considering the size of the dyestuff molecules. A comparison with the granular activated carbon (GAC) Sutcliffe GAC 10-30 indicates that, in spite of the high adsorption capacities shown by the feathers, the GAC presented higher values: 150 and 219 mg g−1, respectively, for an initial concentration of 500 mg L−1. These results might open future perspectives both for the valorization of feathers and for the economical treatment of textile wastewaters.
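For reference, the unreacted core expression for diffusion through the reacted layer in a sphere is g(X) = 1 − 3(1 − X)^(2/3) + 2(1 − X) = kt; a linear plot of g(X) against time supports that rate-limiting step. The sketch below fits k by linear regression to hypothetical conversion data (not the measured values of this study).

```python
# Sketch: testing the unreacted-core model with diffusion through the
# reacted layer as the rate-limiting step: 1 - 3(1-X)**(2/3) + 2(1-X) = k*t.
# A linear g(X) vs t relationship supports the mechanism. Data are hypothetical.
import numpy as np

t = np.array([5, 10, 20, 40, 60, 90], dtype=float)  # time, min (hypothetical)
X = np.array([0.18, 0.30, 0.46, 0.64, 0.74, 0.84])  # dye conversion

g = 1 - 3 * (1 - X)**(2 / 3) + 2 * (1 - X)          # diffusion-control form
k, intercept = np.polyfit(t, g, 1)                  # slope gives rate constant
r2 = np.corrcoef(t, g)[0, 1]**2
print(f"k = {k:.4f} 1/min, R^2 = {r2:.3f}")
```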
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in a multilingual setting, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields in contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques.
Six papers dealing with different approaches to multilingual knowledge representation are presented in this workshop, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.
In the second paper, Fumiko Kano presents a comparison of four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) were used for the comparative analysis of the similarity measures, the purpose being to verify the measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.
In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.
In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.
In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms extracting specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.
Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.
The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed in order to rank the compounds by priority. As many pharmaceuticals are acids or bases, the multimedia fate model uses regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 from each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of the freshwater ecotoxicity impact, together with the lack of experimental abiotic degradation data for most compounds, helped to establish priorities for further testing. Overall, the effluent concentration and the ecotoxicity effect factor were the model input variables with the greatest effect on the uncertainty of the output results.
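To illustrate the kind of Monte Carlo uncertainty propagation described, the following sketch samples hypothetical lognormal input distributions for a simplified impact chain (effluent concentration × fate factor × effect factor) and reports output percentiles plus a crude rank-correlation contribution analysis. The distributions, parameters and impact formula are illustrative assumptions, not the paper's model.

```python
# Sketch: Monte Carlo propagation of input uncertainty to a freshwater
# ecotoxicity impact score, impact = C_effluent * FF * EF. This is a
# simplified fate-and-effect chain with hypothetical distributions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
N = 100_000
C  = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=N)  # ug/L in effluent
FF = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=N)  # fate factor
EF = rng.lognormal(mean=np.log(5.0), sigma=1.2, size=N)  # PAF-based effect factor

impact = C * FF * EF
print(np.percentile(impact, [2.5, 50, 97.5]))  # uncertainty interval

# Crude contribution to output variance: squared rank correlation per input.
for name, v in [("C", C), ("FF", FF), ("EF", EF)]:
    print(name, round(spearmanr(v, impact)[0]**2, 2))
```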
Abstract:
Electric power networks, namely distribution networks, have undergone several changes in recent years due to changes in power systems operation, moving towards the implementation of smart grids. Several new approaches to the operation of resources have been introduced, such as demand response, which makes use of the new capabilities of smart grids. In the initial stages of smart grid implementation, only reduced amounts of data are generated, namely consumption data. The methodology proposed in the present paper uses demand response consumer performance evaluation methods to determine the expected consumption of a given consumer. Potential commercial losses are then identified using monthly historical consumption data. Real consumption data are used in the case study to demonstrate the application of the proposed method.
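As a rough illustration of the loss-detection idea (the abstract does not specify the algorithm), the sketch below flags months whose metered consumption falls well below an expected-consumption baseline built from the consumer's history; the baseline rule, threshold and readings are all assumptions.

```python
# Sketch: flagging potential commercial (non-technical) losses by comparing
# a consumer's monthly readings against an expected-consumption baseline.
# Baseline rule, threshold and data are hypothetical placeholders.
import numpy as np

history = np.array([310, 295, 320, 305, 298, 315, 300, 290,
                    312, 308, 150, 140], dtype=float)  # kWh per month

baseline = np.median(history[:10])  # expected consumption from earlier months
threshold = 0.6 * baseline          # assumed detection threshold

for month, kwh in enumerate(history, start=1):
    if kwh < threshold:
        print(f"month {month}: {kwh:.0f} kWh < {threshold:.0f} kWh "
              f"-> flag for inspection (possible commercial loss)")
```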
Abstract:
Adhesive bonding has been used in areas such as the aerospace, aeronautical, defence, automotive, civil construction and wood industries. Adhesive joints have been replacing methods such as welding and bolted and riveted connections because of their ease of fabrication, higher production rates, lower costs, ease of joining dissimilar materials and better fatigue resistance, among other advantages. Accordingly, adhesive repairs are also used to restore the strength of damaged structures; the most common techniques are single-strap, double-strap and scarf (flush patch) repairs. Scarf repairs, which are the most efficient, consist of machining a tapered hole in the damaged region and bonding a patch with the complementary shape of the hole, such that the initial shape of the component is not altered. This work studies, experimentally and numerically, adhesive scarf repairs, namely the effect of using external reinforcements (on one or both sides of the structure) for different scarf angles. A ductile adhesive (Araldite® 2015) and a brittle one (Araldite® AV138) were considered, covering quite distinct failure processes. The experimental study is accompanied by a numerical study in the ABAQUS® software, using cohesive zone models for the numerical prediction of the strength of the repairs. The numerical work enabled the study of the stress distributions, which made a detailed analysis of the results possible. A numerical optimization study of the repairs was also carried out, varying the thickness of the reinforcements and chamfering their edges. The results showed that the numerical method reliably predicts the strength, and also that the use of reinforcements considerably increases the efficiency of the repairs (by up to 530% and 340% for the Araldite® 2015 and AV138 adhesives, respectively), which may justify their use in industrial applications where the aerodynamic disturbance caused by this modification is not relevant.
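As a numerical aside, cohesive zone models like those used in the ABAQUS® analyses above are commonly based on a triangular (bilinear) traction-separation law. The sketch below implements that law with hypothetical property values; the cohesive strength, toughness and stiffness are illustrative placeholders, not the measured properties of Araldite® 2015 or AV138.

```python
# Sketch: triangular (bilinear) traction-separation law typically used in
# cohesive-zone modelling. All property values are hypothetical placeholders.
import numpy as np

t_max = 22.0   # cohesive strength, MPa (hypothetical)
Gc = 0.43      # fracture toughness, N/mm (hypothetical)
K = 1.0e5      # initial stiffness, MPa/mm (hypothetical)

d0 = t_max / K          # damage-initiation separation
df = 2.0 * Gc / t_max   # complete-failure separation (triangle area = Gc)

def traction(d):
    """Traction at separation d under the bilinear law."""
    if d <= d0:
        return K * d                         # linear-elastic branch
    if d < df:
        return t_max * (df - d) / (df - d0)  # linear softening branch
    return 0.0                               # fully debonded

for d in np.linspace(0.0, df * 1.1, 6):
    print(f"d = {d:.5f} mm -> t = {traction(d):6.2f} MPa")
```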
Abstract:
6th Graduate Student Symposium on Molecular Imprinting
Abstract:
This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid dividing the Earth into geographic regions is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare the real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships within the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
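A minimal sketch of the Jensen–Shannon divergence used as a similarity measure between the magnitude histograms of two grid regions; the histogram counts below are hypothetical.

```python
# Sketch: Jensen-Shannon divergence between two regional magnitude
# histograms. Counts are hypothetical illustrations, not catalog data.
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence (base-2 logs, result in bits)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize to distributions
    m = 0.5 * (p + q)                        # mixture distribution
    def kl(a, b):
        mask = a > 0                         # 0 * log(0) contributes nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

region_a = [120, 80, 40, 15, 5, 1]   # event counts per magnitude bin
region_b = [100, 90, 50, 20, 8, 2]
print(round(jsd(region_a, region_b), 4))  # 0 means identical distributions
```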