968 results for Dynamically changing electrode processes
Abstract:
The relationship between automatic auditory discrimination, measured with the mismatch negativity (MMN), and the type of stimulus has not been well established in the literature, despite the MMN's importance as an electrophysiological measure of central sound representation. In this study, the MMN response was elicited with pure-tone and speech stimuli in a binaural passive auditory oddball paradigm, presented to a group of 8 normal young adult subjects at the same intensity level (75 dB SPL). In the pure-tone oddball the frequency difference was 100 Hz (standard = 1000 Hz; deviant = 1100 Hz; equal duration = 100 ms); in the speech oddball (standard /ba/; deviant /pa/; equal duration = 175 ms) both Portuguese phonemes are bilabial plosives, chosen to keep the frequency band narrow. Differences between speech and pure-tone stimuli were found across electrode locations. Larger MMN amplitude and duration and longer latency for speech compared with pure tones were observed at Cz and Fz, as well as significant differences in latency and amplitude between the mastoids. The results suggest that speech may be processed differently from non-speech stimuli and at a later stage, due to overlapping processes, since speech processing requires more neural resources.
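As an illustration of the paradigm described above (not of the authors' actual acquisition or analysis pipeline), the following sketch builds an oddball stimulus sequence and estimates the MMN as the deviant-minus-standard difference waveform; the deviant probability, trial count, sampling rate and the simulated waveforms are assumptions introduced here for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Oddball sequence for the pure-tone condition described above:
# standard = 1000 Hz, deviant = 1100 Hz, 100 ms tones at 75 dB SPL.
# The deviant probability (0.15) and trial count are illustrative assumptions.
n_trials = 400
p_deviant = 0.15
is_deviant = rng.random(n_trials) < p_deviant

fs = 1000                      # EEG sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)  # 500 ms epoch per stimulus

def simulated_erp(deviant):
    """Toy single-trial ERP: deviants carry an extra negativity near 150-200 ms."""
    erp = 2.0 * np.exp(-((t - 0.10) / 0.03) ** 2)        # N1-like component
    if deviant:
        erp -= 3.0 * np.exp(-((t - 0.17) / 0.04) ** 2)   # MMN-like negativity
    return erp + rng.normal(0, 1.0, t.size)              # trial-to-trial noise

epochs = np.array([simulated_erp(d) for d in is_deviant])

# MMN is estimated as the deviant-minus-standard difference waveform.
mmn = epochs[is_deviant].mean(axis=0) - epochs[~is_deviant].mean(axis=0)
peak_idx = mmn.argmin()
print(f"MMN peak: {mmn[peak_idx]:.2f} uV at {t[peak_idx] * 1000:.0f} ms")
```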
Abstract:
This study focused on the development of a sensitive enzymatic biosensor for the determination of the pesticide pirimicarb, based on the immobilization of laccase on composite carbon paste electrodes. A multi-walled carbon nanotube (MWCNT) paste electrode modified by dispersion of laccase (3%, w/w) within the optimum composite matrix (60:40%, w/w, MWCNTs and paraffin binder) showed the best performance, with excellent electron-transfer kinetics and catalytic effects related to the redox process of the substrate 4-aminophenol. No metal or anti-interference membrane was added. Based on the inhibition of laccase activity, pirimicarb can be determined in the range 9.90 × 10⁻⁷ to 1.15 × 10⁻⁵ mol L⁻¹ using 4-aminophenol as substrate at the optimum pH of 5.0, with acceptable repeatability and reproducibility (relative standard deviations lower than 5%). The limit of detection obtained was 1.8 × 10⁻⁷ mol L⁻¹ (0.04 mg kg⁻¹ on a fresh-weight vegetable basis). The high activity and catalytic properties of the laccase-based biosensor are retained for about one month. The optimized electroanalytical protocol coupled with the QuEChERS methodology was applied to tomato and lettuce samples spiked at three levels; recoveries ranging from 91.0 ± 0.1% to 101.0 ± 0.3% were attained. No significant effects on the pirimicarb electroanalysis were observed in the presence of pro-vitamin A, vitamins B1 and C, and glucose in the vegetable extracts. The proposed biosensor-based pesticide residue methodology fulfils all requisites for use in the implementation of food safety programs.
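The abstract does not give the calibration model used; the sketch below illustrates the generic inhibition-based quantification typically applied with laccase biosensors of this kind: the inhibition percentage is computed from the current before and after exposure to the pesticide and related to concentration through a fitted calibration. All currents, concentrations and the log-linear model are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generic inhibition calibration for an enzyme-inhibition biosensor:
# I% = 100 * (i0 - i) / i0, where i0 is the steady-state current for the
# substrate (4-aminophenol) alone and i the current after incubation with
# the pesticide. Currents and concentrations below are illustrative only.
i0 = 1.25e-6                                  # A, uninhibited response (assumed)
conc = np.array([1e-6, 2e-6, 4e-6, 8e-6])     # mol/L pirimicarb standards (assumed)
i_inhibited = np.array([1.05e-6, 0.92e-6, 0.78e-6, 0.63e-6])   # A (assumed)

inhibition = 100.0 * (i0 - i_inhibited) / i0

# A common working model relates inhibition % to log10(concentration).
slope, intercept = np.polyfit(np.log10(conc), inhibition, 1)

def pirimicarb_conc(i_sample):
    """Back-calculate the concentration of an unknown from its inhibited current."""
    inh = 100.0 * (i0 - i_sample) / i0
    return 10 ** ((inh - intercept) / slope)

print(f"Estimated concentration: {pirimicarb_conc(0.85e-6):.2e} mol/L")
```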
Abstract:
A novel enzymatic biosensor for the detection of carbamate pesticides was developed through the direct immobilization of Trametes versicolor laccase on a graphene-doped carbon paste electrode functionalized with Prussian blue films (LACC/PB/GPE). Graphene was prepared by sonication-assisted exfoliation of graphite and characterized by transmission electron microscopy and X-ray photoelectron spectroscopy. The Prussian blue film electrodeposited onto the graphene-doped carbon paste electrode allowed a considerable reduction of the charge-transfer resistance and of the capacitance of the device. The combined effects of pH, enzyme concentration and incubation time on the biosensor response were optimized using a 2³ full-factorial statistical design and response surface methodology. Based on the inhibition of laccase activity and using 4-aminophenol as redox mediator at pH 5.0, LACC/PB/GPE exhibited suitable characteristics in terms of sensitivity, intra- and inter-day repeatability (1.8–3.8% RSD), reproducibility (4.1 and 6.3% RSD), selectivity (13.2% bias at the highest interferent-to-substrate ratios tested), accuracy and stability (ca. twenty days) for the quantification of five carbamates widely applied on tomato and potato crops. The attained detection limits ranged between 5.2 × 10⁻⁹ mol L⁻¹ (0.002 mg kg⁻¹ w/w for ziram) and 1.0 × 10⁻⁷ mol L⁻¹ (0.022 mg kg⁻¹ w/w for carbofuran). Recovery values for the two tested spiking levels ranged from 90.2 ± 0.1% (carbofuran) to 101.1 ± 0.3% (ziram) for tomato and from 91.0 ± 0.1% (formetanate) to 100.8 ± 0.1% (ziram) for potato samples. The proposed methodology is appropriate for testing pesticide levels in food samples to comply with regulations and food inspections.
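For the experimental-design step mentioned above, a coded 2³ full-factorial matrix for the three factors (pH, enzyme concentration, incubation time) can be generated and fitted with a first-order-plus-interactions model, as sketched below; the factor levels and responses are placeholders, and the paper's response surface methodology may have used additional centre points.

```python
import itertools
import numpy as np

# Coded 2^3 full-factorial design for the three factors reported above.
# Low/high levels and responses are placeholders, not the paper's values.
factors = {"pH": (4.0, 6.0), "enzyme_pct": (2.0, 4.0), "incubation_min": (5.0, 15.0)}
coded = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical biosensor responses (e.g., inhibition %) for the 8 runs.
y = np.array([41.0, 48.0, 55.0, 66.0, 44.0, 52.0, 58.0, 71.0])

# First-order model with two-factor interactions:
# y = b0 + sum(bi * xi) + sum(bij * xi * xj)
x1, x2, x3 = coded.T
X = np.column_stack([np.ones(8), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["b0", "pH", "enzyme", "time", "pH*enz", "pH*time", "enz*time"], coef):
    print(f"{name:>9s}: {b:6.2f}")
```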
Abstract:
Embedded systems are increasingly complex and dynamic, imposing progressively higher development time and costs. Tuning a particular system for deployment is thus becoming more demanding, even more so for systems that have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making it available to external applications, and a run-time monitoring mechanism must be available. However, such a mechanism can impose a burden on the system itself if not wisely used. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.
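The abstract does not describe the framework's internals; purely as a hypothetical sketch of the kind of low-overhead mechanism it argues for, the code below lets a scheduler append events to a bounded ring buffer that an external monitoring application drains later, so observation adds only a small, constant cost per event. All names are invented for illustration.

```python
from collections import deque
from dataclasses import dataclass
from time import monotonic_ns

# Hypothetical low-overhead trace channel: the scheduler records events into a
# bounded ring buffer; an external monitor drains it asynchronously, so the
# instrumented system only pays the cost of one append per event.

@dataclass
class SchedEvent:
    timestamp_ns: int
    task: str
    event: str          # e.g. "dispatch", "preempt", "deadline_miss"

class TraceBuffer:
    def __init__(self, capacity: int = 1024):
        self._events = deque(maxlen=capacity)   # oldest events are overwritten

    def record(self, task: str, event: str) -> None:
        self._events.append(SchedEvent(monotonic_ns(), task, event))

    def drain(self):
        """Called by the external monitor; returns and clears buffered events."""
        events, self._events = list(self._events), deque(maxlen=self._events.maxlen)
        return events

trace = TraceBuffer()
trace.record("sensor_task", "dispatch")
trace.record("control_task", "preempt")
for ev in trace.drain():
    print(ev)
```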
Abstract:
The development of new products or processes involves the creation, re-creation and integration of conceptual models from the related scientific and technical domains. In particular, in the context of collaborative networks of organisations (CNO) (e.g. a multi-partner, international project), such developments can be seriously hindered by conceptual misunderstandings and misalignments, resulting, for example, from participants with different backgrounds or organisational cultures. The research described in this article addresses this problem by proposing a method and tools to support the collaborative development of shared conceptualisations in the context of a collaborative network of organisations. The theoretical model is based on a socio-semantic perspective, while the method is inspired by conceptual integration theory from the field of cognitive semantics. The modelling environment is built upon a semantic wiki platform. The majority of the article is devoted to the development of an informal ontology in the context of a European R&D project, studied using action research. The case study results validated the logical structure of the method and showed its utility.
Abstract:
An electrochemical sensor has been developed for the determination of the herbicide bentazone, based on a glassy carbon (GC) electrode modified by a combination of multiwalled carbon nanotubes (MWCNT) with β-cyclodextrin (β-CD) incorporated in a polyaniline film. The results indicate that the β-CD/MWCNT-modified GC electrode exhibits efficient electrocatalytic oxidation of bentazone with high sensitivity and stability. A cyclic voltammetric method to determine bentazone in phosphate buffer solution at pH 6.0 was developed, without any previous extraction, clean-up or derivatization steps, in the range of 10–80 mmol L⁻¹, with a detection limit of 1.6 mmol L⁻¹ in water. The results were compared with those obtained by an established HPLC technique, and no statistically significant differences were found between the two methods.
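The abstract reports a linear range and a detection limit; the sketch below shows the usual way such figures of merit are derived from voltammetric data, a linear fit of peak current against concentration with the detection limit estimated as three times the blank standard deviation divided by the slope. The currents used here are invented, and the paper may have computed its limit differently.

```python
import numpy as np

# Hypothetical anodic peak currents (uA) from cyclic voltammograms recorded at
# increasing bentazone concentrations within the reported 10-80 mmol/L range.
conc = np.array([10, 20, 40, 60, 80], dtype=float)   # mmol/L
i_peak = np.array([2.1, 4.0, 7.9, 12.2, 16.1])       # uA (illustrative)

slope, intercept = np.polyfit(conc, i_peak, 1)

# Detection limit from repeated blank measurements: LOD = 3 * s_blank / slope.
blank_replicates = np.array([0.05, 0.08, 0.06, 0.07, 0.04, 0.06])   # uA (illustrative)
lod = 3 * blank_replicates.std(ddof=1) / slope

print(f"sensitivity = {slope:.3f} uA per mmol/L, LOD = {lod:.2f} mmol/L")
```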
Abstract:
Final Master's project submitted for obtaining the degree of Master in Mechanical Engineering
Abstract:
Doctoral thesis in Environmental Engineering, specialty in Social Systems
Abstract:
It has been shown that in reality at least two general scenarios of data structuring are possible: (a) a self-similar (SS) scenario, when the measured data form an SS structure, and (b) a quasi-periodic (QP) scenario, when the repeated (strongly correlated) data form random sequences that are almost periodic with respect to each other. In the second case it becomes possible to describe their behaviour and to express part of their randomness quantitatively in terms of the deterministic amplitude-frequency response belonging to the generalized Prony spectrum. This possibility allows us to re-examine the conventional concept of measurements and opens a new way for the description of a wide set of different data. In particular, it concerns different complex systems for which a 'best-fit' model pretending to describe the measured data is absent, but where there is a bare necessity to describe these data in terms of a reduced number of quantitative parameters. The possibilities of the proposed approach and of the detection algorithm for QP processes were demonstrated on actual data: spectroscopic data recorded for pure water and acoustic data for a test hole. The suggested methodology allows revising the accepted classification of different incommensurable and self-affine spatial structures and finding an accurate interpretation of generalized Prony spectroscopy, which includes Fourier spectroscopy as a particular case.
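The generalized Prony spectrum referred to above extends the classical Prony decomposition of a signal into damped complex exponentials. As a point of reference only (not the authors' generalized procedure), the sketch below implements the classical method: linear prediction for the characteristic polynomial, root finding for the modes, and a least-squares fit for the amplitudes.

```python
import numpy as np

def prony(x, p):
    """Classical Prony fit: x[n] ~ sum_k A_k * z_k**n with p complex modes."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Step 1: linear prediction  x[n] = -sum_{k=1..p} a_k * x[n-k]
    T = np.column_stack([x[p - 1 - j: N - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(T, -x[p:], rcond=None)
    # Step 2: modes are the roots of z^p + a_1 z^(p-1) + ... + a_p
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: complex amplitudes from the Vandermonde system V A = x
    V = np.vander(z, N, increasing=True).T
    A, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, A

# Example: recover a damped oscillation from noisy samples.
n = np.arange(200)
signal = (1.5 * np.exp(-0.01 * n) * np.cos(2 * np.pi * 0.05 * n)
          + 0.01 * np.random.default_rng(1).normal(size=n.size))
z, A = prony(signal, p=2)
print("recovered frequencies (cycles/sample):", np.abs(np.angle(z)) / (2 * np.pi))
```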
Abstract:
The paper presents the central discourse of the knowledge-based society. Already in the 1960s, the debate on the industrial society raised the question of whether a paradigm shift towards a knowledge-based society could be identified. Some prominent authors had already foreseen 'knowledge' displacing 'labour' and 'capital' as the main driving forces of capitalist development. Today, at the political level and in many scientific disciplines, the assumption that we are already living in a knowledge-based society seems obvious. Although we still do not have a theory of the knowledge-based society, and a methodological gap regarding the empirical indicators still exists, the vision of a knowledge-based society at least shapes the perception of Western societies. In a first step, the author pinpoints the assumptions about the knowledge-based society on three levels: the societal, the organisational and the individual level. These assumptions rely on the following topics: a) the role of information and communication technologies; b) the dynamic development of globalisation as an 'evolutionary' process; c) the increasing importance of knowledge management within organisations; d) the changing role of the state within economic processes. Not only the differentiation between the levels but also the revision of the assumptions of a knowledge-based society shows that the topics raised in the debates cannot be considered the results of a profound societal paradigm shift. What seems very impressive, however, is the normative and virtual shift towards a concept of modernity that strongly focuses on the role of technology as a driving force, as well as on the global economic markets, which has to be accepted. Therefore, according to the official debate, the successful adaptation to these processes seems the only way to reach the knowledge-based society. Analysing the societal changes on these three levels, the label 'knowledge-based society' can be viewed critically. The main question posed by Theodor W. Adorno at the 16th Congress of Sociology in 1968 has therefore not lost its relevance: facing the societal changes, he asked whether we are still living in the industrial society or already in a post-industrial state. Thinking about the knowledge-based society in terms of these two options would enrich the whole debate with regard to social inequality, political and economic exclusion processes and, not least, the power relationships between social groups.
Abstract:
The paper examines change processes and future perspectives in the knowledge society. It presents the clothing and textile industry as an example of a transforming industry in a global economy. The paper reviews existing future studies that have surveyed change processes and future developments in the clothing and textile industry. The main goals of the review are the identification of changes in work and the description of the restructuring of global value chains within the clothing and textile sector. The paper also highlights major current trends, drivers of change and future prospects in this sector.
Abstract:
This paper explores the management structure of the team-based organization. First, it provides a theoretical model of the structures and processes of work teams. The structure determines the team's responsibilities in terms of authority and expertise regarding specific regulation tasks. The responsiveness of teams to these responsibilities constitutes the processes of teamwork, described along three dimensions that indicate to what extent teams actually use the space provided to them. The research question addressed in this paper is to what extent the position of responsibilities in the team-based organization affects team responsiveness. This is examined through two hypotheses. First, the effect of the so-called proximity of regulation tasks is tested. It is expected that responsibility for tasks positioned higher in the organization (i.e. further from the team) generally has a negative effect on team responsiveness, whereas tasks positioned lower in the organization (i.e. closer to the team) will have a positive effect on the way in which teams respond. Second, the relationship between the number of tasks for which the team is responsible and team responsiveness is tested. Theory suggests that teams responsible for a larger number of tasks perform better, i.e. show higher responsiveness. These hypotheses are tested in a study of 109 production teams in the automotive industry. The results show that, as the theory predicts, increasing numbers of responsibilities have positive effects on team responsiveness. However, the delegation of expertise to teams appears to be the most important predictor of responsiveness. Moreover, not all regulation tasks have an effect on team responsiveness; most tasks show no significant effect at all. A number of tasks affect team responsiveness positively when their responsibility is positioned lower in the organization, but a number of tasks also affect team responsiveness positively when located higher in the organization, i.e. further from the teams in production. The results indicate that more attention should be paid to the distribution of responsibilities, in particular expertise, to teams. Delegating more expertise does improve team responsiveness; however, some tasks might be better located at higher organizational levels, indicating that there are limits to the responsibilities teams can handle.
Abstract:
This Thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, the evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set); Counter-Propagation Neural Networks (CPNNs) gave similar results. The use of supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms.
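The MOLMAP descriptor itself relies on a Kohonen SOM trained on physico-chemical and topological bond properties. The sketch below keeps only the core idea stated above, encoding a reaction as the difference between the bond profiles of products and reactants, but replaces the SOM with a plain categorical bond count and assumes the RDKit toolkit for parsing structures; it is a simplified stand-in, not the Thesis' implementation.

```python
from collections import Counter
from rdkit import Chem

def bond_profile(smiles: str) -> Counter:
    """Count bonds by (atom pair, bond type) - a crude stand-in for a MOLMAP."""
    mol = Chem.MolFromSmiles(smiles)
    profile = Counter()
    for bond in mol.GetBonds():
        atoms = tuple(sorted((bond.GetBeginAtom().GetSymbol(),
                              bond.GetEndAtom().GetSymbol())))
        profile[atoms + (str(bond.GetBondType()),)] += 1
    return profile

def reaction_descriptor(reactants, products):
    """Difference of product and reactant profiles: bonds made (+) and broken (-)."""
    r = sum((bond_profile(s) for s in reactants), Counter())
    p = sum((bond_profile(s) for s in products), Counter())
    return {key: p[key] - r[key] for key in sorted(set(r) | set(p)) if p[key] != r[key]}

# Hydrogenation of ethene: CH2=CH2 + H2 -> CH3-CH3
desc = reaction_descriptor(["C=C", "[H][H]"], ["CC"])
print(desc)   # C=C and H-H bonds broken (-1), C-C single bond made (+1)
```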
Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs in order to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number level, respectively. Experiments on the classification of reactions from the main reactants and products were performed with RFs: EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce a PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate PES accurate enough for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs.
A remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials to be used in molecular simulations was shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform as well as or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
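As a minimal illustration of the first series of experiments, a Lennard-Jones curve can be fitted by a small feed-forward network and then queried at distances outside the training grid, as sketched below. The reduced units, grid, network size and the use of a single scikit-learn MLPRegressor are arbitrary choices made here; the Thesis used ensembles of FFNNs and ASNNs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Training data: energies on a grid of pair distances (reduced units, arbitrary range).
r_train = np.linspace(0.9, 3.0, 200).reshape(-1, 1)
e_train = lj_energy(r_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(40, 40), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(r_train, e_train)

# Interpolation check at distances not present in the training grid.
r_test = np.array([[1.07], [1.55], [2.33]])
for r, e_nn, e_ref in zip(r_test.ravel(), net.predict(r_test), lj_energy(r_test).ravel()):
    print(f"r = {r:.2f}  NN = {e_nn:+.4f}  LJ = {e_ref:+.4f}")
```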
Abstract:
The dye-sensitized solar cell (DSSC) is a promising solution to global energy and environmental problems because of its cleanness, low cost, high efficiency, good durability, and easy fabrication. However, enhancing the efficiency of the DSSC is still an important issue. Here we devise a bifacial DSSC based on a transparent polyaniline (PANI) counter electrode (CE). Owing to sunlight irradiating the front and the rear sides simultaneously, more dye molecules are excited and more carriers are generated, which results in an enhancement of the short-circuit current density and therefore of the overall conversion efficiency. The photoelectric properties of PANI can be improved by modification with 4-aminothiophenol (4-ATP). The bifacial DSSC with the 4-ATP/PANI CE achieves a light-to-electric energy conversion efficiency of 8.35%, an increase of approximately 24.6% compared to the same DSSC irradiated from the front only. This new concept, along with the promising results, provides a new approach for enhancing the photovoltaic performance of solar cells.
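The efficiency quoted above follows from the standard photovoltaic relation eta = Jsc * Voc * FF / Pin. The sketch below only evaluates that relation; since the abstract does not report Jsc, Voc or FF, the parameter values are placeholders chosen to mimic a roughly 25% bifacial gain in short-circuit current, not the measured data of the device.

```python
def conversion_efficiency(jsc_mA_cm2, voc_V, fill_factor, pin_mW_cm2=100.0):
    """Light-to-electric conversion efficiency in percent: eta = Jsc * Voc * FF / Pin."""
    return 100.0 * jsc_mA_cm2 * voc_V * fill_factor / pin_mW_cm2

# Placeholder cell parameters under AM1.5G illumination (100 mW/cm^2); these are
# NOT the measured values of the 4-ATP/PANI device, only illustrative numbers.
front_only = conversion_efficiency(jsc_mA_cm2=14.0, voc_V=0.72, fill_factor=0.66)
bifacial = conversion_efficiency(jsc_mA_cm2=17.5, voc_V=0.72, fill_factor=0.66)

print(f"front-only: {front_only:.2f}%  bifacial: {bifacial:.2f}%  "
      f"relative gain: {100.0 * (bifacial / front_only - 1.0):.1f}%")
```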
Abstract:
Typically, common embedded systems are designed under tight resource constraints. Static designs are often chosen to address very specific use cases. In contrast, a dynamic design must be used if the system must supply a real-time service where the input may contain factors of indeterminism. Thus, adding new functionality to these systems often comes at the cost of higher development time, testing effort and costs, since new functionality pushes the system complexity and dynamics to a higher level. Usually, these systems have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making it available to external applications, usually through a run-time monitoring mechanism. However, such a mechanism can impose a burden on the system itself if not wisely used. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.