975 results for Classification--History--Sources
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of the PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of their items linked to categories in Chapter 1. The highest percentage of items from one instrument linked to categories in Chapters 2–5 varied between 9% and 50%. The presence or absence of environmental factors in a specific context is assessed by 3 instruments, while the other 3 assess the intensity of the impact of environmental factors. Discussion: Instruments differ in their content and type of assessment, and have several items linked to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion on the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
OBJECTIVE: To develop a Charlson-like comorbidity index based on the clinical conditions and weights of the original Charlson comorbidity index. METHODS: Clinical conditions and weights were adapted from the International Classification of Diseases, 10th revision, and applied to a single hospital admission diagnosis. The study included 3,733 patients over 18 years of age who were admitted to a public general hospital in the city of Rio de Janeiro, southeast Brazil, between Jan 2001 and Jan 2003. The index distribution was analyzed by gender, type of admission, blood transfusion, intensive care unit admission, age and length of hospital stay. Two logistic regression models were developed to predict in-hospital mortality, including: a) the aforementioned variables and the risk-adjustment index (full model); and b) the risk-adjustment index and patient's age (reduced model). RESULTS: Of all patients analyzed, 22.3% had risk scores >1, and their mortality rate was 4.5% (66.0% of them had scores >1). Except for gender and type of admission, all variables were retained in the logistic regression. The models including the developed risk index had an area under the receiver operating characteristic curve of 0.86 (full model) and 0.76 (reduced model). Each unit increase in the risk score was associated with a nearly 50% increase in the odds of in-hospital death. CONCLUSIONS: The risk index developed was able to effectively discriminate the odds of in-hospital death, which can be useful when limited information is available from hospital databases.
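The abstract above describes the core computation: map an admission's ICD-10 diagnosis to a Charlson-style weight, turn it into a risk score, and feed the score (plus age) to a logistic regression whose discrimination is summarized by the area under the ROC curve. A minimal sketch of that pipeline is given below; the weight map, column names and data are illustrative placeholders, not the index actually developed in the study.

```python
# Hedged sketch: Charlson-like score from a single admission ICD-10 diagnosis,
# then a logistic model for in-hospital death. The weights below are illustrative
# placeholders, not the index developed in the study.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative mapping from ICD-10 prefixes to Charlson-style weights (assumption).
WEIGHTS = {"I21": 1, "I50": 1, "J44": 1, "E11": 1, "C34": 2, "B20": 6}

def charlson_like_score(icd10_code: str) -> int:
    """Return the weight of the first matching ICD-10 prefix, else 0."""
    return next((w for prefix, w in WEIGHTS.items() if icd10_code.startswith(prefix)), 0)

# Hypothetical admissions table with diagnosis, age and outcome.
df = pd.DataFrame({
    "diagnosis": ["I214", "Z380", "C349", "E119"],
    "age":       [72,     30,     65,     58],
    "died":      [1,      0,      1,      0],
})
df["risk_score"] = df["diagnosis"].map(charlson_like_score)

# "Reduced model": risk index plus age, as in the abstract.
X, y = df[["risk_score", "age"]], df["died"]
model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```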
Abstract:
Master's dissertation in Educational Psychology, specialty in Community Contexts.
Abstract:
In the hustle and bustle of daily life, how often do we stop to pay attention to the tiny details around us, some of them right beneath our feet? Such is the case of the interesting decorative patterns that can be found in squares and sidewalks beautified by the traditional Portuguese pavement. Its most common colors are the black and the white of the basalt and the limestone used; the result is a large variety and richness of patterns. No doubt, it is worth devoting some of our time to enjoying the lovely Portuguese pavement, a true worldwide attraction. The patterns found on Azorean handicrafts are equally fascinating and culturally significant. Patterns existing in the sidewalks and crafts can be studied from the mathematical point of view, thus allowing a thorough and rigorous cataloguing of such heritage. The mathematical classification is based on the concept of symmetry, a unifying principle of geometry. Symmetry is a unique tool for helping us relate things that at first glance may appear to have no common ground at all. By interlacing different fields of endeavor, the mathematical approach to sidewalks and crafts is particularly interesting and an excellent source of inspiration for the development of highly motivating recreational activities. This text is an invitation to visit the nine islands of the Azores and to identify a wide range of patterns, namely rosettes and friezes, by getting to know different arts, crafts, and sidewalks.
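For readers who want the mathematics behind this, the symmetry classification referred to here is standard and can be summarized as follows (a brief note, not part of the original abstract):

```latex
% Classification of the plane symmetry groups behind rosettes and friezes.
\begin{itemize}
  \item Rosettes (bounded motifs): the symmetry group is either the cyclic
        group $C_n$ (rotations only) or the dihedral group $D_n$
        (rotations and reflections), by Leonardo's theorem.
  \item Friezes (patterns repeating along one direction): exactly $7$
        frieze groups exist, generated by translations combined with
        reflections, glide reflections and half-turns.
  \item Wallpaper patterns (repeating in two directions): exactly $17$
        plane symmetry groups.
\end{itemize}
```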
Abstract:
Clinical and environmental samples from Portugal were screened for the presence of Aspergillus, and the distributions of the species complexes were determined in order to understand how their distributions differ based on their source. Fifty-seven Aspergillus isolates from clinical samples were collected from 10 health institutions. Six species complexes were detected by internal transcribed spacer sequencing; Fumigati, Flavi, and Nigri were found most frequently (50.9%, 21.0%, and 15.8%, respectively). β-tubulin and calmodulin sequencing resulted in seven cryptic species (A. awamorii, A. brasiliensis, A. fructus, A. lentulus, A. sydowii, A. tubingensis, Emericella echinulata) being identified among the 57 isolates. Thirty-nine isolates of Aspergillus were recovered from beach sand, 39 from poultry farms, 31 from swine farms, and 80 from hospital environments, for a total of 189 isolates. Eleven species complexes were found among these 189 isolates, and those belonging to the Versicolores species complex were found most frequently (23.8%). There was a significant association between the different environmental sources and the distribution of the species complexes; the hospital environment had greater variability of species complexes than the other environmental locations. A high prevalence of cryptic species within the Circumdati complex was detected in several environments; from the isolates analyzed, at least four cryptic species were identified, most of them growing at 37 °C. Because Aspergillus species complexes have different susceptibilities to antifungals, knowing the species-complex epidemiology of each setting, as well as identifying cryptic species among the collected clinical isolates, is important. This may allow preventive and corrective measures to be taken, which may result in decreased exposure to those organisms and a better prognosis.
Abstract:
The legacy of nineteenth-century social theory followed a “nationalist” model of society, assuming that the analysis of social realities depends upon national boundaries, taking the nation-state as the primary unit of analysis, and developing the concept of methodological nationalism. This perspective regarded the nation-state as the natural - and even necessary - form of society in modernity. Thus, the constitution of large cities at the end of the 19th century, through the intense flows of immigrants coming from diverse political and linguistic communities, posed an enormous challenge to all social research. One of the most significant studies responding to this set of issues was The Immigrant Press and its Control, by Robert E. Park, one of the most prominent American sociologists of the first half of the 20th century. The Immigrant Press and its Control was part of a larger project entitled Americanization Studies: The Acculturation of Immigrant Groups into American Society, funded by the Carnegie Corporation following World War I, whose goal was to study the so-called “Americanization methods” during the 1920s. This paper revisits that particular work by Park to reveal how his detailed analysis of the role of the immigrant press overcame the limitations of methodological nationalism. By granting importance to language as a tool uniting each community and by showing how the strength of foreign languages expressed itself through the immigrant press, Park demonstrated that the latter produces a more ambivalent phenomenon than simply the assimilation of immigrants. On the one hand, the immigrant press served as a connecting force, driven by the desire to preserve the mother tongue and culture while at the same time awakening national sentiments that had, until then, remained diffuse. Yet, on the other hand, it facilitated the adjustment of immigrants to the American context. As a result, Park’s work contributes to our understanding of a particular liminal moment inherent within many intercultural contexts, the space between emigrant identity (emphasizing the country of origin) and immigrant identity (emphasizing the newly adopted country). His focus on the role played by media in the socialization of immigrant groups presaged later work on this subject by communication scholars. Focusing attention on Park’s research leads to other studies of the immigrant experience from the same period (e.g., Thomas & Znaniecki, The Polish Peasant in Europe and America), and also to insights on multi-presence and interculturality as significant but often overlooked phenomena in the study of immigrant socialization.
Abstract:
This study aimed to characterize air pollution and the associated carcinogenic risks of polycyclic aromatic hydrocarbons (PAHs) at an urban site, to identify possible emission sources of PAHs using several statistical methodologies, and to analyze the influence of other air pollutants and meteorological variables on PAH concentrations. The air quality and meteorological data were collected in Oporto, the second largest city of Portugal. Eighteen PAHs (the 16 PAHs considered by the United States Environmental Protection Agency (USEPA) as priority pollutants, dibenzo[a,l]pyrene, and benzo[j]fluoranthene) were collected daily for 24 h in air (gas phase and in particles) during 40 consecutive days in November and December 2008 by constant low-flow samplers, using polytetrafluoroethylene (PTFE) membrane filters for particulate (PM10- and PM2.5-bound) PAHs and pre-cleaned polyurethane foam plugs for gaseous compounds. The other monitored air pollutants were SO2, PM10, NO2, CO, and O3; the meteorological variables were temperature, relative humidity, wind speed, total precipitation, and solar radiation. Benzo[a]pyrene reached a mean concentration of 2.02 ng m−3, surpassing the EU annual limit value. The target carcinogenic risks were equal to the health-based guideline level set by the USEPA (10−6) at the studied site, with the cancer risks of eight PAHs reaching levels of 9.98×10−7 in PM10 and 1.06×10−6 in air. The applied statistical methods - correlation matrix, cluster analysis, and principal component analysis - agreed in the grouping of the PAHs. The groups were formed according to chemical structure (number of rings), phase distribution, and emission sources. PAH diagnostic ratios were also calculated to evaluate the main emission sources. Diesel vehicular emissions were the major source of PAHs at the studied site. Besides that source, emissions from residential heating and an oil refinery were identified as contributing to PAH levels in the respective area. Additionally, principal component regression indicated that SO2, NO2, PM10, CO, and solar radiation were positively correlated with PAH concentrations, while O3, temperature, relative humidity, and wind speed were negatively correlated.
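The grouping of PAHs by correlation, cluster analysis and principal component analysis mentioned above can be illustrated with a short sketch: a PCA of a (days × compounds) concentration matrix groups species that co-vary, which is the usual proxy for a common emission source. The compound names and data below are hypothetical stand-ins, not the study's measurements.

```python
# Hedged sketch: PCA on a (days x PAH species) concentration matrix to group
# compounds by common variation, as a proxy for common emission sources.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

pah_names = ["BaP", "BaA", "Chr", "Flt", "Pyr", "DBalP"]            # illustrative subset
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(40, len(pah_names)))   # 40 sampling days

# Standardize so each compound contributes equally, then extract two components.
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
for name, l1, l2 in zip(pah_names, pca.components_[0], pca.components_[1]):
    print(f"{name}: PC1 loading {l1:+.2f}, PC2 loading {l2:+.2f}")
```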
Abstract:
Final Master's project submitted for the degree of Master in Chemical and Biological Engineering, Chemical Processes branch.
Abstract:
I (Pedagogical Practice) - This internship report presents a characterization of the CRP, giving some context about its history, how it operates, and its pedagogical objectives. The students who took part in the internship are also characterized, highlighting their academic background and their musical influences and motivations. For the educational practices developed, the report presents the pedagogical principles, according to the Ponazapino portal, and the teaching methods used during the school year, which took the integrated Teaching/Learning process into account. Finally, the pedagogical objectives proposed for each internship student are presented. The report ends with a critical analysis of the teaching activity, highlighting the teaching/learning process and its application and benefits for the integral development of the individual.
Abstract:
This Thesis describes the application of automatic learning methods for a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set). Counter-Propagation Neural Networks (CPNNs) gave similar results. Supervised learning techniques improved the results: correct assignments reached 77% when an ensemble of ten FFNNs was used and 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data was combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions.
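As an illustration of the NMR-based representation described above (not the thesis code), a reaction can be encoded as the difference between the binned 1H NMR spectra of products and reactants and classified with a Random Forest; the bin count, data and class labels below are invented placeholders.

```python
# Hedged sketch: difference-of-spectra descriptor plus Random Forest classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

N_BINS = 300                     # intensity bins over the chemical-shift range (assumption)
rng = np.random.default_rng(1)

# Hypothetical data: binned spectra for 200 reactions and their class labels.
spec_products  = rng.random((200, N_BINS))
spec_reactants = rng.random((200, N_BINS))
labels = rng.integers(0, 7, size=200)          # e.g. 7 photochemical reaction classes

X = spec_products - spec_reactants             # the difference-spectrum descriptor
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))
```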
Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and the genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs - EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway - a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers.
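A rough sketch of the MOLMAP idea, under stated assumptions: a SOM is trained on vectors of bond properties, a molecule is encoded by counting how many of its bonds activate each neuron, and a reaction descriptor is the difference between the product and reactant encodings. The grid size, number of bond properties and data below are illustrative, and the third-party minisom package stands in for the SOM implementation used in the thesis.

```python
# Hedged sketch of a MOLMAP-like reaction descriptor (illustrative only).
import numpy as np
from minisom import MiniSom   # third-party package 'minisom'

GRID, N_PROPS = 10, 6         # 10x10 map, 6 bond properties (assumptions)
rng = np.random.default_rng(2)

bond_training_set = rng.random((5000, N_PROPS))      # hypothetical bond property vectors
som = MiniSom(GRID, GRID, N_PROPS, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(bond_training_set, 10000)

def molmap(bond_vectors: np.ndarray) -> np.ndarray:
    """Count how many bonds of a molecule activate each SOM neuron."""
    grid = np.zeros((GRID, GRID))
    for b in bond_vectors:
        grid[som.winner(b)] += 1
    return grid.ravel()

# Reaction descriptor: MOLMAP(products) - MOLMAP(reactants), encoding changed bonds.
reactant_bonds = rng.random((12, N_PROPS))
product_bonds = rng.random((11, N_PROPS))
reaction_descriptor = molmap(product_bonds) - molmap(reactant_bonds)
print(reaction_descriptor.shape)   # (100,)
```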
Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-based PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES to be used in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs. The NN models showed a remarkable ability to interpolate between distant curves and to accurately reproduce potentials for use in molecular simulations. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform as well as or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
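The first PES experiment described above can be mimicked with a very small surrogate model: train a feed-forward network on points of the one-dimensional Lennard-Jones curve and compare its predictions with the analytical values. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the ensembles and associative networks used in the thesis; parameters and ranges are illustrative.

```python
# Hedged sketch: fit a small feed-forward network to the Lennard-Jones potential
# V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) and use it as a surrogate PES.
import numpy as np
from sklearn.neural_network import MLPRegressor

eps, sigma = 1.0, 1.0
r = np.linspace(0.9, 3.0, 400).reshape(-1, 1)          # training distances (reduced units)
V = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)   # analytical reference energies

nn = MLPRegressor(hidden_layer_sizes=(40, 40), activation="tanh",
                  max_iter=20000, random_state=0)
nn.fit(r, V.ravel())

# Compare analytical and NN energies at a few interpolation points.
r_test = np.array([[1.12], [1.5], [2.5]])
V_ref = 4 * eps * ((sigma / r_test) ** 12 - (sigma / r_test) ** 6)
print(np.c_[V_ref, nn.predict(r_test).reshape(-1, 1)])
```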
Abstract:
Of all the sources of renewable energy available, one can argue that the most abundant and accessible are solar power (radiation) and the energy of the tides (70% of the Earth's surface is covered by water). Tidal energy has not yet seen widespread deployment, mainly due to the lack of interest of governments: most of the coastal areas of the world are the exclusive responsibility of governments and are thus not easily open to private ventures. Considering solar power, there are two main fields of application: land-based systems and space-based systems. The latter are still in a very embryonic phase, with Japan being the leading researcher in the field, with an experimental satellite power station to be launched before 2010. Land-based systems, on the other hand, are well studied, with major research and application programs in all known forms of solar power production. Given a minimum value of incident radiation, and applying the appropriate system (i.e. power plant type) for any given area, solar power becomes an income-producing industry.
Abstract:
This paper intends to evaluate the capacity to produce concrete with a pre-established performance (in terms of mechanical strength) incorporating recycled concrete aggregates (RCA) from different sources. To this purpose, rejected products from the precasting industry and concrete produced in the laboratory were used. The appraisal of the self-replication capacity was made for three strength ranges: 15-25 MPa, 35-45 MPa and 65-75 MPa. The mixes produced tried to replicate the strength of the source concrete (SC) of the RCA. Only total (100%) replacement of coarse natural aggregates (CNA) by coarse recycled concrete aggregates (CRCA) was tested. The results show that, in both mechanical and durability terms, there were no significant differences between aggregates from controlled sources and those from precast rejects for the highest levels of the target strength. Furthermore, the performance losses resulting from the RCA's incorporation are substantially reduced when medium- or high-strength SCs are used. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Optimization problems arise in science, engineering, economics, etc., and we need to find the best solutions for each reality. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are not known or are very difficult to calculate, suitable methods are much scarcer. These kinds of functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of problems (without constraints) can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solving of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as a first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
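To make the transformation concrete, the sketch below implements a classical quadratic penalty method: the constrained problem is replaced by a sequence of unconstrained problems whose penalty parameter grows, each solved with a derivative-free routine, in line with the black-box setting described above. The objective, constraint and parameter schedule are illustrative choices, not taken from the chapter.

```python
# Hedged sketch of a classical quadratic penalty method: the constrained problem
#   min f(x)  s.t.  g_i(x) <= 0
# is replaced by a sequence of unconstrained problems
#   min f(x) + mu * sum(max(0, g_i(x))**2),  with mu increasing,
# each solved by a derivative-free (direct search style) routine.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # black-box objective (illustrative)
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g(x):                      # inequality constraints g_i(x) <= 0 (illustrative)
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])     # stay inside the unit disc

def penalized(x, mu):
    return f(x) + mu * np.sum(np.maximum(0.0, g(x)) ** 2)

x, mu = np.array([0.0, 0.0]), 1.0
for _ in range(8):             # outer loop: solve, then increase the penalty parameter
    result = minimize(penalized, x, args=(mu,), method="Nelder-Mead")
    x, mu = result.x, mu * 10.0
print("approximate solution:", x, "constraint value:", g(x))
```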
Abstract:
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than with the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques on silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion. Copyright © 2014 ISCA.
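As a sketch of the evaluation protocol described above (a supervised FS filter followed by a classifier), the example below ranks features by mutual information, keeps the top ones and scores a k-nearest-neighbors classifier by cross-validation; the data dimensions and labels are hypothetical stand-ins for the multimodal SSI features.

```python
# Hedged sketch: mutual-information feature selection followed by k-NN classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random((150, 400))                 # 150 utterances x 400 fused features (assumption)
y = rng.integers(0, 10, size=150)          # 10 hypothetical word classes

pipe = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=50),   # keep the 50 highest-scoring features
    KNeighborsClassifier(n_neighbors=3),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```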