841 results for sub-solutions and super-solutions
Abstract:
A selection of PCN congeners was analyzed in pooled blubber samples of pilot whale (Globicephala melas), ringed seal (Phoca hispida), minke whale (Balaenoptera acutorostrata), fin whale (Balaenoptera physalus), harbour porpoise (Phocoena phocoena), hooded seal (Cystophora cristata) and Atlantic white-sided dolphin (Lagenorhynchus acutus), covering a time period of more than 20 years (1986-2009). A large geographical area of the North Atlantic and Arctic was covered. PCN congeners 48, 52, 53, 66 and 69 were found in the blubber samples at concentrations between 0.03 and 5.9 ng/g lw. PCBs were also analyzed in minke whales and fin whales from Iceland, and the total PCN content accounted for 0.2% or less of the total non-planar PCB content. No statistically significant trend in contaminant levels could be established for the studied areas. However, in all species except minke whales caught off Norway, the lowest summed PCN concentrations were found in samples from the latest sampling period.
Abstract:
Macrozooplankton are an important link between higher and lower trophic levels in the oceans. They serve as the primary food for fish, reptiles, birds and mammals in some regions, and play a role in the export of carbon from the surface to the intermediate and deep ocean. Little, however, is known of their global distribution and biomass. Here we compiled a dataset of macrozooplankton abundance and biomass observations for the global ocean from a collection of four datasets. We harmonise the data to common units, calculate additional carbon biomass where possible, and bin the dataset onto a global 1 x 1 degree grid. This dataset is part of a wider effort to provide a global picture of carbon biomass data for key plankton functional types, in particular to support the development of marine ecosystem models. Over 387 700 abundance data and 1330 carbon biomass data have been collected from pre-existing datasets. A further 34 938 abundance data were converted to carbon biomass using species-specific length frequencies or species-specific abundance-to-carbon-biomass relationships. Depth-integrated values are used to calculate epipelagic macrozooplankton biomass concentrations and global biomass. Global macrozooplankton biomass has a mean of 8.4 µg C l-1, a median of 0.15 µg C l-1 and a standard deviation of 63.46 µg C l-1. The global annual average estimate of epipelagic macrozooplankton biomass, based on the median value, is 0.02 Pg C. Biomass is highest in the tropics, decreasing in the sub-tropics and increasing slightly towards the poles. There are, however, limitations to the dataset: abundance observations have good coverage except in the South Pacific mid-latitudes, but biomass observation coverage is only good at high latitudes. Biomass is restricted to data that are originally given in carbon or that can be converted from abundance to carbon. Carbon conversions from abundance are restricted for the most part by the lack of information on the size of the organisms and/or the absence of taxonomic information. Distribution patterns of global macrozooplankton biomass and statistical information about biomass concentrations may be used to validate biogeochemical models and Plankton Functional Type models.
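The processing pipeline described above (unit conversion, gridding, distributional summaries) can be sketched compactly. The following Python snippet is purely illustrative and is not the authors' code: the allometric coefficients, the synthetic observations and the simple cell-averaging are assumptions standing in for the dataset's actual conversion and binning procedures.

```python
# A minimal sketch (not the authors' processing code) of the kind of pipeline
# described above: convert abundance to carbon biomass with an assumed
# length-carbon relation, bin observations onto a 1 x 1 degree grid, and
# summarise the distribution. All coefficients and data here are synthetic.
import numpy as np

def abundance_to_carbon(abundance_ind_l, length_mm, a=0.01, b=2.8):
    """Hypothetical allometric conversion: carbon per individual (ug C) = a * L^b."""
    return abundance_ind_l * a * length_mm ** b   # ug C per litre

def bin_to_grid(lat, lon, values):
    """Average the observations falling in each 1-degree cell."""
    grid_sum = np.zeros((180, 360))
    grid_n = np.zeros((180, 360))
    rows = np.clip((lat + 90).astype(int), 0, 179)
    cols = np.clip((lon + 180).astype(int), 0, 359)
    np.add.at(grid_sum, (rows, cols), values)
    np.add.at(grid_n, (rows, cols), 1)
    with np.errstate(invalid="ignore"):
        return grid_sum / grid_n                  # NaN where no observations

# Synthetic observations standing in for the harmonised dataset
rng = np.random.default_rng(0)
lat = rng.uniform(-80.0, 80.0, 1000)
lon = rng.uniform(-180.0, 180.0, 1000)
abundance = rng.uniform(0.01, 5.0, 1000)          # individuals per litre
length = rng.uniform(2.0, 40.0, 1000)             # mm
biomass = abundance_to_carbon(abundance, length)  # ug C per litre

grid = bin_to_grid(lat, lon, biomass)
print(np.nanmean(grid), np.nanmedian(grid), np.nanstd(grid))
```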
Abstract:
A selection of MeO-BDE and BDE congeners was analyzed in pooled blubber samples of pilot whale (Globicephala melas), ringed seal (Phoca hispida), minke whale (Balaenoptera acutorostrata), fin whale (Balaenoptera physalus), harbor porpoise (Phocoena phocoena), hooded seal (Cystophora cristata), and Atlantic white-sided dolphin (Lagenorhynchus acutus), covering a time period of more than 20 years (1986-2009). The analytes were extracted and cleaned up using open-column extraction and multi-layer silica gel column chromatography. The analysis was performed using both low-resolution and high-resolution GC-MS. MeO-PBDE concentrations relative to total PBDE concentrations varied greatly between sampling periods and species. The highest MeO-PBDE levels were found in the toothed whale species, pilot whale and white-sided dolphin, often exceeding the concentration of the most abundant PBDE, BDE-47. The lowest MeO-PBDE levels were found in fin whales and ringed seals. The main MeO-BDE congeners were 6-MeO-BDE47 and 2'-MeO-BDE68. Only a weak correlation between BDE-47 and its methoxylated analog 6-MeO-BDE47 was found, which is indicative of a natural source for the MeO-PBDEs.
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work focuses on the analysis of dual-offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared with conventional dual-offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of this methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas constructed from two flat reflectarray structures. The classic reflectarray analysis, based on the Method of Moments (MoM) under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the angle of incidence on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements of the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computation time, while the other does not, but in many cases offers improved accuracy. The approximation consists in computing the reflected field on each element of the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the elements of the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas whose main reflectarray is printed on a parabolic, or more generally curved, surface. In many applications of dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To account for the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the predicted co- and cross-polar patterns and antenna gain with respect to the case of ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-band which include a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector: radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is replaced by a flat reflectarray.
The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning or shaped beams. In the third part, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. In the proposed antenna configuration, the feed, the sub-reflectarray and the main reflectarray are in the near field of one another, so that the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used to benchmark the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization, which is introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray so as to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are also described, based on a dual planar reflectarray configuration that provides electronic beam scanning over a limited angular range. In the first architecture, the beam scanning is achieved by introducing a phase control in the elements of the sub-reflectarray while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics that are simple to fold and deploy.
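To make the structure of the approximation described above concrete, the following Python sketch contrasts the exact element-by-element evaluation with the single-response shortcut. It is only an illustration under stated assumptions: the field, response and geometry functions are hypothetical placeholders and do not reproduce the MoM-based software developed in the thesis.

```python
# Illustrative sketch only (not the thesis software): contrast the exact
# analysis, where a (possibly angle-dependent) element response is evaluated
# for every sub-reflectarray contribution, with the approximation that reuses
# a single response per main-reflectarray element.
import numpy as np

def angle_of_incidence(sub_elem, main_elem):
    """Hypothetical geometry helper: angle between the ray and the array normal."""
    d = main_elem["pos"] - sub_elem["pos"]
    return np.arccos(abs(d[2]) / np.linalg.norm(d))

def element_response(main_elem, incidence_angle):
    """Hypothetical reflection coefficient of a main-reflectarray element."""
    return np.exp(1j * main_elem["phase"]) * np.cos(incidence_angle) ** 0.1

def incident_field(sub_elem, main_elem):
    """Hypothetical field radiated by one sub-reflectarray element at a main element."""
    r = np.linalg.norm(main_elem["pos"] - sub_elem["pos"])
    return sub_elem["excitation"] * np.exp(-1j * 2 * np.pi * r) / r

def reflected_field_exact(main_elem, sub_elements):
    # Exact: evaluate the response for the incidence angle of every contribution.
    total = 0j
    for s in sub_elements:
        total += element_response(main_elem, angle_of_incidence(s, main_elem)) \
                 * incident_field(s, main_elem)
    return total

def reflected_field_approx(main_elem, sub_elements):
    # Approximation: compute the response once (nominal angle) and reuse it,
    # which is accurate when the element is insensitive to the incidence angle.
    nominal = element_response(main_elem, main_elem["nominal_angle"])
    return nominal * sum(incident_field(s, main_elem) for s in sub_elements)

sub_elements = [
    {"pos": np.array([0.0, 0.0, 0.0]), "excitation": 1.0},
    {"pos": np.array([0.02, 0.0, 0.0]), "excitation": 0.8},
]
main_elem = {"pos": np.array([0.0, 0.0, 0.5]), "phase": 1.2, "nominal_angle": 0.3}
print(reflected_field_exact(main_elem, sub_elements))
print(reflected_field_approx(main_elem, sub_elements))
```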
Abstract:
The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters, such as the system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization, Japan. Consequently, the JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, the JNES has also decided to release the data from the NUPEC Pressurized Water Reactor (PWR) sub-channel and bundle tests for a follow-up international benchmark, the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) Benchmark. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
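As a concrete illustration of this layering, the short Python sketch below uses a POS tagger as the lower-level module and feeds its output to a naive higher-level sense lookup, so that any tagging error propagates directly into the sense candidates. It assumes NLTK with its tokenizer, tagger and WordNet data installed, and it is not part of OntoTag itself.

```python
# Minimal sketch of a low-level tool (POS tagger) feeding a higher-level one
# (a naive WordNet sense lookup). Assumes NLTK with the 'punkt' tokenizer,
# the default POS tagger model and WordNet data installed; not OntoTag code.
import nltk
from nltk.corpus import wordnet as wn

def pos_to_wn(tag):
    """Map Penn Treebank tags to WordNet POS categories."""
    if tag.startswith("J"):
        return wn.ADJ
    if tag.startswith("V"):
        return wn.VERB
    if tag.startswith("R"):
        return wn.ADV
    return wn.NOUN

def sense_candidates(text):
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)                       # lower-level annotation
    return {
        word: wn.synsets(word, pos=pos_to_wn(tag))      # higher-level annotation
        for word, tag in tagged
    }

print(sense_candidates("The parser annotates the sentence"))
```

If the tagger mislabels a word (say, tagging a noun as a verb), the sense lookup receives the wrong POS and returns the wrong candidate senses, which is exactly the error propagation discussed above.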
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data: the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is twofold: first, to assess these models and examine their strengths and weaknesses; and second, to identify areas for improvement.
Abstract:
The study area comprises the La Colacha sub-basins of the Arroyos Menores basins, natural areas west and south of Río Cuarto in the Province of Córdoba, Argentina, fertile with loess soils and a temperate monsoon climate, but affected by soil erosion, including regressive gullies that progressively degrade them. The land has been cultivated gently for some one hundred and sixty years, and coordinated action planning became necessary to conserve it while maintaining good agricultural production. The authors improved the available data on soils and hydrology for the study area, evaluated systems of soil use and actions to be recommended, and applied Decision Support System (DSS) tools for that purpose; this led them to use discrete multi-criteria decision-making (MCDM) models to obtain a more global view of soil conservation and hydraulic management actions and of the main types of soil use. For that they used weighted PROMETHEE, ELECTRE and AHP methods with a system of criteria grouped as environmental, economic and social, and with weights derived from their data on the effects of those criteria. The resulting alternatives offer guidance for planning, depending to some extent on the sub-basins and on the selection of weights, but actions for soil conservation and water management measures are recommended to preserve the condition of the basins, which are currently degrading appreciably, mainly while keeping the current uses of the land.
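To give a flavour of the multi-criteria ranking step, here is a minimal weighted-sum sketch in Python. The alternatives, scores and weights are invented, and this deliberately simplified scoring stands in for the richer PROMETHEE, ELECTRE and AHP preference models actually used in the study.

```python
# Minimal weighted-sum illustration of ranking land-use / conservation
# alternatives against grouped criteria. All names, scores and weights are
# hypothetical; real PROMETHEE/ELECTRE/AHP analyses use richer preference models.
import numpy as np

alternatives = ["maintain current use", "terracing + gully control", "pasture conversion"]
criteria = ["environmental", "economic", "social"]
weights = np.array([0.5, 0.3, 0.2])          # assumed importance of each criteria group

# scores[i, j]: performance of alternative i on criterion group j (0-1, higher is better)
scores = np.array([
    [0.3, 0.7, 0.6],
    [0.8, 0.5, 0.7],
    [0.6, 0.4, 0.5],
])

ranking = scores @ weights
for alt, val in sorted(zip(alternatives, ranking), key=lambda p: -p[1]):
    print(f"{alt}: {val:.2f}")
```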
Abstract:
In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production-line level. Consequently, real-time energy consumption data from the manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop-floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished in four main phases (i.e., studies). In the first study, relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data to improve energy efficiency. Considering the variable energy prices within a day, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs for single-machine production scheduling. The model makes decisions at the machine level to determine the launch times for job processing, the idle times, and when the machine must be shut down, turned on and turned off. This model enables the operations manager to implement the least expensive production schedule during a production shift. In the third study, the research provides a methodology to help managers implement the IoT at the production system level; it includes an analysis of the current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy KPIs have been used to evaluate energy efficiency. In the fourth study, the goal is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete factories to specify the energy consumption, CO2 emissions, and energy cost at the operation, product, and production-order levels, while considering different energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of the thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the factory's maturity level, targets, production type, etc. The thesis also shows that significant reductions in energy costs can be achieved simply by avoiding the daily peak-price periods. Furthermore, the thesis shows that the level at which energy consumption is monitored (i.e., the machine level), the time interval, and the level of energy data analysis are all important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when there are high levels of standardization in the production processes and their data) is essential to enable factories to specify the amount and cost of the energy consumed, as well as the CO2 emitted, while producing a product or component, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
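A toy version of the kind of decision the second study's model makes can be written in a few lines. The Python sketch below is only illustrative: the hourly prices, the jobs and the brute-force placement are invented, and it is far simpler than the thesis's mathematical model (it ignores idle, shutdown and restart decisions).

```python
# Toy illustration (not the thesis model): order jobs of known duration and
# power draw on a single machine so that as much processing as possible falls
# in cheap hourly price slots. Prices, jobs and the search strategy are invented.
import itertools

hourly_price = [0.09, 0.08, 0.08, 0.10, 0.18, 0.22, 0.25, 0.20]   # EUR/kWh per hour slot
jobs = [("job_A", 2, 15.0), ("job_B", 1, 30.0), ("job_C", 3, 10.0)]  # (name, hours, kW)

def schedule_cost(order):
    """Energy cost of running the jobs back-to-back from hour 0 in the given order."""
    cost, hour = 0.0, 0
    for _, duration, power in order:
        for _ in range(duration):
            cost += power * hourly_price[hour]
            hour += 1
    return cost

best = min(itertools.permutations(jobs), key=schedule_cost)
print([name for name, _, _ in best], round(schedule_cost(best), 2), "EUR")
```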
Abstract:
This research is situated in the field of the Sociology of Religion and aims to present, through bibliographical, historical and field surveys, the transformation of the relationship between the Brazilian Protestant sub-field and sport, especially football. It seeks to show that the gradual acceptance of sport in general, and of football in particular, in Brazilian Protestant circles was a consequence of the deepening of the secularization process that took place in Brazilian society during the twentieth century. This acceptance was also a consequence of the gradual disenchantment of the Protestant sub-field. The desacralization of time, namely Sunday, contributed to this process, which reflects a transformation of the Protestant worldview. The growing professionalization of football, increasingly understood by Protestants as a legitimate activity, also contributed to its acceptance. Through a case study, this research presents and analyzes the group that calls itself "Atletas de Cristo" (Athletes of Christ), an example of the transformation both of the Protestant religious sub-field in its relationship with sport and of the sporting field itself. To understand the religion of the Atletas de Cristo, an ethnography of the group is developed, together with an analysis of its own literature, of its relations with athletes "not of Christ", and of its sacralization of sport.
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) through retrofitting the diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of the TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT) based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
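The regression side of such an analysis can be sketched compactly: with a Q-matrix coding which attributes each item requires, item difficulties are regressed on the attribute indicators and the proportion of explained variance is read off. In the Python sketch below the Q-matrix and difficulty values are simulated stand-ins, not TIMSS item data, and the single regression shown is only one of the many Q-matrix models the study compares.

```python
# Minimal sketch of the multiple-regression step: regress item difficulty on
# Q-matrix attribute indicators and report the proportion of variance explained.
# The Q-matrix and difficulty values here are simulated, not TIMSS item data.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_attributes = 49, 7

Q = rng.integers(0, 2, size=(n_items, n_attributes))         # 1 = item requires attribute
true_effects = rng.uniform(0.2, 1.0, size=n_attributes)
difficulty = Q @ true_effects + rng.normal(0, 0.3, n_items)   # simulated item difficulty

X = np.column_stack([np.ones(n_items), Q])                    # add intercept
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
fitted = X @ beta
r_squared = 1 - np.sum((difficulty - fitted) ** 2) / np.sum((difficulty - difficulty.mean()) ** 2)
print(f"Variance in item difficulty explained by the Q-matrix: {r_squared:.2f}")
```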
Abstract:
A study has been performed on the Cretaceous to Early Miocene succession of the Vrancea Nappe (Outer Carpathians, Romania), based on field reconstruction of the stratigraphic record and on mineralogical-petrographic and geochemical analyses. Extra-basinal clastic supply and intra-basinal autochthonous deposits have been differentiated, appearing laterally inter-fingered and/or interbedded. The main clastic petrofacies consist of calcarenites, sub-litharenites, quartzarenites, sub-arkoses, and polygenic conglomerates derived from extra-basinal margins. The alternation of internal and external provenance of the different supplies is the result of the paleogeographic re-organization of the basin/margins system due to tectonic activation and exhumation of rising areas. The intra-basinal deposits consist of black shales and siliceous sediments (silexites and cherty beds), evidencing major environmental changes in the Moldavidian Basin. Organic-matter-rich black shales were deposited during anoxic episodes related to sediment starvation and high nutrient influx due to paleogeographic isolation of the basin caused by plate drifting. The black shales display relatively high contents of sub-mature to mature, Type II lipidic organic matter (good oil- and gas-prone source rocks), constituting a potentially active petroleum system. The intra-basinal siliceous sediments are related to oxic pelagic or hemipelagic environments under tectonic quiescence conditions, although their increase in the Oligocene part of the succession can be correlated with volcanic supplies. The integration of all the data into the “progressive reorientation of convergence direction” Carpathian model, and their consideration in the framework of a foreland basin, leads to the proposal of some constraints on the paleogeographic-geodynamic evolutionary model of the Moldavidian Basin from the Late Cretaceous to the Burdigalian.
Abstract:
This work is part of the Projeto Grande Minas - União pelas Águas project, which carried out the environmental zoning of the sub-basins of the Minas Gerais tributaries of the middle Rio Grande. With the final zoning products obtained, the project now enters the "Implementation" phase, that is, their direct application to support the management of water resources in the sub-basins of the study area. This work specifically studies the Ribeirão Bocaina sub-basin, one of the 34 sub-basins that make up the basin of the Minas Gerais tributaries of the middle Rio Grande; it is located in the municipality of Passos-MG and covers an area of 457.9 km². The work seeks to contribute to the management of water resources in this sub-basin, which has been suffering from the degradation of its water resources. The objective is to provide cartographic instruments to support water resources management and thereby contribute to their preservation and sustainable use. The methodology involves an analysis of the legal aspects of the study area; the construction of a georeferenced digital database with information on the physical, biotic and socio-economic characteristics of the sub-basin; and the production of a derived map, interpretive in nature and easy to read, that can be used directly by public managers in decision-making. The digital database produced includes a series of digital maps, among them climate, soil, slope, geomorphological, geological, aquifer system, hydrographic, and land use and land cover maps. The derived map highlights the terrains most predisposed to direct alterations of the water resources.
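Deriving an interpretive map from several thematic layers is essentially a weighted overlay. The Python sketch below illustrates that idea only: the layers, class codes, weights and thresholds are invented and do not reproduce the project's actual cartographic methodology or data.

```python
# Minimal weighted-overlay sketch of deriving an interpretive susceptibility map
# from thematic raster layers. The layers, classes and weights are invented and
# stand in for the project's real digital database and derived map.
import numpy as np

rows, cols = 100, 100
rng = np.random.default_rng(2)

slope_class = rng.integers(1, 6, size=(rows, cols))      # 1 (flat) .. 5 (steep)
land_use_class = rng.integers(1, 4, size=(rows, cols))   # 1 (forest) .. 3 (bare/urban)
soil_class = rng.integers(1, 4, size=(rows, cols))       # 1 (resistant) .. 3 (erodible)

weights = {"slope": 0.5, "land_use": 0.3, "soil": 0.2}   # assumed relative importance

susceptibility = (
    weights["slope"] * slope_class / 5
    + weights["land_use"] * land_use_class / 3
    + weights["soil"] * soil_class / 3
)

# Classify into three zones for an easy-to-read derived map
zones = np.digitize(susceptibility, [0.4, 0.7])          # 0 = low, 1 = medium, 2 = high
print(np.bincount(zones.ravel()))
```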
Abstract:
Final project of the Integrated Master's degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014