Abstract:
Comparisons of climate model hindcasts with independent proxy data are essential for assessing model performance in non-analogue situations. However, standardized palaeoclimate data sets for assessing the spatial pattern of past climatic change across continents are lacking for some of the most dynamic episodes of Earth’s recent past. Here we present a new chironomid-based palaeotemperature dataset designed to assess climate model hindcasts of regional summer temperature change in Europe during the late-glacial and early Holocene. Latitudinal and longitudinal patterns of inferred temperature change are in excellent agreement with simulations by the ECHAM-4 model, implying that atmospheric general circulation models like ECHAM-4 can successfully predict regionally diverging temperature trends in Europe, even when conditions differ significantly from present. However, ECHAM-4 infers larger amplitudes of change and higher temperatures during warm phases than our palaeotemperature estimates, suggesting that this and similar models may overestimate past and potentially also future summer temperature changes in Europe.
Abstract:
Companion animals closely share their domestic environment with people and have the potential to act as sources of zoonotic diseases. They also have the potential to be sentinels of infectious and noninfectious diseases. With the exception of rabies, there has been minimal ongoing surveillance of companion animals in Canada. We developed customized data extraction software, the University of Calgary Data Extraction Program (UCDEP), to automatically extract and warehouse the electronic medical records (EMR) from participating private veterinary practices to make them available for disease surveillance and knowledge creation for evidence-based practice. It was not possible to build generic data extraction software; the UCDEP required customization to meet the specific software capabilities of the veterinary practices. The UCDEP, tailored to the participating veterinary practices' management software, was capable of extracting data from the EMR with greater than 99% completeness and accuracy. The experiences of the people developing and using the UCDEP and the quality of the extracted data were evaluated. The electronic medical record data stored in the data warehouse may be a valuable resource for surveillance and evidence-based medical research.
Abstract:
Large amounts of animal health care data are present in veterinary electronic medical records (EMRs), and they present an opportunity for companion animal disease surveillance. Veterinary patient records are largely in free text without clinical coding or a fixed vocabulary. Text-mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the practice management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text-miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text-miner retrieved cases with enteric signs with a sensitivity of 87.6% (95% CI, 80.4-92.9%) and a specificity of 99.3% (95% CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals.
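To make the reported diagnostic accuracy figures concrete, the short Python sketch below computes sensitivity and specificity from confusion-matrix counts obtained by comparing an automatic classifier against human reviewers. The counts and function name are hypothetical and are not taken from the study; the percentages in the abstract come from the authors' own evaluation.

# Illustrative sensitivity/specificity calculation for a text-miner evaluated
# against human reviewers. The counts below are hypothetical, not the study's.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # proportion of true enteric-syndrome cases retrieved
    specificity = tn / (tn + fp)   # proportion of non-cases correctly excluded
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=120, fn=17, tn=2347, fp=16)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")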
Abstract:
The software Pan2Applic is a tool to convert files or folders of files (ASCII/tab-separated data files with or without a metaheader), downloaded from PANGAEA via the search engine or the data warehouse, into formats used by applications, e.g. for visualization or further processing. It may also be used to convert files or zip archives as downloaded from CD-ROM data collections published in the WDC-MARE Reports series. Pan2Applic is distributed as freeware for the operating systems Microsoft Windows, Apple OS X and Linux.
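As a rough illustration of the kind of conversion Pan2Applic performs, the sketch below reads a tab-separated PANGAEA-style data file, skips an optional metaheader block, and writes the table out as CSV. The '/* ... */' metaheader delimiters, file names and overall logic are assumptions made for illustration; they are not Pan2Applic's actual implementation.

# Sketch: convert a tab-separated PANGAEA-style file (optionally preceded by a
# metaheader block) into CSV. Format details are assumptions, not Pan2Applic code.
import csv

def convert_tab_to_csv(src_path: str, dst_path: str) -> None:
    with open(src_path, encoding="utf-8") as src:
        lines = src.read().splitlines()
    start = 0
    if lines and lines[0].startswith("/*"):
        # skip an assumed metaheader block delimited by '/*' ... '*/'
        start = next(i for i, line in enumerate(lines) if line.strip().endswith("*/")) + 1
    rows = [line.split("\t") for line in lines[start:] if line.strip()]
    with open(dst_path, "w", newline="", encoding="utf-8") as dst:
        csv.writer(dst).writerows(rows)

# convert_tab_to_csv("pangaea_dataset.tab", "pangaea_dataset.csv")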
Abstract:
Although there has been a lot of interest in recognizing and understanding air traffic control (ATC) speech, none of the published works have reported detailed field data results. We have developed a system able to identify the language spoken and to recognize and understand sentences in both Spanish and English, and we present field results for several in-tower controller positions. To the best of our knowledge, this is the first time that field (not simulated) ATC speech has been captured, processed, and analyzed. The use of stochastic grammars accommodates the variations in standard phraseology that appear in field data. The robust understanding algorithm developed achieves 95% concept accuracy from ATC text input. It also tolerates changes in the presentation order of concepts and corrects errors introduced by the speech recognition engine, improving the percentage of fully correctly understood sentences by 17% (English) and 25% (Spanish) absolute, relative to the percentages of fully correctly recognized sentences. The errors due to the spontaneity of the speech are analyzed and compared with read speech. A 96% word accuracy for read speech drops to 86% for field ATC data in Spanish on the "clearances" task, confirming that field data are needed to estimate the performance of a system. A literature review and a critical discussion of the possibilities of speech recognition and understanding technology applied to ATC speech are also given.
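Word accuracy figures such as those above are conventionally derived from an edit-distance alignment between the recognized word sequence and a reference transcript. The sketch below is a generic word-error-rate computation, with word accuracy taken as 1 - WER; it is a standard formulation, not the authors' evaluation code, and the example utterance is invented.

# Generic word error rate (WER) via Levenshtein alignment over word sequences.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

wer = word_error_rate("climb to flight level one two zero", "climb flight level one too zero")
print(f"word accuracy = {1 - wer:.0%}")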
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs, such as loops and branches, and embedded component services.
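The following toy sketch conveys the flavour of propagating known attributes of workflow inputs to the remaining components along data dependencies until a fixpoint is reached. It is only an illustration under invented activity and attribute names; the actual approach uses sharing analysis over a Horn clause-based representation rather than this naive propagation.

# Toy fixpoint propagation of domain attributes through a workflow's data
# dependencies. Activity and attribute names are invented for illustration.
workflow = {
    # activity: (input items, output items)
    "anonymize": (["patient_record"], ["anon_record"]),
    "aggregate": (["anon_record"], ["summary"]),
    "publish":   (["summary"], ["report"]),
}

attributes = {"patient_record": {"personal_data"}}  # known input attributes

changed = True
while changed:                        # iterate until no attribute set grows
    changed = False
    for activity, (inputs, outputs) in workflow.items():
        inferred = set().union(*(attributes.get(i, set()) for i in inputs))
        for out in outputs:
            known = attributes.setdefault(out, set())
            if not inferred <= known:
                known |= inferred
                changed = True

print(attributes)  # every derived item inherits the 'personal_data' attribute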
Abstract:
Cardiovascular diseases are the main cause of death worldwide and are expected to remain so in the future, generating high costs for health care systems. Implantable cardiac devices have become one of the options for the diagnosis and treatment of cardiac rhythm disorders, and clinical research with these devices has acquired great importance in the fight against these diseases. Pharmaceutical and medical technology companies, as well as investigators, are involved in an increasing number of clinical research projects. The growth in volume and the increase in the complexity of medical research are raising the expenditure associated with clinical investigation, driving health care companies to explore new solutions to reduce clinical trial costs. Information and Communication Technologies have facilitated clinical research, especially in the last decade: electronic systems and software applications have provided new possibilities for the acquisition, processing and analysis of clinical study data, and web technology led to the first electronic data capture systems, which have evolved over recent years. Nevertheless, the improvement of these systems remains crucial for the progress of clinical research. In addition, the traditional way of conducting clinical studies with implantable cardiac devices needed better processing of the data stored by these devices and better merging of these data with the clinical data collected by investigators and patients.
The rationale of this research is the need to improve the efficiency of clinical investigation with implantable cardiac devices by reducing project development costs and times, increasing the quality of the data collected, and designing solutions that extract more value from the data by merging data from different sources or trials. To this end, two new models are proposed as the specific objectives of this research project: (1) a model for the retrieval and processing of data for clinical studies with implantable cardiac devices, which structures and standardizes these procedures in order to shorten development times, improve the quality of the results and consequently reduce costs; and (2) a metrics model integrated into an Electronic Data Capture (EDC) system that allows the results of the research project, and particularly the performance of the EDC, to be analysed in order to improve these systems, reduce project times and costs, and improve the quality of the collected clinical data.
As a result of this work, the proposed processing model reduced the average data processing time by more than 90% and the related costs by more than 85%, thanks to the automation of data retrieval and storage, while also improving data quality. The metrics model enables a detailed descriptive analysis of a set of indicators that characterize the performance of each clinical research project and makes comparison between studies feasible. The conclusion of this doctoral thesis is that applying the two developed models in real clinical trials improved project efficiency, reducing overall costs and execution times and increasing the quality of the data collected. The main contributions of this research to scientific knowledge are the implementation of an intelligent processing system for the data stored by implantable cardiac devices, the integration into it of a global database optimized for all device models, the automated generation of a unified repository of clinical data and implantable cardiac device data, and the design of a metric that can be applied to and integrated into electronic data capture systems to analyse the performance results of clinical research projects.
Abstract:
Over recent years, the relentless growth of biomedical data sources, driven by the development of massive data generation techniques (especially in the field of genomics) and the expansion of technologies for communication and information sharing, has meant that biomedical research now relies almost exclusively on the distributed analysis of information and on finding relationships between different data sources. This is a complex task because of the heterogeneity of the sources involved (whether due to different formats, technologies or domain models). Some works aim to homogenize these sources so that the information can be presented in an integrated way, as if it were a single database; however, no existing work fully automates this process of semantic integration. There are two main approaches to the problem of integrating heterogeneous data sources: centralized and distributed. Both require translating data from one model to another, a task performed using formalizations of the semantic relationships between the underlying models and the central model. These formalizations are commonly called annotations. In the context of semantic integration of information, database annotations define relationships between terms with the same meaning so that information can be translated automatically. Depending on the problem at hand, these relationships are defined either between individual concepts or between whole sets of concepts (views); the work presented here focuses on the latter. The European project p-medicine (FP7-ICT-2009-270089) is based on the centralized approach and uses view-based annotations over databases modeled in RDF. The data extracted from the different sources are translated and integrated into a Data Warehouse. Within the p-medicine platform, the Biomedical Informatics Group (GIB) of the Universidad Politécnica de Madrid, where this work was carried out, provides a tool for generating the required annotations of RDF databases. This tool, called Ontology Annotator, allows view-based annotations to be created manually. However, although it displays the data sources to be annotated graphically, most users find the tool difficult to use and spend too much time on the annotation process. Hence the need for a more advanced tool capable of assisting the user in annotating databases in p-medicine, with the aim of automating the most complex parts of the annotation process and presenting the information related to RDF database annotations in a natural, understandable way. This tool has been named Ontology Annotator Assistant, and this work describes its design and development, as well as some innovative algorithms created by the author for its operation. The tool offers functionality not previously available in any other tool in the area of automatic annotation and semantic integration of databases.
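A minimal sketch of the kind of annotation-driven translation such view-based mappings enable is shown below, using the rdflib Python library to rewrite triples from a source vocabulary into a central model. The namespaces, predicates and example data are invented for illustration and are not the p-medicine or Ontology Annotator data model.

# Sketch: annotation-driven rewriting of RDF triples from a source vocabulary
# into a central model. Namespaces and predicates are invented examples.
from rdflib import Graph, Namespace, Literal

SRC = Namespace("http://example.org/source-hospital#")
CDM = Namespace("http://example.org/central-model#")

# An "annotation": source term -> equivalent term in the central model.
annotation = {SRC.patientAge: CDM.age, SRC.tumourSize: CDM.tumorSize}

source = Graph()
source.add((SRC.patient42, SRC.patientAge, Literal(63)))
source.add((SRC.patient42, SRC.tumourSize, Literal(2.4)))

warehouse = Graph()
for s, p, o in source:
    warehouse.add((s, annotation.get(p, p), o))  # rewrite annotated predicates

print(warehouse.serialize(format="turtle"))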
Abstract:
Using geophysical techniques (vertical electrical soundings, S.E.V.) and hydrochemical characterization of the waters (determination of the predominant anions), data have been obtained on the stratigraphy and functioning of the ephemeral lake of El Hito (Cuenca, Spain), which lies on gypsum close to the nearby site of the Centralized Temporary Storage facility (ATC) in Villar de Cañas. In addition, the palaeoenvironmental conditions were reconstructed from the sedimentary record of a hand-driven core, from which samples were collected for the determination of organic compounds.
Abstract:
Science makes frequent use of computational resources to run experiments and scientific processes, which can be modeled as workflows that manipulate large volumes of data and perform actions such as selecting, analysing and visualising those data according to a defined procedure. Scientific workflows are used by scientists in many areas, such as astronomy and bioinformatics, and tend to be computationally intensive and strongly oriented towards the manipulation of large data volumes, which requires high-performance execution platforms such as computing grids or clouds. Executing workflows on such platforms requires mapping the available computational resources to the workflow activities, a process known as scheduling. Cloud computing platforms have proved to be a viable alternative for executing scientific workflows, but scheduling on this type of platform usually has to take specific constraints into account, such as a limited budget or the type of computational resource to be used in the execution. In this context, information such as the estimated execution duration or time and cost limits (referred to here as scheduling-support information) is important to ensure that scheduling is efficient and that execution achieves the expected results. This work identifies the support information that can be added to scientific workflow models to underpin efficient scheduling and execution on cloud computing platforms. A classification of this information is proposed, and its use in the main scientific workflow management systems (SGWC) is analysed. To evaluate the impact of this information on scheduling, experiments were carried out using scientific workflow models with different support information, scheduled with algorithms adapted to take the added information into account. In these experiments, a reduction of up to 59% in the financial cost of executing the workflow in the cloud and a reduction in makespan of up to 8.6% were observed, compared with the execution of the same workflows scheduled without any support information available.
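As an illustration of how such support information can be used, the toy scheduler below takes per-task duration estimates and a cost budget (two of the kinds of information discussed above) and greedily assigns faster virtual machine types to the longest tasks while the estimated total cost stays within budget. The VM catalogue, prices and policy are invented and do not correspond to the algorithms evaluated in the work.

# Toy scheduler: use estimated task durations and a budget limit to pick a VM
# type per task. Catalogue, prices and policy are invented for illustration.
VM_TYPES = {          # name: (relative speed, price per hour)
    "small":  (1.0, 0.10),
    "medium": (2.0, 0.20),
    "large":  (4.0, 0.45),
}

def schedule(tasks: dict[str, float], budget: float) -> dict[str, str]:
    """tasks: estimated duration (hours) on a 'small' VM. Greedily upgrade the
    longest tasks to faster VMs while the estimated total cost fits the budget."""
    plan = {name: "small" for name in tasks}
    def cost(p):
        return sum(tasks[t] / VM_TYPES[p[t]][0] * VM_TYPES[p[t]][1] for t in tasks)
    for task in sorted(tasks, key=tasks.get, reverse=True):
        for vm in ("large", "medium"):
            trial = dict(plan, **{task: vm})
            if cost(trial) <= budget:
                plan = trial
                break
    return plan

print(schedule({"align": 6.0, "assemble": 3.0, "annotate": 1.0}, budget=1.10))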
Abstract:
From the Introduction. The main focus of this study is to examine whether the euro has been an economic, monetary, fiscal, and social stabilizer for the Eurozone. To do this, the underpinnings of the euro are analysed, and the requirements and benchmarks that have to be achieved, maintained, and respected are tested against data from three major statistical sources: the European Central Bank's Statistics Data Warehouse (http://sdw.ecb.europa.eu/), Economagic (www.economagic.com), and E-signal. The purpose of this work is to analyse whether the euro was a stabilizing factor in the European Union from its inception to the outbreak of the financial crisis in the summer of 2008. To answer this question, the study analyses a number of indexes to understand the impact of the euro in three markets: (1) the foreign exchange market, (2) the stock market and the crude oil and commodities markets, and (3) the money market.
Abstract:
This paper examines the effect of the decoupling of farm direct payments upon the off-farm labour supply decisions of farmers in both Ireland and Italy, using panel data from the Farm Business Survey (REA) and the FADN database covering the period from 2002 to 2009 to model these decisions. Drawing from the conceptual agricultural household model, the authors hypothesise that the decoupling of direct payments led to an increase in off-farm labour activity despite some competing factors. This hypothesis rests largely upon the argument that the effects of changes in relative wages have dominated other factors. At a micro level, the decoupling-induced decline in the farm wage relative to the non-farm wage ought to have provided a greater incentive for off-farm labour supply. The main known competing argument is that decoupling introduced a new source of non-labour income, i.e. a wealth effect, which may in turn have suppressed or eliminated the likelihood of increased off-farm labour supply for some farmers. For the purposes of comparative analysis, the Italian model uses data from the REA database instead of the FADN, as the latter has less than satisfactory coverage of labour issues. Both models are developed at the national level. The paper draws from the literature on female labour supply and uses a sample-selection-corrected ordinary least squares model to examine both the decision to participate in off-farm work and the decision regarding the amount of time spent working off-farm. The preliminary results indicate that decoupling has not had a significant impact on off-farm labour supply in the case of Ireland, but there appears to be a significantly negative relationship in the Italian case. It remains the case in both countries that the wealth of the farmer is negatively correlated with the likelihood of off-farm employment.
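For readers unfamiliar with the estimation strategy mentioned above, the sketch below shows a generic two-step sample-selection correction: a probit participation equation followed by OLS on the selected sample augmented with the inverse Mills ratio, run on synthetic data. The variable names, data-generating process and specification are illustrative assumptions, not the authors' model or data.

# Generic two-step sample-selection correction (probit + OLS with the inverse
# Mills ratio) on synthetic data; purely illustrative, not the paper's model.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2_000
decoupled_payment = rng.gamma(2.0, 5.0, n)      # hypothetical covariates
farm_size = rng.lognormal(3.0, 0.5, n)
u = rng.normal(size=n)

# Synthetic participation and off-farm hours equations
participate = (0.5 - 0.02 * decoupled_payment - 0.005 * farm_size + u > 0).astype(int)
hours = np.where(participate == 1,
                 20 - 0.1 * decoupled_payment + 0.8 * u + rng.normal(size=n), np.nan)

X = sm.add_constant(np.column_stack([decoupled_payment, farm_size]))

# Step 1: probit for the participation decision
probit = sm.Probit(participate, X).fit(disp=False)
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)               # inverse Mills ratio

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio
sel = participate == 1
X2 = sm.add_constant(np.column_stack([decoupled_payment[sel], imr[sel]]))
print(sm.OLS(hours[sel], X2).fit().params)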
Abstract:
This paper analyses the factors affecting the off-farm labour decisions of Italian farm operators. Using micro-level data from the Farm Business Survey (REA) over the pre- and post-2003 CAP reform periods, we investigated the impact that operator, family, farm and market characteristics exert on these choices. Among other things, the paper also focuses on the differential impact of those variables for operators of smaller and larger holdings. The main results suggest that operator and family characteristics have a significant impact on the decision to participate in off-farm work more for smaller than for larger farms. By contrast, farm characteristics are more relevant for larger farms. In particular, decoupled farm payments, by increasing the marginal productivity of farm labour, lower the probability of working off the farm only on larger farms, while coupled subsidies in the pre-reform years do not have a significant impact on labour decisions. Finally, we show that, after accounting for the standard covariates, local and territorial labour market characteristics generally have little effect on operators' off-farm work choices.
Abstract:
The aim of this thesis is to understand, through the categories of labour and consumption, how the relationships within the collaboration networks of the consumer-trends firm Trendwatching are constituted. In the academic sphere, recent literature reveals the emergence of new concepts, such as the prosumer, co-creation and productive publics, to explain transformations in the world of work that increasingly involve consumer participation for value to be realized. The theoretical foundations underpinning this thesis therefore provide elements on the concepts of value, immaterial labour, consumption and their interrelations. Data collection took place mostly at the company's headquarters in London during 2015 and comprised: (1) 31 semi-structured interviews with spotters, employees and clients of the company; (2) three months of field observation, recorded in a field diary; (3) data obtained through virtual means, via the Trendwatching website. The data were analysed using Content Analysis, in which a process of derivation yielded 49 initial, 10 intermediate and 3 final categories. The 10 intermediate categories were: (1) who the spotter is; (2) the spotter's search for information; (3) spotters' motivation and reward; (4) spotters and the TW:IN community; (5) spotters' background; (6) Trendwatching's image; (7) the work environment; (8) what Trendwatching sells; (9) the database; (10) trends. A further derivation from these intermediate categories produced the final categories: (1) spotters; (2) work; (3) information. The research findings show that the spotter, as the individuals who make up Trendwatching's collaboration network are called, is the main product/service sold by the company. Returning to the research question from the final categories, the thesis contributes to the field by: (a) broadening the discussion on value creation in Organization Studies, identifying different concepts and new forms of appropriation of value by capital involved in the interactions and interfaces between work and consumption; (b) demonstrating how the operationalization of Content Analysis can help organize virtual empirical data (website analysis); (c) encouraging case studies to be carried out more often in organizations whose work is immaterial par excellence.
Abstract:
The 'season of birth' effect is one of the most consistently replicated associations in schizophrenia epidemiology. In contrast, the association between season of birth and development in the general population is relatively poorly understood. The aim of this study was to explore the impact of season of birth on various anthropometric and neurocognitive variables from birth to age seven in a large, community-based birth cohort. A sample of white singleton infants born after 37 weeks' gestation (n = 22,123) was drawn from the US Collaborative Perinatal Project. Anthropometric variables (weight, head circumference, length/height) and various measures of neurocognitive development were assessed at birth, 8 months, and 4 and 7 years of age. Compared to summer/autumn-born infants, winter/spring-born infants were significantly longer at birth, and at age seven were significantly heavier, taller and had a larger head circumference. Winter/spring-born infants achieved significantly higher scores on the Bayley Motor Score at 8 months, the Graham-Ernhart Block Test at age 4, and the Wechsler Intelligence Performance and Full Scale scores at age 7, but had significantly lower scores on the Bender-Gestalt Test at age 7. Winter/spring birth, while associated with an increased risk of schizophrenia, is generally associated with superior outcomes with respect to physical and cognitive development.