944 results for Data processing methods
Abstract:
This paper aims to assess the impact of environmental noise in the vicinity of primary schools and to analyze its influence on the workplace and on student performance through perceptions and objective evaluation. The subjective evaluation consisted of the application of questionnaires to students and teachers, and the objective assessment consisted of measuring in situ noise levels. The survey covered nine classes located in three primary schools. The Statistical Package for the Social Sciences was used for data processing and to draw conclusions. Additionally, the relationship between the difference in environmental and background noise levels of each classroom and students' difficulties in hearing the teacher's voice was examined. Noise levels in front of the school, in the schoolyard, and in the most noise-exposed classrooms (occupied and unoccupied) were measured. Indoor noise levels were much higher than World Health Organization (WHO) recommended values: LAeq,30min averaged 70.5 dB(A) in occupied classrooms and 38.6 dB(A) in unoccupied ones. Measurements of indoor and outdoor noise suggest that noise from the outside (road, schoolyard) affects the background noise level in classrooms, though to varying degrees. It was concluded that the façades most exposed to road traffic noise are subjected to values higher than 55.0 dB(A), and that noise levels inside the classrooms are mainly due to the schoolyard, the students, and road traffic. The difference between background (LA95,30min) and equivalent (LAeq,30min) noise levels in occupied classrooms was 19.2 dB(A), which shows that students' activities are a significant source of classroom noise.
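The two indicators compared in the abstract, LAeq (energy-equivalent level) and LA95 (background level, exceeded 95% of the time), can be reproduced from a series of sampled A-weighted levels. A minimal Python sketch, using illustrative sample values rather than the study's measurements:

```python
import math

def laeq(levels_db):
    """Equivalent continuous level: energy average of sampled dB(A) values."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db) / len(levels_db))

def la95(levels_db):
    """Background level: exceeded 95% of the time, i.e. the 5th percentile."""
    ordered = sorted(levels_db)
    return ordered[int(0.05 * (len(ordered) - 1))]

# Illustrative sampled A-weighted levels in an occupied classroom (not measured data)
samples = [52, 55, 60, 72, 75, 68, 58, 74, 71, 54]
print(round(laeq(samples), 1), la95(samples))  # → 69.7 52
```

Because LAeq is an energy average, a few loud peaks dominate it, which is why occupied-classroom LAeq sits far above the LA95 background in the study.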
Abstract:
Doctoral thesis in Environmental and Molecular Biology
Abstract:
Master's dissertation in Child Studies (area of specialization: Psychosocial Intervention with Children, Young People, and Families)
Abstract:
The main objective of this thesis was to produce a detailed report on flooding with specific reference to the Clare River catchment. Past flooding in the catchment was assessed with particular reference to the November 2009 flood event. A Geographic Information System was used to produce a graphical representation of the spatial distribution of the November 2009 flood. Flood risk is prominent within the Clare River catchment, especially in the region of Claregalway. The November 2009 events produced significant fluvial flooding from the Clare River, resulting in considerable damage to property; there were also hidden costs, such as the economic impact of closing the N17 until the floodwater subsided. Land use and channel conditions are traditional factors that have long been recognised for their effect on flooding processes; these were examined in the context of the Clare River catchment to determine whether they had any significant effect on flood flows. Climate change has become recognised as a factor that may produce more significant and frequent flood events in the future. Many experts expect climate change to increase the intensity and duration of rainfall in western Ireland, which would have significant implications for the Clare River catchment, already vulnerable to flooding. Flood estimation techniques are a key aspect of understanding and preparing for flood events. This study uses methods based on the statistical analysis of recorded data, and methods based on a design rainstorm and a rainfall-runoff model, to estimate flood flows. These provide a mathematical basis for evaluating the impacts of various factors on flooding and for generating practical design floods, which can be used in the design of flood relief measures.
The final element of the thesis includes the author’s recommendations on how flood risk management techniques can reduce existing flood risk in the Clare River catchment. Future implications to flood risk due to factors such as climate change and poor planning practices are also considered.
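Statistical flood estimation of the kind described above typically fits an extreme-value distribution to an annual-maximum flow series. A minimal sketch of one common approach, a Gumbel (EV1) fit by the method of moments, using illustrative flows rather than Clare River gauge data:

```python
import math
import statistics

def gumbel_flood_quantile(annual_max_flows, return_period_years):
    """Estimate the T-year design flood from an annual-maximum series
    using a Gumbel (EV1) distribution fitted by the method of moments."""
    mean = statistics.mean(annual_max_flows)
    sd = statistics.stdev(annual_max_flows)
    alpha = sd * math.sqrt(6) / math.pi      # scale parameter
    u = mean - 0.5772 * alpha                # location (Euler-Mascheroni constant)
    # Reduced variate for annual non-exceedance probability 1 - 1/T
    y = -math.log(-math.log(1 - 1 / return_period_years))
    return u + alpha * y

# Illustrative annual maximum flows in m^3/s (not real gauge records)
flows = [95, 120, 88, 140, 110, 132, 101, 155, 98, 125]
q100 = gumbel_flood_quantile(flows, 100)  # estimated 100-year design flood
```

The design rainstorm / rainfall-runoff route mentioned in the abstract serves as an independent check on quantiles estimated this way, which is why studies commonly apply both.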
Abstract:
Description of a costing model developed by a digital production librarian to determine the cost of putting an item into the Claremont Colleges Digital Library at the Claremont University Consortium. This case study includes variables such as material types and funding sources, data collection methods, and formulas and calculations for analysis. The model is useful for grant applications, cost allocations, and budgeting by digital project coordinators and digital library projects.
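A costing model of this kind ultimately reduces to a per-item calculation. A hypothetical sketch, where the variable names and figures are illustrative and not the model's actual fields:

```python
def cost_per_item(staff_hours, hourly_rate, equipment_cost, items_digitized,
                  overhead_rate=0.0):
    """Fully loaded cost to put one item into a digital collection:
    labor plus equipment for the batch, optionally marked up by an
    institutional overhead rate, divided by the number of items produced.
    All names here are illustrative, not the published model's fields."""
    labor = staff_hours * hourly_rate
    direct = labor + equipment_cost
    return direct * (1 + overhead_rate) / items_digitized

# A batch of 200 photographs: 80 staff hours at $25/h, $500 of scanner time,
# 15% institutional overhead → 2875 / 200 = 14.375 dollars per item
unit_cost = cost_per_item(80, 25.0, 500.0, 200, 0.15)
```

Separating the variables this way is what makes the model reusable for grant budgets: each funding source can be mapped to its own share of the labor, equipment, and overhead terms.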
Abstract:
Major sports events such as the 2008 European Football Championship are a challenge for anti-doping activities, particularly when the event is hosted by two different countries and there are two laboratories accredited by the World Anti-Doping Agency. This complicates the logistics of sample collection as well as the chemical analyses, which must be carried out in a timely manner. The following paper discusses the handling of whereabouts information for each athlete and the therapeutic use exemption system, experiences in the collection and transportation of blood and urine samples, and the results of the chemical analysis in the two accredited laboratories. An overview of the analytical results of blood profiling and growth hormone testing, compared with the distribution in the normal population, is also presented.
Abstract:
OBJECTIVE: To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN: Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING: Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES: 894 protocols of randomised controlled trials involving patients approved by participating research ethics committees between 2000 and 2003, and 515 subsequent full journal publications. RESULTS: Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials more often planned subgroup analyses compared with investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, authors stated that subgroup analyses were prespecified, but this was not supported by 28 (34.6%) of the corresponding protocols. In 86 publications, authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS: Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements in publications of randomised controlled trials about subgroup prespecification had no documentation in the corresponding protocols.
Definitive judgments regarding credibility of claimed subgroup effects are not possible without access to protocols and analysis plans of randomised controlled trials.
Abstract:
This work presents a system for detecting and classifying binary objects according to their shape. In the first step of the procedure, a filter is applied to extract the object's contour. From the shape-point information, a BSM descriptor with highly descriptive, universal, and invariant features is obtained. In the second phase of the system, the descriptor information is learned and classified using AdaBoost and Error-Correcting Output Codes. Public databases, in both grayscale and color, were used to validate the implementation of the designed system. In addition, the system provides an interactive interface in which different image-processing methods can be applied.
Abstract:
The work presented offers an overview of wireless video capsule endoscopy and the inspection of intestinal contraction sequences with state-of-the-art computer vision technologies. After a preliminary review of the required medical background, the computer vision application is presented in those terms. In essence, this work provides an exhaustive selection, description, and evaluation of a set of image-processing methods for motion analysis, in the context of image sequences captured with an endoscopic capsule. Finally, a software application is presented for quickly and easily configuring and running an experimental environment.
Abstract:
The development of additional methods for detecting and identifying Babesia and Plasmodium infections may be useful in disease monitoring, management, and control efforts. To preliminarily evaluate synthetic peptide-based serodiagnosis, a hydrophilic sequence (DDESEFDKEK) was selected from the published BabR gene of B. bovis. Immunization of rabbits and cattle with the hemocyanin-conjugated peptide elicited antibody responses that specifically detected both P. falciparum and B. bovis antigens by immunofluorescence and Western blots. Using a dot-ELISA with this peptide, antisera from immunized and naturally infected cattle, and from immunized rodents, were specifically detected. Reactivity was weak and correlated with peptide immunization or infection. DNA-based detection using repetitive DNA was species-specific in dot-blot formats for B. bovis DNA, and in both dot-blot and in situ formats for P. falciparum; a streamlined enzyme-linked synthetic DNA assay for P. falciparum detected 30 parasites/mm³ in patient blood using either colorimetric (2-15 h color development) or chemiluminescent (0.5-6 min exposures) detection. Serodiagnostic and DNA hybridization methods may be complementary in the respective detection of chronic and acute infections. However, recent improvements in the polymerase chain reaction (PCR) make feasible a more sensitive and uniform approach to the diagnosis of these and other infectious disease complexes, given appropriate primers and processing methods. An analysis of ribosomal DNA genes of Plasmodium and Toxoplasma identified Apicomplexa-conserved sequence regions. Specific and distinctive PCR profiles were obtained with primers spanning the internal transcribed spacer locus for each of several Plasmodium and Babesia species.
Abstract:
This project describes the consolidation of the daily monitoring needs of the ATLAS experiment from the cloud point of view. The main idea is to develop a set of collectors that gather information on data distribution and processing and on the WLCG Service Availability Monitoring tests, storing it in dedicated databases so that the results can be displayed on a single HLM (High Level Monitoring) page. Once this is achieved, the application must allow deeper investigation through interaction with the front-end, which will be fed by the statistics stored in the databases.
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori, taking into account the sample size and previous knowledge of variables influencing exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, to be interpreted as the probability of the model being the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluating, to a certain extent, the model selection uncertainty that is seldom mentioned in current practice.
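The Akaike-weight step described above is a short computation once each candidate model's AIC is known. A minimal Python sketch, with illustrative AIC values and per-model predictions (not from the paper's data set):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: relative likelihood exp(-delta_i / 2) of each model,
    normalized to sum to 1, interpretable as the probability that each model
    is the best approximating model in the candidate set."""
    best = min(aic_values)
    rel_lik = [math.exp(-(a - best) / 2) for a in aic_values]
    total = sum(rel_lik)
    return [r / total for r in rel_lik]

# Illustrative AIC values for four candidate exposure models
weights = akaike_weights([210.3, 212.1, 215.8, 210.9])

# Multimodel-averaged prediction: weight each model's prediction
predictions = [1.8, 2.1, 2.6, 1.9]  # illustrative per-model predicted exposures
averaged = sum(w * p for w, p in zip(weights, predictions))
```

Note how closely spaced AIC values spread the weight across several models, which is exactly the model-selection uncertainty that picking a single 'best' model would hide.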
Abstract:
This project covers the design and development of a prototype model of a Methodology for the Assessment of Environmental Learning, which we call "MEVA-Ambiental". To make this goal possible, we have relied on ontological and constructivist foundations for representing and analyzing knowledge, in order to quantify the Knowledge Increment (IC). For us, the IC is a socio-educational indicator used to determine, as a percentage, the effectiveness of environmental education workshops. By proceeding in this way, the resulting scores can be taken as a starting point for longitudinal studies, to understand how new knowledge is "anchored" in the learners' cognitive structure. Beyond the theoretical formulation of the method, we also provide the technical solution that shows how functional and applicable the empirical part of the methodology is. This solution, which we have called "MEVA-Tool", is a virtual tool that automates data collection and processing with a dynamic structure based on web questionnaires to be filled in by students, a database that accumulates the information and allows selective filtering, and an Excel workbook that handles the information processing, the graphical representation of the results, the analysis, and the conclusions.