968 results for automated assessment
Abstract:
Container terminals are complex systems where a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port.
As a consequence, operations planning is becoming more complex and the tolerances for errors are smaller. In this context, operational disturbances must be kept to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may lead not only to operational delays but also to collateral and reputational damage or loss of time (especially management time), all of which has an impact on the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, whose level of automation is increasing. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of decision-support tools, since these will then be adapted to the real climatic and operational conditions that exist on site. For the short term, a method to obtain forecasts of operational parameters in container terminals is proposed, and a case study is presented in which forecasts of vessel performance are obtained. This research has been based entirely on data gathered from a semi-automated container terminal in Spain. For the long term, the study analyses how to manage, evaluate and mitigate disruptions by means of risk assessment, an approach for evaluating the effect of uncertain but likely events on the long-term throughput of the terminal. In addition, a definition of operational risk evaluation in port facilities is proposed, along with a discussion of the terms that best represent the nature of the activities involved; finally, guidelines for managing the results obtained are provided.
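The abstract does not specify which forecasting model the case study uses, so the sketch below is only an illustration of the short-term idea: fitting a generic regression model to historical operational data to forecast vessel productivity. The feature names (cranes assigned, wind speed, yard occupancy, call size), the synthetic data and the choice of a gradient-boosting regressor are all assumptions, not the thesis's actual methodology.

```python
# Illustrative sketch only: a generic regression on hypothetical operational
# features standing in for the proposed short-term forecasting methodology.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500  # historical vessel calls (synthetic)

# Hypothetical explanatory variables recorded by the terminal
cranes_assigned = rng.integers(2, 7, n)        # quay cranes per vessel
wind_speed = rng.gamma(2.0, 4.0, n)            # m/s during service
yard_occupancy = rng.uniform(0.4, 0.95, n)     # fraction of yard in use
call_size = rng.integers(500, 4000, n)         # container moves per call

# Synthetic target: gross vessel productivity (moves/hour)
productivity = (28 * cranes_assigned
                - 1.5 * wind_speed
                - 40 * yard_occupancy
                + 0.002 * call_size
                + rng.normal(0, 8, n))

X = np.column_stack([cranes_assigned, wind_speed, yard_occupancy, call_size])
X_train, X_test, y_train, y_test = train_test_split(X, productivity, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE (moves/hour):", mean_absolute_error(y_test, model.predict(X_test)))
```

In practice the terminal's own historical records would replace the synthetic arrays, and the mean absolute error would be judged against the productivity tolerances mentioned above.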
Abstract:
Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increasing survival rate once the initial critical phase is over, also makes it a highly prevalent problem. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020.
Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims to reach a new paradigm focused on designing intensive, personalized, monitored and evidence-based treatments, since these four characteristics are what ensure that treatments are effective. Contrary to most medical disciplines, there are no associations of symptoms and signs of cognitive disorder that can guide therapy. Currently, neurorehabilitation treatments are planned considering the results obtained from a neuropsychological assessment battery, which evaluates the impairment of each cognitive function (memory, attention, executive functions, etc.). The research line this PhD falls under aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies such as magnetic resonance. The cognitive profile used to design these treatments therefore integrates personalized, evidence-based information. Neuroimaging techniques represent an essential tool for identifying lesions and generating this type of cognitive dysfunction profile. Manual delineation of brain anatomical regions is the classical approach to lesion identification, but it presents several problems related to inconsistencies across clinicians, time and repeatability. Automated delineation is therefore essential to ensure an objective extraction of information, and is done by registering brains to one another or to a template (atlas). However, when imaging studies contain lesions, intensity abnormalities and alterations in the location of structures reduce the performance of most registration algorithms based on intensity, so specialists may have to interact manually with the imaging studies to select landmarks (called singular points in this PhD) or identify regions of interest; these two solutions share the drawbacks of the manual approach mentioned before. Moreover, such registration algorithms do not allow large, distributed deformations, which may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD is focused on the design, development and implementation of a new methodology to automatically identify lesioned structures. The methodology integrates algorithms whose main objective is to generate objective and reproducible results, and it is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing. In this first stage the aim is to standardize all input data in order to be able to draw valid conclusions from the results; this stage therefore has a direct impact on the final results. It consists of three steps: skull stripping, intensity normalization and spatial normalization. Singular point identification. This stage aims to automate the identification of anatomical points (singular points), replacing their manual identification by the clinician. Automatic identification makes it possible to identify a greater number of points, which results in more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking.
This PhD proposes an algorithm to automatically identify singular points (a descriptor) based on a multi-detector approach. The algorithm contains multi-parametric (spatial and intensity) information and has been compared with similar algorithms found in the state of the art. Registration. The goal of this stage is to put two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD is based on descriptors; its main objective is to compute a vector field that introduces distributed deformations (changes in different image regions) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage identifies those anatomical structures whose spatial location and area or volume have been modified with respect to a normal state. To this end, a statistical study of the atlas to be used is performed to establish the statistical parameters that define the normal state for location and area. The anatomical structures that can be identified depend on the structures delineated in the selected atlas, but the proposed methodology is independent of the chosen atlas. Overall, this PhD corroborates the research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies, specifically magnetic resonance studies. Based on these foundations, new research fields can be opened to improve the automatic identification of lesions in brain injury.
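As a rough illustration of the final lesion-identification stage only (not the thesis's algorithm), the sketch below flags structures whose centroid location or volume deviates beyond a z-score threshold from atlas-derived normality parameters. The structure names, statistics and threshold are invented placeholders.

```python
# Minimal sketch of the lesion-identification idea: flag structures whose
# centroid location or volume deviates from atlas-derived normality parameters.
# The statistics below are invented placeholders, not values from the thesis.
import numpy as np

# Atlas-derived normal parameters per structure: centroid (mm) and volume (mm^3)
atlas_stats = {
    "left_hippocampus":  {"centroid_mu": np.array([-28.0, -20.0, -14.0]),
                          "centroid_sd": 2.5, "vol_mu": 3500.0, "vol_sd": 350.0},
    "right_hippocampus": {"centroid_mu": np.array([28.0, -20.0, -14.0]),
                          "centroid_sd": 2.5, "vol_mu": 3500.0, "vol_sd": 350.0},
}

def flag_lesioned(measured, z_threshold=3.0):
    """measured: dict structure -> (centroid ndarray, volume). Returns flagged structures."""
    flagged = []
    for name, (centroid, volume) in measured.items():
        ref = atlas_stats[name]
        z_loc = np.linalg.norm(centroid - ref["centroid_mu"]) / ref["centroid_sd"]
        z_vol = abs(volume - ref["vol_mu"]) / ref["vol_sd"]
        if z_loc > z_threshold or z_vol > z_threshold:
            flagged.append((name, round(z_loc, 1), round(z_vol, 1)))
    return flagged

# Second structure is displaced and shrunken, so it should be flagged
measured = {"left_hippocampus":  (np.array([-27.5, -19.0, -14.5]), 3450.0),
            "right_hippocampus": (np.array([33.0, -15.0, -10.0]), 2100.0)}
print(flag_lesioned(measured))
```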
Abstract:
Carcinoma of the cervix is one of the most common malignancies. Papanicolaou (Pap) smear tests have reduced mortality by up to 70%. Nevertheless their interpretation is notoriously difficult with high false-negative rates and frequently fatal consequences. We have addressed this problem by using affinity-purified antibodies against human proteins that regulate DNA replication, namely Cdc6 and Mcm5. These antibodies were applied to sections and smears of normal and diseased uterine cervix by using immunoperoxidase or immunofluorescence to detect abnormal precursor malignant cells. Antibodies against Cdc6 and Mcm5 stain abnormal cells in cervical smears and sections with remarkably high specificity and sensitivity. Proliferation markers Ki-67 and proliferating cell nuclear antigen are much less effective. The majority of abnormal precursor malignant cells are stained in both low-grade and high-grade squamous intraepithelial lesions. Immunostaining of cervical smears can be combined with the conventional Pap stain so that all the morphological information from the conventional method is conserved. Thus antibodies against proteins that regulate DNA replication can reduce the high false-negative rate of the Pap smear test and may facilitate mass automated screening.
Abstract:
Diversity-based designing, or the goal of ensuring that web-based information is accessible to as many diverse users as possible, has received growing international acceptance in recent years, with many countries introducing legislation to enforce it. This paper analyses web content accessibility levels in Spanish education portals according to the international guidelines established by the World Wide Web Consortium (W3C) and the Web Accessibility Initiative (WAI). Additionally, it suggests the calculation of an inaccessibility rate as a tool for measuring the degree of non-compliance with WAI Guidelines 2.0 as well as illustrating the significant gap that separates people with disabilities from digital education environments (with a 7.77% average). A total of twenty-one educational web portals with two different web depth levels (42 sampling units) were assessed for this purpose using the automated analysis tool Web Accessibility Test 2.0 (TAW, for its initials in Spanish). The present study reveals a general trend towards non-compliance with the technical accessibility recommendations issued by the W3C-WAI group (97.62% of the websites examined present mistakes in Level A conformance). Furthermore, despite the increasingly high number of legal and regulatory measures about accessibility, their practical application still remains unsatisfactory. A greater level of involvement must be assumed in order to raise awareness and enhance training efforts towards accessibility in the context of collective Information and Communication Technologies (ICTs), since this represents not only a necessity but also an ethical, social, political and legal commitment to be assumed by society.
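One plausible reading of the inaccessibility rate proposed above is the share of failed accessibility checkpoints over all applicable checkpoints in a sampling unit, averaged per portal. The paper's exact formula may differ, and the checkpoint counts below are invented for illustration.

```python
# Hedged sketch: an "inaccessibility rate" as the percentage of applicable
# WAI checkpoints that fail; the counts are illustrative, not the paper's data.
def inaccessibility_rate(failed_checkpoints: int, applicable_checkpoints: int) -> float:
    """Percentage of applicable accessibility checkpoints that fail."""
    if applicable_checkpoints == 0:
        return 0.0
    return 100.0 * failed_checkpoints / applicable_checkpoints

# Example: a sampling unit where 7 of 90 applicable checks fail
print(f"{inaccessibility_rate(7, 90):.2f}%")

# Portal-level average over sampling units (e.g., two web depth levels per portal)
units = [(7, 90), (5, 84)]
portal_rate = sum(inaccessibility_rate(f, a) for f, a in units) / len(units)
print(f"portal average: {portal_rate:.2f}%")
```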
Abstract:
Left ventricular (LV) volumes have important prognostic implications in patients with chronic ischemic heart disease. We sought to examine the accuracy and reproducibility of real-time 3D echo (RT-3DE) compared to Tl-201 single photon emission computed tomography (SPECT) and cardiac magnetic resonance imaging (MRI). Thirty (n = 30) patients (age 62±9 years, 23 men) with chronic ischemic heart disease underwent LV volume assessment with RT-3DE, SPECT, and MRI. A novel semi-automated border detection algorithm was used for RT-3DE. End-diastolic volumes (EDV) and end-systolic volumes (ESV) measured by RT-3DE and SPECT were compared to MRI as the standard of reference. RT-3DE and SPECT volumes showed excellent correlation with MRI (Table). Both RT-3DE and SPECT underestimated LV volumes compared to MRI (ESV: SPECT 74±58 ml versus RT-3DE 95±48 ml versus MRI 96±54 ml; EDV: SPECT 121±61 ml versus RT-3DE 169±61 ml versus MRI 179±56 ml). The degree of ESV underestimation with RT-3DE was not significant.
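The kind of agreement analysis summarised above is commonly reported as correlation plus bias (mean difference) against the reference modality. The sketch below illustrates that computation on synthetic volumes, not the study's data; the bias magnitudes are chosen only to mimic the pattern described.

```python
# Illustrative sketch (synthetic numbers, not the study data): correlation and
# Bland-Altman-style bias of two modalities against the MRI reference.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
mri_esv = rng.normal(96, 54, 30).clip(20)          # reference end-systolic volumes (ml)
rt3de_esv = mri_esv - 1 + rng.normal(0, 8, 30)     # small underestimation
spect_esv = mri_esv - 22 + rng.normal(0, 12, 30)   # larger underestimation

for name, vals in [("RT-3DE", rt3de_esv), ("SPECT", spect_esv)]:
    r, _ = pearsonr(vals, mri_esv)
    bias = np.mean(vals - mri_esv)
    loa = 1.96 * np.std(vals - mri_esv, ddof=1)
    print(f"{name}: r={r:.2f}, bias={bias:.1f} ml, limits of agreement ±{loa:.1f} ml")
```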
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) where the assessment of risk to the groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events are generated according to the same distribution as historically occurring events, or to an a priori potential probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to model grid sizes, contaminant loading rates, the length of stress periods, and the historical frequencies of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The migration of the contaminant plume also advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in a nonlinear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
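A toy sketch of the Monte Carlo risk idea is given below. It does not couple to MODFLOW-2000 or MT3DMS: daily Bernoulli draws stand in for the random generation of source terms, a crude dilution factor stands in for the transport simulation, and risk is the fraction of realisations in which a user-defined concentration threshold is exceeded. All numerical values are placeholders, not the thesis's parameters.

```python
# Toy Monte Carlo exceedance-risk sketch (no groundwater model coupling).
import numpy as np

rng = np.random.default_rng(42)
n_realisations = 5000
n_stress_periods = 365          # daily stress periods over one year
p_event_per_day = 0.02          # historical frequency of a pollution incident
mass_per_event = 50.0           # kg of chloride per synthetic source term
dilution_factor = 1e-4          # kg released -> mg/L at the monitoring point (toy surrogate)
threshold_mg_l = 0.04           # user-defined concentration magnitude

exceedances = 0
for _ in range(n_realisations):
    events = rng.random(n_stress_periods) < p_event_per_day  # daily Bernoulli draws
    peak_conc = events.sum() * mass_per_event * dilution_factor
    if peak_conc > threshold_mg_l:
        exceedances += 1

print(f"risk of exceedance: {exceedances / n_realisations:.3f}")
```

In the actual method, the transport model would replace the dilution surrogate and the exceedance counts would be mapped spatially as risk maps.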
Abstract:
Focal points: Over a six-week period in January and February 2002, 2 ml samples were removed from all neonatal PN bags dispensed. Samples were submitted for analysis of sodium, potassium and magnesium, in triplicate, by the hospital's clinical chemistry department using a Vitros Codac 950AT dry-slide automated analyser. Only 19.3, 7.1 and 30.4 per cent of measured sodium, potassium and magnesium concentrations respectively deviated by ≤5 per cent from the stated bag concentrations. The results indicate that it is possible that some electrolyte concentrations included in neonatal PN vary significantly from the stated values.
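A minimal sketch of the check described, assuming a simple percentage-deviation criterion against the stated bag concentration with a 5% tolerance; the sample values are illustrative only, not study data.

```python
# Percentage deviation of a measured electrolyte concentration from the stated
# bag concentration, flagged against a 5% tolerance. Values are illustrative.
def percent_deviation(measured: float, stated: float) -> float:
    return 100.0 * abs(measured - stated) / stated

samples = {"sodium": (3.2, 3.0), "potassium": (2.1, 2.5), "magnesium": (0.20, 0.21)}
for electrolyte, (measured, stated) in samples.items():
    dev = percent_deviation(measured, stated)
    print(f"{electrolyte}: {dev:.1f}% {'within' if dev <= 5 else 'outside'} 5% of stated")
```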
Abstract:
Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to those found with standard automated perimetry (SAP), while others were less informative and needed further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. The purpose of this study is to examine the benefit of adding the mfVEP hemifield Intersector analysis protocol to the standard HFA test when suspicious glaucomatous visual field loss is present. Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspects (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests, optical coherence tomography of the optic nerve head, and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Retinal nerve fibre layer (RNFL) thickness was recorded to identify subjects with suspicious RNFL loss. The hemifield Intersector analysis of the mfVEP results showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (ANOVA, p<0.001, 95% CI). The differences between superior and inferior hemifields were statistically significant in all 11/11 sectors in the glaucoma patient group (t-test, p<0.001), partially significant in 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and not significant in most sectors in the normal group, where only 1/11 was significant (t-test, p<0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, and 89% and 79% for glaucoma suspects. The use of SAP and mfVEP results in subjects with suspicious glaucomatous visual field defects, identified by low RNFL thickness, is beneficial in confirming early visual field defects. The new HSA protocol used in mfVEP testing can detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol in addition to SAP analysis can provide information about focal visual field differences across the horizontal midline and confirm suspicious field defects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes in glaucomatous field loss. The Intersector analysis protocol can detect early field changes not detected by the standard HFA test.
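A hedged sketch of the hemifield comparison described above: per-sector mfVEP signal-to-noise ratios from the superior and inferior hemifields are compared with paired t-tests. The pairing within eyes, the sector count layout and the SNR arrays are assumptions for illustration, not the study's data or exact statistical procedure.

```python
# Synthetic per-eye, per-sector SNR values; paired t-test per sector.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n_eyes, n_sectors = 36, 11

# Inferior hemifield depressed in this synthetic "glaucoma" sample
snr_superior = rng.normal(6.0, 1.5, (n_eyes, n_sectors))
snr_inferior = rng.normal(4.0, 1.5, (n_eyes, n_sectors))

for sector in range(n_sectors):
    t, p = ttest_rel(snr_superior[:, sector], snr_inferior[:, sector])  # paired within eyes
    print(f"sector {sector + 1}: t = {t:.2f}, p = {p:.4f}")
```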
Abstract:
There is growing popularity in the use of composite indices and rankings for cross-organizational benchmarking. However, little attention has been paid to alternative methods and procedures for the computation of these indices and how the use of such methods may impact the resulting indices and rankings. This dissertation developed an approach for assessing composite indices and rankings based on the integration of a number of methods for aggregation, data transformation and attribute weighting involved in their computation. The integrated model developed is based on the simulation of composite indices using methods and procedures proposed in the areas of multi-criteria decision making (MCDM) and knowledge discovery in databases (KDD). The approach developed in this dissertation was automated through an IT artifact that was designed, developed and evaluated based on the framework and guidelines of the design science paradigm of information systems research. This artifact dynamically generates multiple versions of indices and rankings by considering different methodological scenarios according to user-specified parameters. The computerized implementation was done in Visual Basic for Excel 2007. Using different performance measures, the artifact produces a number of Excel outputs for the comparison and assessment of the indices and rankings. In order to evaluate the efficacy of the artifact and its underlying approach, a full empirical analysis was conducted using the World Bank's Doing Business database for the year 2010, which includes ten sub-indices (each corresponding to a different area of the business environment and regulation) for 183 countries. The output results, which were obtained using 115 methodological scenarios for the assessment of this index and its ten sub-indices, indicated that the variability of the component indicators considered in each case influenced the sensitivity of the rankings to the methodological choices. Overall, the results of our multi-method assessment were consistent with the World Bank rankings, except in cases where the indices involved cost indicators measured in per capita income, which yielded more sensitive results. Low-income countries exhibited more sensitivity in their rankings, and less agreement between the benchmark rankings and our multi-method based rankings, than higher-income country groups.
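A minimal sketch of the multi-method idea: the same indicator matrix is turned into a composite index under different normalisation and aggregation choices, and the resulting rankings are compared with a rank correlation. The data, the three scenarios and the equal weights are illustrative assumptions, not the dissertation's 115 scenarios or the World Bank's procedures.

```python
# Compare rankings produced by alternative composite-index computation methods.
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(3)
indicators = rng.uniform(0, 100, (20, 10))      # 20 "countries" x 10 sub-indices

def minmax(x):  return (x - x.min(0)) / (x.max(0) - x.min(0))
def zscore(x):  return (x - x.mean(0)) / x.std(0)

def composite(x, normalise, weights, aggregate):
    xn = normalise(x)
    if aggregate == "arithmetic":
        return xn @ weights
    if aggregate == "geometric":                # weighted geometric mean of normalised scores
        return np.exp(np.log(xn + 1e-9) @ weights)
    raise ValueError(aggregate)

equal_w = np.full(10, 0.1)
scenarios = {
    "minmax-arithmetic": composite(indicators, minmax, equal_w, "arithmetic"),
    "zscore-arithmetic": composite(indicators, zscore, equal_w, "arithmetic"),
    "minmax-geometric":  composite(indicators, minmax, equal_w, "geometric"),
}

baseline = rankdata(-scenarios["minmax-arithmetic"])
for name, scores in scenarios.items():
    rho, _ = spearmanr(baseline, rankdata(-scores))
    print(f"{name}: Spearman rho vs baseline = {rho:.3f}")
```

Low rank correlations across scenarios would indicate exactly the kind of methodological sensitivity the dissertation set out to measure.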
Abstract:
An automated on-line SPE-LC-MS/MS method was developed for the quantitation of multiple classes of antibiotics in environmental waters. High sensitivity in the low ng/L range was accomplished by using large-volume injections of 10 mL of sample. Positive confirmation of analytes was achieved using two selected reaction monitoring (SRM) transitions per antibiotic, and quantitation was performed using an internal standard approach. Samples were extracted using on-line solid phase extraction and then, using a column-switching technique, the extracted samples were immediately passed through liquid chromatography and analyzed by tandem mass spectrometry. The total run time per sample was 20 min. The statistically calculated method detection limits for the various environmental samples were between 1.2 and 63 ng/L. Furthermore, the method was validated in terms of precision, accuracy and linearity. The developed analytical methodology was used to measure the occurrence of antibiotics in reclaimed waters (n=56), surface waters (n=53), ground waters (n=8) and drinking waters (n=54) collected from different parts of South Florida. In reclaimed waters, the most frequently detected antibiotics were nalidixic acid, erythromycin, clarithromycin, azithromycin, trimethoprim, sulfamethoxazole and ofloxacin (19.3-604.9 ng/L). The detection of antibiotics in reclaimed waters indicates that they cannot be completely removed by conventional wastewater treatment processes. Furthermore, the average mass load of antibiotics released into the local environment through reclaimed water was estimated at 0.248 kg/day. Among the surface water samples, the Miami River (reaching up to 580 ng/L) and the Black Creek canal (up to 124 ng/L) showed the highest concentrations of antibiotics. No traces of antibiotics were found in ground waters. On the other hand, erythromycin (monitored as anhydro-erythromycin) was detected in 82% of the drinking water samples (n.d.-66 ng/L). The developed approach is suitable for both research and monitoring applications. Major metabolites of antibiotics in reclaimed waters were identified and quantified using a high-resolution benchtop Q Exactive Orbitrap mass spectrometer. A phase I metabolite of erythromycin was tentatively identified in full scan based on accurate mass measurement. Using extracted ion chromatograms (XIC), high-resolution data-dependent MS/MS spectra and metabolic profiling software, the metabolite was identified as desmethyl anhydro-erythromycin with molecular formula C36H63NO12 and m/z 702.4423. The molar ratio of the metabolite to erythromycin was on the order of 13%. To my knowledge, this is the first known report of this metabolite in reclaimed water. Another compound, acetyl-sulfamethoxazole, a phase II metabolite of sulfamethoxazole, was also identified in reclaimed water, and the mole fraction of the metabolite represents 36% of the cumulative sulfamethoxazole concentration. These results illustrate the importance of also including metabolites in routine analysis to obtain a mass balance for a better understanding of the occurrence, fate and distribution of antibiotics in the environment. Finally, all the antibiotics detected in reclaimed and surface waters were investigated to assess the potential risk to aquatic organisms.
The surface water antibiotic concentrations, which represent real-time exposure conditions, revealed that the macrolide antibiotics erythromycin, clarithromycin and tylosin, along with the quinolone antibiotic ciprofloxacin, were suspected to induce high toxicity to aquatic biota. Preliminary results show that, among the antibiotic groups tested, macrolides posed the highest ecological threat and may therefore need to be further evaluated with long-term exposure studies considering bioaccumulation factors and a larger number of species. Overall, the occurrence of antibiotics in the aquatic environment poses an ecological health concern.
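A common screening step for this kind of ecological assessment, though not necessarily the exact procedure used in the dissertation, is a risk quotient RQ = MEC / PNEC per compound, with RQ ≥ 1 usually read as high potential risk. All concentrations below are illustrative placeholders, not measured or literature-derived values.

```python
# Hedged sketch: risk quotient screening per antibiotic with placeholder values.
measured_env_conc_ng_l = {      # MEC: measured environmental concentration (placeholder)
    "erythromycin": 580.0,
    "clarithromycin": 120.0,
    "ciprofloxacin": 90.0,
}
pnec_ng_l = {                   # PNEC: predicted no-effect concentration (placeholder)
    "erythromycin": 200.0,
    "clarithromycin": 60.0,
    "ciprofloxacin": 500.0,
}

for antibiotic, mec in measured_env_conc_ng_l.items():
    rq = mec / pnec_ng_l[antibiotic]
    level = "high" if rq >= 1 else "medium" if rq >= 0.1 else "low"
    print(f"{antibiotic}: RQ = {rq:.2f} ({level} risk)")
```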
Abstract:
Our key contribution is a flexible, automated marking system that adds desirable functionality to existing E-Assessment systems. In our approach, any given E-Assessment system is relegated to a data-collection mechanism, whereas marking and the generation and distribution of personalised per-student feedback is handled separately by our own system. This allows content-rich Microsoft Word feedback documents to be generated and distributed to every student simultaneously according to a per-assessment schedule.
The feedback is adaptive in that it corresponds to the answers given by the student and provides guidance on where they may have gone wrong. It is not limited to simple multiple-choice questions, which are the most prescriptive question type offered by most E-Assessment systems and as such the most straightforward to mark consistently and to provide with individual per-alternative feedback strings. The system is also better equipped to handle the use of mathematical symbols and images within the feedback documents, which makes it more flexible than existing E-Assessment systems that can only handle simple text strings.
As well as MCQs, the system reliably and robustly handles Multiple Response, Text Matching and Numeric question styles in a more flexible manner than Questionmark Perception and other E-Assessment systems. It can also reliably handle multi-part questions where the response to an earlier question influences the answer to a later one, and can adjust both scoring and feedback appropriately.
New question formats can be added at any time, provided a corresponding marking method conforming to certain templates can also be programmed. Indeed, any question type for which a programmatic method of marking can be devised may be supported by our system. Furthermore, since the student's response to each question is marked programmatically, our system can be set to allow for minor deviations from the correct answer and, if appropriate, award partial marks.
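A hedged sketch of the "marking method conforming to a template" idea: each question type registers a marker function with a common signature, so new types can be added by registering another function, and a numeric marker tolerates minor deviations and awards partial marks. The function names, tolerance scheme and mark splits are illustrative, not the system's actual API.

```python
# Illustrative marker-template registry; not the described system's real interface.
from typing import Callable, Dict, Tuple

# A marker takes (student_response, correct_answer, max_marks) and returns (marks, feedback)
Marker = Callable[[str, str, float], Tuple[float, str]]
MARKERS: Dict[str, Marker] = {}

def register(question_type: str):
    def wrap(fn: Marker) -> Marker:
        MARKERS[question_type] = fn
        return fn
    return wrap

@register("numeric")
def mark_numeric(response, answer, max_marks, tolerance=0.02):
    try:
        given, expected = float(response), float(answer)
    except ValueError:
        return 0.0, "Response was not a number."
    error = abs(given - expected) / (abs(expected) or 1.0)
    if error <= tolerance:
        return max_marks, "Correct."
    if error <= 5 * tolerance:                    # minor deviation: partial marks
        return 0.5 * max_marks, f"Close: expected {expected}, got {given}."
    return 0.0, f"Incorrect: expected {expected}."

@register("text_match")
def mark_text(response, answer, max_marks):
    ok = response.strip().lower() == answer.strip().lower()
    return (max_marks, "Correct.") if ok else (0.0, f"Expected '{answer}'.")

def mark(question_type, response, answer, max_marks=1.0):
    return MARKERS[question_type](response, answer, max_marks)

print(mark("numeric", "9.7", "10", 2.0))    # minor deviation earns partial marks
print(mark("text_match", "Paris", "paris"))
```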
Abstract:
The popularity of Computing degrees in the UK has been increasing significantly over recent years. In Northern Ireland, from 2007 to 2015, there has been a 40% increase in acceptances to Computer Science degrees, with England seeing a 60% increase over the same period (UCAS, 2016). However, this picture is tainted by the fact that Computer Science degrees also continue to have the highest dropout rates.
In Queen’s University Belfast we currently have a Level 1 intake of over 400 students across a number of computing pathways. Our drive as staff is to empower and motivate the students to fully engage with the course content. All students take a Java programming module the aim of which is to provide an understanding of the basic principles of object-oriented design. In order to assess these skills, we have developed Jigsaw Java as an innovative assessment tool offering intelligent, semi-supervised automated marking of code.
Jigsaw Java allows students to answer programming questions using a drag-and-drop interface to place code fragments into position. Their answer is compared to the sample solution and, if it matches, marks are allocated accordingly. However, if a match is not found, the corresponding code is executed using sample data to determine whether its logic is acceptable. If it is, the solution is flagged to be checked by staff and, if satisfactory, is saved as an alternative solution. This means that appropriate marks can be allocated, and should another student submit the same placement of code fragments, it does not need to be executed or checked again; rather, the system now knows how to assess it.
Jigsaw Java is also able to consider partial marks dependent on code placement and will “learn” over time. Given the number of students, Jigsaw Java will improve the consistency and timeliness of marking.
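A language-agnostic sketch (written in Python, although the tool itself assesses Java) of the marking flow described above: a submitted placement of fragments is first compared against known solutions, otherwise executed on sample data, and logically correct novel answers are flagged for staff review and cached as alternative solutions. All names, fields and the toy question are illustrative, not Jigsaw Java's internals.

```python
# Illustrative sketch of the match -> execute -> flag-and-cache marking flow.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Question:
    sample_solution: Tuple[str, ...]               # canonical fragment ordering
    run: Callable[[Tuple[str, ...], list], list]   # executes a placement on sample inputs
    sample_inputs: list
    expected_outputs: list
    known_solutions: List[Tuple[str, ...]] = field(default_factory=list)
    pending_review: List[Tuple[str, ...]] = field(default_factory=list)

    def __post_init__(self):
        self.known_solutions.append(self.sample_solution)

def mark(question: Question, placement: Tuple[str, ...]) -> str:
    if placement in question.known_solutions:
        return "full marks (matches a known solution)"
    if question.run(placement, question.sample_inputs) == question.expected_outputs:
        question.pending_review.append(placement)  # staff check, then kept as alternative
        question.known_solutions.append(placement)
        return "full marks (logic verified on sample data; flagged for staff review)"
    return "no match and incorrect output: award partial marks by fragment placement"

# Toy question: order three fragments so they sum a list of numbers
def run_placement(placement, inputs):
    return [sum(x) for x in inputs] if placement[0] == "init total" else [None]

q = Question(("init total", "loop and add", "return total"), run_placement,
             sample_inputs=[[1, 2, 3]], expected_outputs=[6])
print(mark(q, ("init total", "loop and add", "return total")))
print(mark(q, ("init total", "return total", "loop and add")))  # novel but verified
```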
Abstract:
Various environmental management systems, standards and tools are being created to assist companies in becoming more environmentally friendly. However, not all enterprises have adopted environmental policies to the same scale and range. Additionally, there is no existing guide to help them determine their level of environmental responsibility and subsequently provide support to enable them to move towards environmental responsibility excellence. This research proposes the use of a Belief Rule-Based (BRB) approach to assess an enterprise's level of commitment to environmental issues. The Environmental Responsibility BRB assessment system has been developed for this research. Participating companies complete a structured questionnaire, and an automated analysis of their responses (using the Belief Rule-Based approach) determines their environmental responsibility level. This is followed by a recommendation on how to progress to the next level. The recommended best practices will help promote understanding, increase awareness and make the organization greener. BRB systems consist of two parts: a knowledge base and an inference engine. The knowledge base in this research was constructed after an in-depth literature review, critical analyses of existing environmental performance assessment models, and guidance drawn primarily from the EU Draft Background Report on "Best Environmental Management Practice in the Telecommunications and ICT Services Sector". The reasoning algorithm of the selected Drools JBoss BRB inference engine is forward chaining, in which inference iteratively searches for pattern matches between the input and the if-then clauses. However, the forward-chaining mechanism is not equipped to handle uncertainty. Therefore, evidential reasoning is combined with forward chaining in a hybrid knowledge-representation and inference scheme to accommodate imprecision, ambiguity and fuzzy types of uncertainty. It is believed that such a system generates well-balanced, sensible results adapted to Green ICT readiness, helping enterprises focus on making improvements towards more sustainable business operations.
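A greatly simplified sketch of the belief-rule idea, not the Drools implementation or the full evidential reasoning algorithm: each rule carries a belief distribution over responsibility levels, activation weights are computed from how closely the questionnaire-derived attributes match rule antecedents, and activated beliefs are combined by a weighted aggregation. The rules, levels, attributes and weights are invented placeholders.

```python
# Simplified belief-rule-base sketch with weighted belief aggregation.
import numpy as np

LEVELS = ["low", "medium", "high"]

# Each rule: antecedent attribute values (0-1), rule weight, belief degrees over LEVELS
RULES = [
    {"antecedent": np.array([0.2, 0.1]), "weight": 1.0, "belief": np.array([0.8, 0.2, 0.0])},
    {"antecedent": np.array([0.5, 0.5]), "weight": 1.0, "belief": np.array([0.1, 0.8, 0.1])},
    {"antecedent": np.array([0.9, 0.8]), "weight": 0.8, "belief": np.array([0.0, 0.2, 0.8])},
]

def activation_weights(inputs: np.ndarray) -> np.ndarray:
    """Match degree of the input vector to each rule's antecedent, scaled by rule weight."""
    match = np.array([np.exp(-np.sum((inputs - r["antecedent"]) ** 2)) * r["weight"]
                      for r in RULES])
    return match / match.sum()

def assess(inputs: np.ndarray) -> dict:
    w = activation_weights(inputs)
    combined = sum(wi * r["belief"] for wi, r in zip(w, RULES))  # simplified aggregation
    combined = combined / combined.sum()
    return dict(zip(LEVELS, np.round(combined, 3)))

# Questionnaire answers mapped to two normalised attributes,
# e.g. policy coverage and monitoring practice (hypothetical)
print(assess(np.array([0.7, 0.6])))
```

The resulting belief distribution over levels, rather than a single crisp score, is what allows imprecise and incomplete questionnaire answers to be reflected in the assessment.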