913 results for Computational routines
Abstract:
Perception of self as a non-reader has been identified as one of the reasons poor readers disengage from the reading process (Strang, 1967; Rosow, 1992), thus impeding progress. Perception and informational processes influence judgments of personal efficacy (Bandura, 1997). The student's sense of reading efficacy, which influences effort expenditure and ultimately achievement, is often overlooked (Athey, 1985; Pajares, 1996). Academic routines within educational programs are implemented without adequate information on whether they promote or impede efficacy growth. Cross-age tutoring, a process known to improve participants' academic achievement and motivation and to provide opportunities for authentic reading practice, has been successfully incorporated into reading instruction designs (Allen, 1976; Cohen, Kulik & Kulik, 1982; Labbo & Teale, 1990; Riessman, 1993). This study investigated the impact that teacher-designed routines within a cross-age tutoring model have on the tutor's sense of reading self-efficacy. The Reader Self-Perception Scale (Henk & Melnick, 1992) was administered pre- and post-treatment to 118 fifth-grade students. Preceding the initial survey administration, intact classes were randomly assigned to one of three commonly utilized cross-age tutoring routines or designated as the non-treatment population. The data derived from the Reader Self-Perception Scale were analyzed using an analysis of covariance (ANCOVA). Results indicated that participation as a cross-age tutor does not significantly increase the tutor's perception of self as reader in one or more of the four modes of information influencing self-efficacy as compared to the non-treatment group. The results of this study suggest that although a weekly tutoring session delivering educationally credible routines impacts achievement and motivation, an efficacy effect was not evident. Possible explanations and recommendations for future studies are proposed.
Abstract:
Shipboard power systems have different characteristics than utility power systems, and it is crucial that their systems and equipment work at peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine shipboard power system behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models; the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies.
This modeling methodology can be used to evaluate and predict the future behavior of shipboard power system components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before it is integrated into the system.
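The wavelet-based feature-extraction idea described above can be illustrated with a minimal sketch. The Haar transform, the energy-per-band features, and the synthetic current signals below are assumptions made for illustration only; they are not the thesis's methodology or code.

```python
# Minimal illustrative sketch: extract relative detail-band energies from a
# current signal with a hand-rolled Haar wavelet decomposition. A fault or
# harmonic disturbance shifts energy into the high-frequency detail bands.
import math

def haar_step(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Relative energy of each detail band, a simple fault/harmonic signature."""
    total = sum(x * x for x in signal) or 1.0
    features = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        features.append(sum(d * d for d in detail) / total)
    return features

# A smooth fundamental vs. the same signal with a high-frequency disturbance:
clean = [math.sin(2 * math.pi * i / 32) for i in range(256)]
faulty = [s + 0.5 * math.sin(2 * math.pi * i / 4) for i, s in enumerate(clean)]
print(wavelet_energy_features(clean))
print(wavelet_energy_features(faulty))
```

The disturbed signal concentrates noticeably more relative energy in the first detail band, which is the kind of signature a wavelet-based diagnosis method exploits.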
Abstract:
This work comprises the conception, development, and implementation of a computational CAE routine with algorithms suitable for stress and strain analysis. The system was integrated into an academic software package named OrtoCAD. The expansion algorithms for the CAE interface produced in this work were developed in FORTRAN with the objective of extending two earlier works of PPGEM-UFRN: the design and fabrication of an electromechanical reader, and the OrtoCAD software. OrtoCAD is an interface that originally included the visualization of prosthetic sockets from data obtained with the electromechanical reader (LEM). The LEM is essentially a three-dimensional scanner based on reverse engineering. First, the geometry of a residual limb (i.e., the remaining part of an amputated leg to which the prosthesis is fitted) is obtained from the data generated by the LEM using reverse-engineering concepts. The proposed FEA core uses shell theory, in which a 2D surface is generated from a 3D part in OrtoCAD. The shell analysis program uses the well-known finite element method to describe the geometry and the material behavior. The program is based on square Lagrangian elements with nine nodes and a higher-order displacement field for a better description of the stress field through the thickness. As a result, the new FEA routine offers significant advantages by adding new features to OrtoCAD: independence from high-cost commercial software; new routines added to the OrtoCAD library for more realistic problems, using failure criteria for composite materials; improved FEA performance through a specific grid element with a higher number of nodes; and, finally, the advantages of an open-source project, offering intrinsic versatility and wide possibilities for any editing and/or optimization that may be necessary in the future.
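The nine-node square Lagrangian element mentioned above can be sketched concretely: its shape functions are tensor products of 1D quadratic Lagrange polynomials on the reference square [-1, 1]^2. This is a generic textbook construction, not the OrtoCAD/FORTRAN implementation.

```python
# Illustrative sketch of a nine-node (biquadratic) Lagrangian element.
# Shape function N_ij(xi, eta) = L_i(xi) * L_j(eta), with L_i the 1D
# quadratic Lagrange basis at nodes -1, 0, +1.

def lagrange_quadratic(xi):
    """1D quadratic Lagrange basis at nodes xi = -1, 0, +1."""
    return [0.5 * xi * (xi - 1.0),      # node at -1
            (1.0 - xi) * (1.0 + xi),    # node at  0
            0.5 * xi * (xi + 1.0)]      # node at +1

def shape_functions(xi, eta):
    """The 9 biquadratic shape functions on the reference square."""
    lx, ly = lagrange_quadratic(xi), lagrange_quadratic(eta)
    return [lx[i] * ly[j] for j in range(3) for i in range(3)]

# Partition of unity: the shape functions sum to 1 anywhere in the element,
# and each equals 1 at its own node while vanishing at the other eight.
print(sum(shape_functions(0.3, -0.7)))
```

The partition-of-unity and nodal-interpolation checks are the standard sanity tests for any Lagrangian element implementation.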
Abstract:
The early onset of mental disorders can lead to serious cognitive damage, and timely interventions are needed in order to prevent it. In patients of low socioeconomic status, as is common in Latin America, it can be hard to identify children at risk. Here, we briefly introduce the problem by reviewing the scarce epidemiological data from Latin America regarding the onset of mental disorders, and by discussing the difficulties associated with early diagnosis. Then we present computational psychiatry, a new field to which we and other Latin American researchers have contributed methods particularly relevant for the quantitative investigation of psychopathologies manifested during childhood. We focus on new technologies that help identify mental disorders and provide prodromal evaluation, so as to promote early differential diagnosis and intervention. To conclude, we discuss the application of these methods to clinical and educational practice. A comprehensive and quantitative characterization of verbal behavior in children, from hospitals and laboratories to homes and schools, may lead to more effective pedagogical and medical interventions.
Abstract:
This thesis investigated the risk of accidental release of hydrocarbons during transportation and storage. Transportation of hydrocarbons from an offshore platform to processing units through subsea pipelines involves a risk of release due to pipeline leakage resulting from corrosion, plastic deformation caused by seabed shakedown, or damage from contact with a drifting iceberg. The environmental impacts of hydrocarbon dispersion can be severe, and the overall safety and economic concerns of pipeline leakage in the subsea environment are immense. A large leak can be detected with conventional technology such as radar, intelligent pigging, or chemical tracers, but in a remote location such as the subsea or the Arctic, a small chronic leak may go undetected for a long period. In the case of storage, an accidental release of hydrocarbons from a storage tank could lead to a pool fire, which could further escalate into domino effects; this chain of accidents may have extremely severe consequences. Analysis of past accident scenarios shows that more than half of industrial domino accidents involved fire as the primary event, together with other factors such as wind speed and direction, fuel type, and engulfment of the compound. In this thesis, a computational fluid dynamics (CFD) approach is taken to model a subsea pipeline leak and a pool fire from a storage tank. The commercial software package ANSYS FLUENT Workbench 15 is used to model the subsea pipeline leakage. CFD simulation results for four different fluids showed that the static pressure and the pressure gradient along the axial length of the pipeline have a sharp signature variation near the leak orifice at steady state. Transient simulation was performed to obtain the acoustic signature of the pipe near the leak orifice. The power spectral density (PSD) of the acoustic signal is strong near the leak orifice and dissipates as the distance and orientation from the leak orifice increase.
High-pressure fluid flow generates more noise than low-pressure flow. To model the pool fire from the storage tank, ANSYS CFX Workbench 14 is used. The CFD results show that wind speed contributes significantly to the behavior of the pool fire and its domino effects. Radiation contours are also obtained from CFD post-processing and can be applied in risk analysis. The outcome of this study will aid understanding of the domino effects of pool fires in the complex geometrical settings of process industries. Approaches to reducing and preventing these risks are discussed based on the results of the numerical simulations.
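The PSD-based leak signature described above can be sketched with a plain periodogram. The direct DFT, the white-noise "acoustic" signals, and the amplitude scaling below are illustrative assumptions standing in for the CFD-derived signals, not the thesis's data.

```python
# Hedged sketch: one-sided PSD estimate of a synthetic acoustic signal via a
# direct DFT periodogram (O(N^2), fine for a demo). A high-pressure leak is
# modeled simply as a louder noise source than a low-pressure one.
import cmath, math, random

def periodogram(signal, fs):
    """Return (frequencies, PSD) with PSD[k] = |X(k)|^2 / (fs * N)."""
    n = len(signal)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        psd.append(abs(x) ** 2 / (fs * n))
    return freqs, psd

random.seed(0)
fs, n = 1000.0, 256
base = [random.gauss(0, 1) for _ in range(n)]
high = [2.0 * x for x in base]   # high-pressure flow: stronger acoustic noise
low = [0.5 * x for x in base]    # low-pressure flow: weaker noise
_, psd_high = periodogram(high, fs)
_, psd_low = periodogram(low, fs)
print(sum(psd_high), sum(psd_low))
```

Because PSD scales with the square of the amplitude, the 4x louder signal carries 16x the spectral power, mirroring the qualitative high- vs. low-pressure comparison in the abstract.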
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
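The independent-channels model underlying the routines described above can be illustrated by Monte Carlo simulation: each stimulus's signal arrives after an exponentially distributed latency, and the arrival-time difference is mapped onto the trichotomous decision space. The rates, resolution threshold, and trial count below are illustrative assumptions, not parameters fitted by the MATLAB/R routines.

```python
# Monte Carlo sketch of an independent-channels timing model: stimuli A and B
# trigger signals with exponential latencies; the arrival-time difference d is
# compared against a resolution threshold delta to produce one of three
# responses ("A first" / "simultaneous" / "B first").
import random
random.seed(1)

def judge(soa, rate_a=1 / 40, rate_b=1 / 40, delta=50.0, trials=20000):
    """Response proportions; soa > 0 means B is presented soa ms after A."""
    counts = {"A first": 0, "simultaneous": 0, "B first": 0}
    for _ in range(trials):
        arrival_a = random.expovariate(rate_a)
        arrival_b = soa + random.expovariate(rate_b)
        d = arrival_b - arrival_a
        if d > delta:
            counts["A first"] += 1
        elif d < -delta:
            counts["B first"] += 1
        else:
            counts["simultaneous"] += 1
    return {k: v / trials for k, v in counts.items()}

print(judge(0))     # near-synchronous presentation
print(judge(150))   # large SOA in favor of A
```

Sweeping `soa` and recording these proportions traces out exactly the kind of model-based psychometric functions that the fitting routines estimate analytically.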
Abstract:
This thesis deals with approximations of compact metric spaces. The approximation and reconstruction of topological spaces by simpler ones is an old topic in geometric topology. The idea is to construct a very simple space as close as possible to the original space. Since it is very difficult (or even meaningless) to try to obtain a homeomorphic copy, the objective is to find a space that preserves some topological properties (algebraic or not), such as compactness, connectedness, separation axioms, homotopy type, and homotopy and homology groups. The first candidates as simple spaces sharing properties with the original space are polyhedra. See the article [45] for the main results. At the germ of this idea, we highlight the studies of Alexandroff in the 1920s, relating the dimension of a compact metric space to the dimension of certain polyhedra through maps with controlled images or preimages (in terms of distances). In a more modern context, the idea of approximation can be realized by constructing a simplicial complex based on the original space, such as the Vietoris-Rips complex or the Cech complex, and comparing its realization with the space. In this vein we have the classical nerve lemma [12, 21], which states that for a "sufficiently good" open cover of the space (that is, a cover whose members and their intersections are contractible or empty), the nerve of the cover has the homotopy type of the original space. The problem is finding such covers (if they exist at all). For Riemannian manifolds, there are some results in this direction using Vietoris-Rips complexes. Hausmann proved [35] that the realization of the Vietoris-Rips complex of the manifold, for sufficiently small values of the parameter, has the homotopy type of that manifold.
In [40], Latschev proved a conjecture posed by Hausmann: the homotopy type of the manifold can be recovered using a (sufficiently dense) finite set of points for the Vietoris-Rips complex. The results of Petersen [58], comparing the Gromov-Hausdorff distance between compact metric spaces with their homotopy type, are also of interest. There, polyhedra come to the fore in the proofs, not in the results...
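The Vietoris-Rips construction discussed above is easy to state computationally: at scale eps, a finite set of points spans a simplex whenever all its pairwise distances are at most eps. The following sketch (an illustration added here, not part of the thesis) builds the complex up to dimension 2 by brute force.

```python
# Illustrative sketch: Vietoris-Rips complex of a finite point set up to
# dimension 2. A simplex is any subset of points whose pairwise distances
# are all <= eps.
from itertools import combinations
import math

def vietoris_rips(points, eps, max_dim=2):
    """Return the simplices (as index tuples) of the VR complex at scale eps."""
    simplices = [(i,) for i in range(len(points))]
    for k in range(2, max_dim + 2):          # edges, then triangles, ...
        for combo in combinations(range(len(points)), k):
            if all(math.dist(points[i], points[j]) <= eps
                   for i, j in combinations(combo, 2)):
                simplices.append(combo)
    return simplices

# Four vertices of a unit square: at eps = 1 the four sides appear as edges,
# but the diagonals (length sqrt(2)) do not, so no triangle is filled in and
# the complex is homotopy equivalent to a circle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(vietoris_rips(square, 1.0))
```

Raising eps past sqrt(2) fills in the triangles, collapsing the circle to a point; this dependence on the scale parameter is precisely what the Hausmann and Latschev results control.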
Abstract:
The research reported in this article is based on the Ph.D. project of Dr. RK, which was funded by the Scottish Informatics and Computer Science Alliance (SICSA). KvD acknowledges support from the EPSRC under the RefNet grant (EP/J019615/1).
Abstract:
Acknowledgments We thank Sally Rowland for helpful comments on the manuscript. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Abstract:
Human activities represent a significant burden on the global water cycle, with large and increasing demands placed on limited water resources by manufacturing, energy production and domestic water use. In addition to changing the quantity of available water resources, human activities lead to changes in water quality by introducing a large and often poorly-characterized array of chemical pollutants, which may negatively impact biodiversity in aquatic ecosystems, leading to impairment of valuable ecosystem functions and services. Domestic and industrial wastewaters represent a significant source of pollution to the aquatic environment due to inadequate or incomplete removal of chemicals introduced into waters by human activities. Currently, incomplete chemical characterization of treated wastewaters limits comprehensive risk assessment of this ubiquitous impact to water. In particular, a significant fraction of the organic chemical composition of treated industrial and domestic wastewaters remains uncharacterized at the molecular level. Efforts aimed at reducing the impacts of water pollution on aquatic ecosystems critically require knowledge of the composition of wastewaters to develop interventions capable of protecting our precious natural water resources.
The goal of this dissertation was to develop a robust, extensible and high-throughput framework for the comprehensive characterization of organic micropollutants in wastewaters by high-resolution accurate-mass mass spectrometry. High-resolution mass spectrometry provides the most powerful analytical technique available for assessing the occurrence and fate of organic pollutants in the water cycle. However, significant limitations in data processing, analysis and interpretation have prevented this technique from achieving comprehensive characterization of organic pollutants occurring in natural and built environments. My work aimed to address these challenges by developing automated workflows for the structural characterization of organic pollutants in wastewater and wastewater-impacted environments by high-resolution mass spectrometry, and by applying these methods in combination with novel data-handling routines to conduct detailed fate studies of wastewater-derived organic micropollutants in the aquatic environment.
In Chapter 2, chemoinformatic tools were implemented along with novel non-targeted mass spectrometric analytical methods to characterize, map, and explore an environmentally-relevant "chemical space" in municipal wastewater. This was accomplished by characterizing the molecular composition of known wastewater-derived organic pollutants and of substances prioritized as potential wastewater contaminants, and by using these databases to evaluate the pollutant-likeness of structures postulated for unknown organic compounds that I detected in wastewater extracts using high-resolution mass spectrometry approaches. Results showed that applying multiple computational mass spectrometric tools to the structural elucidation of unknown organic pollutants arising in wastewaters improved the efficiency and veracity of screening approaches based on high-resolution mass spectrometry. Furthermore, structural similarity searching was essential for prioritizing substances sharing structural features with known organic pollutants or with industrial and consumer chemicals that could enter the environment through use or disposal.
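Structural similarity searching of the kind described above is commonly scored with the Tanimoto (Jaccard) coefficient over molecular fingerprints. The sketch below uses hand-made bit sets purely to illustrate the score; real workflows derive fingerprints from molecular structure with a chemoinformatics toolkit, and none of the example data comes from the dissertation.

```python
# Hedged sketch: rank candidate structures by Tanimoto similarity to a known
# pollutant's fingerprint. Fingerprints are represented as sets of "on" bits.

def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

known_pollutant = {1, 4, 7, 9, 12, 15}      # fingerprint of a known pollutant
candidate_close = {1, 4, 7, 9, 12, 21}      # unknown sharing most features
candidate_far = {2, 5, 20, 33}              # structurally unrelated unknown

print(tanimoto(known_pollutant, candidate_close))
print(tanimoto(known_pollutant, candidate_far))
```

Ranking unknowns by their best Tanimoto score against a pollutant database is one simple way to operationalize the "pollutant-likeness" prioritization the chapter describes.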
I then applied this comprehensive methodological and computational non-targeted analysis workflow to micropollutant fate analysis in domestic wastewaters (Chapter 3), surface waters impacted by water reuse activities (Chapter 4) and effluents of wastewater treatment facilities receiving wastewater from oil and gas extraction activities (Chapter 5). In Chapter 3, I showed that application of chemometric tools aided in the prioritization of non-targeted compounds arising at various stages of conventional wastewater treatment by partitioning high-dimensional data into rational chemical categories based on knowledge of organic chemical fate processes, resulting in the classification of organic micropollutants based on their occurrence and/or removal during treatment. Similarly, in Chapter 4, high-resolution sampling and broad-spectrum targeted and non-targeted chemical analysis were applied to assess the occurrence and fate of organic micropollutants in a water reuse application, wherein reclaimed wastewater was applied for irrigation of turf grass. Results showed that the organic micropollutant composition of surface waters receiving runoff from wastewater-irrigated areas appeared to be minimally impacted by wastewater-derived organic micropollutants. Finally, Chapter 5 presents results on the comprehensive organic chemical composition of oil and gas wastewaters treated for surface water discharge. Concurrent analysis of effluent samples by complementary, broad-spectrum analytical techniques revealed low levels of hydrophobic organic contaminants but elevated concentrations of polymeric surfactants, which may affect the fate and analysis of contaminants of concern in oil and gas wastewaters.
Taken together, my work represents significant progress in the characterization of polar organic chemical pollutants associated with wastewater-impacted environments by high-resolution mass spectrometry. Application of these comprehensive methods to examine micropollutant fate processes in wastewater treatment systems, water reuse environments, and water applications in oil/gas exploration yielded new insights into the factors that influence transport, transformation, and persistence of organic micropollutants in these systems across an unprecedented breadth of chemical space.
Abstract:
In the last two decades, the field of homogeneous gold catalysis has been extremely active, growing at a rapid pace. Another rapidly growing field, computational chemistry, has often been applied to the investigation of various gold-catalyzed reaction mechanisms. Unfortunately, a number of recent mechanistic studies have utilized computational methods that have been shown to be inappropriate and inaccurate in their description of gold chemistry. This work presents an overview of available computational methods with a focus on the approximations and limitations inherent in each, and offers a review of experimentally characterized gold(I) complexes and proposed mechanisms as compared with their computationally modeled counterparts. No attempt is made to identify a "recommended" computational method for investigations of gold catalysis; rather, discrepancies between experimentally and computationally obtained values are highlighted, and the systematic errors between different computational methods are discussed.
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease in which the heart muscle is partially thickened and blood flow is, potentially fatally, obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests, owing to the cost and time involved in having an expert cardiologist interpret the results of these tests. Initially we set out to develop a classifier for automated prediction of young athletes' heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past to classify individual heartbeats into different types of arrhythmia, as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients to two groups: HCM vs. non-HCM. The challenges we address include missing ECG waves in one or more leads and the high dimensionality of a large feature set, for which we propose imputation and feature-selection methods. We develop heartbeat classifiers employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM.
The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
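The patient-level decision rule described above (classify a full 12-lead ECG by the proportion of its heartbeats flagged as HCM) can be sketched in a few lines. The threshold and the example beat labels are illustrative assumptions, not the dissertation's fitted values or its Random Forest/SVM beat classifiers.

```python
# Minimal sketch of proportion-based patient classification: a beat-level
# classifier labels each heartbeat (1 = HCM-like, 0 = not), and the patient
# is classified from the fraction of HCM-like beats.

def classify_patient(beat_labels, threshold=0.5):
    """beat_labels: per-heartbeat predictions, 1 = HCM, 0 = non-HCM."""
    if not beat_labels:
        raise ValueError("no heartbeats to classify")
    proportion_hcm = sum(beat_labels) / len(beat_labels)
    return "HCM" if proportion_hcm > threshold else "non-HCM"

print(classify_patient([1, 1, 1, 0, 1, 1, 0, 1]))   # 6 of 8 beats flagged
print(classify_patient([0, 0, 1, 0, 0, 0, 0, 0]))   # 1 of 8 beats flagged
```

Aggregating over many beats makes the patient-level decision robust to occasional beat-level misclassifications, which is the design rationale for this kind of rule.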
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive, and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM, pursuing two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living, at present and future time points, using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second objective is to address the emerging problem of long assessment times, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second focuses on reducing the time of the two longest individual KINARM tasks.
Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reducing assessment time on a broader set of KINARM tasks. Overall, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
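The hierarchical task-selection strategy described above can be sketched as a simple gate: an early task's score decides whether the remaining, longer tasks need to run at all. The task names, durations, and pass threshold below are illustrative assumptions, not the actual KINARM battery or the thesis's decision model.

```python
# Hedged sketch of hierarchical task selection for assessment-time reduction:
# run a short gating task first; if the subject clearly performs within
# normal limits on it, skip the longer follow-up tasks.

def run_assessment(tasks, gate_score, pass_threshold=0.8):
    """tasks: list of (name, minutes); tasks[0] is the gating task.
    Returns (tasks performed, total minutes)."""
    gate_name, gate_minutes = tasks[0]
    performed = [gate_name]
    total_minutes = gate_minutes
    if gate_score < pass_threshold:          # impairment suspected: full battery
        for name, minutes in tasks[1:]:
            performed.append(name)
            total_minutes += minutes
    return performed, total_minutes

battery = [("gating task", 5), ("follow-up task A", 12), ("follow-up task B", 15)]
print(run_assessment(battery, gate_score=0.9))   # unimpaired: gate only
print(run_assessment(battery, gate_score=0.4))   # impaired: all tasks run
```

In this toy battery an unimpaired subject spends 5 minutes instead of 32, which is the mechanism behind the large task-selection savings reported in the abstract.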