925 results for text analytic approaches
Abstract:
For several centuries, Japanese scholars have argued that their nation’s culture—including its language, religion, and ways of thinking—is somehow unique. The darker side of this rhetoric, sometimes known by the English term “Japanism” (nihon-jinron), played no small role in the nationalist fervor of the late nineteenth and early twentieth centuries. While much of the so-called “ideology of Japanese uniqueness” can be dismissed, in terms of the Japanese approach to “religion,” there may be something to it. This paper highlights some distinctive—if not entirely unique—features of the way religion has been categorized and understood in Japanese tradition, contrasting these with Western (i.e., Abrahamic), and to a lesser extent Indian and Chinese, understandings. Particular attention is given to the priority of praxis over belief in the Japanese religious context.
Abstract:
Pesiqta Rabbati is a unique homiletic midrash that follows the liturgical calendar in its presentation of homilies for festivals and special Sabbaths. This article uses Pesiqta Rabbati to present a global theory of the literary production of rabbinic/homiletic literature. With respect to Pesiqta Rabbati, it explores such areas as dating, textual witnesses, the integrative apocalyptic meta-narrative, describing and mapping the structure of the text, internal and external constraints that impacted upon the text, text-linguistic analysis, form-analysis (problems in the texts and linguistic gap-filling), transmission of the text, strict formalization of a homiletic unit, deconstructing and reconstructing homiletic midrashim based upon form-analytic units of the homily, Neusner’s documentary hypothesis, surface structures of the homiletic unit, and textual variants. The suggested methodology may assist scholars in producing editions of midrashic works by eliminating superfluous material, and in decoding and defining ancient texts.
Abstract:
This article reviews the process of intervention research, with particular emphasis on interventions with and for older adults. First, we consider the context of intervention research: What are the resources and constraints that shape the process of intervention and research? Second, we outline a taxonomy of interventions, reflecting different combinations of level and timing of interventions. Third, we consider the types of research designs that are most appropriate in the different contexts of applied research.
Abstract:
Nurse's aides are the primary caregivers in nursing homes, a major receiving site for elders with behavioral and psychiatric problems. We describe the development, psychometric properties, and utility of a brief instrument designed to assess aides' knowledge of three specific mental health problems (depression, agitation, and disorientation) and behavioral approaches to them. The instrument was administered to 191 nurse's aides and 21 clinicians with training in behavioral management and experience with older residents. The nurse's aides averaged 11 of 17 correct answers, and the clinicians averaged 15 of 17 correct answers. Implications for staff training and consultation activities in nursing homes are discussed.
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity at the corresponding mass-per-charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct them. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
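To make the peak-binarization and boosting strategy described in this abstract concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the authors' implementation: the simulated spectra, the neighborhood width, and the use of scikit-learn's AdaBoostClassifier (whose default base learner is a depth-one decision stump) are all choices made for the example.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def binarize_peaks(intensities, window=5):
    """Flag a 1 wherever the intensity is the maximum of its local
    neighborhood along the mass-per-charge (x) axis, else 0."""
    n = len(intensities)
    flags = np.zeros(n, dtype=int)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        if intensities[i] == intensities[lo:hi].max():
            flags[i] = 1
    return flags

# Hypothetical data: rows are specimens, columns are mass-per-charge points.
rng = np.random.default_rng(0)
spectra = rng.gamma(2.0, 1.0, size=(60, 500))
labels = rng.integers(0, 2, size=60)            # e.g. 0 = normal, 1 = cancer

# Reduce noisy intensities to binary peak indicators, one per mass-per-charge point.
X = np.vstack([binarize_peaks(s) for s in spectra])

# Boosting combines the binary predictors into a summary classifier;
# the default base learner is a depth-one decision tree (a "stump").
clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In the actual study the summary classifier would be evaluated on an independent test set, as the abstract emphasizes; the training-accuracy print above only shows that the pipeline runs end to end.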
Abstract:
Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences in samples. Empirical cumulative distribution function (ECDF) plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. Conclusions: Graphical exploratory data analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
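The graphical quality-assessment idea can be sketched briefly. The original work uses the Bioconductor package rflowcyt in R; the snippet below only illustrates the general idea of overlaid empirical cumulative distribution functions in Python with simulated intensities, so the sample names and the deliberately shifted distribution are assumptions made for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical ungated fluorescence intensities for three samples; in the
# study these would come from high throughput flow cytometry output.
rng = np.random.default_rng(1)
samples = {
    "plate1_A01": rng.lognormal(mean=2.0, sigma=0.5, size=5000),
    "plate1_A02": rng.lognormal(mean=2.0, sigma=0.5, size=5000),
    "plate2_A01": rng.lognormal(mean=2.6, sigma=0.5, size=5000),  # shifted: possible artifact
}

# Overlaid ECDFs: a sample whose curve departs from the others flags a
# potential non-biological difference worth reviewing before gating.
for name, values in samples.items():
    xs = np.sort(values)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    plt.plot(xs, ys, label=name)
plt.xscale("log")
plt.xlabel("fluorescence intensity")
plt.ylabel("empirical CDF")
plt.legend()
plt.show()
```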
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
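As a rough illustration of the kind of batch-aware shrinkage this abstract alludes to (not the CRLMM model itself, which is defined in the paper and its R package), the sketch below shrinks noisy batch-specific means toward an across-batch mean in proportion to their sampling variance; all data and variable names are hypothetical.

```python
import numpy as np

# Hypothetical quantile-normalized intensities for one locus, grouped by batch.
rng = np.random.default_rng(3)
batches = {b: rng.normal(loc=2.0 + 0.1 * b, scale=0.3, size=n)
           for b, n in enumerate([40, 60, 25])}

# Shrink each batch mean toward the across-batch (grand) mean in proportion
# to how noisy the batch estimate is -- a simple empirical-Bayes-style weight.
grand = np.mean(np.concatenate(list(batches.values())))
tau2 = np.var([x.mean() for x in batches.values()], ddof=1)  # between-batch variance
for b, x in batches.items():
    se2 = x.var(ddof=1) / len(x)          # sampling variance of the batch mean
    w = tau2 / (tau2 + se2)               # weight on the batch-specific mean
    shrunk = w * x.mean() + (1 - w) * grand
    print(f"batch {b}: raw mean {x.mean():.3f}, shrunk mean {shrunk:.3f}")
```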
Abstract:
The longitudinal dimension of schizophrenia and related severe mental illness is a key component of theoretical models of recovery. However, empirical longitudinal investigations have been underrepresented in research on the psychopathology of schizophrenia. Similarly, traditional approaches to longitudinal analysis of psychopathological data have had serious limitations. The utilization of modern longitudinal methods is necessary to capture the complexity of biopsychosocial models of treatment and recovery in schizophrenia. The present paper summarizes empirical data from traditional longitudinal research investigating recovery in symptoms, neurocognition, and social functioning. Studies conducted under treatment-as-usual conditions are compared with psychosocial intervention studies, and potential treatment mechanisms of psychosocial interventions are discussed. Investigations of rehabilitation for schizophrenia using the longitudinal analytic strategies of growth curve and time series analysis are demonstrated. The respective advantages and disadvantages of these modern methods are highlighted. Their potential use for future research on treatment effects and recovery in schizophrenia is also discussed.
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an alternative to experimental analysis of fluid-structure interaction (FSI). Accurately modeling such systems numerically would save considerable time, effort, and cost. These advantages are even more apparent for large structures such as bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time-integration schemes, allowing each part of the problem to be tackled as a separate module. The current algorithm for the KLE uses an unstructured quadrilateral mesh, formed by dividing each triangle of an unstructured triangular mesh into three quadrilaterals for spatial discretization. This research deals with determining a suitable measure of mesh quality based on the physics of the problems being tackled. This is followed by exploring methods to improve the quality of the quadrilateral elements obtained from the triangles, and thereby the overall mesh quality. A series of numerical experiments were designed and conducted for this purpose, and the results obtained were tested on different geometries with varying degrees of mesh density.
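The abstract does not state which mesh-quality measure the thesis settles on, so the sketch below uses one widely used metric for quadrilateral elements, the minimum scaled Jacobian over the four corners, purely as an assumed example of what such a measure can look like.

```python
import numpy as np

def scaled_jacobian(quad):
    """Minimum scaled Jacobian over the four corners of a 2D quadrilateral.
    quad is a sequence of four (x, y) corners in counter-clockwise order.
    Values near 1 indicate near-rectangular corners; values <= 0 indicate
    a degenerate or inverted element."""
    q = np.asarray(quad, dtype=float)
    vals = []
    for i in range(4):
        e1 = q[(i + 1) % 4] - q[i]   # edge to the next corner
        e2 = q[(i - 1) % 4] - q[i]   # edge to the previous corner
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        vals.append(cross / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return min(vals)

print(scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))        # unit square -> 1.0
print(scaled_jacobian([(0, 0), (1, 0), (1.2, 0.3), (0, 1)]))    # skewed quad -> < 1.0
```

A measure like this can be evaluated for every quadrilateral produced by splitting the triangles, and mesh-improvement steps can then be judged by how the worst-element and average values change.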
Abstract:
Volcanic ash clouds can be fed by an upward-directed eruption column (Plinian column) or by elutriation from extensive pyroclastic flows (coignimbrite cloud). For large-scale eruptions, there is considerable uncertainty about which mechanism is dominant. Here we analyze in a novel way a comprehensive grainsize database for pyroclastic deposits. We demonstrate that the Mount Pinatubo climactic eruption deposits were substantially derived from coignimbrite clouds, and not only from a Plinian cloud as generally thought. Coignimbrite ash-fall deposits are much richer in breathable <10 μm ash (5–25 wt%) than pure Plinian ash at most distances from the source volcano. We also show that coignimbrite ash clouds, as at Pinatubo, are expected to be more water-rich than Plinian clouds, leading to removal of more HCl prior to stratospheric injection and thereby reducing their atmospheric impact.
Abstract:
After teaching regular education secondary mathematics for seven years, I accepted a position in an alternative education high school. Over the next four years, the State of Michigan adopted new graduation requirements phasing in a mandate for all students to complete Geometry and Algebra 2 courses. Since many of my students were already struggling in Algebra 1, getting them through Geometry and Algebra 2 seemed like a daunting task. To better instruct my students, I wanted to know how other teachers in similar situations were addressing the new High School Content Expectations (HSCEs) in upper-level mathematics. This study examines how thoroughly alternative education teachers in Michigan are addressing the HSCEs in their courses, what approaches they have found most effective, and what issues are preventing teachers and schools from successfully implementing the HSCEs. Twenty-six alternative high school educators completed an online survey that included a variety of questions regarding school characteristics, curriculum alignment, implementation approaches, and issues. Follow-up phone interviews were conducted with four of these participants. The survey responses were used to categorize schools as successful, unsuccessful, or neutral in terms of meeting the HSCEs. Responses from schools in each category were compared to identify common approaches and issues among them and to identify significant differences between school groups. Data analysis showed that successful schools taught more of the HSCEs through a variety of instructional approaches, with an emphasis on varying the ways students learned the material. Individualized instruction was frequently mentioned by successful schools and was strikingly absent from unsuccessful school responses. The main obstacle to successful implementation of the HSCEs identified in the study was gaps in student knowledge, which in turn made pace of instruction a significant issue. School representatives were fairly united in the view that the Algebra 2 graduation requirement was not appropriate for all alternative education students. Possible implications of these findings are discussed.
Abstract:
The purpose of this study is to explore a Kalman Filter approach to estimating the swing of crane-suspended loads. Measuring real-time swing is needed to implement swing damping control strategies in which crane joints are used to remove energy from a swinging load. The typical solution to measuring swing uses an inertial sensor attached to the hook block. Measured hook block twist is used to resolve the other two sensed body rates into tangential and radial swing. Uncertainty in the twist measurement leads to inaccurate tangential and radial swing calculations and ineffective swing damping. A typical mitigation approach is to bandpass the inertial sensor readings to remove low-frequency drift and high-frequency noise. The center frequency of the bandpass filter is usually designed to track the load length, and the passband width is set to trade off performance against damping loop gain. The Kalman Filter approach developed here allows all swing motions (radial, tangential, and twist) to be measured without the use of a bandpass filter. This provides an alternate solution for swing damping control implementation. After developing a Kalman Filter solution for a two-dimensional swing scenario, the three-dimensional system is considered, where simplifying assumptions suggested by the two-dimensional study are exploited. One of the interesting aspects of the three-dimensional study is the hook block twist model. Unlike the mass-independence of a pendulum's natural frequency, the twist natural frequency depends both on the pendulum length and on the load's mass distribution. The linear Kalman Filter is applied to experimental data, demonstrating the ability to extract the individual swing components for complex motions. It should be noted that the three-dimensional simplifying assumptions preclude the ability to measure two "secondary" hook block rotations. The ability to segregate these motions from the primary swing degrees of freedom was illustrated in the two-dimensional study and could be included in the three-dimensional solution if they were found to be important for a particular application.
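A minimal sketch of the flavor of filter described here, assuming small-angle planar pendulum dynamics with a rate-only measurement; the state, matrices, noise levels, and time step are illustrative assumptions, not the thesis's actual two- or three-dimensional filter.

```python
import numpy as np

# Small-angle pendulum model: state = [swing angle (rad), angular rate (rad/s)].
g, L, dt = 9.81, 10.0, 0.01           # gravity, assumed load length, time step
F = np.array([[1.0, dt],
              [-(g / L) * dt, 1.0]])  # discretized dynamics (Euler)
H = np.array([[0.0, 1.0]])            # measure angular rate only (gyro-like)
Q = np.diag([1e-6, 1e-4])             # process noise covariance (assumed)
R = np.array([[1e-3]])                # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a noisy rate measurement of a free swing and filter it.
rng = np.random.default_rng(2)
x_true = np.array([0.1, 0.0])         # 0.1 rad initial swing
x_est, P = np.zeros(2), np.eye(2) * 0.1
for _ in range(2000):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_est, P = kalman_step(x_est, P, z)
print("estimated swing angle (rad):", x_est[0])
```

The point of the sketch is that the filter recovers the swing angle from rate-only measurements without a bandpass stage; the full problem adds radial, tangential, and twist states and the hook-block twist model discussed in the abstract.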
Abstract:
Mining activity in Butte, Montana has taken place, or continues to take place, within the urban residential area of Butte itself. This has led to urban areas with high concentrations of toxic metals such as arsenic, lead, copper, zinc, mercury, and cadmium. Advances in protein study and gene sequencing have opened the possibility of finding molecular biomarkers whose presence, absence, or morphological changes could indicate disease processes in populations exposed to environmental toxins. While in principle biomarkers can be any chemicals or metabolites, as well as proteins and genes, that are indicative of exposure to xenobiotics, this study seeks to identify changes in cellular pathways that suggest chronic (or acute) exposure to low levels of metals associated with historical mining activities on the Butte Hill that could cause oxidative stress or other stress to the cell.