975 results for Matrix analytic methods.
Abstract:
Southeast Texas, including Houston, has a large presence of industrial facilities and has been documented to have poorer air quality and significantly higher cancer rates than the remainder of Texas. Given citizens' concerns in this fourth-largest city in the U.S., Mayor Bill White recently partnered with the UT School of Public Health to determine methods to evaluate the health risks of hazardous air pollutants (HAPs). Sexton et al. (2007) published a report that strongly encouraged analytic studies linking these pollutants with health outcomes. In response, we set out to complete the following aims: 1. determine the optimal exposure assessment strategy to assess the association between childhood cancer rates and increased ambient levels of benzene and 1,3-butadiene (in an ecologic setting) and 2. evaluate whether census tracts with the highest levels of benzene or 1,3-butadiene have a higher incidence of childhood lymphohematopoietic cancer compared with census tracts with the lowest levels of benzene or 1,3-butadiene, using Poisson regression. The first aim was achieved by evaluating the usefulness of four data sources: geographic information systems (GIS) to identify proximity to point sources of industrial air pollution, industrial emission data from the U.S. EPA's Toxic Release Inventory (TRI), routine monitoring data from the U.S. EPA Air Quality System (AQS) from 1999-2000, and modeled ambient air levels from the U.S. EPA's 1999 National Air Toxic Assessment Project (NATA) ASPEN model. Once these four data sources were evaluated, we narrowed them down to two: the routine monitoring data from the AQS for the years 1998-2000 and the 1999 U.S. EPA NATA ASPEN modeled data. We applied kriging (spatial interpolation) methodology to the monitoring data and compared the kriged values to the ASPEN modeled data. Our results indicated poor agreement between the two methods. Relative to the U.S. EPA ASPEN modeled estimates, relying on kriging to classify census tracts into exposure groups would have caused a great deal of misclassification. To address the second aim, we additionally obtained childhood lymphohematopoietic cancer data for 1995-2004 from the Texas Cancer Registry. The U.S. EPA ASPEN modeled data were used to estimate ambient levels of benzene and 1,3-butadiene in separate Poisson regression analyses. All data were analyzed at the census tract level. We found that census tracts with the highest benzene levels had elevated rates of all leukemia (rate ratio (RR) = 1.37; 95% confidence interval (CI), 1.05-1.78). Among census tracts with the highest 1,3-butadiene levels, we observed an RR of 1.40 (95% CI, 1.07-1.81) for all leukemia. We detected no associations between benzene or 1,3-butadiene levels and childhood lymphoma incidence. This study is the first to examine this association in Harris and surrounding counties in Texas and is among the first to correlate monitored levels of HAPs with childhood lymphohematopoietic cancer incidence, evaluating several analytic methods in an effort to determine the most appropriate approach to test this association. Despite the recognized weaknesses of ecologic analyses, our analysis suggests an association between childhood leukemia and hazardous air pollution.
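The census-tract analysis described above lends itself to a generalized linear model with a log person-time offset. Below is a minimal, hypothetical sketch of that kind of Poisson rate-ratio regression using statsmodels; the variable names and toy counts are our own and are not the study's data.

```python
# Illustrative sketch (not the study's actual code): Poisson regression of
# census-tract childhood leukemia counts on an exposure indicator, with
# person-years as an offset. All data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# toy data: one row per census tract
df = pd.DataFrame({
    "cases": [3, 1, 4, 7, 2, 9],                  # leukemia case counts
    "person_years": [21000, 18000, 25000, 23000, 20000, 24000],
    "high_benzene": [0, 0, 0, 1, 1, 1],           # 1 = highest exposure category
})

X = sm.add_constant(df[["high_benzene"]])
model = sm.GLM(
    df["cases"], X,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),            # log person-years offset
).fit()

rr = np.exp(model.params["high_benzene"])         # rate ratio, high vs. low
ci_low, ci_high = np.exp(model.conf_int().loc["high_benzene"])
print(f"RR = {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```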
Abstract:
For the quantification of nitrates there are numerous techniques, and analysts have not reached consensus on which is the most suitable. For this reason, four methods for determining nitrates in plant samples were compared in order to evaluate the correlation among them and to establish guidelines for their use. A total of 690 samples of lettuce (Lactuca sativa L. var. capitata), of the crisphead and butterhead types, collected over the course of a year at the Mercado Cooperativo de Guaymallén (Mendoza, Argentina), were used. Based on the nitrate contents found in the study population, a proportional stratified random sub-sampling was carried out to obtain a number of samples representing the variability of the whole population. Four methods were used for the determination of nitrates: 1. steam distillation, taken as the reference method; 2. colorimetry by nitration with salicylic acid; 3. modified colorimetry; and 4. potentiometry with an ion-selective electrode. Different regression models between the reference method and the other three were tested, with the linear model giving the best fit in all cases. The methods studied behaved similarly. The highest correlation (r² = 93%) was observed between steam distillation and potentiometry; nevertheless, the remaining methods also showed high correlation. Consequently, the choice of analytical procedure will depend mainly on the number of samples to be analyzed, the time required by the analysis, and its cost.
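A method-comparison exercise like this one reduces to regressing each candidate assay on the reference method and inspecting r². A minimal sketch with SciPy follows; the nitrate measurements are invented for illustration.

```python
# Hedged sketch of the regression comparison described in the abstract:
# an alternative nitrate assay is regressed on the reference method
# (steam distillation). The measurement values below are made up.
from scipy import stats

reference = [120, 250, 430, 610, 880, 1040]       # mg NO3-/kg, steam distillation
potentiometry = [115, 262, 418, 640, 855, 1080]   # ion-selective electrode

fit = stats.linregress(reference, potentiometry)
print(f"slope={fit.slope:.3f} intercept={fit.intercept:.1f} "
      f"r^2={fit.rvalue**2:.3f}")   # the abstract reports r^2 = 93% for this pair
```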
Abstract:
We have analyzed the performance of a PET demonstrator formed by two sectors of four monolithic detector blocks placed face-to-face. Both front-end and read-out electronics have been evaluated by means of coincidence measurements using a rotating 22Na source placed at the center of the sectors in order to emulate the behavior of a complete full ring. A continuous training method based on neural network (NN) algorithms has been carried out to determine the entrance points over the surface of the detectors. Reconstructed images from 1 MBq 22Na point source and 22Na Derenzo phantom have been obtained using both filtered back projection (FBP) analytic methods and the OSEM 3D iterative algorithm available in the STIR software package [1]. Preliminary data on image reconstruction from a 22Na point source with Ø = 0.25 mm show spatial resolutions from 1.7 to 2.1 mm FWHM in the transverse plane. The results confirm the viability of this design for the development of a full-ring brain PET scanner compatible with magnetic resonance imaging for human studies.
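For readers unfamiliar with FBP, the sketch below shows the analytic reconstruction step in miniature using scikit-image rather than the STIR package the authors actually used: a phantom is forward-projected into a sinogram and reconstructed with a ramp filter.

```python
# Minimal filtered back projection (FBP) illustration with scikit-image
# (a generic stand-in, not the STIR software from the abstract).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)        # 200x200 test phantom
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=theta)               # simulated projection data
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP reconstruction

err = np.sqrt(np.mean((recon - image) ** 2))
print(f"FBP reconstruction RMSE: {err:.4f}")
```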
Abstract:
Whole-genome duplication approximately 10^8 years ago was proposed as an explanation for the many duplicated chromosomal regions in Saccharomyces cerevisiae. Here we have used computer simulations and analytic methods to estimate some parameters describing the evolution of the yeast genome after this duplication event. Computer simulation of a model in which 8% of the original genes were retained in duplicate after genome duplication, and 70–100 reciprocal translocations occurred between chromosomes, produced arrangements of duplicated chromosomal regions very similar to the map of real duplications in yeast. An analytical method produced an independent estimate of 84 map disruptions. These results imply that many smaller duplicated chromosomal regions exist in the yeast genome in addition to the 55 originally reported. We also examined the possibility of determining the original order of chromosomal blocks in the ancestral unduplicated genome, but this cannot be done without information from one or more additional species. If the genome sequence of one other species (such as Kluyveromyces lactis) were known it should be possible to identify 150–200 paired regions covering the whole yeast genome and to reconstruct approximately two-thirds of the original order of blocks of genes in yeast. Rates of interchromosome translocation in yeast and mammals appear similar despite their very different rates of homologous recombination per kilobase.
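A toy version of the simulation described above can be written in a few lines: duplicate a genome, apply random reciprocal translocations to one copy, and count how many ancestral gene adjacencies are disrupted. The representation and parameters below are our own simplifications of the model in the abstract (the gene-loss step that retains ~8% of duplicates is omitted for brevity).

```python
# Toy re-implementation of the kind of simulation the authors describe.
# Chromosome/gene counts are arbitrary; 84 translocations follows the
# abstract's analytical estimate.
import random

random.seed(1)
N_CHROM, GENES_PER_CHROM, N_TRANSLOC = 16, 350, 84

# two genome copies; each chromosome is a list of ancestral gene IDs
copy_a = [list(range(c * GENES_PER_CHROM, (c + 1) * GENES_PER_CHROM))
          for c in range(N_CHROM)]
copy_b = [chrom[:] for chrom in copy_a]

def translocate(genome):
    """Reciprocal translocation: swap the tails of two chromosomes."""
    i, j = random.sample(range(len(genome)), 2)
    bi = random.randrange(1, len(genome[i]))
    bj = random.randrange(1, len(genome[j]))
    genome[i][bi:], genome[j][bj:] = genome[j][bj:], genome[i][bi:]

for _ in range(N_TRANSLOC):
    translocate(copy_b)           # rearrange one copy relative to the other

# adjacencies preserved in both copies delimit duplicated blocks
adj_a = {(c[k], c[k + 1]) for c in copy_a for k in range(len(c) - 1)}
adj_b = {(c[k], c[k + 1]) for c in copy_b for k in range(len(c) - 1)}
broken = len(adj_a - adj_b)
print(f"{broken} ancestral adjacencies disrupted -> ~{broken + N_CHROM} blocks")
```

With 84 translocations each breaking two adjacencies, this yields on the order of 150–200 segments, consistent with the paired-region count given in the abstract.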
Abstract:
This study investigated the interaction of the lubricants used in metal machining with natural organic matter (humic substances), their mobility in soil, their microbiological degradation, and their removal from soil. The changes in the characteristics of the fluids after use were also studied. For the degradation of the fluid samples, which were subjected to the effects of different environmental factors, four treatments were used: (i) native microorganisms, called the control sample; (ii) the control sample with organic matter from peat; (iii) the control sample with added microorganisms from cutting-machine effluents; and (iv) the control sample with added composting microorganisms. To study degradation without the effect of climatic parameters, contaminated soil samples kept in an incubator (estufa) were used, together with inoculation of the microorganisms into culture media with and without an added alternative carbon source. GC-FID and GC-MS were used as analytical techniques. These techniques are suitable for studying both the composition of the fluids and the products of microbiological degradation, and the analytical methods were optimized for use in environmental monitoring and degradation studies. FTIR and EF analyses were also employed in the identification and quantification of the fluids. Considerable interaction of the soluble fluids with soil organic matter was observed, although some of their constituents showed high mobility, as well as an accelerated degradation process during use. The insoluble fluids, in contrast, were less mobile, being retained in the soil organic matter; however, they were more readily degraded in the environment than the soluble ones. The addition of organic matter and of composting microorganisms accelerated the degradation process for all cutting fluids investigated.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Objective: The tripartite model of anxiety and depression has been proposed as a representation of the structure of anxiety and depression symptoms. The Mood and Anxiety Symptom Questionnaire (MASQ) has been put forward as a valid measure of the tripartite model of anxiety and depression symptoms. This research set out to examine the factor structure of anxiety and depression symptoms in a clinical sample to assess the MASQ's validity for use in this population. Methods: The present study uses confirmatory factor analytic methods to examine the psychometric properties of the MASQ in 470 outpatients with anxiety and mood disorders. Results: The results showed that none of the previously reported two-factor, three-factor or five-factor models adequately fit the data, irrespective of whether items or subscales were used as the unit of analysis. Conclusions: It was concluded that the factor structure of the MASQ in a mixed anxiety/depression clinical sample does not support a structure consistent with the tripartite model. This suggests that researchers using the MASQ with anxious/depressed individuals should be mindful of the instrument's psychometric limitations.
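Model-fit judgments in confirmatory factor analyses like this one rest on indices computed from the model chi-square. For reference, the standard RMSEA formula can be computed as below; the inputs are hypothetical, not the study's statistics.

```python
# RMSEA (Root Mean Square Error of Approximation) from a model chi-square,
# using the standard formula; values below ~0.05-0.08 are conventionally
# read as good-to-acceptable fit. Example inputs are invented.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# e.g., a hypothetical CFA with chi2=812.4 on df=321 for N=470 respondents
print(f"RMSEA = {rmsea(812.4, 321, 470):.3f}")
```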
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science-policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice, in administrative decision-making involving risk assessment. The substitution of decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a sound scientific way explicitly to account for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by improved effectiveness of regulations in achieving desired goals. Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
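The decision-theoretic alternative the authors advocate can be illustrated in miniature: assign probabilities to competing scientific hypotheses and choose the action with the lowest expected loss, rather than conditioning on a single default assumption. All numbers in the sketch below are invented.

```python
# Minimal decision-theoretic sketch in the spirit of the abstract: weight
# competing scientific hypotheses by belief and pick the regulation that
# minimizes expected loss. Hypotheses, actions, and losses are illustrative.
hypotheses = {"linear_dose_response": 0.6, "threshold_effect": 0.4}  # beliefs

# total loss (health burden + compliance cost, arbitrary units) for each
# candidate action under each hypothesis
loss = {
    "strict_limit":  {"linear_dose_response": 40, "threshold_effect": 35},
    "default_limit": {"linear_dose_response": 90, "threshold_effect": 20},
}

expected = {
    action: sum(p * loss[action][h] for h, p in hypotheses.items())
    for action in loss
}
best = min(expected, key=expected.get)
print(expected, "-> choose:", best)
```

Updating the hypothesis probabilities as new evidence arrives, then re-running the comparison, is the sense in which decision theory "explicitly accounts for new knowledge" in the abstract.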
Abstract:
Quantitative genetics provides a powerful framework for studying phenotypic evolution and the evolution of adaptive genetic variation. Central to the approach is G, the matrix of additive genetic variances and covariances. G summarizes the genetic basis of the traits and can be used to predict the phenotypic response to multivariate selection or to drift. Recent analytical and computational advances have improved both the power and the accessibility of the necessary multivariate statistics. It is now possible to study the relationships between G and other evolutionary parameters, such as those describing the mutational input, the shape and orientation of the adaptive landscape, and the phenotypic divergence among populations. At the same time, we are moving towards a greater understanding of how the genetic variation summarized by G evolves. Computer simulations of the evolution of G, innovations in matrix comparison methods, and rapid development of powerful molecular genetic tools have all opened the way for dissecting the interaction between allelic variation and evolutionary process. Here I discuss some current uses of G, problems with the application of these approaches, and identify avenues for future research.
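The central prediction mentioned above, the phenotypic response to multivariate selection, comes from the Lande equation Δz̄ = Gβ. A minimal numpy sketch with an invented G matrix and selection gradient:

```python
# Predicting the multivariate response to selection from G via the Lande
# equation, delta_zbar = G @ beta. The (co)variances and gradient below
# are made up for illustration.
import numpy as np

G = np.array([[0.50, 0.20],     # additive genetic (co)variances, two traits
              [0.20, 0.30]])
beta = np.array([0.10, -0.05])  # directional selection gradient

delta_z = G @ beta              # predicted per-generation change in trait means
print(delta_z)                  # -> [0.04  0.005]
```

Note how the genetic covariance (the off-diagonal 0.20) pulls trait 2 upward even though selection on it is negative; this coupling is exactly why the structure of G matters for predicting divergence.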
Abstract:
This paper examines learning to collaborate in the context of industrial supply relationships. Evidence of collaboration, and individual and organizational learning, from an in-depth case study of a large organization and its relations with two key suppliers is discussed. Analytic methods developed to elicit such evidence and provide insights into learning processes and outcomes are presented. It is argued that it is possible for an organization and individuals to learn to develop resilient collaborative relationships, but this requires a more thorough consideration and understanding of issues such as trust, commitment and teamwork than has been typical to date. Suggestions for future practice and research are presented.
Abstract:
This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.
Abstract:
Nonbelieved memories (NBMs) highlight the independence between metamemorial judgments that contribute to the experience of remembering. Initial definitions of NBMs portrayed them as involving the withdrawal of autobiographical belief despite sustained recollection. While people rate belief for their NBMs as weaker than recollection, the average difference is too small to support the idea that belief is completely withdrawn in all cases. Furthermore, ratings vary considerably across NBMs. In two studies, we reanalyzed reports from prior studies to examine whether NBM reports reflect a single category or multiple sub-categories using cluster analytic methods. In Study 1, we identified three sub-types of NBMs. In Study 2 we incorporated the concept of belief in accuracy, and found that two of the clusters from Study 1 split into two clusters apiece. Higher ratings of recollection than belief in occurrence characterized all clusters, which were differentiated by the degree of difference between these variables. In both studies the clusters were differentiated by a number of memory characteristic ratings and by reasons reported as leading to the alteration of belief. Implications for understanding the remembering of past events and predicting the creation of NBMs are discussed.
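As a generic illustration of the cluster-analytic step (this abstract does not specify the algorithm, so k-means and the ratings below are our assumptions), sub-types can be recovered from belief-in-occurrence and recollection ratings like so:

```python
# Generic sketch of clustering nonbelieved-memory reports by their
# belief-in-occurrence and recollection ratings. The data and the choice
# of k-means are illustrative; the original studies may differ.
import numpy as np
from sklearn.cluster import KMeans

# rows = NBM reports; columns = [belief_in_occurrence, recollection] ratings
ratings = np.array([
    [1.0, 6.5], [1.5, 7.0], [2.0, 6.0],   # belief largely withdrawn
    [4.0, 6.5], [4.5, 7.5], [3.5, 6.0],   # belief weakened, not withdrawn
    [6.0, 7.5], [5.5, 6.5], [6.5, 7.0],   # belief only slightly below recollection
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ratings)
print(labels)  # cluster assignment for each report
```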
Abstract:
In the article - Menu Analysis: Review and Evaluation - by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: “Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give.” There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, is one way. The physical outline of the food, which a patron actually holds in his or her hand, is another. These are the most common senses of the word menu. The author concentrates primarily on the latter, and uses the act of counting the number of items sold on a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach. How does one approach a menu analysis? How do you qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. “Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not,” says Distinguished Professor Kotschevar. “The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment,” he further offers. The author wants you to know that assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria from which to gauge a particular menu item. In the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar provides at least three different matrix evaluation methods; they are defined as the Miller method, the Smith and Kasavana method, and the Pavesic method. He offers illustrated examples of each via a table format. These are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references examples of analysis methods which are not matrix based; the Hayes and Huffman Goal Value Analysis is one such method. The author sees no one method as better than another, and suggests that combining two or more of the methods would be a benefit.
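Of the matrix methods Kotschevar reviews, the Smith and Kasavana approach is the most mechanical to illustrate: each item's popularity is crossed against its contribution margin and the four quadrants are labeled. The sketch below uses simple means as cut-offs (the published method uses a 70%-of-expected-share popularity threshold) and invented menu data.

```python
# Sketch of a Smith-and-Kasavana-style menu matrix: classify items as
# Star / Plowhorse / Puzzle / Dog by popularity vs. contribution margin.
# Cut-offs simplified to plain means; data are made up.
menu = [  # (item, units sold, contribution margin per unit in $)
    ("ribeye",  180, 11.50),
    ("salmon",   90, 12.25),
    ("burger",  260,  5.75),
    ("risotto",  40,  4.10),
]

avg_sold = sum(m[1] for m in menu) / len(menu)
avg_cm = sum(m[2] for m in menu) / len(menu)

for name, sold, cm in menu:
    popular, profitable = sold >= avg_sold, cm >= avg_cm
    label = {(True, True): "Star", (True, False): "Plowhorse",
             (False, True): "Puzzle", (False, False): "Dog"}[(popular, profitable)]
    print(f"{name:8s} sold={sold:3d} cm=${cm:5.2f} -> {label}")
```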
Abstract:
How do infants learn word meanings? Research has established the impact of both parent and child behaviors on vocabulary development; however, the processes and mechanisms underlying these relationships are still not fully understood. Much of the existing literature focuses on direct paths to word learning, demonstrating that parent speech and child gesture use are powerful predictors of later vocabulary. However, an additional body of research indicates that these relationships do not always replicate, particularly when assessed in different populations, contexts, or developmental periods.
The current study examines the relationships between infant gesture, parent speech, and infant vocabulary over the course of the second year (10-22 months of age). Through the use of detailed coding of dyadic mother-child play interactions and a combination of quantitative and qualitative data analytic methods, the process of communicative development was explored. Findings reveal non-linear patterns of growth in both parent speech content and child gesture use. Analyses of contingency in dyadic interactions reveal that children are active contributors to communicative engagement through their use of gestures, shaping the type of input they receive from parents, which in turn influences child vocabulary acquisition. Recommendations for future studies and the use of nuanced methodologies to assess changes in the dynamic system of dyadic communication are discussed.
Abstract:
Queueing theory is the mathematical study of queues, or waiting lines, in which an item from inventory is provided to the customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way. The server picks up customers and serves them according to a certain service discipline. Customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time, and busy period are of primary interest in applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the mean queue length, the traffic intensity, the expected number waiting or receiving service, the mean busy period, the distribution of queue length, and the probability of encountering the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.
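For the simplest case, the single-server M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ), the measures listed above have closed forms; the sketch below computes the textbook formulas.

```python
# Standard M/M/1 performance measures: traffic intensity rho = lam/mu,
# L = rho/(1-rho), Lq = rho^2/(1-rho), W = 1/(mu-lam), Wq = rho/(mu-lam),
# and P(empty) = 1 - rho. Stable only when lam < mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu                       # traffic intensity / utilization
    return {
        "utilization rho": rho,
        "mean number in system L": rho / (1 - rho),
        "mean queue length Lq": rho**2 / (1 - rho),
        "mean time in system W": 1 / (mu - lam),
        "mean wait in queue Wq": rho / (mu - lam),
        "P(system empty)": 1 - rho,
    }

# e.g., 4 arrivals/hour served at 5/hour -> rho = 0.8, W = 1 hour
for k, v in mm1_metrics(lam=4.0, mu=5.0).items():
    print(f"{k}: {v:.3f}")
```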