863 results for critical path methods
Abstract:
The recent advent of new technologies has led to huge amounts of genomic data. With these data come new opportunities to understand biological cellular processes underlying hidden regulation mechanisms and to identify disease-related biomarkers for informative diagnostics. However, extracting biological insights from the immense amounts of genomic data is a challenging task. Therefore, effective and efficient computational techniques are needed to analyze and interpret genomic data. In this thesis, novel computational methods are proposed to address such challenges: a Bayesian mixture model, an extended Bayesian mixture model, and an Eigen-brain approach. The Bayesian mixture framework involves integration of the Bayesian network and the Gaussian mixture model. Based on the proposed framework and its conjunction with K-means clustering and principal component analysis (PCA), biological insights are derived, such as context-specific/dependent relationships and nested structures within microarray data where biological replicates are encapsulated. The Bayesian mixture framework is then extended to explore posterior distributions of network space by incorporating a Markov chain Monte Carlo (MCMC) model. The extended Bayesian mixture model summarizes the sampled network structures by extracting biologically meaningful features. Finally, an Eigen-brain approach is proposed to analyze in situ hybridization data for the identification of cell-type-specific genes, which can be useful for informative blood diagnostics. Computational results with region-based clustering reveal critical evidence of consistency with brain anatomical structure.
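The Gaussian mixture component of the framework above can be illustrated with a minimal expectation-maximization (EM) sketch. This is not the thesis's Bayesian mixture model (which integrates a Bayesian network); it is a plain 1-D Gaussian mixture fit in pure Python, and the function name and min/max initialization are illustrative choices of ours:

```python
import math

def em_gmm_1d(xs, k=2, iters=50):
    """Fit a 1-D Gaussian mixture with plain EM (illustrative sketch only)."""
    # Deterministic initialization: spread the k means across the data range.
    lo, hi = min(xs), max(xs)
    mus = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            ps = [w * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
                  for w, m, s in zip(weights, mus, sigmas)]
            total = sum(ps) or 1e-300  # guard against underflow
            resp.append([p / total for p in ps])
        # M-step: re-estimate each component from its responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))  # floor to avoid collapse
            weights[j] = nj / len(xs)
    return mus, sigmas, weights
```

On two well-separated clusters the estimated means land near the cluster centers; the thesis's framework layers a Bayesian network on top of such mixtures.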
Abstract:
A critical component of teacher education is the field experience during which candidates practice under the supervision of experienced teachers. Programs use the InTASC Standards to define the requisite knowledge, skills, and dispositions for teaching. Practicing teachers are familiar with the concepts of knowledge and skills, but they are less familiar with dispositions. Practicing teachers who mentor prospective teachers are underrepresented in the literature, but they are critical to teacher preparation. The research goals were to describe the self-identified dispositions of cooperating teachers, identify what cooperating teachers consider their role in preparing prospective teachers, and explain challenges that cooperating teachers face. Using a mixed methods design, I conducted a quantitative survey followed by a qualitative case study. When I compared survey and case study data, cooperating teachers reported possessing the InTASC critical dispositions described in Standard 2: Learning Differences, Standard 3: Learning Environments, and Standard 9: Professional Learning and Ethical Practice, but not Standard 6: Assessment and Standard 10: Leadership and Collaboration. Cooperating teachers assume the roles of modeler, mentor and advisor, and informal evaluator. They explain that student teachers often lack the skills and dispositions to assume full teaching responsibilities, and they recommend that universities better prepare candidates for classrooms. Cooperating teachers felt university evaluations were not relevant to the reality of teaching. I recommend modifying field experiences to increase the quantity and duration of classroom placements. I suggest further research to detail cooperating teacher dispositions, compare cooperating teachers who work with different universities, and determine whether cooperating teacher dispositions influence student teacher dispositions.
Abstract:
Passive sampling devices (PS) are widely used for pollutant monitoring in water, but estimation of measurement uncertainties by PS has seldom been undertaken. The aim of this work was to identify key parameters governing PS measurements of metals and their dispersion. We report the results of an in situ intercomparison exercise on diffusive gradient in thin films (DGT) in surface waters. Interlaboratory uncertainties of time-weighted average (TWA) concentrations were satisfactory (from 28% to 112%) given the number of participating laboratories (10) and ultra-trace metal concentrations involved. Data dispersion of TWA concentrations was mainly explained by uncertainties generated during DGT handling and analytical procedure steps. We highlight that DGT handling is critical for metals such as Cd, Cr and Zn, implying that DGT assembly/dismantling should be performed in very clean conditions. Using a unique dataset, we demonstrated that DGT markedly lowered the LOQ in comparison to spot sampling and stressed the need for accurate data calculation.
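The TWA concentration discussed above is conventionally obtained from the standard DGT equation, C = M·Δg/(D·A·t), where M is the mass accumulated on the resin, Δg the diffusive layer thickness, D the metal's diffusion coefficient in the gel, A the exposure window area, and t the deployment time. A minimal sketch (the function name and the example numbers are illustrative, not values from this study):

```python
def dgt_twa_concentration(mass_ng, delta_g_cm, D_cm2_per_s, area_cm2, time_s):
    """Time-weighted average concentration (ng per mL of water) from the
    standard DGT equation C = M * dg / (D * A * t)."""
    return mass_ng * delta_g_cm / (D_cm2_per_s * area_cm2 * time_s)

# Hypothetical deployment: 10 ng accumulated over a 7-day exposure.
c_week = dgt_twa_concentration(
    mass_ng=10.0,        # mass eluted from the resin gel
    delta_g_cm=0.094,    # typical diffusive gel + filter thickness
    D_cm2_per_s=5e-6,    # order of magnitude for a divalent metal at ~25 C
    area_cm2=3.14,       # exposure window
    time_s=7 * 86400,    # deployment time
)
```

Because M grows linearly with t for a constant concentration, doubling the deployment time with the same accumulated mass halves the inferred TWA concentration, which the sketch reproduces.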
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample size, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance.
Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reduction of the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong where these assumptions no longer hold.
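The optimality condition from Chapter 2 can be illustrated in the simplest setting. For D-optimality in the linear model with regressors f(x) = (1, x) on [-1, 1], the general equivalence theorem states that a design measure ξ is optimal iff the standardized variance d(x, ξ) = f(x)ᵀM(ξ)⁻¹f(x) never exceeds the number of parameters. A minimal sketch (the two-point design and function names are illustrative, not taken from the thesis):

```python
def info_matrix(design):
    """M(xi) = sum_i w_i f(x_i) f(x_i)^T for the linear model f(x) = (1, x).
    `design` is a list of (support point, weight) pairs with weights summing to 1."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, w in design:
        f = (1.0, x)
        for a in range(2):
            for b in range(2):
                m[a][b] += w * f[a] * f[b]
    return m

def variance_function(x, m):
    """d(x, xi) = f(x)^T M^{-1} f(x). The equivalence theorem says a design is
    D-optimal iff max_x d(x, xi) equals the number of parameters (here 2)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    f = (1.0, x)
    return sum(f[a] * inv[a][b] * f[b] for a in range(2) for b in range(2))
```

For the design placing weight 1/2 at each of x = -1 and x = +1, d(x, ξ) = 1 + x², which attains its maximum of 2 (the number of parameters) at the support points, certifying D-optimality on [-1, 1].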
Abstract:
This thesis examines digital technologies policies designed for Australian schools and the ways they are understood and interpreted by students, school staff, teachers, principals and policy writers. This study explores the ways these research participant groups interpret and understand the ‘ethical dimension’ of schools’ digital technologies policies for teaching and learning. In this thesis the ethical dimension is considered to be a dynamic concept which encompasses various elements including decisions, actions, values, issues, debates, education, discourses, and notions of right and wrong, in relation to ethics and uses of digital technologies in schools. In this study policy is taken to mean not only written texts but also discursive processes and policy documents, including national declarations, strategic plans and ‘acceptable use’ policies to guide the use of digital technologies in schools. The research is situated in the context of changes that have occurred in Australia and internationally over the last decade that have seen a greater focus on the access to and use of digital technologies in schools. In Australian school education, the attention placed on digital technologies in schools has seen the release of policies at the national, state, territory, education office and school levels to guide their use. Prominent among these policies has been the Digital Education Revolution policy, launched in 2007 and concluded in 2013. This research aims to answer the question: What does an investigation reveal about understandings of the ethical dimension of digital technologies policies and their implementation in school education? The objective of this research is to examine the ethical dimension of digital technologies policies and to interpret and understand the responses of the research participants to the issues, silences, discourses and language which characterise this dimension.
In doing so, it is intended that the research can allow the participants to have a voice that may be different from the official discourses located in digital technologies policies. The thesis takes a critical and interpretative approach to policies and examines the role of digital technologies policies as discourse. Interpretative theory is utilised as it provides a conceptual lens from which to interpret different perspectives and the implications of these in the construction of meaning in relation to schools’ digital technologies policies. Critical theory is used in tandem with interpretative theory as it represents a conceptual basis from which to critique and question underlying assumptions and discourses that are associated with the ethical dimension of schools’ digital technologies policies. The research methods used are semi-structured interviews and policy document analysis. Policies from the national, state, territory, education office and school levels were analysed and contribute to understanding the way the ethical dimension of digital technologies policies is represented as a discourse. Students, school staff, teachers, principals and policy writers participated in research interviews, and their views and perspectives were canvassed in relation to the ethical use of digital technologies and the policies that are designed to regulate their use. The thesis presents an argument that the ethical dimension of schools’ digital technologies policies and use is an under-researched area, and that there are gaps in understanding and knowledge in the literature which remain to be addressed. It is envisaged that the thesis can make a meaningful contribution to understanding the ways in which schools’ digital technologies policies are understood in school contexts. It is also envisaged that the findings from the research can inform policy development by analysing the voices and views of those in schools.
The findings of the policy analysis revealed that there is little attention given to the ethical dimension in digital technologies at the national level. A discourse of compliance and control pervades digital technologies policies from the state, education office and school levels, which reduces ethical considerations to technical, legal and regulatory requirements. The discourse is largely instrumentalist and neglects the educative dimension of digital technologies which has the capacity to engender their ethical use. The findings from the interview conversations revealed that students, school staff and teachers perceive digital technologies policies to be difficult to understand, and not relevant to their situation and needs. They also expressed a desire to have greater consultation and participation in the formation and enactment of digital technologies policies, and they believe they are marginalised from these processes in their schools. Arising from the analysis of the policies and interview conversations, an argument is presented that in the light of the prominent role played by digital technologies and their potential for enhancing all aspects of school education, more research is required to provide a more holistic and richer understanding of the policies that are constructed to control and mediate their use.
Abstract:
When transporting wood from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, due to weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck making that trip must be diverted to an alternative route. Without information about such a route, the driver is likely to choose an alternative that is unnecessarily long or, worse, that is itself "closed" because of another unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options available in case of disruptions depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transportation management policy. We present three articles covering different application contexts, together with models and solution methods adapted to each context. In the first article, truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize changes to the initial plan. Although the truck fleet is homogeneous, there is a priority order among drivers: those with the highest priority receive the largest workloads, and minimizing changes to their plans is also a priority. Since the consequences of unforeseen events on the transportation plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation or delay of a single trip and is then generalized to handle more complex events.
In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives both at the forest site and at the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within a few seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context in which only one trip at a time is communicated to the drivers. The dispatcher waits until a driver finishes his current trip before revealing the next one. This context is more flexible and offers more recourse options in case of disruptions. In addition, the weekly problem can be divided into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a time-space network to react to disruptions. Although disruptions can have different effects on the initial transportation plan, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature. Indeed, the impact of these events is captured in the time-space network and in the input parameters rather than in the model itself. The model is re-solved for the current day each time an unforeseen event is revealed. In the last article, the truck fleet is heterogeneous, including trucks with on-board loaders. The route configuration of these trucks differs from that of regular trucks, since they do not have to be synchronized with the loaders. We use a mathematical model in which the columns can be easily and naturally interpreted as truck itineraries.
We solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible itineraries. Itineraries with the potential to improve the current solution are added to the model iteratively. A time-space network is used both to represent the impacts of unforeseen events and to generate these itineraries. The solution obtained is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forestry industry, and numerical results are presented for all three contexts.
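The rerouting idea underlying all three articles can be illustrated on a toy network: when a disruption closes a road, the dispatcher deletes or re-prices the corresponding arc and re-solves a shortest-path problem. This sketch uses plain Dijkstra and invented node names; the actual papers work on time-space networks with column generation and branch-and-price:

```python
import heapq

def shortest_route(graph, source, target):
    """Dijkstra on a network given as {node: [(next_node, cost), ...]}.
    Disruptions (a closed road, a busy loader) are modelled by deleting
    arcs or raising their costs before re-solving. Assumes target reachable."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the route by walking predecessors back to the source.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Hypothetical network: two ways from the forest site to the mill.
network = {"forest": [("A", 2.0), ("B", 5.0)],
           "A": [("mill", 3.0)],
           "B": [("mill", 1.0)]}
route, cost = shortest_route(network, "forest", "mill")
```

If the road through "A" were then reported closed, the dispatcher would drop that arc and re-solve, and the detour via "B" would be suggested instead.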
Abstract:
Context. Within the core accretion scenario of planetary formation, most simulations performed so far assume the accreting envelope to have a solar composition. From the study of meteorite showers on Earth and numerical simulations, we know that planetesimals must undergo thermal ablation and disruption when crossing a protoplanetary envelope. Thus, once the protoplanet has acquired an atmosphere, not all planetesimals reach the core intact, i.e. the primordial envelope (mainly H and He) gets enriched in volatiles and silicates from the planetesimals. This change of envelope composition during formation can have a significant effect on the final atmospheric composition and on the formation timescale of giant planets. Aims. We investigate the physical implications of considering the envelope enrichment of protoplanets due to the disruption of icy planetesimals on their way to the core. Particular focus is placed on the effect on the critical core mass for envelopes in which condensation of water can occur. Methods. Internal structure models are numerically solved with the implementation of updated opacities for all ranges of metallicity, and the software Chemical Equilibrium with Applications is used to compute the equation of state. This package computes the chemical equilibrium for an arbitrary mixture of gases and allows the condensation of some species, including water. This means that the latent heat of phase transitions is consistently incorporated in the total energy budget. Results. The critical core mass is found to decrease significantly when an enriched envelope composition is considered in the internal structure equations. A particularly strong reduction of the critical core mass is obtained for planets whose envelope metallicity is larger than Z ≈ 0.45 when the outer boundary conditions are suitable for condensation of water to occur in the top layers of the atmosphere.
We show that this effect is qualitatively preserved even when the atmosphere is out of chemical equilibrium. Conclusions. Our results indicate that the effect of water condensation in the envelope of protoplanets can severely affect the critical core mass, and should be considered in future studies.
Abstract:
We develop a method based on spectral graph theory to approximate the eigenvalues and eigenfunctions of the Laplace-Beltrami operator of a compact Riemannian manifold. The method is applied to a closed hyperbolic surface of genus two. The results obtained agree with those obtained by other authors using different methods, and they serve as experimental evidence supporting the conjecture that the generic eigenfunctions belonging to the first nonzero eigenvalue of a closed hyperbolic surface of arbitrary genus are Morse functions having the least possible total number of critical points among all Morse functions admitted by such manifolds.
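The spectral-graph idea can be sketched on a toy example where the answer is known in closed form: the combinatorial Laplacian of an n-cycle, a discrete analogue of the Laplace-Beltrami operator on a closed curve, has spectrum λ_k = 2 − 2cos(2πk/n), so the numerical eigenvalues can be checked exactly. This is only a schematic illustration, not the authors' construction for a genus-two surface:

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplacian L = D - A of the n-cycle graph, a toy
    discrete analogue of the Laplace-Beltrami operator on a closed curve."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 16
# eigh returns eigenvalues in ascending order for a symmetric matrix.
evals, evecs = np.linalg.eigh(cycle_laplacian(n))
# Known spectrum: 2 - 2*cos(2*pi*k/n); the first nonzero eigenvalue is
# doubly degenerate, mirroring the multiplicity phenomena studied on surfaces.
expected_lambda1 = 2.0 - 2.0 * np.cos(2.0 * np.pi / n)
```

The eigenvectors for λ₁ are discrete sines/cosines whose critical-point structure is as simple as possible, which is the flavour of the Morse-function conjecture the abstract refers to.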
Abstract:
The topic of the thesis is media discourse about the current state of income inequality in the US, and the political ideologies that influence that discourse. The data consist of four opinion articles, two from CNN and two from Fox News. The purpose of the study was to examine how the media represents income inequality as an issue, and whether the attitudes conveyed are concerned or indifferent. Previous studies have indicated that the level of income is often seen as a personal responsibility, a perspective that can be linked with Republican ideology. In contrast, Democrats typically express more concern about the consequences of inequality. CNN has previously been considered to have a Democratic bias and Fox News a Republican bias, which is one reason why these two news channels were chosen as the sources of the data. The study is a critical discourse analysis, and the methods applied were the sociocognitive approach, which analyzes the social and cognitive factors affecting the discourse, and the appraisal framework, which was applied to scrutinize the expressed attitudes more closely by identifying specific linguistic features. The appraisal framework includes studying features such as affect, judgment and appreciation, which offer a more detailed analysis of the attitudes present in the articles. The sociocognitive approach, additionally, offers a way of analyzing the broader context affecting the articles. The findings were then compared to see whether there are differences between the articles, or between the news sites with alleged bias. The findings showed that CNN, with its alleged Democratic bias, had a more sympathetic attitude towards income inequality, whereas Fox News, with more Republican views, showed clearly less concern towards the issue.
Moreover, the Fox News articles made such dubious claims that the underlying ideology behind them could even be supportive of income inequality, as it allows the rich to pursue all the wealth they can without having to give anything away. The results thus suggest that political ideologies may have a significant effect on media discourse, which, in turn, may have a significant effect on public attitudes towards major issues that could require prompt measures.
Abstract:
Objective: We investigate the influence of caloric and protein deficit on mortality and length of hospital stay of critically ill patients. Methods: A prospective cohort study including 100 consecutive patients in a tertiary intensive care unit (ICU) receiving enteral or parenteral nutrition. The daily caloric and protein deficits were recorded each day for a maximum of 30 days. Energy deficits were divided into critical caloric deficit (≥ 480 kcal/day) and non-critical caloric deficit (< 480 kcal/day), and into critical protein deficit (≥ 20 g/day) and non-critical protein deficit (< 20 g/day). The findings were correlated with hospital stay and mortality. Results: The mortality rate was 33%. Overall, the patients received 65.4% and 67.7% of their caloric and protein needs, respectively. Critical caloric deficit was found in 72% of cases and critical protein deficit in 70%. There was a significant correlation between length of stay and accumulated caloric deficit (R = 0.37; p < 0.001) and protein deficit (R = 0.28; p < 0.001). The survival analysis showed that mortality was greater in patients with both critical caloric (p < 0.001) and critical protein deficits (p < 0.01). The Cox regression analysis showed that critical protein deficit was associated with higher mortality (HR 0.25, 95% CI 0.07-0.93, p = 0.03). Conclusions: The incidence of caloric and protein deficit in the ICU is high. Both caloric and protein deficits increase the length of hospital stay, and a protein deficit greater than 20 g/day is an independent factor for mortality in the critical care unit.
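The deficit classification used in the study reduces to simple arithmetic over the daily records. A minimal sketch (the function name and the use of the mean daily deficit are our illustrative assumptions; the study accumulated deficits for up to 30 days):

```python
def classify_deficits(daily_kcal_deficit, daily_protein_deficit,
                      kcal_cut=480.0, protein_cut=20.0):
    """Average the daily deficits over the observation window and flag them
    against the study's cut-offs (480 kcal/day and 20 g protein/day)."""
    days = len(daily_kcal_deficit)
    mean_kcal = sum(daily_kcal_deficit) / days
    mean_protein = sum(daily_protein_deficit) / days
    return {
        "critical_caloric": mean_kcal >= kcal_cut,
        "critical_protein": mean_protein >= protein_cut,
    }
```

For a hypothetical patient averaging 500 kcal/day and 22 g protein/day of deficit, both flags are raised; per the study's Cox analysis, the protein flag is the one independently associated with mortality.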
Abstract:
Purpose: To evaluate the impact of three different extraction methods on the yield, physicochemical properties and bioactive ingredients of Raphanus sativus seed oil. Methods: Raphanus sativus seed oil was prepared by traditional solvent extraction (SE), super-critical carbon dioxide extraction (SCE) and sub-critical propane extraction (SPE). The yield, physicochemical properties, fatty acid composition and oxidative stability of the oil extracts were compared. The contents of tocopherol and sulforaphene in the oils were also determined. Results: The oil yields obtained by SPE, SE and SCE were 33.69, 27.17 and 24.10%, respectively. There were no significant differences in the physicochemical properties and fatty acid compositions of the oils extracted by the three methods. However, SCE oil had the best oxidative stability and the highest contents of vitamin E and sulforaphene, followed by the oils from SPE and SE. Conclusion: SCE is highly selective for tocopherol and sulforaphene, which could explain its high oil oxidative stability. These results suggest that, of the three extraction methods, SCE is best suited for preparing medicinal radish seed oil.
Abstract:
Molecular simulation provides a powerful tool for connecting molecular-level processes to physical observables. However, the facility to make those connections relies upon the application and development of theoretical methods that permit appropriate descriptions of the systems or processes to be studied. In this thesis, we utilize molecular simulation to study and predict two phenomena with very different theoretical challenges, beginning with (1) lithium-ion transport behavior in polymers and following with (2) equilibrium isotope effects with relevance to position-specific and clumped isotope studies. In the case of ion transport in polymers, there is motivation to use molecular simulation to provide guidance in polymer electrolyte design, but the length and time scales relevant for ion diffusion in polymers preclude the use of direct molecular dynamics simulation to compute ion diffusivities in more than a handful of candidate systems. In the case of equilibrium isotope effects, the thermodynamic driving forces for isotopic fractionation are often fundamentally quantum mechanical in nature, and the high precision of experimental instruments demands correspondingly accurate theoretical approaches. Herein, we describe coarse-graining and path-integral strategies, respectively, to address outstanding questions in these two subject areas.
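For the ion-transport part, the observable that direct molecular dynamics struggles to converge is the diffusivity, conventionally estimated from the slope of the mean squared displacement via the Einstein relation, MSD(t) ≈ 2dDt. A minimal single-trajectory sketch (the function name and the half-trajectory lag cutoff are illustrative choices, not the thesis's coarse-graining method):

```python
def diffusion_coefficient(positions, dt, dim=3):
    """Estimate D from the Einstein relation MSD(t) ~ 2*d*D*t using the
    slope of the mean squared displacement of a single trajectory.
    `positions` is a list of d-dimensional coordinate tuples sampled every `dt`."""
    n = len(positions)
    lag = n // 2  # use at most half the trajectory length as the largest lag
    msd, ts = [], []
    for tau in range(1, lag + 1):
        disp = [sum((positions[i + tau][k] - positions[i][k]) ** 2
                    for k in range(dim))
                for i in range(n - tau)]
        msd.append(sum(disp) / len(disp))
        ts.append(tau * dt)
    # Least-squares slope through the origin: slope = sum(t*msd) / sum(t^2).
    slope = sum(t * m for t, m in zip(ts, msd)) / sum(t * t for t in ts)
    return slope / (2 * dim)
```

A quick sanity check: rescaling every coordinate by a factor c multiplies the MSD, and hence the estimated D, by c², as expected from the definition.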
Abstract:
Background: Despite the known benefits of regular physical activity for health and well-being, many studies suggest that levels of physical activity in young people are low and decline dramatically during adolescence. The purpose of the current research was to gather data on adolescent youth in order to inform the development of a targeted physical activity intervention. Methods: Cross-sectional data on physical activity levels (using self-report and accelerometry), psychological correlates of physical activity, anthropometric characteristics, and the fundamental movement skill proficiency of 256 youth (53% male, 12.40 ± 0.51 years) were collected. A subsample (n = 59) participated in focus group interviews to explore their perceptions of health and identify barriers and motivators to participation in physical activity. Results: Findings indicate that the majority of youth (67%) were not accumulating the minimum 60 minutes of physical activity recommended daily for health, and that 99.5% did not achieve the fundamental movement skill proficiency expected for their age. Body mass index data showed that 25% of youth were classified as overweight or obese. Self-efficacy and physical activity attitude scores were significantly different (p < 0.05) between low, moderate and high active participants. Active and inactive youth reported differences in their perceived understanding of health and their barriers to physical activity participation, with active youth relating nutrition, exercise, energy and sports to the definition of ‘being healthy’, and inactive youth attributing primarily nutritional concepts to ‘being healthy’. Conclusions: The data show a need to target low levels of physical activity in youth by addressing poor health-related activity knowledge and low fundamental movement skill proficiency. The Y-PATH intervention was developed in accordance with the present study findings; details of the intervention format are presented.
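The guideline-prevalence figure reported above (67% not meeting 60 minutes/day) comes down to a simple aggregation over participants. A minimal sketch, assuming per-participant lists of daily activity minutes (the data layout and function name are ours, not the study's):

```python
def guideline_prevalence(daily_minutes, threshold=60.0):
    """Percentage of participants whose average daily physical activity
    meets the 60-minute guideline. `daily_minutes` maps a participant id
    to that participant's list of daily minutes of activity."""
    meeting = sum(1 for days in daily_minutes.values()
                  if sum(days) / len(days) >= threshold)
    return 100.0 * meeting / len(daily_minutes)
```

With hypothetical data for four participants averaging 75, 35, 60 and 15 minutes/day, half meet the guideline, so the function returns 50.0.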
Abstract:
This thesis builds a framework for evaluating downside risk from multivariate data via a special class of risk measures (RM). The peculiarity of the analysis lies in avoiding strong distributional assumptions on the data and in its orientation towards the most critical data in risk management: data with asymmetries and heavy tails. At the same time, under typical assumptions, such as ellipticity of the data probability distribution, conformity with classical methods is shown. The constructed class of RM is a multivariate generalization of the coherent distortion RM, which possesses valuable properties for a risk manager. The design of the framework is twofold. The first part contains new computational geometry methods for high-dimensional data. The developed algorithms demonstrate the computability of the geometrical concepts used for constructing the RM. These concepts bring visual clarity and simplify interpretation of the RM. The second part develops models for applying the framework to actual problems. The spectrum of applications varies from robust portfolio selection to broader spheres, such as stochastic conic optimization with risk constraints or supervised machine learning.
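The coherent distortion risk measures generalized in the thesis have a simple empirical (univariate) form: sort the losses from worst to best and weight the order statistics by increments of a concave distortion function g. A minimal sketch (function names are ours; the thesis's multivariate, geometry-based generalization is considerably richer):

```python
def distortion_rm(losses, g):
    """Empirical coherent distortion risk measure:
    rho = sum_k [g(k/n) - g((k-1)/n)] * L_(k), with losses sorted worst-first.
    Coherence requires the distortion g: [0,1] -> [0,1] to be concave,
    increasing, with g(0) = 0 and g(1) = 1."""
    xs = sorted(losses, reverse=True)  # L_(1) is the worst loss
    n = len(xs)
    return sum((g(k / n) - g((k - 1) / n)) * xs[k - 1] for k in range(1, n + 1))

def es_distortion(alpha):
    """Distortion generating Expected Shortfall at confidence level alpha."""
    return lambda u: min(u / (1.0 - alpha), 1.0)
```

Two checks anchor the construction: the identity distortion g(u) = u recovers the plain mean of the losses, and g(u) = min(u/(1-α), 1) recovers Expected Shortfall, averaging the worst (1-α) fraction of the sample.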