974 results for Explanatory Sequential Design
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the hypothesis that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach of tracking whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, allowing a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
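As a hedged illustration of the coarse-grain pipeline parallelism this abstract refers to, the sketch below splits a hypothetical outer loop into three stages connected by queues, one thread per stage; it is a generic Python sketch, not the profiling tool or the code transformations described above.

```python
# Minimal illustration of coarse-grain pipeline parallelism in an outer loop:
# three stages (read -> transform -> write) connected by bounded queues.
# A generic sketch, not the tool described in the abstract.
import threading
import queue

SENTINEL = object()  # marks the end of the stream

def read_stage(out_q, n_items):
    for i in range(n_items):
        out_q.put(i)              # stand-in for reading one record
    out_q.put(SENTINEL)

def transform_stage(in_q, out_q):
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(item * item)    # stand-in for per-record work

def write_stage(in_q, results):
    while True:
        item = in_q.get()
        if item is SENTINEL:
            break
        results.append(item)      # stand-in for writing one record

if __name__ == "__main__":
    q1, q2, results = queue.Queue(maxsize=64), queue.Queue(maxsize=64), []
    stages = [
        threading.Thread(target=read_stage, args=(q1, 1000)),
        threading.Thread(target=transform_stage, args=(q1, q2)),
        threading.Thread(target=write_stage, args=(q2, results)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    print(len(results))  # 1000
```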
Abstract:
Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or when deployed in systems of limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem, we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem, (2) forward and backward selections are combined to approach the final globally optimal solution, and (3) a criterion is introduced for identification of support vectors, leading to a much reduced support vector set. In addition to introducing this method, the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
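The abstract does not spell out its forward-backward primal algorithm. As a generic, hedged sketch of what "solving an SVM in the primal" can look like, the following trains a linear SVM by stochastic sub-gradient descent on the regularized hinge loss (Pegasos-style), with made-up toy data; it is not the selection method proposed above.

```python
# Generic primal linear SVM trained by stochastic sub-gradient descent
# (Pegasos-style). Illustrates "solving in the primal" only; it is not the
# forward/backward selection algorithm proposed in the abstract.
import numpy as np

def primal_svm(X, y, lam=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)            # shrink from the L2 penalty
            if margin < 1:                  # hinge loss is active
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# toy usage with a roughly linearly separable problem
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b = primal_svm(X, y)
print(np.mean(np.sign(X @ w + b) == y))    # training accuracy
```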
Abstract:
The C-element logic gate is a key component for constructing asynchronous control in silicon integrated circuits. The purpose of the work reported here is to introduce a new speed-independent C-element design, which is synthesised with the Petrify asynchronous design tool to ensure it is composed of sequential digital latches rather than complex gates. The benefits are that it guarantees correct speed-independent operation, together with easy integration into modern design flows and processes. It is compared with an equivalent speed-independent complex-gate C-element design generated by Petrify in a 130 nm semiconductor process.
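A behavioural (functional) model of the two-input Muller C-element is sketched below, purely to illustrate the gate's defining hold behaviour; it is not the latch-based, speed-independent circuit reported above.

```python
# Behavioural model of a two-input Muller C-element:
# the output changes only when both inputs agree, otherwise it holds state.
# A functional sketch, not the latch-based speed-independent circuit.
class CElement:
    def __init__(self, initial=0):
        self.q = initial

    def step(self, a, b):
        if a == b:          # both 0 -> output 0, both 1 -> output 1
            self.q = a
        return self.q       # inputs disagree -> hold previous output

c = CElement()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(a, b, c.step(a, b))   # outputs 0, 0, 1, 1, 0
```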
Abstract:
Waste management and sustainability are two core underlying philosophies that the construction sector must acknowledge and implement; however, this can prove difficult and time consuming. To this end, the aim of this paper is to examine waste management strategies and the possible benefits, advantages and disadvantages of their introduction and use, while also examining any inter-relationship with sustainability, particularly at the design stage. The purpose of this paper is to gather, examine and review published works and investigate factors which influence economic decisions at the design phase of a construction project. In addressing this aim, a three-tiered sequential research approach is adopted: in-depth literature review, interviews/focus groups and qualitative analysis. The resulting data are analyzed and discussed and potential conclusions identified, paying particular attention to implications for practice within architectural firms. This research is of importance, particularly to the architectural sector, as it can add to the industry's understanding of the design process, while also considering the application and integration of waste management into the design procedure. Results indicate that the researched topic has many advantages but also inherent disadvantages. It was found that the potential advantages outweighed the disadvantages, but that uptake within industry was still slow and that better promotion of these strategies and of their benefits to sustainability, the environment, society and the industry was required.
Abstract:
Simple meso-scale capacitor structures have been made by incorporating thin (300 nm) single-crystal lamellae of KTiOPO4 (KTP) between two coplanar Pt electrodes. The influence that either patterned protrusions in the electrodes or focused ion beam milled holes in the KTP have on the nucleation of reverse domains during switching was mapped using piezoresponse force microscopy imaging. The objective was to assess whether or not variations in the magnitude of field enhancement at localised "hot spots," caused by such patterning, could be used to control both the exact locations and the bias voltages at which nucleation events occurred. It was found that both the patterning of electrodes and the milling of various hole geometries into the KTP could allow controlled sequential injection of domain wall pairs at different bias voltages; this capability could have implications for the design and operation of domain-wall electronic devices, such as memristors, in the future.
Abstract:
PURPOSE: To investigate whether myopia is becoming more common across Europe and explore whether increasing education levels, an important environmental risk factor for myopia, might explain any temporal trend.
DESIGN: Meta-analysis of population-based, cross-sectional studies from the European Eye Epidemiology (E(3)) Consortium.
PARTICIPANTS: The E(3) Consortium is a collaborative network of epidemiological studies of common eye diseases in adults across Europe. Refractive data were available for 61 946 participants from 15 population-based studies performed between 1990 and 2013; participants had a range of median ages from 44 to 78 years.
METHODS: Noncycloplegic refraction, year of birth, and highest educational level achieved were obtained for all participants. Myopia was defined as a mean spherical equivalent ≤-0.75 diopters. A random-effects meta-analysis of age-specific myopia prevalence was performed, with sequential analyses stratified by year of birth and highest level of educational attainment.
MAIN OUTCOME MEASURES: Variation in age-specific myopia prevalence for differing years of birth and educational level.
RESULTS: There was a significant cohort effect for increasing myopia prevalence across more recent birth decades; age-standardized myopia prevalence increased from 17.8% (95% confidence interval [CI], 17.6-18.1) to 23.5% (95% CI, 23.2-23.7) in those born between 1910 and 1939 compared with 1940 and 1979 (P = 0.03). Education was significantly associated with myopia; for those completing primary, secondary, and higher education, the age-standardized prevalences were 25.4% (CI, 25.0-25.8), 29.1% (CI, 28.8-29.5), and 36.6% (CI, 36.1-37.2), respectively. Although more recent birth cohorts were more educated, this did not fully explain the cohort effect. Compared with the reference risk of participants born in the 1920s with only primary education, higher education or being born in the 1960s doubled the myopia prevalence (prevalence ratios of 2.43 [CI, 1.26-4.17] and 2.62 [CI, 1.31-5.00], respectively), whereas individuals born in the 1960s who completed higher education had approximately 4 times the reference risk (prevalence ratio of 3.76; CI, 2.21-6.57).
CONCLUSIONS: Myopia is becoming more common in Europe; although education levels have increased and are associated with myopia, higher education seems to be an additive rather than explanatory factor. Increasing levels of myopia carry significant clinical and economic implications, with more people at risk of the sight-threatening complications associated with high myopia.
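A hedged sketch of the kind of random-effects pooling of study-level prevalence described in the METHODS above, here using logit-transformed prevalences and the DerSimonian-Laird between-study variance estimator; the study counts below are hypothetical, not E(3) Consortium data.

```python
# Sketch of a random-effects meta-analysis of prevalence on the logit scale
# (DerSimonian-Laird). Inputs are hypothetical, not the E(3) data.
import numpy as np

def pooled_prevalence(cases, totals):
    cases, totals = np.asarray(cases, float), np.asarray(totals, float)
    p = cases / totals
    y = np.log(p / (1 - p))                    # logit prevalence per study
    v = 1 / cases + 1 / (totals - cases)       # approximate variance of logit
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)         # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)    # DL between-study variance
    w_star = 1 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    expit = lambda x: 1 / (1 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# hypothetical example: five studies (myopia cases, participants)
print(pooled_prevalence([200, 310, 150, 420, 95], [1000, 1200, 800, 1500, 500]))
```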
Abstract:
This study sought to explore the current state of Grades 4 to 8 science education in Ontario from the perspective of Junior/Intermediate (J/I) teachers. The study’s methodology was a two-phase sequential explanatory mixed methods design, denoted as QUAN(qual) → qual. Data were collected from an online survey and follow-up interviews. J/I teachers (N = 219) from 48 school boards in Ontario completed a survey that collected both quantitative and qualitative data. Interviewees were selected from the survey participant population (n = 6) to represent a range of teaching strategies, attitudes toward teaching science, and years of experience. Survey and interview questions inquired about teacher attitudes toward teaching science, academic and professional experiences, teaching strategies, support resources, and instructional time allotments. Quantitative data analyses involved descriptive statistics and chi-square tests; qualitative data were coded inductively and deductively. Teachers’ academic background in science, including the undergraduate degree held, was found to significantly influence their reported level of capability to teach science. Participants identified a lack of time allocated for science instruction and inadequate equipment and facilities as major limitations on science instruction. Science in schools was reported to hold a “second-tier” status relative to language and mathematics. Implications of this study include improving the undergraduate and preservice experiences of elementary teachers by supporting their science content knowledge and pedagogical content knowledge.
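A minimal, hypothetical sketch of the kind of chi-square test of independence mentioned above, relating academic background to self-reported capability to teach science; the contingency table is invented.

```python
# Hypothetical sketch of a chi-square test of independence relating teachers'
# academic background in science to self-reported teaching capability.
from scipy.stats import chi2_contingency

# rows: science degree / no science degree; columns: low / medium / high capability
table = [[5, 30, 70],
         [25, 55, 34]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```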
Abstract:
Because of its explanatory and operational power, rational choice theory is used in several social science disciplines. While most economists conceive of rational choice theory as a process of utility maximization, the scope of this model is the subject of many criticisms. For many, certain preferences cannot be accommodated within this framework. In this thesis, three alternative conceptions of rational choice theory are presented: rationality as virtual presence, rationality as an intentional mechanism, and rationality as a science of choice. A critical analysis of each is carried out. In institutional design, these three conceptions of rationality offer distinct perspectives. The first emphasizes non-egocentric motivations. The second focuses on the adaptive aspect of the process; since rationality plays a privileged but not exclusive role, causal mechanisms must also be considered. The third implies formulating different institutional rules depending on which model of the rational agent is put forward. The establishment of institutional rules thus varies according to which of these conceptions of rational choice theory is adopted.
Abstract:
This study aims to identify the factors affecting traditional food consumption from an ecological perspective, in order to reduce the high prevalence of chronic disease and slow the sharp decline in traditional food consumption among the Cree of northern Quebec. A sequential explanatory mixed-methods design was used, combining four focus groups (n = 23) with a logistic regression (n = 374) based on secondary data from three cross-sectional studies. According to the logistic regression results, age, hunting, walking, education level, and community of residence were associated with consuming traditional food three times per week (p < 0.05). The focus groups subsequently enriched and, in places, contradicted these results. For example, participants disagreed with the finding of no association between traditional food and employment: they believed that people without jobs have more opportunities to go hunting but little money to cover the expenses, and conversely for those with jobs. This two-way effect may have cancelled out the association in the logistic regression. Following the focus groups, several factors were identified and placed in an ecological model, suggesting that traditional food consumption is mainly influenced by social, community, and environmental factors and is not limited to individual factors. In conclusion, four suggested priorities for action are proposed to promote traditional food. Traditional food should be part of public health strategies to reduce rates of chronic disease and improve the well-being of Indigenous populations.
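A hedged sketch of a logistic regression of the kind described above (a binary traditional-food-consumption outcome regressed on age, hunting, walking, and education); variable names and data are simulated, not the secondary data analysed in the study.

```python
# Sketch of a logistic regression like the one described above: the outcome is
# a binary traditional-food-consumption indicator; predictors and data are
# hypothetical and simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 374
X = np.column_stack([
    rng.normal(45, 15, n),          # age
    rng.integers(0, 2, n),          # hunts (yes/no)
    rng.integers(0, 2, n),          # walks regularly (yes/no)
    rng.integers(0, 3, n),          # education level (0-2)
])
logit = -2.0 + 0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2] - 0.2 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params))         # odds ratios (intercept first)
```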
Abstract:
Decision making is a fundamental computational process in many aspects of animal behavior. The model most often encountered in studies of decision making is the diffusion model, which has long accounted for a wide variety of behavioral and neurophysiological data in this field. However, another model, the urgency model, explains the same data equally well, and does so more parsimoniously and with firmer theoretical grounding. In this work we first review the origins and development of the diffusion model and show how it became established as the framework for interpreting most experimental data on decision making. In doing so, we note its strengths so that we can then compare it objectively and rigorously with alternative models. We re-examine a number of implicit and explicit assumptions made by this model and highlight some of its shortcomings. This analysis provides the framework for our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology makes it possible to dissociate the two models, and whose results illustrate the empirical and theoretical limits of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, with an emphasis on new research perspectives.
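A toy simulation contrasting the two model families discussed above: a drift-diffusion accumulator with a fixed bound versus a low-pass filtered evidence signal scaled by a growing urgency. Parameters are illustrative only and are not fitted to the experiment reported here.

```python
# Toy contrast of a drift-diffusion model (evidence accumulates to a fixed
# bound) and an urgency-gating model (filtered evidence scaled by a growing
# urgency signal). Parameters are illustrative, not fitted.
import numpy as np

def ddm_trial(drift=0.2, bound=1.0, dt=0.01, noise=1.0, t_max=10.0, rng=None):
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, np.sign(x)

def urgency_trial(drift=0.2, bound=1.0, dt=0.01, noise=1.0, tau=0.2,
                  slope=1.0, t_max=10.0, rng=None):
    rng = rng or np.random.default_rng()
    e, t = 0.0, 0.0
    while t < t_max:
        sample = drift + noise * rng.standard_normal()   # momentary evidence
        e += (dt / tau) * (sample - e)                   # low-pass filter
        t += dt
        if abs(e) * slope * t >= bound:                  # urgency grows with time
            break
    return t, np.sign(e)

rng = np.random.default_rng(0)
print("DDM mean RT:    ", np.mean([ddm_trial(rng=rng)[0] for _ in range(500)]))
print("urgency mean RT:", np.mean([urgency_trial(rng=rng)[0] for _ in range(500)]))
```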
Abstract:
A potential fungal strain producing extracellular β-glucosidase enzyme was isolated from sea water and identified as Aspergillus sydowii BTMFS 55 by a molecular approach based on 28S rDNA sequence homology, which showed 93% identity with already reported sequences of Aspergillus sydowii in GenBank. A sequential optimization strategy was used to enhance the production of β-glucosidase under solid state fermentation (SSF) with wheat bran (WB) as the growth medium. A two-level Plackett-Burman (PB) design was implemented to screen medium components that influence β-glucosidase production, and among the 11 variables, moisture content, inoculum, and peptone were identified as the most significant factors for β-glucosidase production. The enzyme was purified by (NH4)2SO4 precipitation followed by ion exchange chromatography on DEAE Sepharose. The enzyme was a monomeric protein with a molecular weight of ~95 kDa as determined by SDS-PAGE. It was optimally active at pH 5.0 and 50°C. It showed high affinity towards pNPG, with a Km and Vmax of 0.67 mM and 83.3 U/mL, respectively. The enzyme was tolerant to glucose inhibition, with a Ki of 17 mM. Low concentrations of alcohols (10%), especially ethanol, could activate the enzyme. A considerable level of ethanol could be produced from wheat bran and rice straw after 48 and 24 h, respectively, with the help of Saccharomyces cerevisiae in the presence of cellulase and the purified β-glucosidase of Aspergillus sydowii BTMFS 55.
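Using the kinetic constants reported above (Km = 0.67 mM, Vmax = 83.3 U/mL, Ki = 17 mM for glucose), the reaction rate at a given pNPG and glucose concentration can be worked out with the Michaelis-Menten equation; treating the glucose inhibition as competitive is an assumption made only for this illustration.

```python
# Michaelis-Menten rate using the kinetic constants reported above
# (Km = 0.67 mM, Vmax = 83.3 U/mL, Ki = 17 mM for glucose). Treating the
# glucose inhibition as competitive is an assumption for illustration only.
def rate(S, I=0.0, Vmax=83.3, Km=0.67, Ki=17.0):
    Km_app = Km * (1 + I / Ki)       # apparent Km under competitive inhibition
    return Vmax * S / (Km_app + S)

print(rate(1.0))            # ~49.9 U/mL at 1 mM pNPG, no glucose
print(rate(1.0, I=10.0))    # reduced rate with 10 mM glucose present
```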
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
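For the monotone case, the active strategy can be sketched as repeatedly sampling inside the sub-interval with the largest remaining "uncertainty area" (width times observed rise); this is a hedged illustration of the idea, not necessarily the paper's exact sequential optimal recovery rule.

```python
# Sketch of active sampling for a monotonically increasing function on [0, 1]:
# repeatedly sample inside the sub-interval whose uncertainty area
# (width times observed rise) is largest. A hedged illustration only.
import numpy as np

def active_sample(f, n_samples, a=0.0, b=1.0):
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_samples - 2):
        widths, rises = np.diff(xs), np.diff(ys)
        i = int(np.argmax(widths * rises))        # most uncertain interval
        x_new = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return np.array(xs), np.array(ys)

# usage: a monotone test function with a sharp step around x = 0.7
f = lambda x: 1 / (1 + np.exp(-80 * (x - 0.7)))
xs, ys = active_sample(f, 20)
print(np.round(xs, 3))   # samples cluster near the step, where f changes fastest
```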
Abstract:
Optimum experimental designs depend on the design criterion, the model and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models of interest, typically Scheffé polynomials (Scheffé 1958), is rather different from that of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. Then I will discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex. References: Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press. Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225–239. Dordrecht: Kluwer. Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344–360.
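For three mixture components, a second-order Scheffé polynomial has six terms and can be fitted exactly to a {3,2} simplex-lattice design; the sketch below uses invented response values purely to show the model form and design.

```python
# Second-order Scheffe mixture polynomial for three components, fitted to a
# {3,2} simplex-lattice design. Response values are invented for illustration.
import numpy as np

# design points: pure blends and 50:50 binary blends (proportions sum to 1)
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])
y = np.array([10.2, 8.1, 12.5, 11.4, 12.0, 9.3])   # hypothetical responses

def scheffe_terms(X):
    x1, x2, x3 = X.T
    # linear blending terms plus binary interaction terms, no intercept
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

beta, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)
print(np.round(beta, 2))                                   # six coefficients
print(scheffe_terms(np.array([[1/3, 1/3, 1/3]])) @ beta)   # predicted centroid
```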
Abstract:
The conventional method for assessing acute oral toxicity (OECD Test Guideline 401) was designed to identify the median lethal dose (LD50), using the death of animals as an endpoint. Introduced as an alternative method (OECD Test Guideline 420), the Fixed Dose Procedure (FDP) relies on the observation of clear signs of toxicity, uses fewer animals and causes less suffering. More recently, the Acute Toxic Class method and the Up-and-Down Procedure have also been adopted as OECD test guidelines. Both of these methods also use fewer animals than the conventional method, although they still use death as an endpoint. Each of the three new methods incorporates a sequential dosing procedure, which results in increased efficiency. In 1999, with a view to replacing OECD Test Guideline 401, the OECD requested that the three new test guidelines be updated. This was to bring them in line with the regulatory needs of all OECD Member Countries, provide further reductions in the number of animals used, and introduce refinements to reduce the pain and distress experienced by the animals. This paper describes a statistical modelling approach for the evaluation of acute oral toxicity tests, by using the revised FDP for illustration. Opportunities for further design improvements are discussed.
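Although the revised FDP deliberately avoids lethality as an endpoint, the statistical-modelling idea can be illustrated with a generic dose-response fit: a logistic model of response probability against log dose, from which the dose giving a 50% response is read off. The doses and counts below are invented and are not FDP data.

```python
# Generic dose-response sketch: fit a logistic curve of response probability
# against log10(dose) and read off the dose giving a 50% response rate.
# Doses and counts are invented; this is not an analysis of real FDP data.
import numpy as np
import statsmodels.api as sm

doses = np.array([5.0, 50.0, 300.0, 2000.0])        # mg/kg, fixed dose levels
n = np.array([5, 5, 5, 5])                          # animals per dose
responders = np.array([0, 1, 3, 5])                 # animals showing toxicity

X = sm.add_constant(np.log10(doses))
y = np.column_stack([responders, n - responders])   # successes, failures
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
b0, b1 = fit.params
d50 = 10 ** (-b0 / b1)                              # dose at 50% response
print(f"estimated 50% response dose: {d50:.0f} mg/kg")
```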
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between the test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of the correlation from data collected as part of the trial. An adaptive approach that makes use of these formulas is proposed and evaluated, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
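For Z statistics built from sample means of bivariate normal responses, the correlation between the efficacy and safety statistics equals the correlation between the two responses, so it can be estimated from accumulating trial data; the sketch below simulates this and is not the paper's exact adaptive procedure.

```python
# Sketch of estimating, from accumulating trial data, the correlation between
# an efficacy response and a safety response. For Z statistics built from
# sample means of bivariate normal data, this per-patient correlation is also
# the correlation between the two test statistics. Simulated data only.
import numpy as np

rng = np.random.default_rng(42)
true_rho = 0.4
cov = [[1.0, true_rho], [true_rho, 1.0]]

data = rng.multivariate_normal(mean=[0.3, 0.1], cov=cov, size=100)
for n in (50, 100):                     # interim analyses after 50 and 100 patients
    eff, saf = data[:n, 0], data[:n, 1]
    rho_hat = np.corrcoef(eff, saf)[0, 1]
    print(f"n = {n}: estimated correlation = {rho_hat:.2f}")
```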