943 results for Fourth order method
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target users are individuals with moderate to severe visual aberrations whose vision is not improved by conventional means of compensation such as glasses or contact lenses. The approach is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to the user, counteract his or her visual aberration. The method described in this dissertation advances such compensation techniques by integrating spatial information in the image to eliminate some of the shortcomings inherent in display devices such as monitors or LCD panels. Physiological considerations are also discussed and integrated into the method. To provide a realistic sense of the performance of the methods, they were tested by mathematical simulation in software, with a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberration. The data collected from these experiments were evaluated using statistical analysis. The results revealed that the pre-compensation method produced a statistically significant improvement in vision on all of the systems, although the improvement was not as large as expected in the human-subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing; this would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
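One way to realize such pre-compensation is to apply a regularized inverse of the eye's optical blur to the image before display. Below is a minimal sketch of this idea using Wiener inverse filtering; it assumes the eye's point-spread function has already been derived from the wavefront measurement, and all names are illustrative rather than taken from the dissertation.

```python
# Minimal sketch of image pre-compensation by Wiener-regularized inverse
# filtering. `psf` stands in for the eye's point-spread function derived
# from the wavefront-analyzer measurement (same shape as the image, centered).
import numpy as np

def precompensate(image: np.ndarray, psf: np.ndarray, snr: float = 100.0) -> np.ndarray:
    """Return a pre-compensated image that, after being blurred by `psf`,
    approximates the intended image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the eye
    G = np.fft.fft2(image)
    inverse = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # regularized 1/H
    out = np.real(np.fft.ifft2(inverse * G))
    # A display cannot emit negative light, so clip and rescale; this loss of
    # dynamic range is one of the device shortcomings discussed above.
    out = np.clip(out, 0.0, None)
    return out / out.max()
```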
Abstract:
Absolute abundances (concentrations) of dinoflagellate cysts are often determined by adding Lycopodium clavatum marker grains as a spike to a sample before palynological processing. An inter-laboratory calibration exercise was set up to test the comparability of results obtained in different laboratories, each using its own preparation method. Each of the 23 laboratories received the same homogenized splits of four Quaternary sediment samples, which originated from different localities and consisted of a variety of lithologies. Dinoflagellate cysts were extracted and counted, and relative and absolute abundances were calculated. The relative abundances proved fairly reproducible, notwithstanding a need for taxonomic calibration. By contrast, excessive loss of Lycopodium spores during sample preparation made the absolute abundances non-reproducible. Oxidation, KOH, warm acids, acetolysis, mesh sizes larger than 15 µm, and long ultrasonication (> 1 min) must be avoided to obtain reproducible absolute abundances. The results of this work therefore indicate that the dinoflagellate cyst worker should choose between using the proposed standard method, which circumvents the critical steps; adding the Lycopodium tablets at the end of the preparation; or using an alternative method.
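For reference, the marker-grain arithmetic underlying these absolute abundances is a simple proportion, which is why any loss of Lycopodium spores during preparation directly biases the result. A minimal sketch with illustrative numbers (not the exercise's data):

```python
# Spike (marker-grain) calculation of absolute cyst abundance; variable
# names and the example values are illustrative.
def cyst_concentration(cysts_counted: int,
                       marker_counted: int,
                       marker_added: float,
                       dry_weight_g: float) -> float:
    """Dinoflagellate-cyst concentration in cysts per gram of dry sediment.

    If marker spores are lost during processing, marker_counted drops and
    the estimate is inflated, hence the processing steps to avoid above.
    """
    if marker_counted == 0:
        raise ValueError("at least one marker grain must be counted")
    return cysts_counted * marker_added / (marker_counted * dry_weight_g)

# Example: 250 cysts counted against 120 of 20848 added spores in 5 g.
print(cyst_concentration(250, 120, 20848, 5.0))  # ~8687 cysts per gram
```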
Abstract:
In this dissertation we propose a Physics Teaching Unit that teaches physics content through environmental discussions of the greenhouse effect and global warming. The teaching unit is based on a problem-based methodological intervention built on the application of the Arch Method of Charles Maguerez. The methodological foundations of the thesis lie in action research, and the text is structured in five chapters. The first chapter deals with Environmental Physics (FMA) as a subject in physics degree courses in Brazil, raising the question of how this discipline has been taught. We begin by explaining the reasons behind the inclusion of Environmental Physics in a physics degree course; we then searched the websites of higher-education institutions to determine whether the discipline exists in their curricula, and analyzed the syllabi to see which bibliographies are adopted, which physics content is covered, and how it is taught. The courses surveyed were those of federal universities and Federal Institutes. Given the inseparability of studies in physics teaching from studies on competencies, skills, and meaningful learning, the second chapter discusses the challenge of converting information into knowledge. It opens with initial teacher training (although this is not our focus, the discipline under study is offered at the undergraduate level and therefore to future teachers), then turns to the culture of knowledge, where we emphasize a teaching approach in which the content taught carries meaning and makes sense to the student, and closes with considerations on skills and competencies, in order to identify which of them were developed during and after the implementation of the teaching unit. The third chapter is the result of a literature review and a study of the radiative Earth-Sun interaction. The topics researched range from energy generation in the Sun to sunspots, coronal mass ejections, the solar wind, black-body radiation, Wien's displacement law, the Stefan-Boltzmann law, the greenhouse effect, and global warming; the chapter serves as support material for the teacher of the aforementioned discipline. The fourth chapter presents the Arch Method of Charles Maguerez: we explain the structure of each of the five steps of the Arch and how to use them in teaching, and we also show the version of this method adapted by Bordenave. The fifth and final chapter describes how the Arch Method was used in Environmental Physics classes with students majoring in physics at IFRN Campus Santa Cruz, including a transcript of the classes that shows how a problem-based methodology was applied to the content proposed for the Physics Teaching Unit, starting from the environmental discussion of the greenhouse effect and global warming.
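As a pointer to the physics content listed for the third chapter, the relevant black-body and radiative-balance relations are the standard textbook ones (summarized here as a sketch, not quoted from the dissertation):

```latex
% Textbook black-body relations behind the third chapter's content list:
\lambda_{\max} T = b, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}
   % Wien's displacement law
P = \sigma A T^{4}, \qquad \sigma \approx 5.670 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
   % Stefan-Boltzmann law
\frac{S_{0}}{4}\,(1 - \alpha) = \sigma T_{e}^{4}
   % zero-dimensional radiative balance; with S_0 = 1361 W/m^2 and albedo
   % alpha = 0.3 this gives T_e of about 255 K, and the roughly 33 K gap to
   % the observed 288 K surface temperature is the greenhouse effect.
```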
Abstract:
In this study we evaluated the capacity of bentonite hydrophobized with linseed oil or with paraffin, compared with natural bentonite, to remove PAHs from an oily solution. The natural and hydrophobized bentonites were characterized by three techniques: (1) thermogravimetric analysis (TGA), to evaluate the thermal events associated with mass loss, due both to the release of moisture and decomposition of the clay and to the loss of the hydrophobizing agent; (2) X-ray diffraction (XRD), to determine the mineralogical phases that make up the structure of the clay; and (3) infrared spectrophotometry, to characterize the functional groups of both the mineral matrix (bentonite) and the hydrophobizing agents (linseed oil and paraffin). We used a 2⁴ factorial design with the following factors: hydrophobizing agent, percentage of hydrophobizing agent, adsorption time, and volume of the oily solution. Analysis of the 2⁴ design showed that no factor was clearly more important than the others and, since all responses showed significant oil-removal capacity, it was not possible to establish a difference in efficiency between the two hydrophobizing agents. A follow-up study therefore compared the efficiency of the clay modified with each hydrophobizing agent separately against its natural form, through four new 2³ factorial designs using natural bentonite as the differentiating factor. The factors were bentonite (with and without hydrophobization), exposure time of the adsorbent to the oily solution, and volume of the oily solution, the aim being to interpret how these factors influence the purification of water contaminated with PAHs. Fluorescence spectroscopy was employed to obtain the responses, since it is known from the literature that PAHs, owing to the conjugated systems formed by the condensation of their aromatic rings, fluoresce quite similarly when excited in the ultraviolet region; gas chromatography-mass spectrometry (GC-MS) was used as an auxiliary technique for PAH analysis, complementing the fluorescence study, because the spectroscopic method alone gives only an idea of the total number of fluorescent species contained in the oily solution. The results show excellent adsorption of PAHs and other fluorescent species, attributed to the main effect of the first factor, hydrophobization, relative to adsorption by bentonite in its natural form: 93% in the sixth run (+-+) of the first 2³ design (BNTL 5%); 94.5% in the fourth run (++-) of the second design (BNTL 10%); 91% in the second run (+--) of the third design (BNTP 5%); and 88% in the last run (+++) of the fourth and final design (BNTP 10%). This work also reports the maximum adsorption for each hydrophobizing agent.
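For context, analyzing a two-level factorial design of this kind reduces to contrasting mean responses at the high and low levels of each factor. A minimal sketch of a 2³ analysis with illustrative response values (not the study's data):

```python
# Main effects in a 2^3 factorial design like the ones above
# (factors: hydrophobization, exposure time, solution volume).
# The response values below are illustrative, not the study's data.
import numpy as np
from itertools import product

# Coded design matrix: 8 runs (rows), 3 factors (columns) at -1/+1.
X = np.array(list(product([-1, 1], repeat=3)))
y = np.array([52, 55, 50, 58, 80, 91, 83, 94])  # % removal, hypothetical

# Main effect of a factor = mean response at +1 minus mean response at -1.
effects = {name: y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
           for j, name in enumerate(["hydrophobization", "time", "volume"])}
print(effects)  # a dominant first effect mirrors the study's conclusion
```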
Abstract:
In this thesis, a numerical program has been developed to simulate wave-induced ship motions in the time domain. Wave-body interactions have been studied for various ships and floating bodies through forced-motion and free-motion simulations over a wide range of wave frequencies. A three-dimensional Rankine panel method is applied to solve the boundary value problem for the wave-body interactions. The velocity potentials and normal velocities on the boundaries are obtained in the time domain by solving the mixed boundary integral equations for the source and dipole distributions. The hydrodynamic forces are calculated by integrating the instantaneous hydrodynamic pressures over the body surface. The equations of ship motion are solved simultaneously with the boundary value problem at each time step. The wave elevation is computed by applying the linear free surface conditions. A numerical damping zone is adopted to absorb the outgoing waves and thereby satisfy the radiation condition on the truncated free surface, and a numerical filter is applied on the free surface to smooth the wave elevation. Good convergence has been reached for both forced-motion and free-motion simulations. The computed added-mass and damping coefficients, wave exciting forces, and motion responses for ships and floating bodies are in good agreement with numerical results from other programs and with experimental data.
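For context, the mixed boundary integral equation solved at each time step in a Rankine panel method is typically the textbook form below (Green's second identity with a Rankine source; given as a sketch, not necessarily the thesis's exact formulation):

```latex
% Mixed source-dipole boundary integral equation for a field point p on a
% smooth boundary S, with the Rankine source G(p,q) = 1/|p - q|:
2\pi\,\phi(\mathbf{p})
  + \iint_{S} \phi(\mathbf{q})\,
      \frac{\partial G(\mathbf{p},\mathbf{q})}{\partial n_{q}}\,\mathrm{d}S
  = \iint_{S} \frac{\partial \phi(\mathbf{q})}{\partial n_{q}}\,
      G(\mathbf{p},\mathbf{q})\,\mathrm{d}S
% The system is "mixed" because phi is prescribed on the free surface
% (Dirichlet) while dphi/dn is prescribed on the body (Neumann).
```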
Abstract:
We propose and examine an integrable system of nonlinear equations that generalizes the nonlinear Schrödinger equation to 2 + 1 dimensions. This integrable system is a promising starting point for elaborating more accurate models in nonlinear optics and molecular systems within the continuum limit. The Lax pair for the system is derived by applying the singular manifold method. We also present an iterative procedure for constructing solutions from a seed solution. Solutions with one-, two-, and three-lump solitons are discussed thoroughly.
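For orientation, the familiar (1 + 1)-dimensional cubic nonlinear Schrödinger equation that the system generalizes reads (standard form; the paper's 2 + 1 system adds further fields and derivatives):

```latex
% Standard cubic NLS in 1+1 dimensions (focusing case):
i\,\psi_{t} + \tfrac{1}{2}\,\psi_{xx} + |\psi|^{2}\psi = 0
% Integrability rests on a Lax pair: the equation arises as the
% compatibility condition of a linear system
%   Psi_x = U(psi, lambda) Psi,   Psi_t = V(psi, lambda) Psi.
```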
Abstract:
“Spaces of Order” argues that the African novel should be studied as a revolutionary form characterized by aesthetic innovations that are not comprehensible in terms of the novel’s European archive of forms. It does this by mapping an African spatial order that undermines the spatial problematic at the formal and ideological core of the novel: the split between a private, subjective interior and an abstract, impersonal outside. The project opens with an examination of spatial fragmentation as figured in the “endless forest” of Amos Tutuola’s The Palm-Wine Drinkard (1952). The second chapter studies Chinua Achebe’s Things Fall Apart (1958) as a fictional world built around a peculiar category of space, the “evil forest,” which constitutes an African principle of order and modality of power. Chapter three returns to Tutuola via Ben Okri’s The Famished Road (1991) and shows how the dispersal of fragmentary spaces of exclusion and terror within the colonial African city helps us conceive of political imaginaries outside the nation and other forms of liberal political communities. The fourth chapter shows Nnedi Okorafor, in her 2014 science-fiction novel Lagoon, rewriting Things Fall Apart as an alien-encounter narrative in which Africa is center stage of a planetary, multi-species drama. “Spaces of Order” is thus a study of the African novel as a new logic of world-making altogether.
Abstract:
This paper proposes extended nonlinear analytical models (third-order models) of compliant parallelogram mechanisms. These models capture accurately the effects of the very large axial force over a transverse motion range of 10% of the beam length, by incorporating terms associated with the axial force up to third order. Firstly, the free-body diagram method is employed to derive the nonlinear analytical model for a basic compliant parallelogram mechanism, based on the load-displacement relations of a single beam, geometry compatibility conditions, and load-equilibrium conditions. The procedures for the forward and inverse solutions are described. Nonlinear analytical models for guided compliant multi-beam parallelogram mechanisms are then obtained. A case study of the compound compliant parallelogram mechanism, composed of two basic compliant parallelogram mechanisms in symmetry, is further implemented. This work estimates the internal axial force change, the transverse force change, and the transverse stiffness change with the transverse motion using the proposed third-order model, in comparison with the first-order model proposed in the prior art. In addition, finite element analysis (FEA) results validate the accuracy of the third-order model for a typical example. It is shown that in the case study the slenderness ratio significantly affects the discrepancy between the third-order and first-order models, and that the third-order model can capture a non-monotonic transverse stiffness curve if the beam is thin enough.
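As a baseline for what the higher-order terms add, the first-order load-displacement relation for a single fixed-guided beam can be written in beam-constraint-model form. The coefficients below are the commonly cited first-order values, given as a sketch rather than the paper's own equations:

```latex
% First-order load-displacement relation of a fixed-guided beam
% (beam-constraint-model form): F transverse force, P axial force,
% Y transverse displacement, E I / L the beam's modulus, inertia, length.
F \;=\; \frac{12\,EI}{L^{3}}\,Y \;+\; \frac{6}{5}\,\frac{P}{L}\,Y
% A parallelogram of two such beams doubles both terms. The third-order
% model extends this with axial-force terms up to cubic order, which is
% what lets it capture the non-monotonic stiffness noted above.
```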
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem: the first focuses on the development of sophisticated fluid-fluid interface representations, and the second focuses primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
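To ground the comparison, the traditional level set method that GALSM improves upon advects a signed-distance field with the flow and tracks the interface as its zero contour. A minimal one-dimensional sketch (illustrative only; the GALSM additionally transports the gradient of the field on a narrow band near the interface):

```python
# Baseline level set idea: advect a signed-distance field phi with the flow;
# the zero contour of phi tracks the fluid-fluid interface. 1-D, periodic,
# first-order upwind, purely illustrative.
import numpy as np

def advect_level_set(phi: np.ndarray, u: np.ndarray, dx: float, dt: float) -> np.ndarray:
    """One upwind step of phi_t + u * phi_x = 0."""
    dphi_back = (phi - np.roll(phi, 1)) / dx    # backward difference
    dphi_fwd = (np.roll(phi, -1) - phi) / dx    # forward difference
    # Upwinding: difference against the direction the flow comes from.
    return phi - dt * np.where(u > 0, u * dphi_back, u * dphi_fwd)

# GALSM also evolves grad(phi), which is what allows coarse grids to retain
# subgrid structures and conserve volume better, as noted above.
```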
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: the air phase is replaced by a pressure boundary condition in order to greatly reduce the size of the computational domain; a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods; and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
As anthropogenic activities push many ecosystems toward different functional regimes, the resilience of social-ecological systems becomes a pressing problem. Local actors, involved in a wide diversity of groups (ranging from independent local initiatives to large formal institutions), can act on these questions by collaborating on the development, promotion, or implementation of practices better aligned with what the environment can provide. From these repeated collaborations, complex networks emerge, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SES) in which they take part. The topology of an actor network that favors the resilience of its SES is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus, increase social capital, and strengthen the capacity for learning; and it must be robust, so that the first two characteristics do not suffer when certain actors withdraw or are pushed aside. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyze the structural qualities of empirical actor networks. However, some of them are by nature mutually incompatible: for example, the modularity of a network cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness. This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of its SES cannot be a simple sum of the characteristics above, but is rather the result of a subtle trade-off among them. The work presented here aims (1) to explore the trade-offs among these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyze an empirical network in light of, among other things, these structural qualities. The thesis is organized as an introduction followed by four chapters numbered 2 to 5. Chapter 2 is a review of the literature on SES resilience; it identifies a series of structural characteristics (along with the corresponding network measures) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews produced a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions among these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations.
Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded. The method proceeds in two steps. First, an optimization algorithm (simulated annealing) is used to construct a semi-random archetype that represents a trade-off among high levels of modularity, connectivity, and robustness. Second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network via a structural distance measure: the shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in chapter 4. As is usual for this kind of algorithm, the annealing projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good results for a single run, it produces only one solution among the multitude of possible trade-offs between the objectives. To explore these trade-offs more fully, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which concentrates on the social side of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively assessing empirical actor networks, opening the way to case-study comparisons or to the monitoring of actor networks over time. In addition, the thesis includes a case study that sheds light on the importance of certain institutional groups in coordinating collaborations and knowledge exchanges among actors with potentially divergent interests.
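A minimal sketch of the single-objective (weighted-sum) annealing step described for chapter 4, written with networkx; the metrics, weights, and move set are illustrative stand-ins for the thesis's exact choices:

```python
# Weighted-sum simulated annealing toward a network archetype that scores
# high on modularity, connectivity, and robustness at once. Illustrative only.
import math, random
import networkx as nx

def score(G: nx.Graph, w=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of three resilience-related structural qualities."""
    communities = nx.algorithms.community.greedy_modularity_communities(G)
    modularity = nx.algorithms.community.modularity(G, communities)
    connectivity = nx.average_clustering(G)              # cohesion proxy
    robustness = min(d for _, d in G.degree()) / G.number_of_nodes()
    return w[0] * modularity + w[1] * connectivity + w[2] * robustness

def anneal(n=60, m=180, steps=500, t0=0.05, seed=1):
    """Rewire random edges, keeping moves that raise the weighted score
    (or occasionally lower it, with temperature-controlled probability)."""
    random.seed(seed)
    G = nx.gnm_random_graph(n, m, seed=seed)
    current = score(G)
    for k in range(steps):
        t = t0 * (1 - k / steps)                         # linear cooling
        H = G.copy()
        u, v = random.choice(list(H.edges()))
        a, b = random.sample(range(n), 2)
        if H.has_edge(a, b):
            continue
        H.remove_edge(u, v)
        H.add_edge(a, b)
        if not nx.is_connected(H):
            continue
        s = score(H)
        if s > current or random.random() < math.exp((s - current) / max(t, 1e-9)):
            G, current = H, s
    return G, current
```

The multi-objective variant of chapter 5 replaces the single weighted score with a set of non-dominated solutions, so the trade-off surface itself is what gets optimized.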
Abstract:
From a sociocultural perspective, individuals learn best from contextualized experiences. In preservice teacher education, contextualized experiences include authentic literacy experiences, which involve a real reader and writer and replicate real-life communication. To be prepared to teach well, preservice teachers need to gain literacy content knowledge and possess reading maturity. The purpose of this study was to examine the effect of authentic literacy experiences, as Book Buddies with Hispanic fourth graders, on preservice teachers’ literacy content knowledge and reading maturity. The study was a pretest/posttest design conducted over 12 weeks. The participants, the focus of the study, were 43 elementary education majors (n = 33 experimental, n = 10 comparison) taking the third of four required reading courses, assigned to non-probabilistic convenience groups. The Survey of Preservice Teachers’ Knowledge of Teaching and Technology (SPTKTT), specifically designed for preservice teachers majoring in elementary or early childhood education, and the Reading Maturity Survey (RMS) were used in this study. Preservice teachers chose either the experimental or comparison group based on the opportunity to earn extra credit points (experimental = 30 points, comparison = 15). After exchanging introductory letters, preservice teachers and Hispanic fourth graders each read four books. After reading each book, the preservice teachers wrote letters to their student asking higher-order thinking questions, and received scanned copies of the student’s unedited letters via email, which enabled them to see the student’s authentic answers and writing level. A series of analyses of covariance was used to determine whether there were significant differences in the dependent variables between the experimental and comparison groups. This quasi-experimental study tested two hypotheses. Using the appropriate pretest scores as covariates to adjust the posttest means of the Literacy Content Knowledge (LCK) subcategory of the SPTKTT and of the RMS, the adjusted posttest means of the experimental and comparison groups were compared. No significant differences were found on the LCK dependent variable at the .05 level of significance, which may be due to Type II error caused by the small sample size. Significant differences were found on the RMS at the .05 level of significance.
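In model form, the analysis of covariance used here regresses posttest scores on group membership with the pretest score as covariate. A minimal sketch with hypothetical scores (column names are illustrative):

```python
# ANCOVA as a linear model: posttest ~ group + pretest. The data below are
# hypothetical, not the study's; statsmodels' formula API does the fitting.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "posttest": [78, 82, 75, 88, 70, 72, 69, 74],
    "pretest":  [70, 75, 68, 80, 69, 71, 66, 73],
    "group":    ["exp", "exp", "exp", "exp", "comp", "comp", "comp", "comp"],
})

# The pretest covariate adjusts the posttest means before comparing groups;
# the C(group) coefficient then tests the adjusted group difference.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.summary())
```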
Abstract:
There is an increasing demand for DNA analysis because of the sensitivity of the method and its ability to uniquely identify and distinguish individuals with a high degree of certainty. This demand has led to huge backlogs in evidence lockers, since the current DNA extraction protocols require long processing times. The analysis becomes more complicated for sexual assault casework samples, where the evidence contains more than one contributor; the additional processing needed to separate different cell types and simplify the final data interpretation further burdens the already cumbersome protocols. The goal of the present project was to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis time upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. Whereas typically less than 10% of male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also performed to determine the efficiency of the method for samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual intervention, and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application to sexual assault samples.
Abstract:
Body size is a key determinant of metabolic rate, but logistical constraints have led to a paucity of energetics measurements from large water-breathing animals. As a result, estimating energy requirements of large fish generally relies on extrapolation of metabolic rate from individuals of lower body mass using allometric relationships that are notoriously variable. Swim-tunnel respirometry is the ‘gold standard’ for measuring active metabolic rates in water-breathing animals, yet previous data are entirely derived from body masses <10 kg – at least one order of magnitude lower than the body masses of many top-order marine predators. Here, we describe the design and testing of a new method for measuring metabolic rates of large water-breathing animals: a c. 26 000 L seagoing ‘mega-flume’ swim-tunnel respirometer. We measured the swimming metabolic rate of a 2·1-m, 36-kg zebra shark Stegostoma fasciatum within this new mega-flume and compared the results to data we collected from other S. fasciatum (3·8–47·7 kg body mass) swimming in static respirometers and to previously published measurements of active metabolic rate from other shark species. The mega-flume performed well during initial tests, with intra- and interspecific comparisons suggesting accurate metabolic rate measurements can be obtained with this new tool. Inclusion of our data showed that the scaling exponent of active metabolic rate with mass for sharks ranging from 0·13 to 47·7 kg was 0·79, a similar value to previous estimates for resting metabolic rates in smaller fishes. We describe the operation and usefulness of this new method in the context of our current uncertainties surrounding energy requirements of large water-breathing animals. We also highlight the sensitivity of mass-extrapolated energetic estimates in large aquatic animals and discuss the consequences for predicting ecosystem impacts such as trophic cascades.
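The reported scaling exponent comes from the standard allometric model MR = a·M^b, fitted as a straight line in log-log space. A minimal sketch with hypothetical values (not the study's measurements):

```python
# Allometric scaling fit: log(MR) = log(a) + b * log(M). The mass and
# metabolic-rate values below are hypothetical, for illustration only.
import numpy as np

mass = np.array([0.13, 0.5, 2.0, 8.0, 36.0, 47.7])   # body mass, kg
mr = np.array([0.9, 2.6, 7.4, 21.0, 66.0, 82.0])     # metabolic rate, arbitrary units

b, log_a = np.polyfit(np.log(mass), np.log(mr), 1)   # slope = scaling exponent
print(f"scaling exponent b = {b:.2f}, a = {np.exp(log_a):.2f}")
# Compare the fitted b with the paper's estimate of 0.79 for sharks.
```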
Abstract:
This work aims to explore the ritual dimension of the Pyramid Texts, the oldest extensive corpus of religious literature known to humanity. The varied nature of its textual components has prevented Egyptologists from fully understanding the complexities of the collection and the original contexts in which these texts (rites) appeared. The application of ritual theory, chiefly the ritual-syntax approach, offers researchers an excellent framework for analyzing and interpreting the corpus, its structure, and its function. Subject to the rules of ritual syntax, it is possible to expose the multiple levels of meaning in the corpus concerning the resurrection and salvation of the deceased.