905 results for GENERAL LINEAR SUPERGROUP
Abstract:
Non-linear natural vibration characteristics and the dynamic response of hingeless and fully articulated rotors of rectangular cross-section are studied by using the finite element method. In the formulation of response problems, the global variables are augmented with appropriate additional variables, facilitating direct determination of sub-harmonic response. Numerical results are given showing the effect of the geometric non-linearity on the first three natural frequencies. Response analysis of typical rotors indicates a possibility of substantial sub-harmonic response especially in the fully articulated rotors widely adopted in helicopters.
Abstract:
The present study addressed the epistemology of teachers’ practical knowledge. Drawing from the literature, teachers’ practical knowledge is defined as all teachers’ cognitions (e.g., beliefs, values, motives, procedural knowing, and declarative knowledge) that guide their practice of teaching. The teachers’ reasoning that lies behind their practical knowledge is addressed to gain insight into its epistemic nature. I studied the practical knowledge of six class teachers who teach in the metropolitan region of Helsinki. Relying on the assumptions of phenomenographic inquiry, I collected and analyzed the data in two stages: the first stage involved an abductive procedure and the second an inductive procedure for interpretation, from which the system of categories was developed. Finally, a quantitative analysis was nested into the qualitative findings to study the patterns of the teachers’ reasoning. The results indicated that teachers justified their practical knowledge based on morality and efficiency of action; efficiency of action was found to be presented in two different ways: authentic efficiency and naïve efficiency. The epistemic weight of morality was embedded in what I call “moral care”. The core intention of teachers in moral care was the commitment that they felt toward the “whole character” of students. From this perspective, the “dignity” and moral character of the students should not be traded for any “instrumental price”. “Caring pedagogy” was the epistemic value of teachers’ reasoning in authentic efficiency. The central idea in caring pedagogy was teachers’ intention to improve the “intellectual properties” of “all or most” of the students using “flexible” and “diverse” pedagogies. However, “regulating pedagogy” was the epistemic condition of practice in the cases corresponding to naïve efficiency.
Teachers argued that effective practical knowledge should regulate and manage classroom activities, but the targets of this practical knowledge were mainly other “issues” or a certain percentage of the students. In these cases, the teachers’ arguments were mainly based on the notion of “what worked”, without reflection on “what did not work”. Drawing from the theoretical background and the data, teachers’ practical knowledge qualifies as “praxial knowledge” when teachers use the epistemic conditions of “caring pedagogy” and “moral care”. It has a merely “practicable” epistemic status, however, when teachers use the epistemic condition of regulating pedagogy. As such, praxial knowledge, with the dimensions of caring pedagogy and moral care, represents the “normative” perspective on teachers’ practical knowledge and thus reflects a higher epistemic status than “practicable” knowledge, which represents a “descriptive” perception of teachers’ practical knowledge and teaching.
Abstract:
Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction–diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction–diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction–diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit.
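The uncoupling step can be illustrated on the reaction part of such a system. The sketch below is not the paper's exact model: it assumes a hypothetical three-generation population with distinct, made-up proliferation rates, diagonalizes the lower-triangular reaction matrix, evolves the uncoupled modes independently, and recovers the coupled solution with the inverse transformation.

```python
import numpy as np

# Hypothetical three-generation model: generation i is lost at rate k_i as its
# cells divide, and each division feeds two cells into generation i+1.
# The reaction matrix A is lower triangular, so its eigenvalues are the -k_i
# and, because the rates are distinct, A is diagonalizable.
k = np.array([0.5, 0.8, 1.1])            # distinct proliferation rates (assumed values)
A = np.diag(-k) + np.diag(2 * k[:-1], -1)

# Uncoupling transformation: v = P^{-1} u turns du/dt = A u into
# independent scalar equations dv_i/dt = lam_i v_i.
lam, P = np.linalg.eig(A)
u0 = np.array([1.0, 0.0, 0.0])           # all cells initially in generation 1
v0 = np.linalg.solve(P, u0)

def u(t):
    """Coupled solution recovered by the inverse uncoupling transformation."""
    return (P @ (np.exp(lam * t) * v0)).real

# Cross-check against direct explicit time stepping of the coupled system.
dt, steps = 1e-4, 10000
w = u0.copy()
for _ in range(steps):
    w = w + dt * (A @ w)
print(np.allclose(u(1.0), w, atol=1e-2))
```

With equal rates the eigenvalues coincide and the diagonalization degenerates, which is why the equal-rate case must be recovered as a limit, as noted above.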
Abstract:
A novel thermistor-based temperature indicator using an RC oscillator and an up/down counter has been developed and is described. The indicator provides linear performance over a wide dynamic temperature range of 0-100°C. It is free from the error due to the lead resistances of the thermistor and gives a maximum error of ±0.1°C in the range 0-100°C. Test results are given to support the theory.
Abstract:
One history in a multicomplex world: The quintessence of history and grand historical narratives in the historical consciousness of class teacher students. The study analyses the conception of history amongst class teacher students at the University of Helsinki. It also explores the expectations about the future that the students hold on the basis of their views on history. The conceptions of the students are analysed against the background of the notion of one history, which has been part of Western thought in the modern era and which is at the centre of the theoretical framework of this study. The Enlightenment project and the erosion of the role of the Church paved the way for the notion that history is a linear narrative of the progress of humankind in which, implicitly, the Western countries are endowed with a special role as the vanguards of progress. In recent times these assumptions have been criticised by postmodernists and proponents of New History. The material of the study consists of interviews with twenty-two 19- to 26-year-old class teacher students at the University of Helsinki. The topics in the interviews were the developments of the past and future trajectories. The students conceived history as a field of knowledge that provides a unifying view of the world and helps to make today's world intelligible. Finnish history and global history were invested with features of a grand narrative of progress. In global history, progress and development were seen as characteristic primarily of the Western world. The students regarded post-war Finnish history as a qualified success story, in that they deplored the erosion of collectivist values and the rise of selfishness in recent decades.
History was not conceived as a process of progress that would self-evidently continue in the future, but rather as a field of contingency and cyclical change. The students regarded the increasing predominance of market forces over democratically elected agencies, the antagonism between the West and the other parts of the world, and environmental risks as the major threats. Notwithstanding this general pessimism about the future, the students had a very positive view of their own personal prospects. Keywords: historical consciousness, one history, future expectations
Abstract:
The Cape York Peninsula Land Use Strategy (CYPLUS) is a joint Queensland/Commonwealth initiative to provide a framework for making decisions about how to use and manage the natural resources of Cape York Peninsula in ways that will be ecologically sustainable. As part of the Natural Resources Analysis Program (NRAP) of CYPLUS, the Fisheries Division of the Queensland Department of Primary Industries has mapped the marine vegetation (mangroves and seagrasses) for Cape York Peninsula. The project ran from July 1992 to June 1994. Field work was undertaken in November 1992, May 1993, and April 1994. Final report on project: NRO6 – Marine Plan (Seagrass/Mangrove) Distribution. Dataset URL Link: Queensland Coastal Wetlands Resources Mapping data. [Dataset]
Abstract:
Galerkin representations and integral representations are obtained for the linearized system of coupled differential equations governing steady incompressible flow of a micropolar fluid. The special case of two-dimensional Stokes flows is then examined, and further representation formulae, as well as asymptotic expressions, are generated for both the microrotation and velocity vectors. With the aid of these formulae, the Stokes paradox for micropolar fluids is established.
Abstract:
We compared daily net radiation (Rn) estimates from 19 methods with the ASCE-EWRI Rn estimates in two climates: Clay Center, Nebraska (sub-humid) and Davis, California (semi-arid) for the calendar year. The performances of all 20 methods, including the ASCE-EWRI Rn method, were then evaluated against Rn data measured over a non-stressed maize canopy during two growing seasons in 2005 and 2006 at Clay Center. Methods differ in terms of inputs, structure, and equation intricacy. Most methods differ in estimating the cloudiness factor, emissivity (e), and calculating net longwave radiation (Rnl). All methods use albedo (a) of 0.23 for a reference grass/alfalfa surface. When comparing the performance of all 20 Rn methods with measured Rn, we hypothesized that the a values for grass/alfalfa and non-stressed maize canopy were similar enough to only cause minor differences in Rn and grass- and alfalfa-reference evapotranspiration (ETo and ETr) estimates. The measured seasonal average a for the maize canopy was 0.19 in both years. Using a = 0.19 instead of a = 0.23 resulted in 6% overestimation of Rn. Using a = 0.19 instead of a = 0.23 for ETo and ETr estimations, the 6% difference in Rn translated to only 4% and 3% differences in ETo and ETr, respectively, supporting the validity of our hypothesis. Most methods had good correlations with the ASCE-EWRI Rn (r2 > 0.95). The root mean square difference (RMSD) was less than 2 MJ m-2 d-1 between 12 methods and the ASCE-EWRI Rn at Clay Center and between 14 methods and the ASCE-EWRI Rn at Davis. The performance of some methods showed variations between the two climates. In general, r2 values were higher for the semi-arid climate than for the sub-humid climate. Methods that use dynamic e as a function of mean air temperature performed better in both climates than those that calculate e using actual vapor pressure. 
The ASCE-EWRI-estimated Rn values had one of the best agreements with the measured Rn (r2 = 0.93, RMSD = 1.44 MJ m-2 d-1), and estimates were within 7% of the measured Rn. The Rn estimates from six methods, including the ASCE-EWRI, were not significantly different from measured Rn. Most methods underestimated measured Rn by 6% to 23%. Some of the differences between measured and estimated Rn were attributed to the poor estimation of Rnl. We conducted sensitivity analyses to evaluate the effect of Rnl on Rn, ETo, and ETr. The Rnl effect on Rn was linear and strong, but its effect on ETo and ETr was subsidiary. Results suggest that the Rn data measured over green vegetation (e.g., irrigated maize canopy) can be an alternative Rn data source for ET estimations when measured Rn data over the reference surface are not available. In the absence of measured Rn, another alternative would be using one of the Rn models that we analyzed when all the input variables are not available to solve the ASCE-EWRI Rn equation. Our results can be used to provide practical information on which method to select based on data availability for reliable estimates of daily Rn in climates similar to Clay Center and Davis.
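The reported albedo sensitivity can be checked with a back-of-envelope daily energy balance, Rn = (1 - a)Rs - Rnl. The shortwave and longwave values below are illustrative assumptions, not the study's measured data:

```python
# Back-of-envelope check of the albedo sensitivity discussed above.
Rs  = 22.0   # incoming shortwave, MJ m-2 d-1 (assumed value)
Rnl = 4.0    # net longwave loss, MJ m-2 d-1 (assumed value)

def net_radiation(albedo, Rs, Rnl):
    """Daily net radiation: net shortwave minus net longwave loss."""
    return (1.0 - albedo) * Rs - Rnl

rn_maize = net_radiation(0.19, Rs, Rnl)   # measured maize-canopy albedo
rn_ref   = net_radiation(0.23, Rs, Rnl)   # reference-surface albedo
print(f"relative difference: {(rn_maize - rn_ref) / rn_ref:.1%}")
```

With these illustrative numbers, the lower maize albedo raises Rn by roughly 6-7%, consistent with the 6% overestimation reported above; the exact figure depends on the actual Rs and Rnl.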
Abstract:
Aims and objectives To determine consensus across acute care specialty areas on core physical assessment skills necessary for early recognition of changes in patient status in general wards. Background Current approaches to physical assessment are inconsistent and have not evolved to meet increased patient and system demands. New models of nursing assessment are needed in general wards that ensure a proactive and patient safety approach. Design A modified Delphi study. Methods Focus group interviews with 150 acute care registered nurses (RNs) at a large tertiary referral hospital generated a framework of core skills that were developed into a web-based survey. We then sought consensus with a panel of 35 senior acute care RNs following a classical Delphi approach over three rounds. Consensus was predefined as at least 80% agreement for each skill across specialty areas. Results Content analysis of focus group transcripts identified 40 discrete core physical assessment skills. In the Delphi rounds, 16 of these were consensus validated as core skills and were conceptually aligned with the primary survey: (Airway) Assess airway patency; (Breathing) Measure respiratory rate, Evaluate work of breathing, Measure oxygen saturation; (Circulation) Palpate pulse rate and rhythm, Measure blood pressure by auscultation, Assess urine output; (Disability) Assess level of consciousness, Evaluate speech, Assess for pain; (Exposure) Measure body temperature, Inspect skin integrity, Inspect and palpate skin for signs of pressure injury, Observe any wounds, dressings, drains and invasive lines, Observe ability to transfer and mobilise, Assess bowel movements. Conclusions Among a large and diverse group of experienced acute care RNs consensus was achieved on a structured core physical assessment to detect early changes in patient status. 
Relevance to clinical practice Although further research is needed to refine the model, clinical application should promote systematic assessment and clinical reasoning at the bedside.
Abstract:
The use of capacitors for electrical energy storage actually predates the invention of the battery. Alessandro Volta is attributed with the invention of the battery in 1800, where he first describes a battery as an assembly of plates of two different materials (such as copper and zinc) placed in an alternating stack and separated by paper soaked in brine or vinegar [1]. Accordingly, this device was referred to as Volta’s pile and formed the basis of subsequent revolutionary research and discoveries on the chemical origin of electricity. Before the advent of Volta’s pile, however, eighteenth century researchers relied on the use of Leyden jars as a source of electrical energy. Built in the mid-1700s at the University of Leyden in Holland, a Leyden jar is an early capacitor consisting of a glass jar coated inside and outside with a thin layer of silver foil [2, 3]. With the outer foil being grounded, the inner foil could be charged with an electrostatic generator, or a source of static electricity, and could produce a strong electrical discharge from a small and comparatively simple device.
Abstract:
A model of root water extraction is proposed, in which a linear variation of extraction rate with depth is assumed. Five crops are chosen for simulation studies of the model, and soil moisture depletion under optimal conditions from different layers for each crop is calculated. Similar calculations are also made using the constant extraction rate model. Rooting depth is assumed to vary linearly with potential evapotranspiration for each crop during the vegetative phase. The calculated depletion patterns are compared with measured mean depletion patterns for each crop. It is shown that the constant extraction rate model results in large errors in the prediction of soil moisture depletion, while the proposed linear extraction rate model gives satisfactory results. Hypothetical depletion patterns predicted by the model in combination with a moisture tension-dependent sink term developed elsewhere are indicated.
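A minimal sketch of the linear extraction-rate idea, assuming a form in which extraction decreases linearly from a maximum at the surface to zero at the rooting depth and integrates to the transpiration rate (the paper's exact parameterization may differ):

```python
# Assumed linear profile: extraction falls linearly from the surface to zero
# at the rooting depth zr, normalised so it integrates to the transpiration T.
def linear_extraction(z, zr, T):
    """Extraction rate (per unit depth) at depth z for rooting depth zr."""
    if z >= zr:
        return 0.0
    return (2.0 * T / zr) * (1.0 - z / zr)

def layer_depletion(z_top, z_bot, zr, T, n=1000):
    """Numerically integrate the extraction rate over one soil layer."""
    dz = (z_bot - z_top) / n
    zs = [z_top + (i + 0.5) * dz for i in range(n)]
    return sum(linear_extraction(z, zr, T) for z in zs) * dz

# With zr = 1.0 m and T = 5 mm/day, the top half of the root zone supplies
# three quarters of the water and the bottom half one quarter, in contrast
# to the 50/50 split implied by a constant extraction rate.
top = layer_depletion(0.0, 0.5, zr=1.0, T=5.0)
bot = layer_depletion(0.5, 1.0, zr=1.0, T=5.0)
print(round(top, 2), round(bot, 2))
```

The contrast with the constant-rate model is visible directly: a constant profile would deplete every layer equally, which is the behaviour the abstract reports as producing large prediction errors.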
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE, which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
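The work-partitioning problem can be sketched with a toy model. The profiled times and the single per-boundary transfer cost below are made-up numbers, and the model is far simpler than the paper's ILP (no buffer layout transformations, no software pipelining); exhaustive search stands in for the ILP solver on this tiny instance:

```python
from itertools import product

# Toy partitioning: assign each filter of a 4-filter pipeline to the CPU or
# the GPU so that the bottleneck stage time, plus transfer costs incurred
# whenever consecutive filters sit on different devices, is minimised.
cpu_time = [4.0, 2.0, 6.0, 3.0]   # per-filter time on a CPU core (assumed)
gpu_time = [1.0, 5.0, 1.5, 0.5]   # per-filter time on the GPU (assumed)
transfer = 0.8                    # cost per CPU<->GPU boundary crossing (assumed)

def makespan(assign):
    """Objective for one assignment (True = GPU, False = CPU)."""
    cpu = sum(t for t, g in zip(cpu_time, assign) if not g)
    gpu = sum(t for t, g in zip(gpu_time, assign) if g)
    # charge one transfer each time consecutive filters change device
    xfers = sum(transfer for a, b in zip(assign, assign[1:]) if a != b)
    return max(cpu, gpu) + xfers

# Exhaustive search over all 2^4 assignments in place of an ILP solver.
best = min(product([False, True], repeat=len(cpu_time)), key=makespan)
print(best, makespan(best))
```

Even on this toy instance the optimum is a mixed assignment rather than an all-GPU or all-CPU mapping, which mirrors the paper's finding that partitioning across both devices beats a GPU-only strategy.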
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g., Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise.
We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
Abstract:
Estimation of secondary structure in polypeptides is important for studying their structure, folding and dynamics. In NMR spectroscopy, such information is generally obtained after sequence-specific resonance assignments are completed. We present here a new methodology for assigning secondary structure type to spin systems in proteins directly from NMR spectra, without prior knowledge of resonance assignments. The methodology, named Combination of Shifts for Secondary Structure Identification in Proteins (CSSI-PRO), involves detection of a specific linear combination of backbone ¹Hα and ¹³C′ chemical shifts in a two-dimensional (2D) NMR experiment based on G-matrix Fourier transform (GFT) NMR spectroscopy. Such linear combinations of shifts facilitate editing of residues belonging to alpha-helical/beta-strand regions into distinct spectral regions nearly independent of the amino acid type, thereby allowing estimation of the overall secondary structure content of the protein. Comparison of the predicted secondary structure content with that estimated from the respective 3D structures and/or the Chemical Shift Index method for 237 proteins gives a correlation of more than 90% and an overall rmsd of 7.0%, which is comparable to other biophysical techniques used for structural characterization of proteins. Taken together, this methodology has a wide range of applications in NMR spectroscopy, such as rapid protein structure determination, monitoring conformational changes in protein-folding/ligand-binding studies and automated resonance assignment.