771 results for "merits of mandatory reporting of neglect"
Abstract:
Understanding the ways in which teachers make sense of what they do and why is critical to a broader understanding of pedagogy. Historically, teachers have been understood through the thematic and content analysis of their beliefs or philosophies. In this paper, we argue that discourse analysis (DA) involves a much finer-grained analysis of the ‘lifeworlds’ of teachers and, in our view, provides a more detailed canvas from which inferences can be made. Our argument is structured in four parts. We begin by locating DA within the physical education (PE) literature and discuss what others have referred to as its relatively modest use. Following our location of DA, we outline a conceptual framework that we regard as useful, which contains six interrelated principles. We then introduce the idea of interpretive repertoires, which we consider to have particular explanatory power as well as being a sophisticated way to represent the subjectivities of PE teachers. Finally, we discuss the methodological strengths of interpretive repertoires. The paper concludes with a discussion on the theoretical and practical merits of adopting DA to analyse problems within PE.
Abstract:
In this paper, non-linear programming techniques are applied to the problem of controlling the vibration pattern of a stretched string. First, the problem of finding the magnitudes of two control forces applied at two points l1 and l2 on the string to reduce the energy of vibration over the interval (l1, l2) relative to the energy outside the interval (l1, l2) is considered. For this problem the relative merits of various methods of non-linear programming are compared. The more complicated problem of finding the positions and magnitudes of two control forces to obtain the desired energy pattern is then solved by using the sequential unconstrained minimization technique (SUMT) with the Fletcher-Powell search. In the discussion of the results it is shown that the position of the control force is very important in controlling the energy pattern of the string.
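The sequential-unconstrained-minimization idea can be illustrated with a small exterior-penalty sketch. The toy objective, constraint, step sizes and penalty schedule below are all illustrative assumptions, not the string-control problem itself:

```python
# Exterior-penalty (SUMT-style) minimization sketch, pure Python.
# Toy problem (NOT the string-control problem from the abstract):
#   minimize (x-2)^2 + (y-1)^2  subject to  x + y <= 2
# The constraint is folded into a penalty term whose weight mu grows
# each outer iteration, as in sequential unconstrained minimization.

def grad(x, y, mu):
    viol = max(0.0, x + y - 2.0)          # constraint violation, if any
    gx = 2.0 * (x - 2.0) + 2.0 * mu * viol
    gy = 2.0 * (y - 1.0) + 2.0 * mu * viol
    return gx, gy

def sumt(x=0.0, y=0.0):
    mu = 1.0
    for _ in range(5):                    # outer loop: grow the penalty
        step = 1.0 / (2.0 + 2.0 * mu)     # safe step for this quadratic
        for _ in range(3000):             # inner loop: gradient descent
            gx, gy = grad(x, y, mu)
            x -= step * gx
            y -= step * gy
        mu *= 4.0
    return x, y

x, y = sumt()
print(round(x, 2), round(y, 2))  # approaches the constrained optimum (1.5, 0.5)
```

In a real SUMT run the inner minimization would use a quasi-Newton search such as Fletcher-Powell rather than plain gradient descent.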
Abstract:
The Hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site multi-season streamflows. It bears significance in integrated river basin planning studies. This hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding the corresponding parametric model component for the site, to obtain generated streamflows at each of the sites. It gains significantly by effectively utilizing the merits of both parametric and NP models. It is able to reproduce various statistics, including the dependence relationships at both spatial and temporal levels without using any normalizing transformations and/or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics including the run characteristics, is demonstrated through an application for multi-site streamflow generation in the Upper Cauvery river basin, Southern India. (C) 2004 Elsevier B.V. All rights reserved.
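The moving block bootstrap at the heart of step (ii) can be sketched as follows. The residual series and block size are invented for illustration; in the multi-site hybrid model the same randomly chosen block start positions would be reused across sites so that contemporaneous (spatial) dependence is preserved:

```python
# Minimal sketch of the moving block bootstrap (the NP resampling step),
# shown here for a single site's residual series. Values and block size
# are illustrative, not from the paper's Upper Cauvery application.
import random

def moving_block_bootstrap(residuals, block_size, rng=random.Random(42)):
    """Resample a series by concatenating randomly chosen overlapping
    blocks, which preserves short-range serial dependence."""
    n = len(residuals)
    blocks = [residuals[i:i + block_size] for i in range(n - block_size + 1)]
    out = []
    while len(out) < n:
        out.extend(rng.choice(blocks))
    return out[:n]                        # trim to the original length

residuals = [0.3, -0.1, 0.7, -0.4, 0.2, 0.5, -0.6, 0.1]
sample = moving_block_bootstrap(residuals, block_size=3)
print(len(sample))  # 8: same length as the input series
```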
Abstract:
Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed. Homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event in a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic sizes and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from the direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations.
The results agreed with density functional theory, but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting simulated nucleation rates was investigated; among other things, the results once again highlighted the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
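For orientation, the Lennard-Jones pair potential underlying such argon simulations has the standard form sketched below; the parameter value and reduced units are common literature conventions, not necessarily the exact choices of this thesis:

```python
# Lennard-Jones pair potential, U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).
# sigma = 3.405 Angstrom is a standard literature value for argon;
# epsilon is left in reduced units (eps = 1).

def lj(r, sigma=3.405, epsilon=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum sits at r = 2^(1/6) * sigma with depth -epsilon.
r_min = 2 ** (1 / 6) * 3.405
print(round(lj(r_min), 6))  # -1.0
```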
Abstract:
The relations between partial and integral properties of ternary solutions along composition trajectories suggested by Kohler, Colinet and Jacob, and along an arbitrary path, are derived. The chemical potentials of the components are related to the slope of the integral free energy by expressions involving the binary compositions generated by the intersections of the composition trajectory with the sides of the ternary triangle. Only along the Kohler composition trajectory is it possible to derive the integral free energy from the variation of the chemical potential of a single component with composition, or vice versa. Along all other paths the differential of the integral free energy is related to two chemical potentials. The Gibbs-Duhem integration proposed by Darken for the ternary system uses the Kohler isogram. The relative merits of different limits for integration are discussed.
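For reference, the Kohler construction evaluates each binary contribution at the composition where the constant-ratio path through the ternary point meets the binary side. In a common notation (ours, not necessarily the paper's), the excess Gibbs energy of the ternary is estimated from the binaries as

```latex
\Delta G^{E} \;=\; \sum_{i<j} \left(x_i + x_j\right)^{2}\,
\Delta G^{E}_{ij}\!\left(\frac{x_i}{x_i+x_j},\;\frac{x_j}{x_i+x_j}\right)
```

so that along a Kohler isogram the ratio $x_i/x_j$ fed to each binary term stays fixed.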
Abstract:
The removal of native oxide from Si (1 1 1) surfaces was investigated by X-ray photoelectron spectroscopy (XPS) and secondary ion mass spectrometry (SIMS) depth profiles. Two different oxide removal methods, performed under ultrahigh-vacuum (UHV) conditions, were carried out and compared. The first cleaning method is thermal desorption of the oxide at 900 degrees C. The second method is the deposition of metallic gallium followed by redesorption. A significant decrease in oxygen was achieved by thermal desorption at 900 degrees C under UHV conditions. By applying a subsequent Ga deposition/redesorption, a further reduction in oxygen could be achieved. We examine the merits of this alternative oxide desorption method, in which the stable SiO2 surface oxide is converted into a volatile Ga2O oxide by a supply of Ga metal. Furthermore, ultrathin films of a pure silicon nitride buffer layer were grown on a Si (1 1 1) surface by exposing the surface to radio-frequency (RF) nitrogen plasma, followed by GaN growth. The SIMS depth profile shows that the oxygen impurity at GaN/beta-Si3N4/Si interfaces can be reduced by applying a subsequent Ga deposition/redesorption. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Equations for the computation of integral and partial thermodynamic properties of mixing in quaternary systems are derived using data on the constituent binary systems and shortest-distance composition paths to the binaries. The composition path from a quaternary composition to the i-j binary is characterized by a constant value of (Xi − Xj). The merits of this composition path over others with constant values of Xi/Xj or of Xi are discussed. Finally, the equations are generalized to higher-order systems. They are exact for regular solutions, but may be used in a semiempirical mode for non-regular solutions.
Abstract:
The paper proposes a study of symmetrical and related components, based on the theory of linear vector spaces. Using the concept of equivalence, the transformation matrices of Clarke, Kimbark, Concordia, Boyajian and Koga are shown to be column-equivalent to Fortescue's symmetrical-component transformation matrix. With a constraint on power, criteria are presented for the choice of bases for the voltage and current vector spaces. In particular, it is shown that, for power invariance, either the same orthonormal (self-reciprocal) basis must be chosen for both the voltage and current vector spaces, or the basis of one must be chosen to be reciprocal to that of the other. The original α, β, 0 components of Clarke are modified to achieve power invariance. For machine analysis, it is shown that invariant transformations lead to reciprocal mutual inductances between the equivalent circuits. The relative merits of the various components are discussed.
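The power-invariance criterion can be checked numerically: an orthonormal transformation matrix satisfies T·Tᵀ = I, so vᵀi is preserved. A minimal sketch using the widely quoted √(2/3)-scaled (power-invariant) form of Clarke's α, β, 0 matrix, which may differ in row ordering or sign convention from the paper's:

```python
# Verify that the power-invariant Clarke alpha-beta-0 matrix is
# orthonormal: T @ T^T == I, hence instantaneous power v^T i is
# preserved under the transformation.
import math

k = math.sqrt(2.0 / 3.0)                    # power-invariant scaling
T = [
    [k * 1.0,          k * -0.5,              k * -0.5],
    [0.0,              k * math.sqrt(3) / 2,  k * -math.sqrt(3) / 2],
    [k / math.sqrt(2), k / math.sqrt(2),      k / math.sqrt(2)],
]

def times_transpose(a):                     # a @ a^T for 3x3 lists
    return [[sum(a[i][m] * a[j][m] for m in range(3)) for j in range(3)]
            for i in range(3)]

p = times_transpose(T)
ok = all(abs(p[i][j] - (1.0 if i == j else 0.0)) < 1e-12
         for i in range(3) for j in range(3))
print(ok)  # True: T is orthonormal, hence power-invariant
```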
Abstract:
Demand for cost-effective manufacturing techniques led to the development of near-net-shape processes. Squeeze casting is one such established effort. This process enjoys the combined merits of casting and forging. Squeeze casting imparts soundness comparable to that of wrought products while maintaining isotropic nature. Aluminum alloys and zinc alloys have been successfully processed through squeeze casting, but copper and copper alloys do not seem to have been attempted. Considering the capability of squeeze casting process, it is reasonable to expect properties different from that of conventionally cast copper. This paper presents the details of a systematic investigation wherein optimum process parameters for the squeeze casting of pure copper were established. Microstructure of squeeze-cast copper has been found to be significantly different from that of conventionally cast copper, and the dendrite arm spacing is much smaller. In addition to the room temperature mechanical properties, elevated temperature properties of copper are also appreciably improved by squeeze casting.
Abstract:
Low grade thermal energy from sources such as solar, geothermal and industrial waste heat in the temperature range of 380-425 K can be converted to electrical energy with reasonable efficiency using isopentane and R-245fa. While the former is flammable and the latter has considerable global warming potential, their mixture in 0.7/0.3 mole fraction is shown to obviate these disadvantages and yet retain dominant merits of each fluid. A realistic thermodynamic analysis is carried out wherein the possible sources of irreversibilities such as isentropic efficiencies of the expander and the pump and entropy generation in the regenerator, boiler and condenser are accounted for. The performance of the system in the chosen range of heat source temperatures is evaluated. A technique of identifying the required source temperature for a given output of the plant and the maximum operating temperature of the working fluid is developed. This is based on the pinch point occurrence in the boiler and entropy generation in the boiling and superheating regions of the boiler. It is shown that cycle efficiencies of 10-13% can be obtained in the range investigated at an optimal expansion ratio of 7-10. (C) 2012 Elsevier Ltd. All rights reserved.
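As a quick sanity check on those figures, the reported efficiencies can be compared with the Carnot limit over the same source-temperature range; the 300 K sink temperature below is our illustrative assumption, not a value from the paper:

```python
# Carnot limit at the stated source temperatures, assuming a 300 K
# condenser/sink temperature (our assumption for illustration).
def carnot(t_hot, t_cold=300.0):
    return 1.0 - t_cold / t_hot

for t in (380.0, 425.0):
    print(t, round(carnot(t), 3))
# The reported 10-13% cycle efficiencies are roughly half of these
# ideal limits, which is typical of real low-grade-heat Rankine cycles.
```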
Abstract:
This paper derives outer bounds on the sum rate of the K-user MIMO Gaussian interference channel (GIC). Three outer bounds are derived, under different assumptions of cooperation and providing side information to receivers. The novelty in the derivation lies in the careful selection of side information, which results in the cancellation of the negative differential entropy terms containing signal components, leading to a tractable outer bound. The overall outer bound is obtained by taking the minimum of the three outer bounds. The derived bounds are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on the generalized degrees of freedom (GDOF). The relative performance of the bounds yields insight into the performance limits of multiuser MIMO GICs and the relative merits of different schemes for interference management. These insights are confirmed by establishing the optimality of the bounds in specific cases using an inner bound on the GDOF derived by the authors in a previous work. It is also shown that many of the existing results on the GDOF of the GIC can be obtained as special cases of the bounds, e.g., by setting K = 2 or the number of antennas at each user to 1.
Abstract:
Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for high-performance custom computations common to NLA kernels is met through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
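The block matrix multiplication whose communication and memory behavior is analyzed above can be sketched as follows; the matrix and tile sizes are illustrative stand-ins for a per-CE memory budget, not figures from the paper:

```python
# Block (tiled) matrix multiplication: each (ii, jj, kk) tile product
# touches only O(b^2) data at a time, which is what makes the scheme
# amenable to distributed-memory compute elements.
def block_matmul(A, B, n, b):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):                  # output tile rows
        for jj in range(0, n, b):              # output tile columns
            for kk in range(0, n, b):          # accumulate tile products
                for i in range(ii, min(ii + b, n)):
                    for j in range(jj, min(jj + b, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + b, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C

n = 4
A = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
B = [[float(i * n + j) for j in range(n)] for i in range(n)]
C = block_matmul(A, B, n, b=2)
print(C == B)  # True: I @ B == B
```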
Abstract:
Shallow-trench isolation drain-extended pMOS (STI-DePMOS) devices show a distinct two-stage breakdown. The impact of the p-well and deep-n-well doping profile on the breakdown characteristics is investigated based on TCAD simulations. Design guidelines for the p-well and deep-n-well doping profile are developed to shift the onset of the first-stage breakdown to a higher drain voltage and to avoid vertical punch-through leading to early breakdown. An optimal ratio between the OFF-state breakdown voltage and the ON-state resistance could be obtained. Furthermore, the impact of the p-well/deep-n-well doping profile on the figures of merit of analog and digital performance is studied. This paper aids in the design of STI drain-extended MOSFET devices for the widest safe operating area and optimal mixed-signal performance in advanced system-on-chip input-output process technologies.
Abstract:
The concept of a biosensor based on imaging ellipsometry was proposed about ten years ago. It has become an automated analysis technique for protein detection, with the merits of label-free operation, multi-protein analysis, and real-time analysis of protein interaction processes. Its principle and related technique units, such as the micro-array, micro-fluidic and bio-molecule interaction cell, the sampling unit, and calibration for quantitative detection, as well as its applications in the biomedical field, are presented here.
Abstract:
The current situation of regional, rather than national, problems of eutrophication in standing waters has been widely aired in recent reports. A reliable, quantitative data base is a prerequisite to future trend monitoring, a consensus view of those reports. The objective of this report is to establish the requirements, methodology and a minimal data set for nutrient and algae status in water supply reservoirs in England, which may be used as a protocol for future trend monitoring. A pilot study has been carried out to assess the relative merits of different sampling strategies, the choice of which has major implications for the cost of sample collection. This short report suggests considering the possibility of designating a few sites as "baseline sites", at which detailed changes in trophic status, as monitored by the more labour-intensive parameters, would be recorded on a regular, long-term basis to help in the interpretation of the low-cost survey results.