Abstract:
An apparatus in the direct shear mode has been developed to conduct soil-soil and soil-solid material interface tests in the undrained condition. Evaluation of the apparatus showed that all the requirements for simulating the undrained condition of shear are satisfied. The interface test results show that the adhesion factor α increases with the surface roughness of the solid material. In the normally consolidated state, α is practically independent of the undrained shear strength of the clay for a given surface. In the overconsolidated state, α depends on the undrained shear strength and the overconsolidation ratio for smooth surfaces, but for rough surfaces α is independent of both the undrained shear strength and the overconsolidation ratio.
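For reference, the adhesion factor reported in such interface tests is conventionally defined as the measured interface adhesion normalised by the undrained shear strength of the clay; a minimal statement of this definition follows, with symbols assumed here since the abstract does not give them.

```latex
% Conventional definition of the adhesion factor (symbols assumed here):
% c_a is the interface adhesion measured in the interface test and
% s_u is the undrained shear strength of the clay.
\alpha = \frac{c_a}{s_u}
```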
Abstract:
We propose a method for the dynamic simulation of a collection of self-propelled particles in a viscous Newtonian fluid. We restrict attention to particles whose size and velocity are small enough that the fluid motion is in the creeping flow regime. We propose a simple model for a self-propelled particle and extend the Stokesian Dynamics method to conduct dynamic simulations of a collection of such particles. In our description, each particle is treated as a sphere with an orientation vector p, whose locomotion is driven by the action of a force dipole Sp of constant magnitude S0 at a point slightly displaced from its centre. To simplify the calculation, we place the dipole at the centre of the particle and introduce a virtual propulsion force Fp to effect propulsion. The magnitude F0 of this force is proportional to S0. The directions of Sp and Fp are determined by p. In isolation, a self-propelled particle moves at a constant velocity u0 p, with the speed u0 determined by S0. When it coexists with many such particles, its hydrodynamic interaction with the other particles alters its velocity and, more importantly, its orientation. As a result, the motion of the particle is chaotic. Our simulations are not restricted to low particle concentrations, as we implement the full hydrodynamic interactions between the particles, but we restrict the motion of the particles to two dimensions to reduce computation. We have studied the statistical properties of a suspension of self-propelled particles for a range of particle concentrations, quantified by the area fraction φa. We find several interesting features in the microstructure and statistics. Particles tend to swim in clusters wherein they are in close proximity; consequently, incorporating the finite size of the particles and the near-field hydrodynamic interactions is essential. There is a continuous process of breakage and formation of the clusters. The distributions of particle velocity at low and high φa are qualitatively different; the distribution is close to normal at high φa, in agreement with experimental measurements. The motion of the particles is diffusive at long times, and the self-diffusivity decreases with increasing φa. The pair correlation function shows a large anisotropic build-up near contact, which decays rapidly with separation. There is also an anisotropic orientation correlation near contact, which decays more slowly with separation. Movies are available with the online version of the paper.
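As a rough orientation to the particle model only, the toy sketch below advances spheres at the free-swimming speed u0 along their orientation vectors p in two dimensions, with a small rotational noise standing in for the orientation changes that, in the paper, arise from hydrodynamic interactions. It is not the Stokesian Dynamics implementation described above, and all numerical values are assumptions.

```python
import numpy as np

# Toy 2D swimmers: constant speed u0 along orientation p, plus rotational
# noise as a crude stand-in for interaction-induced reorientation.  The full
# hydrodynamic (Stokesian Dynamics) coupling of the paper is NOT included.
rng = np.random.default_rng(0)
N, u0, dt, steps, d_rot = 100, 1.0, 0.01, 2000, 0.1

pos = rng.uniform(0.0, 20.0, size=(N, 2))        # particle centres
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)    # orientation angles

for _ in range(steps):
    p = np.column_stack((np.cos(theta), np.sin(theta)))
    pos += u0 * p * dt                                       # self-propulsion
    theta += np.sqrt(2.0 * d_rot * dt) * rng.normal(size=N)  # reorientation

# The long-time behaviour of such swimmers is diffusive, qualitatively
# echoing the diffusive regime reported in the abstract.
print(pos.mean(axis=0))
```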
Abstract:
We have analysed the diurnal cycle of rainfall over the Indian region (10S-35N, 60E-100E) using both satellite and in-situ data, and found many interesting features associated with this fundamental, yet under-explored, mode of variability. Since there is a distinct and strong diurnal mode of variability associated with Indian summer monsoon rainfall, we evaluate the ability of the Weather Research and Forecasting (WRF) Model to simulate the observed diurnal rainfall characteristics. The model (at 54 km grid spacing) is integrated for the month of July 2006, since this period was particularly favourable for the study of the diurnal cycle. We first evaluate the sensitivity of the model to the prescribed sea surface temperature (SST) by using two different SST datasets, namely Final Analyses (FNL) and Real-Time Global (RTG). It was found that with RTG SST the rainfall simulation over central India (CI) was significantly better than that with FNL. On the other hand, over the Bay of Bengal (BoB), rainfall simulated with FNL was marginally better than with RTG. However, the overall performance of RTG SST was found to be better than FNL, and hence it was used for further model simulations. Next, we investigated the role of the convective parameterization scheme in the simulation of the diurnal cycle of rainfall. We found that the Kain-Fritsch (KF) scheme performs significantly better than the Betts-Miller-Janjić (BMJ) and Grell-Devenyi schemes. We also studied the impact of other physical parameterizations, namely microphysics, boundary layer, land surface, and radiation, on the simulation of the diurnal cycle of rainfall, and identified the “best” model configuration. We used this configuration to perform a sensitivity study on the role of various convective components used in the KF scheme; in particular, we studied the role of convective downdrafts, the convective timescale, and the feedback fraction in the simulated diurnal cycle of rainfall. The “best” model simulations, in general, show good agreement with observations. Specifically, (i) over CI, the simulated diurnal rainfall peak is at 1430 IST, in comparison to the observed 1430-1730 IST peak; (ii) over the Western Ghats and the Burmese mountains, the model simulates a diurnal rainfall peak at 1430 IST, as opposed to the observed peak at 1430-1730 IST; (iii) over Sumatra, both model and observations show a diurnal peak at 1730 IST; (iv) the observed southward-propagating diurnal rainfall bands over the BoB are weakly simulated by WRF. Besides the diurnal cycle of rainfall, the mean spatial pattern of total rainfall and its partitioning between the convective and stratiform components are also well simulated. The “best” model configuration was used to conduct two nested simulations with one-way, three-level nesting (54-18-6 km) over CI and the BoB. While the 54 km and 18 km simulations were conducted for the whole of July 2006, the 6 km simulation was carried out for the period 18-24 July 2006. The results of our coarse- and fine-scale numerical simulations of the diurnal cycle of monsoon rainfall will be discussed.
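As a small illustration of the kind of diurnal composite used in such evaluations (a generic analysis step; the file name, column name, and time-zone handling below are assumptions, not details from the study):

```python
import pandas as pd

# Hypothetical hourly rainfall series with a UTC timestamp index.
df = pd.read_csv("rainfall_hourly.csv", parse_dates=["time"], index_col="time")

# The abstract quotes peaks in IST, so convert UTC timestamps to
# Asia/Kolkata before compositing by local hour of day.
local = df.tz_localize("UTC").tz_convert("Asia/Kolkata")
diurnal_cycle = local.groupby(local.index.hour)["rain_mm"].mean()

print("Hour of peak rainfall (IST):", diurnal_cycle.idxmax())
```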
Abstract:
Spatial Decision Support Systems (SDSS) assist in strategic decision-making activities that consider spatial and temporal variables, which helps in regional planning. WEPA is an SDSS designed for the spatial assessment of wind potential. A wind energy system transforms the kinetic energy of the wind into mechanical or electrical energy that can be harnessed for practical use. Wind energy can diversify the economies of rural communities, adding to the tax base and providing new types of income. Wind turbines can add a new source of property value in rural areas that have a hard time attracting new industry. Wind speed is an extremely important parameter for assessing the amount of energy a wind turbine can convert to electricity: the energy content of the wind varies with the cube (the third power) of the average wind speed. Estimation of the wind power potential of a site is the most important requirement for selecting a site for the installation of a wind electric generator and for evaluating projects in economic terms. It is based on wind frequency distribution data for the site, collected from a meteorological mast consisting of a wind anemometer and a wind vane, together with spatial parameters (such as the area available for setting up a wind farm, landscape, etc.). The wind resource is governed by the climatology of the region concerned and has large variability with reference to space (spatial expanse) and time (season) at any fixed location. Hence, wind resource surveys and spatial analysis constitute vital components of programs for exploiting wind energy. The SDSS for assessing the wind potential of a region/location is designed with user-friendly GUIs (Graphical User Interfaces) using VB as the front end and an MS Access database as the back end. Validation and pilot testing of the WEPA SDSS have been done with data collected for 45 locations in Karnataka, based on primary data at selected locations and data collected from the meteorological observatories of the India Meteorological Department (IMD). Wind energy and its characteristics have been analysed for these locations to generate user-friendly reports and spatial maps. The Energy Pattern Factor (EPF) and power densities are computed for sites with hourly wind data. With the knowledge of the EPF and the mean wind speed, the mean power density is computed for locations with only monthly data. Wind energy conversion systems would be most effective in these locations during May to August. The analyses show that coastal and dry arid zones in Karnataka have good wind potential, which, if exploited, would help local industries, coconut and areca plantations, and agriculture. Pre-monsoon availability of wind energy would help in irrigating these orchards, making wind energy a desirable alternative.
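The cubic dependence on wind speed and the use of the energy pattern factor mentioned above can be made concrete with a short sketch. The air density and the example numbers are assumptions for illustration, not figures from the study.

```python
import numpy as np

RHO_AIR = 1.225  # air density in kg/m^3 (standard sea-level value, assumed)

def energy_pattern_factor(v_hourly):
    """EPF = mean of cubed speeds divided by the cube of the mean speed,
    computed from hourly wind-speed records."""
    v = np.asarray(v_hourly, dtype=float)
    return np.mean(v**3) / np.mean(v)**3

def mean_power_density(v_mean, epf):
    """Mean wind power density in W/m^2: 0.5 * rho * EPF * v_mean**3."""
    return 0.5 * RHO_AIR * epf * v_mean**3

# Example: a location with only a monthly mean speed of 5.5 m/s, using an
# EPF of 1.9 taken from a nearby station with hourly data (illustrative).
print(mean_power_density(5.5, 1.9))  # about 194 W/m^2
```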
Abstract:
This paper proposes a Petri net model for a commercial network processor (the Intel IXP architecture), which is a multithreaded multiprocessor architecture. We consider and model three different applications, viz. IPv4 forwarding, network address translation, and IP security, running on the IXP 2400/2850. A salient feature of the Petri net model is its ability to model the application, the architecture, and their interaction in great detail. The model is validated using the Intel proprietary tool (SDK 3.51 for the IXP architecture) over a range of configurations. We conduct a detailed performance evaluation, identify the bottleneck resource, propose a few architectural extensions, and evaluate them in detail.
Abstract:
There is considerable pressure on developed and developing countries to produce low-emission power, and distributed generation (DG) is found to be one of the most viable ways to achieve this. DG generally makes use of renewable energy sources such as wind, micro turbines, photovoltaics, etc., which produce power with minimal greenhouse gas emissions. When installing a DG, it is important to define its size and optimal location so as to minimize network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps to identify new DG location(s) without the need to conduct repeated power flows. To validate this methodology, case studies are carried out on a 20-node, 66 kV system, part of the Karnataka Transco network, and the results are presented.
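For context, one widely used formulation of relative electrical distance derives it from a partition of the bus admittance matrix; the sketch below follows that formulation. The abstract does not spell out the exact expression, so treat this as an assumed, illustrative version rather than the paper's method.

```python
import numpy as np

def relative_electrical_distance(Ybus, source_idx, load_idx):
    """One common RED formulation (assumed here): partition the bus
    admittance matrix into source (generator/DG) and load buses, form
    F_LG = -inv(Y_LL) @ Y_LG, and take D_LG = 1 - |F_LG| element-wise.
    Rows of D_LG index load buses, columns index source buses."""
    Y_LL = Ybus[np.ix_(load_idx, load_idx)]
    Y_LG = Ybus[np.ix_(load_idx, source_idx)]
    F_LG = -np.linalg.solve(Y_LL, Y_LG)
    return 1.0 - np.abs(F_LG)

# Usage sketch: rank candidate DG buses by their relative electrical
# distance to the load buses (indices below are placeholders).
# D = relative_electrical_distance(Ybus, source_idx=[0, 5], load_idx=[1, 2, 3, 4])
```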
Abstract:
In the presence of a synthetic non-Abelian gauge field that produces a Rashba-like spin-orbit interaction, a collection of weakly interacting fermions undergoes a crossover from a Bardeen-Cooper-Schrieffer (BCS) ground state to a Bose-Einstein condensate (BEC) ground state when the strength of the gauge field is increased (Vyasanakere et al 2011 Phys. Rev. B 84 014512). The BEC obtained at large gauge coupling strengths is a condensate of tightly bound bosonic fermion pairs. The properties of these bosons are determined solely by the Rashba gauge field; hence they are called rashbons. In this paper, we conduct a systematic study of the properties of rashbons and their dispersion. This study reveals a new qualitative aspect of the problem of interacting fermions in non-Abelian gauge fields, namely that the rashbon state ceases to exist when the center-of-mass momentum of the fermions exceeds a critical value that is of the order of the gauge coupling strength. The study allows us to estimate the transition temperature of the rashbon BEC and suggests a route to enhance the exponentially small transition temperature of a system with a fixed weak attraction to the order of the Fermi temperature by tuning the strength of the non-Abelian gauge field. The nature of the rashbon dispersion, and in particular the absence of rashbon states at large momenta, suggests a regime in parameter space where the normal state of the system will be a dynamical mixture of uncondensed rashbons and unpaired helical fermions. Such a state should show many novel features, including pseudogap physics.
Abstract:
The influence of geometric parameters, such as blade profile and hub geometry, on axial flow turbines for micro hydro applications remains poorly characterized. This paper first introduces a holistic theoretical model for studying the hydraulic phenomena resulting from geometric modification of the blades. It then describes modifications carried out on two runner stages, of which one has untwisted blades and the other has twisted blades obtained by modifying the inlet hub. The experimental results showed that the performance of the untwisted-blade runner was satisfactory, with a maximum efficiency of 68%. However, the positive effects of twisted blades were clearly evident, with an efficiency rise of more than 2%. This study also looks into the possible limitations of the model and suggests extending the experimental work and using computational tools to conduct a progressive validation of all experimental findings, especially on the flow physics within the hub region and the slip phenomena. The paper finally underlines the importance of developing a standardization philosophy for axial flow turbines specific to micro hydro requirements. DOI:10.1061/(ASCE)EY.1943-7897.0000060. (C) 2012 American Society of Civil Engineers.
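For reference, the efficiency figures quoted above follow the usual definition of shaft power over hydraulic input power; the sketch below states it, with purely illustrative values chosen only to land near the quoted 68% (they are not the experimental conditions of the study).

```python
RHO_WATER = 1000.0   # water density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydraulic_efficiency(shaft_power_w, flow_m3s, head_m):
    """Turbine efficiency as shaft power divided by the hydraulic input
    power rho * g * Q * H."""
    return shaft_power_w / (RHO_WATER * G * flow_m3s * head_m)

# Illustrative values only: 2 kW shaft power at 0.1 m^3/s and 3 m head.
print(round(hydraulic_efficiency(2000.0, 0.1, 3.0), 2))  # -> 0.68
```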
Abstract:
In this paper, we develop a game-theoretic approach for clustering features in a learning problem. Feature clustering can serve as an important preprocessing step in many problems such as feature selection, dimensionality reduction, etc. In this approach, we view features as rational players of a coalitional game in which they form coalitions (or clusters) among themselves in order to maximize their individual payoffs. We show how the Nash Stable Partition (NSP), a well-known concept in coalitional game theory, provides a natural way of clustering features. Through this approach, one can obtain desirable properties of the clusters by choosing appropriate payoff functions. For a small number of features, the NSP-based clustering can be found by solving an integer linear program (ILP). However, for a large number of features, the ILP-based approach does not scale well, and hence we propose a hierarchical approach. Interestingly, a key result that we prove, on the equivalence between a k-size NSP of a coalitional game and a minimum k-cut of an appropriately constructed graph, comes in handy for large-scale problems. In this paper, we use the feature selection problem (in a classification setting) as a running example to illustrate our approach. We conduct experiments to illustrate the efficacy of our approach.
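To illustrate the graph-cut connection in code (this is not the paper's ILP or hierarchical NSP algorithm, and the absolute-correlation similarity is an assumed choice), the sketch below splits features into two clusters with a global minimum cut, the k = 2 special case of the minimum k-cut mentioned above.

```python
import numpy as np
import networkx as nx

def two_way_feature_cut(X):
    """Split the columns (features) of X into two clusters via a global
    minimum cut of a similarity graph -- the k = 2 special case of the
    minimum k-cut construction; |correlation| is an assumed similarity."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    G = nx.Graph()
    d = X.shape[1]
    for i in range(d):
        for j in range(i + 1, d):
            G.add_edge(i, j, weight=float(corr[i, j]))
    _cut_value, (cluster_a, cluster_b) = nx.stoer_wagner(G)
    return cluster_a, cluster_b

# Example with random data (10 samples, 6 features):
rng = np.random.default_rng(1)
print(two_way_feature_cut(rng.normal(size=(10, 6))))
```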
Abstract:
Before installation, a voltage source converter is usually subjected to a heat-run test to verify its thermal design and performance under load. For a heat-run test, the converter needs to be operated at rated voltage and rated current for a substantial length of time. Hence, such tests consume a huge amount of energy in the case of high-power converters. Also, the capacities of the source and loads available in the research and development (R&D) centre or the production facility could be inadequate to conduct such tests. This paper proposes a method to conduct heat-run tests on high-power, pulse-width-modulated (PWM) converters with low energy consumption. The experimental set-up consists of the converter under test and another converter (of similar or higher rating), both connected in parallel on the ac side and open on the dc side. Vector control or synchronous reference frame control is employed to control the converters such that one draws a certain amount of reactive power and the other supplies the same; only the system losses are drawn from the mains. The performance of the controller is validated through simulation and experiments. Experimental results pertaining to heat-run tests on a high-power PWM converter are presented at power levels of 25 kVA to 150 kVA.
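The energy argument above can be illustrated with back-of-the-envelope numbers: with rated reactive power circulated between the two parallel converters, the mains supply only the combined losses. The loss fraction, test duration, and the load-bank comparison are assumptions for illustration, not figures from the paper.

```python
# Back-of-the-envelope comparison (all numbers assumed, not from the paper).
S_RATED_KVA = 150.0      # rating of the converter under test
LOSS_FRACTION = 0.02     # assumed per-converter losses at rated current
TEST_HOURS = 8.0

p_load_bank_kw = S_RATED_KVA                          # dissipating rated power in a resistive load bank
p_circulation_kw = 2 * LOSS_FRACTION * S_RATED_KVA    # only the losses of both converters

print(f"Load-bank heat run:        {p_load_bank_kw * TEST_HOURS:.0f} kWh")
print(f"Reactive-circulation test: {p_circulation_kw * TEST_HOURS:.0f} kWh")
```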
Abstract:
1. The relationship between species richness and ecosystem function, as measured by productivity or biomass, is of long-standing theoretical and practical interest in ecology. This is especially true for forests, which represent a majority of global biomass, productivity and biodiversity. 2. Here, we conduct an analysis of relationships between tree species richness, biomass and productivity in 25 forest plots of 8-50 ha in area from across the world. The data were collected using standardized protocols, obviating the need to correct for methodological differences that plague many studies on this topic. 3. We found that at very small spatial grains (0.04 ha) species richness was generally positively related to productivity and biomass within plots, with a doubling of species richness corresponding to an average 48% increase in productivity and 53% increase in biomass. At larger spatial grains (0.25 ha, 1 ha), results were mixed, with negative relationships becoming more common. The results were qualitatively similar but much weaker when we controlled for stem density: at the 0.04 ha spatial grain, a doubling of species richness corresponded to a 5% increase in productivity and 7% increase in biomass. Productivity and biomass were themselves almost always positively related at all spatial grains. 4. Synthesis. This is the first cross-site study of the effect of tree species richness on forest biomass and productivity that systematically varies spatial grain within a controlled methodology. The scale-dependent results are consistent with theoretical models in which sampling effects and niche complementarity dominate at small scales, while environmental gradients drive patterns at large scales. Our study shows that the relationship of tree species richness with biomass and productivity changes qualitatively when moving from scales typical of forest surveys (0.04 ha) to slightly larger scales (0.25 and 1 ha). This needs to be recognized in forest conservation policy and management.
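The "per doubling" figures above are most naturally read as coming from a log-log (power-law) fit; assuming that convention, the conversion between a fitted slope and a percentage change per doubling of richness is sketched below with the abstract's 48% figure as the example.

```python
import math

def percent_increase_per_doubling(loglog_slope):
    """If productivity ~ richness**b, doubling richness multiplies
    productivity by 2**b, i.e. a (2**b - 1)*100 percent increase."""
    return (2.0 ** loglog_slope - 1.0) * 100.0

def slope_from_percent_increase(pct):
    """Inverse: the log-log slope implied by a given % increase per doubling."""
    return math.log2(1.0 + pct / 100.0)

# A 48% productivity increase per doubling corresponds to a slope of ~0.57:
print(round(slope_from_percent_increase(48.0), 2))    # -> 0.57
print(round(percent_increase_per_doubling(0.57), 1))  # -> ~48.5
```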
Abstract:
The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy-proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact and instead provides a full systems approach for reasoning about its implications.
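One plausible way to compute an equi-performance power reduction from measured power-throughput curves is sketched below; the exact definition used in the study may differ, and all numbers are illustrative.

```python
import numpy as np

def equi_performance_power_reduction(tput_bloated, power_bloated,
                                     tput_lean, power_lean):
    """Power saved at matched performance: interpolate the bloated
    configuration's power at each throughput reached by the de-bloated
    run, then subtract the de-bloated power at that same throughput."""
    p_bloated_at_lean = np.interp(tput_lean, tput_bloated, power_bloated)
    return p_bloated_at_lean - power_lean

# Illustrative power-throughput measurements (arbitrary units):
tput_b = np.array([10, 20, 30, 40]); power_b = np.array([120, 160, 210, 270])
tput_l = np.array([10, 20, 30, 40]); power_l = np.array([110, 140, 175, 220])
print(equi_performance_power_reduction(tput_b, power_b, tput_l, power_l))
```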
Abstract:
In this work, the possibility of simulating biological organs in real time using the Boundary Element Method (BEM) is investigated. Biological organs are assumed to follow linear elastostatic material behavior, and the constant boundary element is the element type used. First, a Graphics Processing Unit (GPU) is used to speed up the BEM computations to achieve real-time performance. Next, instead of the GPU, a computer cluster is used. Results indicate that BEM is fast enough to provide real-time graphics if biological organs are assumed to follow linear elastostatic material behavior. Although the present work does not conduct any simulation using nonlinear material models, the results obtained with the linear elastostatic material model imply that it would be difficult to obtain real-time performance if highly nonlinear material models that properly characterize biological organs were used. Although the use of BEM for the simulation of biological organs is not new, the results presented in the present study are not found elsewhere in the literature.
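The computational core that a GPU or cluster accelerates in a constant-element, linear elastostatic BEM is the solution of a dense linear system; a minimal sketch of that step is given below. Matrix assembly, the elastostatic kernels, and mixed boundary conditions are omitted, and the array names and sizes are assumptions.

```python
import numpy as np
# import cupy as cp   # replacing np with cp moves the dense solve to a GPU
#                     # (cupy.linalg.solve has the same signature)

def solve_bem_system(H, G, t_known):
    """Core linear-algebra step of a constant-element BEM solve (sketch):
    given assembled dense influence matrices H and G and known boundary
    tractions t_known, recover boundary displacements u from
    H @ u = G @ t_known."""
    return np.linalg.solve(H, G @ t_known)

# Illustrative well-conditioned system of the size a few thousand constant
# elements would produce (values are synthetic).
rng = np.random.default_rng(0)
n = 2000
H = np.eye(n) + 0.01 * rng.normal(size=(n, n))
G = 0.01 * rng.normal(size=(n, n))
print(solve_bem_system(H, G, np.ones(n)).shape)  # (2000,)
```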
Abstract:
We investigate the impact of the law governing nucleation on Al-Ti-B inoculant particles, of the motion of the inoculant particles, and of the motion of grains on the predicted macrosegregation and microstructure in a grain-refined Al-22 wt.% Cu alloy casting. We conduct the study by numerical simulations of a casting experiment in a side-cooled 76×76×254 mm sand mould. Macrosegregation and microstructure formation are studied with a volume-averaged two-phase model accounting for macroscopic heat and solute transport, melt convection, and the transport of inoculant particles and equiaxed grains. On the microscopic scale, the model accounts for nucleation on inoculant particles with a given size distribution (and the corresponding activation undercooling distribution) and for the growth of globular solid grains. The growth kinetics is described by accounting for limited solute diffusion in both the liquid and solid phases and for convective effects. We show that considering a size distribution of the inoculants has a strong impact on the predicted microstructure (final grain size). The transport of inoculants significantly increases the microstructural heterogeneities, while grain motion refines the microstructure and reduces these heterogeneities.
Abstract:
With the progress in modern technological research, novel biomaterials are being developed on a large scale for various biomedical applications. Over the past two decades, most of the research has focused on the development of a new generation of bioceramics as substitutes for hard tissue replacement. Depending on their application at different anatomical locations in a patient, newly developed bioceramic materials can potentially induce toxic or harmful effects in the host tissues. Therefore, prior to clinical testing, relevant biochemical screening assays are to be performed at the cellular and molecular level to address the issues of biocompatibility and the long-term performance of the implants. Along with strategies for testing bulk material toxicity, a detailed evaluation should also be conducted to determine the toxicity of the wear products of the potential bioceramics. This is important because bioceramics are intended to be implanted in patients with longer life expectancies, and the material will eventually release fine (mostly nanosized) debris particles due to continuous wear at articulating surfaces in the hostile, corrosive environment of the human body. The wear particulates generated from an otherwise biocompatible bioceramic may act in a different way, inducing early or late aseptic loosening at the implant site and resulting in osteolysis and inflammation. Hence, a study of the chronic effects of the wear particulates, in terms of local and systemic toxicity, becomes a major criterion in the toxicity evaluation of implantable bioceramics. In this broad perspective, this article summarizes some of the currently used techniques and knowledge for assessing the in vitro and in vivo cytotoxicity and genotoxicity of bioceramic implant materials. It also addresses the need to conduct a broad evaluation before claiming the biocompatibility and clinical feasibility of any new biomaterial. This review also discusses some case studies based on the experimental designs that are currently followed and their importance in the context of clinical applications.