983 results for Location efficiency
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e., transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated either through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. The theoretical analysis of the compensation considers encoder error sources such as random electrical noise and error in the estimated reference velocity. The proposed learning compensation techniques are first validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, the efficiency of the algorithm decreases with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculations. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled.
While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
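The pseudoinverse-based learning stage can be sketched as a small least-squares problem. The sketch below is a simplified illustration (constant-velocity assumption, circular difference model relating interval deviations to slit errors), not the paper's exact formulation:

```python
import numpy as np

def estimate_slit_errors(interval_deviations):
    """Minimum-norm least-squares estimate of per-slit position errors.

    Illustrative model: at constant shaft velocity, the deviation of
    interval k is proportional to e[(k+1) % n] - e[k], where e holds the
    unknown slit errors.  The circular difference matrix is rank-deficient
    (a common offset of all slits is unobservable), so the pseudoinverse
    returns the unique mean-zero solution.
    """
    d = np.asarray(interval_deviations, dtype=float)
    n = d.size
    D = np.eye(n, k=1) - np.eye(n)   # D[k, k+1] = +1, D[k, k] = -1
    D[n - 1, 0] = 1.0                # wrap-around interval (slit n-1 -> slit 0)
    return np.linalg.pinv(D) @ d

# Synthetic check: recover known (mean-zero) slit errors from interval data.
rng = np.random.default_rng(0)
e_true = rng.normal(scale=1e-3, size=360)
e_true -= e_true.mean()              # remove the unobservable common offset
d = np.roll(e_true, -1) - e_true     # noise-free interval deviations
e_hat = estimate_slit_errors(d)
print(np.allclose(e_hat, e_true, atol=1e-9))  # True
```

In practice the deviations would come from CSDT interval times and carry measurement noise, which is exactly why the paper reports reduced (rather than perfect) slit errors.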
Abstract:
In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations closely analogous to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing. The scheme sets the permittivity coefficient to a higher value in regions of the network where nodes have high residual energy and to a low value in regions where the nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend the approach to the case of multiple destinations.
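The electrostatics analogy can be illustrated with a minimal discrete sketch: a sensor as a positive "information charge", the destination as a negative charge, and a Poisson solve for the potential whose negative gradient defines the routing field. Uniform permittivity, the grid size, and the Jacobi solver are simplifying assumptions, not the paper's formulation:

```python
import numpy as np

def solve_potential(rho, iters=5000):
    """Jacobi iteration for the discrete Poisson equation lap(phi) = -rho,
    with phi = 0 on the boundary and uniform permittivity."""
    phi = np.zeros_like(rho)
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                  + phi[1:-1, :-2] + phi[1:-1, 2:]
                                  + rho[1:-1, 1:-1])
    return phi

n = 41
rho = np.zeros((n, n))
rho[10, 10] = 1.0    # a sensor: positive "information charge"
rho[30, 30] = -1.0   # the destination: negative charge (sink)
phi = solve_potential(rho)

# By the maximum principle the potential peaks at the source and bottoms
# out at the sink, so routing along -grad(phi) leads sensor traffic to
# the destination.
print(tuple(map(int, np.unravel_index(phi.argmax(), phi.shape))))  # (10, 10)
print(tuple(map(int, np.unravel_index(phi.argmin(), phi.shape))))  # (30, 30)
```

A spatially varying permittivity would enter as position-dependent weights on the neighbor averages, which is how the energy-aware routing scheme biases routes away from energy-poor regions.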
In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The optimization problem in this case is to define the regions of attraction and to decide how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve this problem. We define a vector field that is conservative and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the network's communication load to the destinations, the value of that potential field is equal at the locations of all the destinations. Another application of the vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values.
The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous testing by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce responsiveness tests for aggregates and call the resulting scheme the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to obtain methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find the signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods.
As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
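The orthogonal-signature idea can be sketched in a few lines: each router perturbs with a row of a Hadamard (Walsh) matrix, and correlating the superposed response with a router's own signature recovers that router's measurement despite simultaneous tests. The signature length, responsiveness values, and noise level below are hypothetical:

```python
import numpy as np

def walsh(n):
    """n x n Hadamard matrix (n a power of two); its rows are mutually
    orthogonal +/-1 signatures."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh(8)
sig_a, sig_b = H[1], H[2]        # signatures assigned to two testing routers

# Hypothetical responsiveness of the aggregate to each router's perturbation.
true_a, true_b = 3.0, 0.5
rng = np.random.default_rng(1)
observed = true_a * sig_a + true_b * sig_b + rng.normal(scale=0.1, size=8)

# Matched-filter (correlation) decoding: each router reads out its own
# test, the other router's contribution cancels by orthogonality.
est_a = float(observed @ sig_a / 8)
est_b = float(observed @ sig_b / 8)
print(f"router A: {est_a:.2f}, router B: {est_b:.2f}")
```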
Abstract:
Agency problems within the firm are a significant hindrance to efficiency. We propose trust between coworkers as a superior alternative to the standard tools used to mitigate agency problems: increased monitoring and incentive-based pay. We model trust as mutual, reciprocal altruism between pairs of coworkers and show how it induces employees to work harder, relative to those at firms that use the standard tools. In addition, we show that employees at trusting firms have higher job satisfaction, and that these firms enjoy lower labor cost and higher profits. We conclude by discussing how trust may also be easier to use within the firm than the standard agency-mitigation tools. © 2002 Elsevier Science B.V. All rights reserved.
Abstract:
PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike the case with flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. The current work describes a feasible solution for generating plate-specific gain maps. METHODS: Ten high-exposure open-field images were taken with an RQA5 spectrum, using a sixth-generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates were registered using fiducial dots on the plate, the ten images were averaged, and the average was high-pass filtered to remove low-frequency contributions from field inhomogeneity. A gain map was then produced by converting all pixel values in the average into fractions with a mean of one. The resulting gain map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure level, beam voltage/spectrum, CR reader used, and registration were investigated. RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the noncorrected case over the range of frequencies from 0.15 to 2.5 mm⁻¹. At high exposure (40 mR), NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain map with subsequent images using small fiducial dots, because of slight misregistration during scanning.
Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra. CONCLUSIONS: This study demonstrates that a simple gain-map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code and the gain-map for all plates associated with a reader could be stored for future retrieval. These experiments indicated that an improvement in NPS (and hence, DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
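The average/high-pass/normalize pipeline can be sketched with a toy flat-field model. The box-filter low-pass, image sizes, and noise levels below are illustrative assumptions; the study's actual filtering and registration are more involved:

```python
import numpy as np

def make_gain_map(flat_fields, box=15):
    """Average registered flat-field exposures, remove smooth beam
    inhomogeneity with a crude moving-average low-pass (FFT convolution),
    and normalize so the map has a mean of one."""
    avg = np.mean(flat_fields, axis=0)
    k = np.zeros_like(avg)
    k[:box, :box] = 1.0 / box**2
    low = np.real(np.fft.ifft2(np.fft.fft2(avg) * np.fft.fft2(k)))
    gain = avg / low                 # high-pass: keep pixel-scale structure
    return gain / gain.mean()        # fractions with a mean of one

def correct(image, gain_map):
    return image / gain_map          # divide out fixed-pattern gain

# Toy validation: a plate with 5% pixel-to-pixel gain fluctuation under a
# uniform exposure (quantum noise omitted for clarity); correction removes
# most of the fixed-pattern noise.
rng = np.random.default_rng(2)
gain_true = 1.0 + 0.05 * rng.standard_normal((64, 64))
flats = [gain_true * 100.0 for _ in range(10)]   # ten registered open fields
g = make_gain_map(flats)
img = gain_true * 100.0                           # a subsequent single image
corrected = correct(img, g)
print(corrected.std() < 0.2 * img.std())  # True: fixed-pattern noise suppressed
```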
Abstract:
The design of the New York City (NYC) high school match involved trade-offs among efficiency, stability, and strategy-proofness that raise new theoretical questions. We analyze a model with indifferences (ties) in school preferences. Simulations with field data and the theory favor breaking indifferences the same way at every school (single tiebreaking) in a student-proposing deferred acceptance mechanism. Any inefficiency associated with a realized tiebreaking cannot be removed without harming student incentives. Finally, we empirically document the extent of potential efficiency loss associated with strategy-proofness and stability, and direct attention to some open questions. (JEL C78, D82, I21).
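Student-proposing deferred acceptance with single tiebreaking can be sketched compactly: one lottery order breaks ties identically at every school. The instance below is toy data, not NYC field data:

```python
def deferred_acceptance(student_prefs, school_ranks, capacities, lottery):
    """Student-proposing deferred acceptance; coarse school rankings
    (lower is better, ties allowed) are broken by a single lottery order
    shared by every school."""
    strict = {s: {st: (r[st], lottery.index(st)) for st in r}
              for s, r in school_ranks.items()}
    next_choice = {st: 0 for st in student_prefs}
    held = {s: [] for s in school_ranks}
    free = list(student_prefs)
    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                          # list exhausted: stays unmatched
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: strict[school][x])
        if len(held[school]) > capacities[school]:
            free.append(held[school].pop())   # reject the lowest-priority student
    return {s: sorted(sts) for s, sts in held.items()}

# Toy instance: three students, two one-seat schools, all priorities tied,
# so the single lottery order decides every rejection chain.
prefs = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X", "Y"]}
ranks = {"X": {"a": 1, "b": 1, "c": 1}, "Y": {"a": 1, "b": 1, "c": 1}}
match = deferred_acceptance(prefs, ranks, {"X": 1, "Y": 1},
                            lottery=["b", "c", "a"])
print(match)  # {'X': ['b'], 'Y': ['c']}
```

With ties broken the same way everywhere, student "a" (last in the lottery) is rejected by both schools, illustrating how a realized tiebreak pins down the outcome.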
Abstract:
It has long been recognized that whistler-mode waves can be trapped in plasmaspheric whistler ducts which guide the waves. In nonguided cases these waves are said to be "nonducted", and nonducted propagation is dominant for L < 1.6. Wave-particle interactions are affected by whether the wave is ducted or nonducted. In the field-aligned ducted case, first-order cyclotron resonance is dominant, whereas nonducted interactions open up a much wider range of energies through equatorial and off-equatorial resonance. There is conflicting information as to whether the most significant particle loss processes are driven by ducted or nonducted waves. In this study we use loss cone observations from the DEMETER and POES low-altitude satellites to focus on electron losses driven by powerful VLF communications transmitters. Both satellites confirm that there are well-defined enhancements in the flux of electrons in the drift loss cone due to ducted transmissions from the powerful transmitter with call sign NWC. Typically, ∼80% of DEMETER nighttime orbits to the east of NWC show electron flux enhancements in the drift loss cone, spanning an L range consistent with first-order cyclotron theory and inconsistent with nonducted resonances. In contrast, ∼1% or less of orbits show electron flux enhancements generated by the nonducted transmissions from NPM. While the waves originating from these two transmitters have been predicted to lead to similar levels of pitch angle scattering, we find that the enhancements from NPM are at least 50 times smaller than those from NWC. This suggests that lower-latitude, nonducted VLF waves are much less effective in driving radiation belt pitch angle scattering. Copyright 2010 by the American Geophysical Union.
Abstract:
BACKGROUND AND PURPOSE: Previous studies have demonstrated that treatment strategy plays a critical role in ensuring maximum stone fragmentation during shockwave lithotripsy (SWL). We aimed to develop an optimal treatment strategy in SWL to produce maximum stone fragmentation. MATERIALS AND METHODS: Four treatment strategies were evaluated using an in-vitro experimental setup that mimics stone fragmentation in the renal pelvis. Spherical stone phantoms were exposed to 2100 shocks using the Siemens Modularis (electromagnetic) lithotripter. The treatment strategies included increasing output voltage (100 shocks at 12.3 kV, then 400 shocks at 14.8 kV, then 1600 shocks at 15.8 kV) and decreasing output voltage (1600 shocks at 15.8 kV, then 400 shocks at 14.8 kV, then 100 shocks at 12.3 kV). Both the increasing- and decreasing-voltage strategies were run at pulse repetition frequencies (PRF) of 1 and 2 Hz. Fragmentation efficiency was determined using a sequential sieving method to isolate fragments less than 2 mm. A fiberoptic probe hydrophone was used to characterize the pressure waveforms at different output voltage and frequency settings. In addition, a high-speed camera was used to assess cavitation activity in the lithotripter field produced by the different treatment strategies. RESULTS: The increasing output voltage strategy at 1 Hz PRF produced the best stone fragmentation efficiency. This result was significantly better than the decreasing-voltage strategy at 1 Hz PRF (85.8% vs 80.8%, P=0.017) and than the same strategy at 2 Hz PRF (85.8% vs 79.59%, P=0.0078). CONCLUSIONS: A pretreatment dose of 100 low-voltage output shockwaves (SWs) at 60 SWs/min before increasing to a higher voltage output produces the best overall stone fragmentation in vitro. These findings could lead to increased fragmentation efficiency in vivo and higher success rates clinically.
Abstract:
We analyze technology adoption decisions of manufacturing plants in response to government-sponsored energy audits. Overall, plants adopt about half of the recommended energy-efficiency projects. Using fixed effects logit estimation, we find that adoption rates are higher for projects with shorter paybacks, lower costs, greater annual savings, higher energy prices, and greater energy conservation. Plants are 40% more responsive to initial costs than annual savings, suggesting that subsidies may be more effective at promoting energy-efficient technologies than energy price increases. Adoption decisions imply hurdle rates of 50-100%, which is consistent with the investment criteria small and medium-size firms state they use. © 2003 Elsevier B.V. All rights reserved.
Abstract:
Gemstone Team Vision
Abstract:
We analyze the cost-effectiveness of electric utility ratepayer-funded programs to promote demand-side management (DSM) and energy efficiency (EE) investments. We specify a model that relates electricity demand to previous EE DSM spending, energy prices, income, weather, and other demand factors. In contrast to previous studies, we allow EE DSM spending to have a potential long-term demand effect and explicitly address possible endogeneity in spending. We find that current-period EE DSM expenditures reduce electricity demand and that this effect persists for a number of years. Our findings suggest that ratepayer-funded DSM expenditures between 1992 and 2006 produced a central estimate of 0.9 percent savings in electricity consumption over that time period and a 1.8 percent savings over all years. These energy savings came at an expected average cost to utilities of roughly 5 cents per kWh saved when future savings are discounted at a 5 percent rate. Copyright © 2012 by the IAEE. All rights reserved.
Abstract:
Gemstone Team HOPE (Hospital Optimal Productivity Enterprise)
Abstract:
We have isolated and sequenced a cDNA encoding the human beta 2-adrenergic receptor. The deduced amino acid sequence (413 residues) is that of a protein containing seven clusters of hydrophobic amino acids suggestive of membrane-spanning domains. While the protein is 87% identical overall with the previously cloned hamster beta 2-adrenergic receptor, the most highly conserved regions are the putative transmembrane helices (95% identical) and cytoplasmic loops (93% identical), suggesting that these regions of the molecule harbor important functional domains. Several of the transmembrane helices also share lesser degrees of identity with comparable regions of select members of the opsin family of visual pigments. We have localized the gene for the beta 2-adrenergic receptor to q31-q32 on chromosome 5. This is the same position recently determined for the gene encoding the receptor for platelet-derived growth factor and is adjacent to that for the FMS protooncogene, which encodes the receptor for the macrophage colony-stimulating factor.
Abstract:
Gemstone Team Grenergy
Abstract:
The growth and proliferation of invasive bacteria in engineered systems is an ongoing problem. While there are a variety of physical and chemical processes to remove and inactivate bacterial pathogens, there are many situations in which these tools are no longer effective or appropriate for the treatment of a microbial target. For example, certain strains of bacteria are becoming resistant to commonly used disinfectants, such as chlorine and UV. Additionally, the overuse of antibiotics has contributed to the spread of antibiotic resistance, and there is concern that wastewater treatment processes are contributing to the spread of antibiotic resistant bacteria.
Due to the continually evolving nature of bacteria, it is difficult to develop methods for universal bacterial control in a wide range of engineered systems, as many of our treatment processes are static in nature. Still, invasive bacteria are present in many natural and engineered systems where the application of broad-acting disinfectants is impractical, because their use may inhibit the original desired bioprocesses. Therefore, to better control the growth of treatment-resistant bacteria and to address limitations of the current disinfection processes, novel tools that are both specific and adaptable need to be developed and characterized.
In this dissertation, two possible biological disinfection processes were investigated for use in controlling invasive bacteria in engineered systems. First, antisense gene silencing, which is the specific use of oligonucleotides to silence gene expression, was investigated. This work was followed by the investigation of bacteriophages (phages), which are viruses that are specific to bacteria, in engineered systems.
For the antisense gene silencing work, a computational approach was used to quantify the number of off-targets and to determine the effects of off-targets in prokaryotic organisms. For the organisms of
Regarding the work with phages, the disinfection rates of bacteria in the presence of phages was determined. The disinfection rates of
In addition to determining disinfection rates, the long-term bacterial growth inhibition potential was determined for a variety of phages with both Gram-negative and Gram-positive bacteria. It was determined that, on average, phages can be used to inhibit bacterial growth for up to 24 h, and that this effect was concentration dependent for various phages at specific time points. Additionally, it was found that a phage cocktail was no more effective at inhibiting bacterial growth over the long term than the best-performing phage in isolation.
Finally, for an industrial application, the use of phages to inhibit invasive
In conclusion, this dissertation improved current methods for designing antisense gene silencing targets for prokaryotic organisms and characterized phages from an engineering perspective. First, the current design strategy for antisense targets in prokaryotic organisms was improved through the development of an algorithm that minimizes the number of off-targets. For the phage work, a framework was developed to predict disinfection rates in terms of the initial phage and bacterial concentrations. In addition, the long-term bacterial growth inhibition potential of multiple phages was determined for several bacteria. Regarding the phage application, phages were shown to protect both final product yields and yeast concentrations during fermentation. Taken together, this work suggests that the rational design of phage treatment is possible, and further work is needed to expand on this foundation.
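A concentration-dependent disinfection-rate prediction can be sketched with a simple mass-action model. This is an illustrative stand-in, not the dissertation's fitted framework, and the rate constant and time below are hypothetical values:

```python
import math

def log_inactivation(P0, k=1e-10, t=3600.0):
    """log10 reduction of bacteria after t seconds at initial phage titer
    P0 (PFU/mL), assuming pseudo-first-order kinetics dN/dt = -k * P0 * N.
    k (mL/(PFU*s)) and t are hypothetical, chosen only for illustration."""
    return k * P0 * t / math.log(10)

# Higher initial phage concentration -> faster predicted disinfection.
for P0 in (1e6, 1e7, 1e8):
    print(f"P0 = {P0:.0e} PFU/mL -> {log_inactivation(P0):.2f} log10 reduction")
```

The key qualitative behavior matches the text: predicted inactivation scales with the initial phage concentration, which is why the observed growth inhibition was concentration dependent.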
Abstract:
We conduct the first empirical investigation of common-pool resource users' dynamic and strategic behavior at the micro level using real-world data. Fishermen's strategies in a fully dynamic game account for latent resource dynamics and other players' actions, revealing the profit structure of the fishery. We compare the fishermen's actual and socially optimal exploitation paths under a time-specific vessel allocation policy and find a sizable dynamic externality. Individual fishermen respond to other users by exerting effort above the optimal level early in the season. Congestion is costly instantaneously but is beneficial in the long run because it partially offsets dynamic inefficiencies.