974 results for Operational Transconductance Amplifiers (OTAs)
Abstract:
In modern society, energy consumption and respect for the environment have become essential aspects of urban planning. The rising demand for alternative energy sources, coupled with the decline in construction activity and material usage, suggests that a modern conception of the city, one that emphasizes reduced energy consumption, savings, waste recycling and respect for the surrounding environment, is being put into practice. Examining the development of the city over recent centuries, through the theories of the most famous and influential urban planners, makes it possible to identify the major problems caused by earlier planning approaches. For this reason, recent urban planning uses systems of indicators that evaluate and certify land environmentally and energetically, guiding the master plan toward a more efficient city model. The indicators are also targeted at key factors determined by the commissioner, or at the opportunities the territory itself provides. Due to the complexity of environmental mechanics, the process of design and urban planning has become a challenging issue. The introduction of the indicator system has made it possible to record the life of the process, following a spiral route that allows the design itself to be refined. The aim of this study, built around the creation of a system of urban sustainability indicators for evaluating highly eco-friendly cities, is to develop a certification system for cities or portions of them. The system will be upgradeable and objective, will employ real data, and will address energy production and consumption.
Abstract:
Modern fully integrated receiver architectures require inductorless circuits to achieve their potential low area, low cost, and low power. The low noise amplifier (LNA), which is a key block in such receivers, is investigated in this thesis. LNAs can be either narrowband or wideband. Narrowband LNAs use inductors and have a very low noise figure, but they occupy a large area and require a technology with RF options to obtain inductors with high Q. Recently, wideband LNAs with noise and distortion cancelling and passive loads have been proposed; these can have low NF, but have high power consumption. The main goal of this thesis is to obtain a very low area, low power, and low cost wideband LNA. First, a balun LNA with noise and distortion cancelling is investigated, using active loads to boost the gain and reduce the noise figure (NF). The circuit is based on a conventional balun LNA with noise and distortion cancellation, using the combination of a common-gate (CG) stage and a common-source (CS) stage. Simulation and measurement results, in a 130 nm CMOS technology, show that the gain is enhanced by about 3 dB and the NF is reduced by at least 0.5 dB, with a negligible impact on circuit linearity (IIP3 is about 0 dBm). The total power dissipation is only 4.8 mW, and the active area is less than 50 × 50 μm². A balun LNA in which the gain is boosted by a double feedback structure is also investigated. We propose to replace the load resistors with active loads, which can be used to implement local feedback loops (in the CG and CS stages). This boosts the gain and reduces the noise figure (NF). Simulation results, with the same 130 nm CMOS technology as above, show that the gain is 24 dB and the NF is less than 2.7 dB. The total power dissipation is only 5.4 mW (since no extra blocks are required), leading to a figure-of-merit (FoM) of 3.8 mW
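The dB-valued figures quoted above can be translated into linear quantities with the standard conversions: noise figure NF (dB) gives the noise factor F = 10^(NF/10), a gain in dB gives the linear voltage gain 10^(G/20), and a power in dBm gives milliwatts via 10^(P/10). A minimal sketch, using the abstract's reported values as inputs; this is generic unit conversion, not the thesis's specific FoM definition:

```python
def db_to_noise_factor(nf_db: float) -> float:
    # Noise figure (dB) -> noise factor F (linear power ratio)
    return 10 ** (nf_db / 10)

def db_to_voltage_gain(gain_db: float) -> float:
    # Gain in dB -> linear voltage gain
    return 10 ** (gain_db / 20)

def dbm_to_mw(p_dbm: float) -> float:
    # Power in dBm -> power in milliwatts
    return 10 ** (p_dbm / 10)

# Reported figures for the double-feedback balun LNA
F = db_to_noise_factor(2.7)      # noise factor for NF = 2.7 dB (~1.86)
Av = db_to_voltage_gain(24.0)    # linear voltage gain for 24 dB (~15.85)
iip3_mw = dbm_to_mw(0.0)         # IIP3 of 0 dBm = 1 mW
```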
Abstract:
Master's dissertation in Bioengineering
The appraisal of anaerobic digestion in Ireland to develop improved designs and operational practice
Abstract:
Mesophilic anaerobic digestion treating sewage sludge was investigated at five full-scale sewage treatment plants in Ireland. The anaerobic digestion plants are compared and evaluated in terms of design, equipment, operation, monitoring and management. All digesters are cylindrical, gas-mixed and heated continuously stirred tank reactors (CSTRs), varying in size from 130 m³ to 800 m³. Heat exchanger systems heat all the digesters. Three plants reported difficulties with the heating systems, ranging from blockages to insufficient insulation and design. Exchangers were modified and replaced within one year of operation at two plants. All but one plant had Combined Heat and Power (CHP) systems installed. Parameter monitoring is a problem at all plants, mainly due to a lack of staff and knowledge. The plant operators consider pH and temperature the most important parameters for successful monitoring of an anaerobic digester; the speed and ease with which pH and temperature can be measured may favour these parameters. Three laboratory-scale pilot anaerobic digesters were operated using a variety of feeds over a 144-day period. Two of the pilots were unmixed and the third was mechanically mixed. As expected, the unmixed reactors removed more COD by retaining solids in the digesters, but they also produced greater quantities of biogas than the mixed digester, especially when a low-solids feed such as whey was used. The mixed digester broke down more solids due to the superior contact between the substrate and the biomass. All three reactors performed well on whey and sewage solids. Scum formation caused operational problems in both mixed and unmixed reactors when cattle slurry was used as the main feed source. The pilot test was also used to investigate which parameters were the best indicators of process instability.
These trials clearly indicated that total volatile fatty acid (VFA) concentration was the best parameter for showing signs of early process imbalance, while methane composition in the biogas was a good indicator of possible nutrient deficiencies in the feed and of oxygen shocks. pH was found to be a good process parameter only if the wastewater being treated produced low bicarbonate alkalinity during treatment.
Abstract:
The World Health Organization (WHO) criteria for the diagnosis of osteoporosis are mainly applicable to dual X-ray absorptiometry (DXA) measurements at the spine and hip. There is a growing demand for cheaper devices free of ionizing radiation, such as the promising quantitative ultrasound (QUS). In common with many other countries, QUS measurements are increasingly used in Switzerland without adequate clinical guidelines. The T-score approach developed for DXA cannot be applied to QUS, although well-conducted prospective studies have shown that ultrasound can be a valuable predictor of fracture risk. As a consequence, an expert committee named the Swiss Quality Assurance Project (SQAP, whose main mission is the establishment of quality assurance procedures for DXA and QUS in Switzerland) was mandated by the Swiss Association Against Osteoporosis (ASCO) in 2000 to propose operational clinical recommendations for the use of QUS in the management of osteoporosis for two QUS devices sold in Switzerland. Device-specific weighted "T-scores", based on the risk of osteoporotic hip fracture as well as on the prediction of DXA osteoporosis at the hip according to the WHO definition, were calculated for the Achilles (Lunar, General Electric, Madison, Wis.) and Sahara (Hologic, Waltham, Mass.) ultrasound devices. Several studies (totaling a few thousand subjects) were used to calculate age-adjusted odds ratios (OR) and areas under the receiver operating characteristic curve (AUC) for the prediction of osteoporotic fracture (taking into account a weighting score depending on the design of each study involved in the calculation). The ORs were 2.4 (1.9-3.2) and the AUC 0.72 (0.66-0.77), respectively, for the Achilles, and 2.3 (1.7-3.1) and 0.75 (0.68-0.82), respectively, for the Sahara device.
To translate risk estimates into thresholds for clinical application, 90% sensitivity was used to define low fracture and low osteoporosis risk, and 80% specificity was used to define subjects at high risk of fracture or of having osteoporosis at the hip. Combining the fracture model with the hip DXA osteoporosis model, we found T-score thresholds of -1.2 and -2.5 for the stiffness index (Achilles), identifying the low- and high-risk subjects, respectively. Similarly, we found T-score thresholds of -1.0 and -2.2 for the QUI index (Sahara). A screening strategy combining QUS, DXA, and clinical factors for the identification of women needing treatment was then proposed. The application of this approach will help to minimize the inappropriate use of QUS from which the whole field currently suffers.
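The triage logic implied by these device-specific thresholds can be sketched as follows. This is a schematic illustration only; the function name and the "intermediate" label are ours, and the actual clinical pathway described above also incorporates clinical risk factors and DXA confirmation:

```python
# Device-specific QUS T-score thresholds reported above:
# at or above the upper threshold -> low risk;
# at or below the lower threshold -> high risk;
# in between -> refer for a DXA measurement.
THRESHOLDS = {
    "achilles": (-1.2, -2.5),  # stiffness index (Lunar Achilles)
    "sahara": (-1.0, -2.2),    # QUI index (Hologic Sahara)
}

def qus_triage(t_score: float, device: str) -> str:
    upper, lower = THRESHOLDS[device]
    if t_score >= upper:
        return "low risk"
    if t_score <= lower:
        return "high risk"
    return "intermediate: refer to DXA"
```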
Abstract:
Executive Summary
Abstract:
The study assessed the operational feasibility and acceptability of insecticide-treated mosquito nets (ITNs) in one Primary Health Centre (PHC) in a falciparum malaria endemic district in the state of Orissa, India, where 74% of the people are tribal and where DDT indoor residual spraying had been withdrawn and ITNs introduced by the National Vector Borne Disease Control Programme. To a population of 63,920, 24,442 ITNs were distributed free of charge through 101 treatment centers during July-August 2002. Interviews of 1,130, 1,012 and 126 respondents showed that net use rates were 80%, 74% and 55% in the cold, rainy and summer seasons, respectively. Since using ITNs, 74.5-76.6% of the respondents had observed a reduction in mosquito bites and 7.2-32.1% a reduction in malaria incidence; 37% expressed willingness to buy ITNs if the cost were lower and affordable. Up to ten months post-treatment, almost 100% mortality of vector mosquitoes was recorded on unwashed nets and on nets washed once or twice. Health workers re-treated the nets at the treatment centers eight months after distribution on a cost-recovery basis. The coverage reported by the PHC was only 4.2%, mainly because of people's unwillingness to pay for re-treatment and to travel to the treatment centers from their villages. When re-treatment was continued in the villages, involving personnel from several departments, coverage improved to about 90%. Interviews of 126 respondents showed that, among those who had their nets re-treated, 81.4% paid cash for the re-treatment and the remainder were reluctant to pay. The majority of those who paid said they did so for fear of losing benefits from other government welfare schemes. The second re-treatment was therefore carried out free of charge nine months after the first, and thus achieved coverage of 70.4%. The study showed community acceptance of ITNs, as people perceived their benefit.
Distribution and re-treatment of nets were thus possible through the PHC system, if done free of charge and with the involvement of personnel from different departments, especially those at village level.
Abstract:
ABSTRACT This paper provides evidence on the market reaction to corporate investment decisions whose shareholder value is largely attributable to growth options. The exploratory research identified pre-operational companies and their operational peers in the same economic segments. Its purpose was to investigate whether financial indicators that reflect installed assets and growth assets differ statistically between the two groups, and then to study the market reaction to changes in fixed assets as a signal about investment decisions. In pre-operational companies, the formation of operational assets and shareholder value depends almost exclusively on asset growth. Accordingly, the differentiation tests confirmed that the value of pre-operational companies derived especially from growth options. The market reaction was particularly strong for pre-operational companies, which showed abnormal negative stock returns, while operational companies showed positive returns; this may indicate that the quality of the investment is judged on the basis of the financial disclosure. Additionally, investors in operational companies await the disclosure before adjusting their prices. We conclude that the results are consistent with the empirical evidence and that financial market participants should pay special attention to long-term capital formation investments.
Abstract:
Theory predicts that males adapt to sperm competition by increasing their investment in testis mass to transfer larger ejaculates. Experimental and comparative data support this prediction. Nevertheless, the relative importance of sperm competition in testis size evolution remains elusive, because experiments vary only sperm competition whereas comparative approaches confound it with other variables, in particular male mating rate. We addressed the relative importance of sperm competition and male mating rate by taking an experimental evolution approach. We subjected populations of Drosophila melanogaster to sex ratios of 1:1, 4:1, and 10:1 (female:male). Female bias decreased sperm competition but increased male mating rate and sperm depletion. After 28 generations of evolution, males from the 10:1 treatment had larger testes than males from other treatments. Thus, testis size evolved in response to mating rate and sperm depletion, not sperm competition. Furthermore, our experiment demonstrated that drift associated with sex ratio distortion limits adaptation; testis size only evolved in populations in which the effect of sex ratio bias on the effective population size had been compensated by increasing the numerical size. We discuss these results with respect to reproductive evolution, genetic drift in natural and experimental populations, and consequences of natural sex ratio distortion.
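The compensation mentioned above follows from Wright's standard formula for the effective size of a population with unequal numbers of breeding males and females, Ne = 4·Nm·Nf / (Nm + Nf). A small sketch with illustrative numbers (not the study's actual census sizes):

```python
def effective_size(n_males: int, n_females: int) -> float:
    # Wright's effective population size under a biased sex ratio
    return 4 * n_males * n_females / (n_males + n_females)

# An equal sex ratio gives Ne equal to the census size...
balanced = effective_size(50, 50)   # 100 breeders -> Ne = 100
# ...while a 10:1 female-biased ratio with a similar census size
# sharply reduces Ne. This is why the biased populations had to be
# made numerically larger to keep Ne comparable across treatments.
biased = effective_size(10, 100)    # 110 breeders -> Ne ~ 36.4
```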
Abstract:
The research reported in this series of articles aimed (1) to automate the search of questioned ink specimens in ink reference collections and (2) to evaluate the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way. The latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model.
It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinions and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional subjective approach in the search of ink specimens in ink databases and in the interpretation of their evidential value.
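The probabilistic reasoning can be illustrated with a generic score-based likelihood ratio: the evidential value of an observed ink-comparison score is the ratio of its probability density under the same-source hypothesis to its density under the different-source hypothesis. The sketch below is not the algorithm of the cited papers; the Gaussian score densities and all numeric parameters are invented purely for illustration:

```python
import math

def normal_pdf(x: float, mean: float, sd: float) -> float:
    # Density of a normal distribution at x
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(score: float) -> float:
    # LR = P(score | same source) / P(score | different sources),
    # with hypothetical fitted score distributions (means and sd invented).
    return normal_pdf(score, mean=0.9, sd=0.1) / normal_pdf(score, mean=0.4, sd=0.1)

# A high similarity score supports the same-source hypothesis (LR >> 1);
# a low score supports the different-source hypothesis (LR << 1).
```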
Abstract:
The planning effort for ISP began in 2006 when the IDOC retained the Durrant/PBA team of architects and planners to review the Iowa correctional system. The team conducted two studies in the following two years, the first being the April 2007 Iowa Department of Corrections Systemic Master Plan. Both studies addressed myriad aspects of the correctional system including treatment and re-entry needs and programs, security and training, and staffing.
Abstract:
Excessive daytime sleepiness underlies a large number of reported motor vehicle crashes. Fair and accurate field measures are needed to assess at-risk drivers who have been identified, on the basis of erratic driving behavior, as potentially driving in a sleep-deprived state. The purpose of this research study was to evaluate a set of cognitive tests that could assist Motor Vehicle Enforcement Officers on duty in identifying drivers who may be engaged in sleep-impaired driving. Currently no gold-standard test exists for judging sleepiness in the field. Previous research has shown that the Psychomotor Vigilance Task (PVT) is sensitive to sleep deprivation. The first goal of the current study was to evaluate whether computerized tests of attention and memory, briefer than the PVT, would be as sensitive to sleepiness effects. The second goal was to evaluate whether objective and subjective indices of acute and cumulative sleepiness predicted cognitive performance. Findings showed that sleepiness effects were detected in three of the six tasks. Furthermore, the PVT was the only task that showed a consistent slowing due to sleepiness of both the 'best' (i.e., minimum) and the 'typical' (median) response times. However, the PVT failed to show significant associations with objective measures of sleep deprivation (number of hours awake). The findings indicate that sleepiness tests in the field have significant limitations, and they clearly show that it will not be possible to set absolute performance thresholds to identify sleep-impaired drivers based on cognitive performance on any test. Cooperation with industry to adjust work and rest cycles, and incentives to comply with those regulations, will be critical components of a broad policy to prevent sleepy truck drivers from getting on the road.
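The two response-time summaries contrasted above, the 'best' (minimum) and 'typical' (median) reaction time, can be computed as below, together with the conventional PVT lapse count. A minimal sketch; the 500 ms lapse threshold is the usual PVT convention, not a value taken from this study:

```python
import statistics

LAPSE_MS = 500  # conventional PVT lapse threshold (not from this study)

def pvt_summary(rts_ms: list) -> dict:
    # Summarize a block of PVT reaction times (in milliseconds)
    return {
        "best": min(rts_ms),                   # fastest response
        "typical": statistics.median(rts_ms),  # median response
        "lapses": sum(rt > LAPSE_MS for rt in rts_ms),
    }

# Example block of reaction times
summary = pvt_summary([210, 230, 250, 270, 600])
```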
Abstract:
Research projects aimed at proposing fingerprint statistical models based on the likelihood ratio framework have shown that low-quality finger impressions left at crime scenes may have significant evidential value. These impressions are currently either not recovered, considered to be of no value when first analyzed by fingerprint examiners, or lead to inconclusive results when compared to control prints. There are growing concerns within the fingerprint community that recovering and examining these low-quality impressions will result in a significant increase in the workload of fingerprint units and ultimately in the number of backlogged cases. This study was designed to measure the number of impressions currently not recovered or not considered for examination, and to assess the usefulness of these impressions in terms of the number of additional detections that would result from their examination.