940 results for "Primary energy source uncertainty"
Abstract:
The federal government is aggressively promoting biofuels as an answer to global climate change and dependence on imported sources of energy. Iowa has quickly become a leader in the bioeconomy and wind energy production, but meeting the United States Department of Energy's goal of having 20% of U.S. transportation fuels come from biologically based sources by 2030 will require a dramatic increase in ethanol and biodiesel production and distribution. At the same time, much of Iowa's rural transportation infrastructure is near or beyond its original design life. As Iowa's rural roadway structures, pavements, and unpaved roadways become structurally deficient or functionally obsolete, public sector maintenance and rehabilitation costs rapidly increase. More importantly, the costs of moving all farm products will rapidly increase if infrastructure components are allowed to fail; longer hauls, slower turnaround times, and smaller loads result. When these effects occur on a large scale, Iowa will start to lose its competitive economic edge in the rapidly developing bioeconomy. The primary objective of this study was to document the current physical and fiscal impacts of Iowa's existing biofuels and wind power industries. A four-county cluster in north-central Iowa and a two-county cluster in southeast Iowa were identified through a local agency survey as having a large number of diverse facilities and were selected for the traffic and physical impact analysis. The research team investigated large truck traffic patterns on Iowa's secondary and local roads from 2002 to 2008 and associated them with pavement condition and county maintenance expenditures. The impacts were quantified to the extent possible and visualized using geographic information system (GIS) tools. In addition, a traffic and fiscal assessment tool was developed to understand the impact of biofuels development on Iowa's secondary road system.
Recommended changes in public policies relating to the local government and to the administration of those policies included standardizing the reporting and format of all county expenditures, conducting regular pavement evaluations on a county’s system, cooperating and communicating with cities (adjacent to a plant site), considering utilization of tax increment financing (TIF) districts as a short-term tool to produce revenues, and considering alternative ways to tax the industry.
Abstract:
The concept of energy gap(s) is useful for understanding the consequences of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. The energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the energy imbalance is readjusted over time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods: i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. To illustrate the difficulty of accurately assessing an energy gap, we used as an illustrative example a recent epidemiological study that tracked changes in total energy intake (estimated by gross food availability) and body weight over three decades in the US, combined with total energy expenditure predicted from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap, purported to be entirely due to increased food intake. Based on an estimate of the change in energy intake judged to be more reliable (i.e. from the same study population), together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity, or both) are clouded by a high level of uncertainty.
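The retrospective method described above reduces to simple arithmetic: energy stored per unit time is inferred from the observed weight change. A minimal sketch follows; the function name and the tissue energy density of roughly 7,700 kcal per kg of weight gained are illustrative assumptions, not values taken from the abstract:

```python
def energy_gap_kcal_per_day(weight_change_kg, years, kcal_per_kg=7700.0):
    """Retrospective energy-gap estimate: the average daily energy surplus
    implied by a body-weight change accumulated over a period of years.
    kcal_per_kg is an assumed energy density of the gained tissue."""
    days = years * 365.25
    stored_kcal = weight_change_kg * kcal_per_kg
    return stored_kcal / days

# Example: a 10 kg gain over 10 years implies a surprisingly small
# average daily surplus (on the order of tens of kcal/day).
gap = energy_gap_kcal_per_day(10.0, 10.0)
```

The small magnitude of such estimates is exactly why the abstract stresses that population-level conclusions are sensitive to measurement uncertainty.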
Abstract:
The Vertical Clearance Log is prepared for the purpose of providing vertical clearance restrictions by route on the primary road system. This report is used by the Iowa Department of Transportation’s Motor Carrier Services to route oversize vehicles around structures with vertical restrictions too low for the cargo height. The source of the data is the Geographic Information Management System (GIMS) that is managed by the Office of Research & Analytics in the Performance & Technology Division. The data is collected by inspection crews and through the use of LiDAR technology to reflect changes to structures on the primary road system. This log is produced annually.
Abstract:
This paper addresses the surprising lack of quality control in the analysis and selection of energy policies observed over the last decades. As an example, we discuss the delusional idea that it is possible to replace fossil energy with large-scale ethanol production from agricultural crops. But if large-scale ethanol production is not practical in energetic terms, why have huge amounts of money been invested in it, and why are they still being invested? To answer this question we introduce two concepts useful for framing, in general terms, the predicament of quality control in science: (i) the concept of "granfalloons" proposed by K. Vonnegut (1963), flagging the danger of the formation of "crusades to save the world" void of real meaning; these granfalloons are often used by powerful lobbies to distort policy decisions; and (ii) the concept of Post-Normal science by S. Funtowicz and J. Ravetz (1990), indicating a standard predicament faced by science when producing information for governance. When uncertainty, multiple scales, and legitimate but contrasting views are mixed together, it becomes impossible to deal with complex issues using the conventional scientific approach based on reductionism. We finally discuss the implications of a different approach to the assessment of alternative energy sources by introducing the concept of Promethean technology.
Abstract:
Background: Primary care physicians are often requested to assess their patients' fitness to drive. Little is known, however, about their needs in this task. Aims: The aim of this study was to develop theories on the needs, expectations, and barriers regarding clinical instruments that help physicians assess fitness to drive in primary care. Methods: This qualitative study used semi-structured interviews to investigate needs and expectations for instruments used to assess fitness to drive. From August 2011 to April 2013, we recorded opinions from five experts in traffic medicine, five primary care physicians, and five senior drivers. All interviews were transcribed in full. Two independent researchers extracted, coded, and stratified categories relying on multi-grounded theory. All participants validated the final scheme. Results: Our theory suggests that for an instrument assessing fitness to drive to be implemented in primary care, it needs to contribute to the decisional process. This requires at least five conditions: 1) it needs to reduce the range of uncertainty, 2) it needs to be adapted to local resources and possibilities, 3) it needs to be accepted by patients, 4) the choice of tasks needs to be adaptable to clinical conditions, and 5) the interpretation of results needs to remain dependent on each patient's context. Discussion and conclusions: Most existing instruments assessing fitness to drive are not designed for primary care settings. Future instruments should also aim to support patient-centred dialogue, help anticipate driving cessation, and offer patients the opportunity to make their own decision on driving cessation freely as often as possible.
Abstract:
Context: In the milder form of primary hyperparathyroidism (PHPT), cancellous bone, represented by areal bone mineral density at the lumbar spine by dual-energy x-ray absorptiometry (DXA), is preserved. This finding is in contrast to high-resolution peripheral quantitative computed tomography (HRpQCT) results of abnormal trabecular microstructure and epidemiological evidence for increased overall fracture risk in PHPT. Because DXA does not directly measure trabecular bone and HRpQCT is not widely available, we used trabecular bone score (TBS), a novel gray-level textural analysis applied to spine DXA images, to estimate indirectly trabecular microarchitecture. Objective: The purpose of this study was to assess TBS from spine DXA images in relation to HRpQCT indices and bone stiffness in radius and tibia in PHPT. Design and Setting: This was a cross-sectional study conducted in a referral center. Patients: Participants were 22 postmenopausal women with PHPT. Main Outcome Measures: Outcomes measured were areal bone mineral density by DXA, TBS indices derived from DXA images, HRpQCT standard measures, and bone stiffness assessed by finite element analysis at distal radius and tibia. Results: TBS in PHPT was low at 1.24, representing abnormal trabecular microstructure (normal ≥1.35). TBS was correlated with whole bone stiffness and all HRpQCT indices, except for trabecular thickness and trabecular stiffness at the radius. At the tibia, correlations were observed between TBS and volumetric densities, cortical thickness, trabecular bone volume, and whole bone stiffness. TBS correlated with all indices of trabecular microarchitecture, except trabecular thickness, after adjustment for body weight. Conclusion: TBS, a measurement technology readily available by DXA, shows promise in the clinical assessment of trabecular microstructure in PHPT.
Abstract:
BACKGROUND: Enteral nutrition (EN) is recommended for patients in the intensive-care unit (ICU), but it does not consistently achieve nutritional goals. We assessed whether delivery of 100% of the energy target from days 4 to 8 in the ICU with EN plus supplemental parenteral nutrition (SPN) could optimise clinical outcome. METHODS: This randomised controlled trial was undertaken in two centres in Switzerland. We enrolled patients on day 3 of admission to the ICU who had received less than 60% of their energy target from EN, were expected to stay for longer than 5 days, and to survive for longer than 7 days. We calculated energy targets with indirect calorimetry on day 3, or if not possible, set targets as 25 and 30 kcal per kg of ideal bodyweight a day for women and men, respectively. Patients were randomly assigned (1:1) by a computer-generated randomisation sequence to receive EN or SPN. The primary outcome was occurrence of nosocomial infection after cessation of intervention (day 8), measured until end of follow-up (day 28), analysed by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT00802503. FINDINGS: We randomly assigned 153 patients to SPN and 152 to EN. 30 patients discontinued before the study end. Mean energy delivery between day 4 and 8 was 28 kcal/kg per day (SD 5) for the SPN group (103% [SD 18%] of energy target), compared with 20 kcal/kg per day (7) for the EN group (77% [27%]). Between days 9 and 28, 41 (27%) of 153 patients in the SPN group had a nosocomial infection compared with 58 (38%) of 152 patients in the EN group (hazard ratio 0·65, 95% CI 0·43-0·97; p=0·0338), and the SPN group had a lower mean number of nosocomial infections per patient (-0·42 [-0·79 to -0·05]; p=0·0248). 
INTERPRETATION: Individually optimised energy supplementation with SPN starting 4 days after ICU admission could reduce nosocomial infections and should be considered as a strategy to improve clinical outcome in patients in the ICU for whom EN is insufficient. FUNDING: Foundation Nutrition 2000Plus, ICU Quality Funds, Baxter, and Fresenius Kabi.
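The fallback energy targets used when indirect calorimetry was unavailable (25 and 30 kcal per kg of ideal bodyweight per day for women and men, respectively) can be sketched as follows; the function name and interface are illustrative, not taken from the trial protocol:

```python
def energy_target_kcal(ideal_bodyweight_kg, sex):
    """Daily energy target when indirect calorimetry is unavailable:
    25 kcal/kg ideal bodyweight for women, 30 kcal/kg for men,
    as reported in the trial's methods."""
    kcal_per_kg = {"female": 25, "male": 30}[sex]
    return kcal_per_kg * ideal_bodyweight_kg

# Example: a man with an ideal bodyweight of 70 kg
target = energy_target_kcal(70, "male")  # 2100 kcal/day
```

In the trial, SPN was then titrated so that EN plus SPN delivered 100% of this target from days 4 to 8.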
Abstract:
Complications related to the neck-stem junction of modular stems used for total hip arthroplasty (THA) are generating increasing concern. A 74-year-old male had increasing pain and a cutaneous reaction around the scar 1 year after THA with a modular neck-stem. Imaging revealed osteolysis of the calcar and a pseudo-tumour adjacent to the neck-stem junction. Serum cobalt levels were elevated. Revision surgery to exchange the stem and liner and to resect the pseudo-tumour was performed. Analysis of the stem by scanning electron microscopy and by energy dispersive X-ray and white light interferometry showed fretting corrosion at the neck-stem junction contrasting with minimal changes at the head-neck junction. Thus, despite dry assembly of the neck and stem on the back table at primary THA, full neck-stem contact was not achieved, and the resulting micromotion at the interface led to fretting corrosion. This case highlights the mechanism of fretting corrosion at the neck-stem interface responsible for adverse local tissue reactions. Clinical and radiological follow-up is mandatory in patients with dual-modular stems.
Abstract:
The diffusion of mobile telephony began in 1971 in Finland, when the first car phones, called ARP, were taken into use. Technologies changed from ARP to NMT and later to GSM. The main application of the technology, however, was voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access from mobile phones to the Internet. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony had developed from analog to digital in fixed networks, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones. It also initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the arising opportunity for new competition among firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through understanding the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and it was therefore valuable to integrate the Dynamic Capability view of the firm into this research. Dynamic capabilities are an application of the Resource-Based View (RBV) of the firm. As is stated in the literature, strategic positioning complements RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining these two. The theoretical contribution of the research is in the development of a methodology integrating theories of the RBV, dynamic capabilities, and strategic positioning.
The research approach has been constructive, owing to the actual managerial problems that initiated the study. The requirement for iterative and innovative progress in the research supported the chosen approach. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As the main result, the thesis presents the strategic innovation process, where new business concepts are used to describe alternative resource configurations and scenarios as alternative competitive environments, which can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study has also produced approximately 250 new innovations for the participating firms, reduced technology uncertainty, helped strategic infrastructural decisions in the firms, and built a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999 and 2004. The methods presented in this research are also applicable to other industries.
Abstract:
In electric drives, frequency converters are used to generate the AC voltage with variable frequency and amplitude for the electric motor. When considering annual drive sales, in both monetary value and units sold, low-performance drives appear to be predominant. These drives have to be very cost-effective to manufacture and use, while they are also expected to fulfil harmonic distortion standards. One objective has also been to extend the lifetime of the frequency converter. In a traditional frequency converter, a relatively large electrolytic DC-link capacitor is used. Electrolytic capacitors are large, heavy, and rather expensive components. In many cases, the lifetime of the electrolytic capacitor is the main factor limiting the lifetime of the frequency converter. To overcome the problem, the electrolytic capacitor is replaced with a metallized polypropylene film capacitor (MPPF). The MPPF has improved properties compared with the electrolytic capacitor. By replacing the electrolytic capacitor with a film capacitor, the energy storage of the DC-link is decreased. Thus, the instantaneous power supplied to the motor correlates with the instantaneous power taken from the network. This yields a continuous DC-link current fed by the diode rectifier bridge. As a consequence, the line current harmonics clearly decrease. Because of the decreased energy storage, the DC-link voltage fluctuates. This sets additional conditions on the controllers of the frequency converter to compensate for the fluctuation in the supplied motor phase voltages. In this work, three-phase and single-phase frequency converters with a small DC-link capacitor are analyzed. The evaluation is obtained with simulations and laboratory measurements.
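The link between reduced DC-link energy storage and voltage fluctuation follows directly from the capacitor energy relation E = ½CV². A minimal sketch, with purely illustrative component values (not taken from the work's measurements), shows why a small film capacitor sags far more than a large electrolytic one for the same transient energy deficit:

```python
import math

def dclink_voltage_after(v0, c_farads, energy_deficit_joules):
    """Voltage remaining on a DC-link capacitor after it supplies a
    transient energy deficit, from E = 0.5 * C * V**2."""
    e0 = 0.5 * c_farads * v0 ** 2
    e1 = max(e0 - energy_deficit_joules, 0.0)
    return math.sqrt(2.0 * e1 / c_farads)

# Illustrative comparison at a 540 V link supplying a 0.1 J deficit:
# a 20 uF film capacitor sags by several volts, a 1 mF electrolytic
# by well under a volt.
sag_film = 540.0 - dclink_voltage_after(540.0, 20e-6, 0.1)
sag_elec = 540.0 - dclink_voltage_after(540.0, 1e-3, 0.1)
```

This is the fluctuation the converter's controllers must compensate for in the supplied motor phase voltages.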
Abstract:
miR-21 is the most commonly over-expressed microRNA (miRNA) in cancer and a proven oncogene. Hsa-miR-21 is located on chromosome 17q23.2, immediately downstream of the vacuole membrane protein-1 (VMP1) gene, also known as TMEM49. VMP1 transcripts initiate ∼130 kb upstream of miR-21, are spliced, and polyadenylated only a few hundred base pairs upstream of the miR-21 hairpin. On the other hand, primary miR-21 transcripts (pri-miR-21) originate within the last introns of VMP1, but bypass VMP1 polyadenylation signals to include the miR-21 hairpin. Here, we report that VMP1 transcripts can also bypass these polyadenylation signals to include miR-21, thus providing a novel and independently regulated source of miR-21, termed VMP1–miR-21. Northern blotting, gene-specific RT-PCR, RNA pull-down and DNA branching assays support that VMP1–miR-21 is expressed at significant levels in a number of cancer cell lines and that it is processed by the Microprocessor complex to produce mature miR-21. VMP1 and pri-miR-21 are induced by common stimuli, such as phorbol-12-myristate-13-acetate (PMA) and androgens, but show differential responses to some stimuli such as epigenetic modifying agents. Collectively, these results indicate that miR-21 is a unique miRNA capable of being regulated by alternative polyadenylation and two independent gene promoters.
Abstract:
After the release of the gamma-ray source catalog produced by the Fermi satellite during its first two years of operation, a significant fraction of sources still remain unassociated at lower energies. In addition to well-known high-energy emitters (pulsars, blazars, supernova remnants, etc.), theoretical expectations predict new classes of gamma-ray sources. In particular, gamma-ray emission could be associated with some of the early phases of stellar evolution, but this interesting possibility is still poorly understood. Aims: The aim of this paper is to assess the possibility of the Fermi gamma-ray source 2FGL J0607.5-0618c being associated with the massive star forming region Monoceros R2. Methods: A multi-wavelength analysis of the Monoceros R2 region is carried out using archival data at radio, infrared, X-ray, and gamma-ray wavelengths. The resulting observational properties are used to estimate the physical parameters needed to test the different physical scenarios. Results: We confirm the 2FGL J0607.5-0618c detection with improved confidence over the Fermi two-year catalog. We find that a combined effect of the multiple young stellar objects in Monoceros R2 is a viable picture for the nature of the source.
Abstract:
The atmospheric Cherenkov gamma-ray telescope MAGIC, designed for a low-energy threshold, has detected very-high-energy gamma rays from a giant flare of the distant Quasi-Stellar Radio Source (in short: radio quasar) 3C 279, at a distance of more than 5 billion light-years (a redshift of 0.536). No quasar has been observed previously in very-high-energy gamma radiation, and this is also the most distant object detected emitting gamma rays above 50 gigaelectron volts. Because high-energy gamma rays may be stopped by interacting with the diffuse background light in the universe, the observations by MAGIC imply a low amount for such light, consistent with that known from galaxy counts.
Abstract:
The microquasar LS 5039 has recently been detected as a source of very high energy (VHE) $\gamma$-rays. This detection, which confirms the previously proposed association of LS 5039 with the EGRET source 3EG~J1824$-$1514, makes LS 5039 a special system with observational data covering nearly the entire electromagnetic spectrum. To reproduce the observed spectrum of LS 5039, from radio to VHE $\gamma$-rays, we have applied a cold-matter-dominated jet model that takes into account accretion variability, the jet magnetic field, particle acceleration, adiabatic and radiative losses, microscopic energy conservation in the jet, and pair creation and absorption due to the external photon fields, as well as the emission from the first generation of secondaries. The radiative processes taken into account are synchrotron, relativistic Bremsstrahlung, and inverse Compton (IC). The model is based on a scenario characterized with recent observational results concerning the orbital parameters, the orbital variability at X-rays, and the nature of the compact object. The computed spectral energy distribution (SED) shows good agreement with the available observational data.
Abstract:
Managers can craft effective integrated strategy by properly assessing regulatory uncertainty. Leveraging the existing political markets literature, we predict regulatory uncertainty from the novel interaction of demand-side and supply-side rivalries across a range of political markets. We argue for two primary drivers of regulatory uncertainty: ideology-motivated interests opposed to the firm, and a lack of competition for power among political actors supplying public policy. We align three previously disparate dimensions of nonmarket strategy - profile level, coalition breadth, and pivotal target - with levels of regulatory uncertainty. Through this framework, we demonstrate how and when firms employ different nonmarket strategies. To illustrate variation in nonmarket strategy across levels of regulatory uncertainty, we analyze several market entry decisions of foreign firms operating in the global telecommunications sector.