Abstract:
Renewable energy is growing in demand, and thus the manufacture of solar cells and photovoltaic arrays has advanced dramatically in recent years. This is evidenced by the fact that photovoltaic production has doubled every 2 years, increasing by an average of 48% each year since 2002. Covering a general overview of solar cell operation and its model, this thesis starts with the three generations of photovoltaic solar cell technology and moves to the motivation for dedicating research to nanostructured solar cells. For current-generation solar cells, among several factors such as photon capture, photon reflection, carrier generation by photons, and carrier transport and collection, the efficiency also depends on the absorption of photons. The absorption coefficient, α, and its dependence on the wavelength, λ, are of major concern for improving efficiency. Nano-silicon structures (quantum wells and quantum dots) have a unique advantage over bulk and thin-film crystalline silicon: multiple direct and indirect band gaps can be realized by appropriate size control of the quantum wells. This enables photons at multiple wavelengths of the solar spectrum to be absorbed efficiently. There is limited research on the calculation of the absorption coefficient in silicon nanostructures. We present a theoretical approach to calculating the absorption coefficient using quantum mechanical calculations of the interaction of photons with the electrons of the valence band. One model is that the oscillator strength of the direct optical transitions is enhanced by the quantum confinement effect in Si nanocrystallites. These kinds of quantum wells can be realized in practice in porous silicon. The absorption coefficient shows a peak of 64,638.2 cm^-1 at λ = 343 nm, at a photon energy of ξ = 3.49 eV (λ = 355.532 nm). I have shown that a large value of the absorption coefficient α, comparable to that of bulk silicon, is possible in silicon QDs because of carrier confinement.
Our results show that we can enhance the absorption coefficient by a factor of 10 and, at the same time, obtain a nearly constant absorption coefficient curve over the visible spectrum. The validity of the plots is verified by their correlation with experimental photoluminescence plots. A very generic comparison of the efficiency of a p-i-n junction solar cell is given for a cell incorporating QDs and one without QDs. The design and fabrication technique is discussed in brief. I have shown that by using QDs in the intrinsic region of a cell, we can improve the efficiency by a factor of 1.865. Thus, for a first-generation solar cell with an efficiency of 26%, using QDs can improve the efficiency to nearly 48.5%.
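As a quick sanity check on the arithmetic above (assuming the reported 1.865 factor applies multiplicatively to the baseline efficiency):

```python
# Sanity check: a 1.865x improvement applied to a 26%-efficient cell.
base_efficiency = 0.26        # first-generation solar cell efficiency
improvement_factor = 1.865    # reported gain from embedding QDs
qd_efficiency = base_efficiency * improvement_factor
print(f"{qd_efficiency:.1%}")  # 48.5%
```

This reproduces the abstract's "nearly 48.5%" figure.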
Abstract:
Understanding the canopy cover of an urban environment leads to better estimates of carbon storage and more informed management decisions by urban foresters. The most commonly used method for assessing urban forest cover type extent is ground surveys, which can be both time-consuming and expensive. The analysis of aerial photos is an alternative method that is faster, cheaper, and can cover a larger number of sites, but may be less accurate. The objectives of this paper were (1) to compare three methods of cover type assessment for Los Angeles, CA: hand-delineation of aerial photos in ArcMap, supervised classification of aerial photos in ERDAS Imagine, and ground-collected data using the Urban Forest Effects (UFORE) model protocol; (2) to determine how well remote sensing methods estimate carbon storage as predicted by the UFORE model; and (3) to explore the influence of tree diameter and tree density on carbon storage estimates. Four major cover types (bare ground, fine vegetation, coarse vegetation, and impervious surfaces) were determined from 348 plots (0.039 ha each) randomly stratified according to land use. Hand-delineation was better than supervised classification at predicting ground-based measurements of cover type and UFORE model-predicted carbon storage. Most error in supervised classification resulted from shadow, which was interpreted as unknown cover type. Neither tree diameter nor tree density per plot significantly affected the relationship between carbon storage and canopy cover. The efficiency of remote sensing rather than in situ data collection allows urban forest managers to quickly assess a city and plan accordingly while also preserving their often-limited budget.
Abstract:
Nitrogen oxides play a crucial role in the budget of tropospheric ozone (O3) and the formation of the hydroxyl radical. Anthropogenic activities and boreal wildfires are large sources of emissions into the atmosphere. However, the influence of the transport of these emissions on nitrogen oxide and O3 levels at hemispheric scales is not well understood, in particular due to a lack of nitrogen oxide measurements in remote regions. To address these deficiencies, measurements of NO, NO2 and NOy (total reactive nitrogen oxides) were made in the lower free troposphere (FT) over the central North Atlantic region (Pico Mountain station, 38° N, 28° W, 2.3 km asl) from July 2002 to August 2005. These measurements reveal a well-defined seasonal cycle of nitrogen oxides (NOx = NO + NO2, and NOy) in the background central North Atlantic lower FT, with higher mixing ratios during summertime. Observed NOx and NOy levels are consistent with long-range transport of emissions, but with significant removal en route to the measurement site. Reactive nitrogen largely exists in the form of PAN and HNO3 (~80-90% of NOy) all year round. A shift in the composition of NOy from dominance of PAN to dominance of HNO3 occurs from winter-spring to summer-fall, as a result of changes in temperature and photochemistry over the region. Analysis of the long-range transport of boreal wildfire emissions provides evidence of the very large-scale impacts of boreal wildfires on the tropospheric NOx and O3 budgets. Boreal wildfire emissions are responsible for significant shifts in the nitrogen oxide distributions toward higher levels during the summer, with medians of NOy (117-175 pptv) and NOx (9-30 pptv) greater in the presence of boreal wildfire emissions.
Extreme levels of NOx (up to 150 pptv) and NOy (up to 1,100 pptv) observed in boreal wildfire plumes suggest that decomposition of PAN to NOx is a significant source of NOx, and imply that O3 formation occurs during transport. Ozone levels are also significantly enhanced in boreal wildfire plumes. However, O3 behaves in a complex manner in the plumes, varying from significant O3 production, to weaker production, to O3 destruction. Long-range transport of anthropogenic emissions from North America also has a significant influence on the regional NOx and O3 budgets. Transport of pollution from North America causes significant enhancements in nitrogen oxides year-round. Enhancements of CO, NOy and NOx indicate that, consistent with previous studies, more than 95% of the NOx emitted over the U.S. is removed before and during export out of the U.S. boundary layer. However, about 30% of the NOx emissions exported out of the U.S. boundary layer remain in the air masses. Since the lifetime of NOx is shorter than the transport timescale, PAN decomposition and potentially photolysis of HNO3 provide a supply of NOx over the central North Atlantic lower FT. Observed ΔO3/ΔNOy ratios and the large NOy levels remaining in the North American plumes suggest potential O3 formation well downwind of North America. Finally, a comparison of the nitrogen oxide measurements with results from the global chemical transport (GCT) model GEOS-Chem identifies differences between the observations and the model. GEOS-Chem reproduces the seasonal variation of nitrogen oxides over the central North Atlantic lower FT, but does not capture the magnitude of the cycles.
Improvements in our understanding of nitrogen oxide chemistry in the remote FT and of emission sources are necessary for current GCT models to adequately estimate the impacts of emissions on tropospheric NOx and the resulting impacts on the O3 budget.
Abstract:
This thesis covers the correction, verification, development, and implementation of a computational fluid dynamics (CFD) model for an orifice plate meter. Past results were corrected and further expanded on, with the compressibility effects of acoustic waves taken into account. One dynamic pressure difference transducer measures the time-varying differential pressure across the orifice meter. A dynamic absolute pressure measurement is also taken at the inlet of the orifice meter, along with a suitable temperature measurement of the mean flow gas. Together, these three measurements allow an incompressible CFD simulation (using a well-tested and robust model) of the cross-section-independent, time-varying mass flow rate through the orifice meter. The mean value of this incompressible mass flow rate is then corrected to match the mean of the measured flow rate (obtained from a Coriolis meter located upstream of the orifice meter). Even with the mean and compressibility corrections, significant differences in the measured mass flow rates at two orifice meters in a common flow stream were observed. This means that the compressibility effects associated with pulsatile gas flows are significant in the measurement of the time-varying mass flow rate. Future work (with the approach and initial runs covered here) will provide an indirect verification of the reported mass flow rate measurements.
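The mean correction described above can be sketched as a rescaling of the incompressible CFD time series so that its mean matches the Coriolis reading; a multiplicative correction is one plausible form (the thesis may use a different one, and the numbers below are hypothetical):

```python
def mean_corrected_flow(mdot_cfd, coriolis_mean):
    """Rescale a CFD-derived mass flow rate time series so that its
    mean matches the mean flow rate measured by the Coriolis meter."""
    cfd_mean = sum(mdot_cfd) / len(mdot_cfd)
    return [m * coriolis_mean / cfd_mean for m in mdot_cfd]

# Hypothetical series (kg/s): CFD mean is 0.05, Coriolis mean reads 0.04.
cfd_series = [0.04, 0.05, 0.06]
corrected = mean_corrected_flow(cfd_series, 0.04)
print(round(sum(corrected) / len(corrected), 6))  # 0.04
```

The pulsation shape of the signal is preserved; only its mean is shifted onto the reference measurement.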
Abstract:
The purpose of this project is to take preliminary steps towards the development of a QUAL2Kw model for Silver Bow Creek, MT. These preliminary steps include initial research and familiarization with QUAL2Kw, use of ArcGIS to fill in geospatial data gaps, and integration of QUAL2Kw and ArcGIS. The integration involves improving the QUAL2Kw model output by adding functionality to the model itself, and developing a QUAL2Kw-specific tool in ArcGIS. These improvements are designed to expedite and simplify viewing QUAL2Kw output data spatially in ArcGIS, as opposed to graphically within QUAL2Kw. They will allow users to quickly and easily view the many output parameters of each model run geographically within ArcGIS, making it much quicker and easier to locate potential problem areas or "hot spots" than interpreting the QUAL2Kw output data from a graph alone. The added functionality of QUAL2Kw was achieved through the development of an Excel macro, and the tool in ArcGIS was developed using Python scripting and the ModelBuilder feature in ArcGIS.
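At its core, the spatial-viewing workflow amounts to joining QUAL2Kw's per-reach output to reach coordinates so the result can be loaded into ArcGIS (e.g., as an XY event layer). A minimal sketch follows; the field names are hypothetical, and the project's actual tool uses arcpy and ModelBuilder rather than plain Python:

```python
def join_output_to_reaches(model_rows, reach_coords):
    """Attach coordinates to per-reach QUAL2Kw output rows so the
    result can be displayed geographically in a GIS."""
    return [
        {**row, **reach_coords[row["reach_id"]]}
        for row in model_rows
        if row["reach_id"] in reach_coords  # skip reaches with no geometry
    ]

# Hypothetical reach output and coordinates near Silver Bow Creek, MT.
model_rows = [{"reach_id": "R1", "dissolved_oxygen_mgL": 6.2}]
reach_coords = {"R1": {"lat": 46.01, "lon": -112.78}}
print(join_output_to_reaches(model_rows, reach_coords)[0]["lat"])  # 46.01
```

Once each output row carries coordinates, any parameter column can be symbolized per reach to spot "hot spots" at a glance.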
Abstract:
The intention of a loan loss provision is to anticipate the loan's expected losses by adjusting the book value of the loan. Furthermore, this loan loss provision has to be compared to the expected loss according to Basel II and, in the case of a difference, liable equity has to be adjusted. This, however, assumes that the loan loss provision and the expected loss are based on a similar economic rationale, which holds only conditionally under current loan loss provisioning methods according to IFRS. Therefore, differences between loan loss provisions and expected losses should result only from different approaches to parameter estimation within each model, not from different assumptions about the outcome of the model. The provisioning and accounting model developed in this paper overcomes the aforementioned shortcomings and is consistent with an economic rationale of expected losses. Additionally, this model is based on a close-to-market valuation of the loan, in keeping with the basic idea of IFRS. Suggestions for changes to current accounting and capital requirement rules are provided.
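For concreteness, the Basel II expected loss against which the provision is compared is the standard product EL = PD × LGD × EAD, and a provision shortfall is deducted from liable equity. A minimal sketch of that comparison, with illustrative numbers only (the paper's own provisioning model is more elaborate):

```python
def expected_loss(pd, lgd, ead):
    """Basel II one-year expected loss: EL = PD * LGD * EAD."""
    return pd * lgd * ead

def provision_shortfall(provision, pd, lgd, ead):
    """If the provision falls short of EL, the difference must be
    covered by (deducted from) liable equity; excess returns zero."""
    return max(expected_loss(pd, lgd, ead) - provision, 0.0)

# Illustrative loan: 2% default probability, 45% loss given default,
# 1,000,000 exposure at default.
el = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)
print(round(el, 2))                                        # 9000.0
print(round(provision_shortfall(6_000, 0.02, 0.45, 1_000_000), 2))  # 3000.0
```

The paper's point is that this comparison is only meaningful when the provision and EL share the same economic rationale.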
Abstract:
Forests near the Mediterranean coast have been shaped by millennia of human disturbance. Consequently, ecological studies relying on modern observations or historical records may have difficulty assessing natural vegetation dynamics under current and future climate. We combined a sedimentary pollen record from Lago di Massacciucoli, Tuscany, Italy with simulations from the LandClim dynamic vegetation model to determine what vegetation preceded intense human disturbance, how past changes in vegetation relate to fire and browsing, and the potential of an extinct vegetation type under present climate. We simulated vegetation dynamics near Lago di Massacciucoli for the last 7,000 years using a local chironomid-inferred temperature reconstruction with combinations of three fire regimes (small infrequent, large infrequent, small frequent) and three browsing intensities (no browsing, light browsing, and moderate browsing), and compared model output to pollen data. Simulations with low disturbance support pollen-inferred evidence for a mixed forest dominated by Quercus ilex (a Mediterranean species) and Abies alba (a montane species). Whereas pollen data record the collapse of A. alba after 6,000 cal yr BP, simulated populations expanded with declining summer temperatures during the late Holocene. Simulations with increased fire and browsing are consistent with evidence for expansion by deciduous species after A. alba collapsed. According to our combined paleo-environmental and modeling evidence, mixed Q. ilex and A. alba forests remain possible with current climate and limited disturbance, and provide a viable management objective for ecosystems near the Mediterranean coast and in regions that are expected to experience a mediterranean-type climate in the future.
Abstract:
OBJECTIVES To evaluate the effect of a tin-containing fluoride (Sn/F) mouth rinse on microtensile bond strength (μTBS) between resin composite and erosively demineralised dentin. MATERIALS AND METHODS Dentin of 120 human molars was erosively demineralised using a 10-day cyclic de- and remineralisation model. For 40 molars, the model comprised erosive demineralisation only; for another 40, the model included treatment with a NaF solution; and for yet another 40, the model included treatment with a Sn/F mouth rinse. In half of these molars (n = 20), the demineralised organic matrix was continuously removed by collagenase. Silicon carbide paper-ground, non-erosively demineralised molars served as control (n = 20). Subsequently, μTBS of Clearfil SE/Filtek Z250 to the dentin was measured, and failure mode was determined. Additionally, surfaces were evaluated using SEM and EDX. RESULTS Compared to the non-erosively demineralised control, erosive demineralisation resulted in significantly lower μTBS regardless of the removal of demineralised organic matrix. Treatment with NaF increased μTBS, but the level of μTBS obtained by the non-erosively demineralised control was only reached when the demineralised organic matrix had been removed. The Sn/F mouth rinse together with removal of demineralised organic matrix led to significantly higher μTBS than did the non-erosively demineralised control. The Sn/F mouth rinse yielded higher μTBS than did the NaF solution. CONCLUSIONS Treatment of erosively demineralised dentin with a NaF solution or a Sn/F mouth rinse increased the bond strength of resin composite. CLINICAL RELEVANCE Bond strength of resin composite to eroded dentin was not negatively influenced by treatment with a tin-containing fluoride mouth rinse.
Abstract:
The aim of this study was to assess the moments generated by low- and high-torque brackets. Four prescription-slot combinations of the same bracket type (Mini Diamond® Twin) were evaluated: high-torque 0.018 and 0.022 inch, and low-torque 0.018 and 0.022 inch. The brackets were bonded to identical maxillary acrylic resin models with levelled and aligned teeth, and each model was mounted on the orthodontic measurement and simulation system (OMSS). Ten specimens each of 0.017 × 0.025 inch and 0.019 × 0.025 inch stainless steel archwires (ORMCO) were evaluated in the low- and high-torque 0.018 inch and 0.022 inch brackets, respectively. The wires were ligated into the brackets with elastomerics, and each measurement was repeated once after religation. Two-way analysis of variance and t-tests were conducted to compare the generated moments between wires in the low- and high-torque brackets separately. The maximum moment generated by the 0.017 × 0.025 inch stainless steel archwire in the 0.018 inch brackets at +15 degrees was 14.33 and 12.95 Nmm for the high- and low-torque brackets, respectively. The measured torque in the 0.022 inch brackets with the 0.019 × 0.025 inch stainless steel archwire was 9.32 and 6.48 Nmm, respectively. The recorded differences in maximum moments between the high- and low-torque series were statistically significant. High-torque brackets produced higher moments than low-torque brackets. Additionally, in both high- and low-torque configurations, the thicker 0.019 × 0.025 inch steel archwire in the 0.022 inch slot system generated lower moments than the 0.017 × 0.025 inch steel archwire in the 0.018 inch slot system.
Abstract:
OBJECTIVES This study sought to validate the Logistic Clinical SYNTAX (Synergy Between Percutaneous Coronary Intervention With Taxus and Cardiac Surgery) score in patients with non-ST-segment elevation acute coronary syndromes (ACS), in order to further legitimize its clinical application. BACKGROUND The Logistic Clinical SYNTAX score allows for an individualized prediction of 1-year mortality in patients undergoing contemporary percutaneous coronary intervention. It is composed of a "Core" Model (anatomical SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) and an "Extended" Model (comprising an additional 6 clinical variables), and has previously been cross-validated in 7 contemporary stent trials (>6,000 patients). METHODS One-year all-cause death was analyzed in 2,627 patients undergoing percutaneous coronary intervention from the ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy) trial. Mortality predictions from the Core and Extended Models were studied with respect to discrimination, that is, separation of those with and without 1-year all-cause death (assessed by the concordance [C] statistic), and calibration, that is, agreement between observed and predicted outcomes (assessed with validation plots). Decision curve analyses, which weight the harms (false positives) against benefits (true positives) of using a risk score to make mortality predictions, were undertaken to assess clinical usefulness. RESULTS In the ACUITY trial, the median SYNTAX score was 9.0 (interquartile range 5.0 to 16.0); approximately 40% of patients had 3-vessel disease, 29% diabetes, and 85% underwent drug-eluting stent implantation. Validation plots confirmed agreement between observed and predicted mortality.
The Core and Extended Models demonstrated substantial improvements in the discriminative ability for 1-year all-cause death compared with the anatomical SYNTAX score in isolation (C-statistics: SYNTAX score: 0.64, 95% confidence interval [CI]: 0.56 to 0.71; Core Model: 0.74, 95% CI: 0.66 to 0.79; Extended Model: 0.77, 95% CI: 0.70 to 0.83). Decision curve analyses confirmed the increasing ability to correctly identify patients who would die at 1 year with the Extended Model versus the Core Model versus the anatomical SYNTAX score, over a wide range of thresholds for mortality risk predictions. CONCLUSIONS Compared to the anatomical SYNTAX score alone, the Core and Extended Models of the Logistic Clinical SYNTAX score more accurately predicted individual 1-year mortality in patients presenting with non-ST-segment elevation acute coronary syndromes undergoing percutaneous coronary intervention. These findings support the clinical application of the Logistic Clinical SYNTAX score.
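The C-statistic used above to assess discrimination is the fraction of all (death, survivor) pairs in which the patient who died received the higher predicted risk, with ties counting one half. A minimal sketch with made-up risks (not study data):

```python
from itertools import product

def c_statistic(scores, events):
    """Concordance statistic: fraction of (event, non-event) pairs in
    which the event case received the higher predicted risk score."""
    event_scores = [s for s, e in zip(scores, events) if e]
    nonevent_scores = [s for s, e in zip(scores, events) if not e]
    pairs = concordant = 0.0
    for se, sn in product(event_scores, nonevent_scores):
        pairs += 1
        if se > sn:
            concordant += 1
        elif se == sn:
            concordant += 0.5  # ties count half
    return concordant / pairs

# Hypothetical predicted 1-year mortality risks and observed deaths.
risk = [0.9, 0.8, 0.3, 0.2, 0.6]
died = [1, 0, 0, 0, 1]
print(round(c_statistic(risk, died), 3))  # 0.833
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which is why the Extended Model's 0.77 improves on the anatomical score's 0.64.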
Abstract:
Primary loss of photoreceptors caused by diseases such as retinitis pigmentosa is one of the main causes of blindness worldwide. To study such diseases, rodent models of N-methyl-N-nitrosourea (MNU)-induced retinal degeneration are widely used. As zebrafish (Danio rerio) are a popular model system for visual research that offers persistent retinal neurogenesis throughout the lifetime and retinal regeneration after severe damage, we have established a novel MNU-induced model in this species. Histological staining for apoptosis (TUNEL), proliferation (PCNA), activated Müller glial cells (GFAP), rods (rhodopsin) and cones (zpr-1) was performed. A characteristic sequence of retinal changes was found. First, apoptosis of rod photoreceptors occurred 3 days after MNU treatment and resulted in a loss of rod cells. Consequently, proliferation started in the inner nuclear layer (INL) with a maximum at day 8, whereas in the outer nuclear layer (ONL) a maximum was observed at day 15. Interestingly, proliferation in the ONL persisted to the end of the follow-up (3 months) without ongoing rod cell death. We demonstrate that rod degeneration is a sufficient trigger for the induction of Müller glial cell activation, even if only a minimal number of rod cells undergo cell death. In conclusion, MNU provides a simple and feasible model of rod photoreceptor degeneration in the zebrafish that offers new insights into rod regeneration.
Abstract:
The study assessed the economic efficiency of different strategies for the control of post-weaning multi-systemic wasting syndrome (PMWS) and porcine circovirus type 2 subclinical infection (PCV2SI), which have a major economic impact on the pig farming industry worldwide. The control strategies investigated consisted of combinations of up to 5 different control measures: (1) PCV2 vaccination of piglets (vac); (2) ensuring an age-adjusted diet for growers (diets); (3) reduction of stocking density (stock); (4) improvement of biosecurity measures (bios); and (5) total depopulation and repopulation of the farm to eliminate other major pathogens (DPRP). A model was developed to simulate 5 years of production on a pig farm with a 3-weekly batch system and 100 sows. A PMWS/PCV2SI disease and economic model, based on PMWS severity scores, was linked to the production model in order to assess disease losses. The PMWS severity score depends on the combination of post-weaning mortality, PMWS morbidity in younger pigs, and the proportion of PCV2-infected pigs observed on farms. The economic analysis investigated eleven different farm scenarios, depending on the number of risk factors present before the intervention. For each strategy, an investment appraisal assessed the extra costs and benefits of reducing a given PMWS severity score to the average score of a slightly affected farm. The net present value obtained for each strategy was then multiplied by the corresponding probability of success to obtain an expected value. A stochastic simulation was performed to account for uncertainty and variability. For moderately affected farms, PCV2 vaccination alone was the most cost-efficient strategy; for highly affected farms it was either PCV2 vaccination alone or in combination with biosecurity measures, with the marginal profitability between 'vac' and 'vac+bios' being small.
Other strategies such as 'diets', 'vac+diets' and 'bios+diets' were frequently identified as the second or third best strategy. The mean expected values of the best strategy for a moderately and a highly affected farm were £14,739 and £57,648 after 5 years, respectively. This is the first study to compare the economic efficiency of control strategies for PMWS and PCV2SI. The results demonstrate the economic value of PCV2 vaccination, and highlight that on highly affected farms biosecurity measures are required to achieve optimal profitability. The model developed has potential as a farm-level decision support tool for the control of this economically important syndrome.
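The investment-appraisal step described above reduces to a net present value weighted by the probability of success. A sketch with hypothetical cash flows and discount rate (the study's actual costs, benefits, and rates are not reproduced here):

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows; index 0 is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def expected_value(cash_flows, rate, p_success):
    """Weight a strategy's NPV by its probability of success."""
    return npv(cash_flows, rate) * p_success

# Hypothetical intervention: 20,000 upfront cost (e.g., a vaccination
# programme), 18,000/year benefit over 5 years, 5% discount rate,
# 80% probability the intervention succeeds.
flows = [-20_000] + [18_000] * 5
print(round(expected_value(flows, 0.05, 0.8), 2))
```

Running each candidate strategy through this calculation (with stochastic draws for the uncertain inputs) is what lets the study rank 'vac', 'vac+bios', and the other combinations by mean expected value.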
Abstract:
BACKGROUND Pulmonary fibrosis may result from abnormal alveolar wound repair after injury. Hepatocyte growth factor (HGF) improves alveolar epithelial wound repair in the lung. Stem cells have been shown to play a major role in lung injury, repair and fibrosis. We studied the presence, origin and antifibrotic properties of HGF-expressing stem cells in usual interstitial pneumonia. METHODS Immunohistochemistry was performed on lung tissue sections and primary alveolar epithelial cells obtained from patients with usual interstitial pneumonia (UIP, n = 7). Bone marrow-derived stromal cells (BMSC) from adult male rats were transfected with HGF, instilled intratracheally into bleomycin-injured rat lungs and analyzed 7 and 14 days later. RESULTS In UIP, HGF was expressed in specific cells mainly located in fibrotic areas close to the hyperplastic alveolar epithelium. HGF-positive cells showed strong co-staining for the mesenchymal stem cell markers CD44, CD29, CD105 and CD90, indicating stem cell origin. HGF-positive cells also co-stained for CXCR4 (HGF+/CXCR4+), indicating that they originate from the bone marrow. The stem cell characteristics were confirmed in HGF-secreting cells isolated from UIP lung biopsies. In vivo experiments showed that HGF-expressing BMSC attenuated bleomycin-induced pulmonary fibrosis in the rat, indicating a beneficial role of bone marrow-derived, HGF-secreting stem cells in lung fibrosis. CONCLUSIONS HGF-positive stem cells are present in human fibrotic lung tissue (UIP) and originate from the bone marrow. Since HGF-transfected BMSC reduce lung fibrosis in the bleomycin injury and fibrosis model, we assume that HGF-expressing, bone marrow-derived stem cells in UIP have antifibrotic properties.
Abstract:
Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of different physical scales involved in tumor growth, i.e., from the molecular nanoscale to the cell microscale and finally to the tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multi-scale problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that describe mathematically the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales, whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology with tumor growth by quantifying the dependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength).
Once functional relationships between variables and associated parameter values have been determined, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
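For illustration only (the paper's actual system of equations, closures, and parameters is more elaborate, and this generic form is an assumption): conservation-law models of this kind typically couple tumor cell density $n$ to a substrate concentration $\sigma$ (e.g., oxygen) through a reaction-diffusion balance:

```latex
\frac{\partial n}{\partial t}
  = \nabla \cdot \left( D \,\nabla n \right)
  + \lambda_m(\sigma)\, n \left( 1 - \frac{n}{n_{\max}} \right)
  - \lambda_a(\sigma)\, n ,
\qquad
\frac{\partial \sigma}{\partial t}
  = \nabla \cdot \left( D_\sigma \,\nabla \sigma \right) - q\, n ,
```

where $D$ is the cell diffusivity (migration), the mitosis rate $\lambda_m$ and apoptosis rate $\lambda_a$ depend on the local substrate level $\sigma$, $n_{\max}$ is a carrying capacity, and $q$ is the per-cell substrate uptake rate. These symbols are placeholders for the functional relationships and parameters the model calibrates from experiments and histopathology.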
Abstract:
Similar to other health care processes, referrals are susceptible to breakdowns. These breakdowns in the referral process can lead to poor continuity of care, slow diagnostic processes, delays and repetition of tests, patient and provider dissatisfaction, and a loss of confidence in providers. These facts, and the necessity for a deeper understanding of referrals in healthcare, served as the motivation to conduct a comprehensive study of referrals. The research began with the real problem and need to understand referral communication as a means to improve patient care. Despite previous efforts to explain referrals and the dynamics and interrelations of the variables that influence them, there is no common, contemporary, accepted definition of what a referral is in the health care context. The research agenda was guided by the need to explore referrals as an abstract concept by: (1) developing a conceptual definition of referrals, and (2) developing a model of referrals, to finally propose (3) a comprehensive research framework. This dissertation has resulted in a standard conceptual definition of referrals and a model of referrals. In addition, a mixed-method framework to evaluate referrals was proposed, and finally a data-driven model was developed to predict whether a referral would be approved or denied by a specialty service. The three manuscripts included in this dissertation present the basis for studying and assessing referrals using a common framework that should allow an easier comparative research agenda to improve referrals, taking into account the context where referrals occur.