868 results for Spectral and nonlinear optical characteristics


Relevance: 100.00%

Abstract:

In this paper we investigated, over two years at bi-monthly frequency, how physical, chemical, and biological processes affect the marine carbonate system in a coastal area characterized by high-alkalinity riverine discharge (Gulf of Trieste, northern Adriatic Sea, Mediterranean Sea).

Relevance: 100.00%

Abstract:

This study was conducted to assess the effect of air-dried Moringa stenopetala leaf (MSL) supplementation on carcass components and meat quality in Arsi-Bale goats. A total of 24 yearling goats with an initial body weight of 13.6±0.25 kg were randomly divided into four treatments with six goats each. All goats received a basal diet of natural grass hay ad libitum and 340 g head^(−1) d^(−1) of concentrate. The treatment diets comprised a control diet without supplementation (T1) and diets supplemented with MSL at a rate of 120 g head^(−1) d^(−1) (T2), 170 g head^(−1) d^(−1) (T3) and 220 g head^(−1) d^(−1) (T4). The results indicated that the average slaughter weight of goats reared on T3 and T4 was 18.2 and 18.3 kg, respectively, being higher (P<0.05) than that of T1 (15.8 kg) and T2 (16.5 kg). Goats fed the T3 and T4 diets had higher (P<0.05) daily weight gain than those fed T1 and T2. The hot carcass weight of goats reared on the T3 and T4 diets was 6.40 and 7.30 kg, respectively, being higher (P<0.05) than that of T1 (4.81 kg) and T2 (5.06 kg). Goats reared on T4 had a higher (P<0.05) dressing percentage than those reared on the other treatment diets. The rib-eye area of goats reared on the T2, T3 and T4 diets was larger (P<0.05) than that of T1. The protein content of the meat of goats reared on T3 and T4 was 24.0 and 26.4%, respectively, being significantly higher than that of T1 (19.1%) and T2 (20.1%). In conclusion, supplementation of natural grass hay with MSL improved the weight gain and carcass parts of Arsi-Bale goats, indicating that Moringa leaves can serve as an alternative protein supplement to poor-quality forages.

Relevance: 100.00%

Abstract:

In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure; measurements of that response allow damage and its extent to be localized. Measured vibration data, such as frequencies and mode shapes, can be used in finite element model updating to adjust structural parameters sensitive to damage (e.g. Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measurements of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, an irregular prestressed concrete beam intended for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. Another analytical model, based on the Ritz method, was developed in order to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibres, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified; for this model, the settings for the updating procedure are outlined.
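The updating idea in the abstract, adjusting a stiffness parameter until the model's modal predictions match measured data, can be sketched in a few lines. The closed-form beam frequency, the parameter values and the golden-section search below are illustrative assumptions standing in for the thesis's actual FE model and updating procedure:

```python
import math

def beam_first_frequency(E, L=3.0, I=8.0e-5, rho=2500.0, A=0.06):
    # First bending frequency (Hz) of a simply supported Euler-Bernoulli
    # beam: f1 = pi/(2*L^2) * sqrt(E*I / (rho*A)).  Stands in for the FE model.
    return math.pi / (2.0 * L**2) * math.sqrt(E * I / (rho * A))

def update_modulus(f_measured, lo=5e9, hi=100e9, iters=80):
    # Golden-section search minimising the squared frequency residual
    # J(E) = (f_model(E) - f_measured)^2 -- the "model updating" step.
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    J = lambda E: (beam_first_frequency(E) - f_measured) ** 2
    for _ in range(iters):
        if J(c) < J(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

E_hat = update_modulus(beam_first_frequency(32e9))  # recovers ~32 GPa
```

In a real updating run the residual would stack several frequencies and (as in this thesis) strain mode shapes, and the search would run over several parameters at once.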

Relevance: 100.00%

Abstract:

How rainfall infiltration rate and soil hydrological characteristics develop over time under forests of different ages in temperate regions is poorly understood. In this study, infiltration rate and soil hydrological characteristics were investigated under forests of different ages and under grassland. Soil hydraulic characteristics were measured at different scales under a 250-year-old grazed grassland (GL), 6-year-old (6yr) and 48-year-old (48yr) Scots pine (Pinus sylvestris) plantations, remnant 300-year-old individual Scots pines (OT) and a 4000-year-old Caledonian Forest (AF). In situ field-saturated hydraulic conductivity (Kfs) was measured, and visible root:soil area was estimated from soil pits. Macroporosity, pore structure and macropore connectivity were estimated from X-ray tomography of soil cores and from water-release characteristics. At all scales, the median values for Kfs, root fraction, macroporosity and connectivity tended to follow the order AF > OT > 48yr > GL > 6yr, indicating that infiltration rates and water storage increased with forest age. The remnant Caledonian Forest had a very wide range of Kfs (12 to >4922 mm h−1), with maximum Kfs values 7 to 15 times larger than those of the 48-year-old Scots pine plantation, suggesting that undisturbed old forests, with high rainfall and minimal evapotranspiration in winter, may act as important areas for water storage and as sinks for storm rainfall to infiltrate and be transported to deeper soil layers via preferential flow. The importance of the development of soil hydrological characteristics under forests of different ages is discussed.

Relevance: 100.00%

Abstract:

The transistor laser is a unique three-port device that operates simultaneously as a transistor and a laser. With quantum wells incorporated in the base regions of heterojunction bipolar transistors, the transistor laser possesses the advantageous characteristics of a fast base spontaneous carrier lifetime, high differential optical gain, and electrical-optical characteristics for direct "read-out" of its optical properties. These devices have demonstrated many useful features such as high-speed optical transmission without the limitations of resonance, non-linear mixing, frequency multiplication, negative resistance, and photon-assisted switching. To date, all of these devices have operated as multi-mode lasers without any type of wavelength selection or stabilizing mechanism. Stable single-mode distributed feedback diode laser sources are important in many applications including spectroscopy, as pump sources for amplifiers and solid-state lasers, in coherent communication systems, and now, as transistor lasers, potentially in integrated optoelectronics. The subject of this work is to expand the future applications of the transistor laser by demonstrating the theoretical background, process development and device design necessary to achieve single-longitudinal-mode operation in a three-port transistor laser. A third-order distributed feedback surface grating is fabricated in the top emitter AlGaAs confining layers using soft photocurable nanoimprint lithography. The device produces continuous-wave laser operation with a peak wavelength of 959.75 nm and a threshold current of 13 mA operating at -70 °C. For devices with cleaved ends, a side-mode suppression ratio greater than 25 dB has been achieved.
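For reference, the period of such a grating follows from the Bragg condition m·λ = 2·n_eff·Λ. A minimal sketch; the effective index n_eff ≈ 3.4 is an assumed, typical value for AlGaAs waveguides and is not stated in the abstract:

```python
def dfb_grating_period(wavelength_nm, order, n_eff):
    # Bragg condition for an m-th order grating: m*lambda = 2*n_eff*Lambda,
    # so the period is Lambda = m*lambda / (2*n_eff).
    return order * wavelength_nm / (2.0 * n_eff)

# Peak wavelength from the abstract; n_eff = 3.4 is an assumed value.
period_nm = dfb_grating_period(959.75, order=3, n_eff=3.4)  # ~423 nm
```

Higher-order gratings trade a longer, easier-to-fabricate period (here roughly three times that of a first-order grating) against weaker coupling per unit length.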

Relevance: 100.00%

Abstract:

The memristor is one of the fundamental circuit elements of electronics, alongside the resistor, the capacitor, and the inductor. It is a passive component whose theory was developed by Leon Chua in 1971. It took more than thirty years, however, before the theory could be connected to experimental results. In 2008, Hewlett-Packard published an article in which they claimed to have fabricated the first working memristor. The memristor, or memory resistor, is a resistive component whose resistance value can be changed. As its name suggests, the memristor can also retain its resistance value without a continuous current or voltage. Typically a memristor has at least two resistance values, either of which can be selected by applying a voltage or current to the component. For this reason memristors are often called resistive switches. Resistive switches are currently studied intensively, particularly because of the memory technology they enable. Memory built from resistive switches is called ReRAM (resistive random access memory). Like Flash memory, ReRAM is a non-volatile memory that can be electrically programmed or erased. Flash memory is currently used, for example, in memory sticks. ReRAM, however, enables faster and lower-power operation than Flash, making it a serious competitor on the market in the future. ReRAM also makes it possible to store more than one bit in a single memory cell instead of binary ("0" or "1") operation. Typically a ReRAM memory cell has two limiting resistance values, but additional states can potentially be programmed between these two. Memory cells can be called analog if the number of states is not limited. With analog memory cells it would be possible to build, for example, neural networks efficiently. Neural networks aim to model the operation of the brain and to perform tasks that are typically difficult for conventional computer programs.
Neural networks are used, for example, in speech recognition and artificial-intelligence applications. This thesis examines the analog operation of a Ta2O5-based ReRAM memory cell with its suitability for neural networks in mind. The fabrication and measurement results of the ReRAM cell are presented. The operation of a memory cell is rarely fully analog, because there is usually a limited number of states between the two limiting resistance values; the operation is therefore called pseudo-analog. The measurement results show that a single ReRAM cell is well capable of binary operation. To some extent a single cell can store several states, but the resistance values vary considerably between consecutive programming cycles, which complicates interpretation. The fabricated ReRAM cell cannot act as a pseudo-analog memory on its own; it requires a current-limiting component alongside it. Improving the fabrication process would also reduce the variance in the operation of a single cell, making its behavior closer to that of a pseudo-analog memory.
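As a rough illustration of how such a resistive switch behaves, the following sketch simulates the linear ion-drift memristor model published alongside the HP result; all parameter values are illustrative and are not those of the Ta2O5 cell studied in the thesis:

```python
import math

def simulate_memristor(v_amp=1.0, freq=1.0, steps=20000, dt=5e-5):
    # Linear ion-drift model (the model HP published for its device):
    #   M(w) = Ron*w/D + Roff*(1 - w/D),   dw/dt = mu*Ron/D * i(t)
    # Parameter values below are illustrative, not measured ones.
    Ron, Roff = 100.0, 16e3        # low/high resistance states (ohm)
    D, mu = 10e-9, 1e-14           # film thickness (m), ion mobility (m^2/sV)
    w = 0.1 * D                    # initial doped-region width
    history = []
    for k in range(steps):
        v = v_amp * math.sin(2.0 * math.pi * freq * k * dt)
        M = Ron * w / D + Roff * (1.0 - w / D)   # current memristance
        i = v / M
        w += mu * Ron / D * i * dt               # state drifts with charge
        w = min(max(w, 0.0), D)                  # keep w inside the film
        history.append((v, i, M))
    return history
```

Sweeping a sinusoidal voltage over one period traces the pinched hysteresis loop characteristic of a memristor: the resistance drops while charge flows one way and recovers when the polarity reverses, which is exactly the state-retention property ReRAM exploits.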

Relevance: 100.00%

Abstract:

The aim of the present study was to determine feed intake and average weight gain and to evaluate the ruminal morphological characteristics of Saanen kids slaughtered at 30, 45 and 60 days of age, according to a completely randomized design. Thirty-six non-castrated male Saanen kids were fed a ground total ration, a pelleted total ration, or an extruded total ration. Feed intake and refusals were recorded daily and the animals were weighed at birth and then once a week. Newborn kids received a milk replacer and were weaned at 45 days. Immediately after slaughter, the animals were eviscerated and the entire digestive apparatus was removed from the carcass. The reticulo-rumen was separated, emptied, washed and weighed. Samples were collected from the dorsal sac, pillar area and ventral sac of the rumen, fixed for about 24 h in Bouin's solution, dehydrated, embedded in Histosec and cut into 5 μm sections. Results showed that dry matter intake (DMI) at weaning and post-weaning and weight gain were higher (P < 0.05) in animals that received the pelleted total ration. The weight of the reticulo-rumen accompanied body development and was heavier in these animals. Histologically, after weaning ruminal papillae were more developed in animals that received the pelleted total ration. Papilla length increased with age. The ratio of papillary height to papillary width increased with age in the ventral sac until weaning (P > 0.05). We conclude that pelleting of the total ration favored increased intake, with a 46.7% increase in weight gain and increases in rumen weight and papilla length, suggesting that the best results are obtained with this processing. In general, no difference was observed between the results obtained with the extruded and ground total rations, although animals fed the extruded total ration showed an increase in rumen weight and papilla width.

Relevance: 100.00%

Abstract:

A classical approach to handling two-stage and multi-stage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to follow the progress of the algorithm more closely. Numerical experiments on examples of multi-stage stochastic linear problems suggest that most existing techniques may exhibit premature convergence to a suboptimal solution, or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear one. Although we have yet to test our method, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
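A minimal sketch of progressive hedging on a toy quadratic two-stage problem may make the algorithm concrete. The scenario subproblems here have closed-form solutions, and the penalty parameter rho is held fixed for simplicity (an adaptive strategy such as the one discussed above would adjust it during the run):

```python
def progressive_hedging(demands, probs, rho=1.0, iters=200):
    # Toy problem: min_x sum_s p_s*(x - d_s)^2.  Each scenario s gets its
    # own copy x_s; nonanticipativity (x_s = xbar for all s) is enforced
    # by multipliers w_s and the quadratic penalty rho.
    # Scenario subproblem (closed form because it is quadratic):
    #   x_s = argmin_x (x - d_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
    #       = (2*d_s - w_s + rho*xbar) / (2 + rho)
    xbar, w = 0.0, [0.0] * len(demands)
    for _ in range(iters):
        xs = [(2.0 * d - wi + rho * xbar) / (2.0 + rho)
              for d, wi in zip(demands, w)]
        xbar = sum(p * x for p, x in zip(probs, xs))          # aggregation
        w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]   # dual update
    return xbar

# For this quadratic toy the optimum is the probability-weighted mean.
x_star = progressive_hedging([2.0, 4.0, 9.0], [0.5, 0.3, 0.2])  # -> ~4.0
```

The sensitivity the abstract describes is visible even here: a much larger or smaller rho slows the geometric convergence of the scenario copies toward consensus.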

Relevance: 100.00%

Abstract:

This paper summarizes the literature on hedge funds (HFs) developed over the last two decades, particularly that which relates to risk management characteristics (a companion piece investigates the managerial characteristics of HFs). It discusses the successes and the shortfalls to date in developing more sophisticated risk management frameworks and tools to measure and monitor HF risks, and the empirical evidence on the role of HFs, their investment behaviour and their risk management practices in the stability of the financial system. It also classifies the HF literature in light of the most recent contributions and, particularly, the regulatory developments after the 2007 financial crisis.


Relevance: 100.00%

Abstract:

Purpose: To compare measurements taken using a swept-source optical coherence tomography-based optical biometer (IOLMaster 700) and an optical low-coherence reflectometry biometer (Lenstar 900), and to determine the clinical impact of differences in their measurements on intraocular lens (IOL) power predictions. Methods: Eighty eyes of 80 patients scheduled to undergo cataract surgery were examined with both biometers. The measurements made using each device were axial length (AL), central corneal thickness (CCT), aqueous depth (AQD), lens thickness (LT), mean keratometry (MK), white-to-white distance (WTW), and pupil diameter (PD). The Holladay 2 and SRK/T formulas were used to calculate IOL power. Differences in measurements between the two biometers were assessed using the paired t-test. Agreement was assessed through intraclass correlation coefficients (ICC) and Bland–Altman plots. Results: Mean patient age was 76.3±6.8 years (range 59–89). Using the Lenstar, AL and PD could not be measured in 12.5 and 5.25% of eyes, respectively, while the IOLMaster 700 took all measurements in all eyes. The variables CCT, AQD, LT, and MK differed significantly between the two biometers. According to the ICCs, correlation between measurements made with the two devices was excellent except for WTW and PD. Using the SRK/T formula, IOL power predictions based on the data from the two devices were statistically different, but the differences were not clinically significant. Conclusions: No clinically relevant differences were detected between the biometers in terms of their measurements and IOL power predictions. Using the IOLMaster 700, it was easier to obtain biometric measurements in eyes with less transparent ocular media or longer AL.
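The Bland–Altman agreement analysis used in the study reduces to the mean difference (bias) between paired readings and its 95% limits of agreement. A minimal sketch with made-up axial-length readings (not the study's data):

```python
import math

def bland_altman(a, b):
    # Agreement between two devices: bias = mean paired difference,
    # 95% limits of agreement = bias +/- 1.96 * SD of the differences.
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired axial-length readings (mm) from two biometers.
al_device_a = [23.10, 24.05, 22.80, 25.40, 23.75]
al_device_b = [23.12, 24.01, 22.85, 25.38, 23.70]
bias, loa_low, loa_high = bland_altman(al_device_a, al_device_b)
```

A statistically significant bias (the paired t-test) can still be clinically irrelevant if the limits of agreement are narrower than the precision the IOL power formula needs, which is the distinction the abstract draws.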

Relevance: 100.00%

Abstract:

Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, which includes multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD is used at the first level and DWD at a second level to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or over solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary.
A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable-resource and fossil-fuel-based applications in process systems engineering, it is shown that these novel decomposition approaches offer significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
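The Benders building block underlying these schemes can be sketched on a toy two-stage problem whose recourse value and subgradient are available in closed form; the tiny master problem is solved by grid search purely to keep the example self-contained (a real implementation would call an LP/MILP solver):

```python
def benders_toy(c=1.0, q=4.0, demands=(2.0, 4.0, 6.0, 8.0),
                tol=1e-6, max_iters=50):
    # First stage: choose x in [0, 10] at unit cost c.  Second stage:
    # each scenario's shortfall max(d_s - x, 0) is penalized at rate q.
    probs = [1.0 / len(demands)] * len(demands)
    x_grid = [i * 0.01 for i in range(1001)]      # master search grid
    cuts = []                                     # each cut: theta >= a + g*x

    def expected_recourse(x):                     # closed-form Q(x)
        return sum(p * q * max(d - x, 0.0) for p, d in zip(probs, demands))

    x, ub = 0.0, float("inf")
    for _ in range(max_iters):
        # "Subproblem": value of Q and a subgradient g at the current x.
        g = -q * sum(p for p, d in zip(probs, demands) if d > x)
        cuts.append((expected_recourse(x) - g * x, g))
        ub = min(ub, c * x + expected_recourse(x))   # feasible -> upper bound
        # Master: min c*x + theta subject to the cuts; its value is a
        # lower bound (theta >= 0 is valid because Q is nonnegative).
        master = lambda xv: c * xv + max(0.0, max(a + gg * xv for a, gg in cuts))
        x = min(x_grid, key=master)
        if ub - master(x) <= tol:
            break
    return x, ub

x_opt, obj = benders_toy()   # optimum lies on [6, 8] with cost 8
```

Cross decomposition enriches this loop by alternating these Benders cuts with Dantzig-Wolfe columns, so upper and lower bounds both improve from two directions.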