Abstract:
Advances in wireless networking and content delivery systems are enabling new challenging provisioning scenarios where a growing number of users access multimedia services, e.g., audio/video streaming, while moving among different points of attachment to the Internet, possibly with different connectivity technologies, e.g., Wi-Fi, Bluetooth, and cellular 3G. This calls for novel middleware capable of dynamically personalizing service provisioning to the characteristics of client environments, in particular to discontinuities in wireless resource availability due to handoffs. This dissertation proposes a novel middleware solution, called MUM, that performs effective and context-aware handoff management to transparently avoid service interruptions during both horizontal and vertical handoffs. To achieve this goal, MUM exploits full visibility of the wireless connections available in client localities and their handoff implementations (handoff awareness), of service quality requirements and handoff-related quality degradations (QoS awareness), and of network topology and resources available in current/future localities (location awareness). The design and implementation of all the main MUM components, along with extensive field trials of the realized middleware architecture, confirmed the validity of the proposed fully context-aware handoff management approach. In particular, the reported experimental results demonstrate that MUM can effectively maintain service continuity for a wide range of multimedia services by exploiting handoff prediction mechanisms, adaptive buffering and pre-fetching techniques, and proactive re-addressing/re-binding.
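To make the adaptive buffering and pre-fetching idea concrete, here is a minimal sketch, assuming a constant-bitrate stream and a handoff predictor that supplies an expected outage duration; the function name and the safety margin are invented for this illustration and are not part of MUM.

```python
# Illustrative buffer sizing (hypothetical, not MUM's implementation):
# during a handoff no data arrives, so playback continuity requires that
# bitrate * outage_duration bits are already buffered, plus a margin
# for handoff-prediction error.
def required_prefetch_bits(stream_bitrate_bps: float,
                           predicted_handoff_s: float,
                           safety_margin: float = 1.2) -> float:
    """Bits to buffer before the predicted handoff starts."""
    return stream_bitrate_bps * predicted_handoff_s * safety_margin

# Example: a 1.5 Mbit/s video stream and a predicted 2 s vertical handoff.
bits = required_prefetch_bits(1_500_000, 2.0)
print(f"pre-fetch at least {bits / 8 / 1024:.0f} KiB before the handoff")
```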
Abstract:
A major weakness of composite materials is that low-velocity impact, introduced accidentally during manufacture, operation or maintenance of the aircraft, may result in delaminations between the plies. Therefore, the first part of this study focuses on the mechanics of curved laminates under impact. To this aim, the effect of preloading on the impact response of curved composite laminates is considered. By applying the preload, the through-thickness stress and the curvature of the laminates increase. The results showed that all impact parameters vary significantly. To understand the respective contributions of preloading and pre-stress to the obtained results, another test is designed. The interesting phenomenon is that preloading can decrease the damaged area when the curvature of both specimens is the same. Finally, the effect of curvature type, concave or convex, is investigated under impact loading. In the second part, a new composition of nanofibrous mats is developed to improve the efficiency of curved laminates under impact loading. First, fracture tests are conducted to assess the effect of Nylon 6,6, PCL, and their mixture on mode I and mode II fracture toughness. To this end, nanofibers are electrospun and interleaved at the mid-plane of the composite laminate for the mode I and mode II tests. The results show that Nylon 6,6 is more effective than PCL in mode II, while PCL has the greater effect on mode I fracture toughness. By mixing these nanofibers, the shortcomings of the individual nanofibers are compensated, so the Nylon 6,6/PCL nanofibers could increase both mode I and mode II fracture toughness. All these nanofibers are then placed between all layers of the composite to investigate their effect on the damaged area. The results showed that PCL could decrease the damaged area by about 25%, and Nylon 6,6 and the mixed nanofibers by about 50%.
Abstract:
Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work takes an inside look at designing a new recommender system capable of making suggestions for a sequence of activities, dividing people into subgroups in order to boost overall group satisfaction. However, this idea increases the problem's complexity along several dimensions and poses a great challenge to the algorithm's performance. To assess its effectiveness, given the enhanced complexity and the need for precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users via two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach's computational time and efficacy. Generally, Local Search finds results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.
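As a concrete illustration of the Local Search approach, the following is a minimal hill-climbing sketch under assumed inputs: a hypothetical satisfaction[u][a] score for user u doing activity a, with subgroups formed implicitly by the user-to-activity assignment. It sketches the general technique, not the thesis's actual algorithm or objective.

```python
import random

def local_search(satisfaction, n_activities, iters=10_000, seed=0):
    """Hill climbing over user-to-activity assignments (illustrative)."""
    rng = random.Random(seed)
    n_users = len(satisfaction)
    # Start from a random assignment; assign[u] is user u's activity.
    assign = [rng.randrange(n_activities) for _ in range(n_users)]

    def score(a):
        # Overall group satisfaction: sum of individual satisfactions.
        return sum(satisfaction[u][a[u]] for u in range(n_users))

    best = score(assign)
    for _ in range(iters):
        u = rng.randrange(n_users)             # pick a user to move
        old = assign[u]
        assign[u] = rng.randrange(n_activities)
        new = score(assign)
        if new >= best:
            best = new                         # keep improving moves
        else:
            assign[u] = old                    # revert worsening moves
    return assign, best

# Example: 4 users, 2 activities, random satisfaction scores.
rng = random.Random(1)
sat = [[rng.random() for _ in range(2)] for _ in range(4)]
print(local_search(sat, n_activities=2))
```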
Abstract:
Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through the use of social networks, blogs and forums that allow real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This could help determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it attractive compared with previous, more sophisticated techniques. Every discussed technique has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing performance with that of two previous works. The performed analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e., positive and negative) Document Sentiment Classification. However, there is still room for improvement: this work also indicates the path to better performance, namely that a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also validate these results in tasks with more than 2 classes.
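The core mechanism can be sketched compactly: train one word-transition chain per sentiment class, then label a document by the chain under which its word transitions are most likely. The sketch below is an illustrative reconstruction under assumed details (bigram transitions, add-one smoothing, a fixed smoothing denominator), not the dissertation's exact model.

```python
import math
from collections import defaultdict

class MarkovSentimentClassifier:
    """2-class document sentiment classifier built on word-bigram
    Markov chains (illustrative sketch)."""

    def fit(self, docs_by_class):
        # One transition-count table per class: prev_word -> word -> count.
        self.counts = {}
        for label, docs in docs_by_class.items():
            trans = defaultdict(lambda: defaultdict(int))
            for doc in docs:
                words = doc.lower().split()
                for prev, cur in zip(words, words[1:]):
                    trans[prev][cur] += 1
            self.counts[label] = trans

    def _loglik(self, words, trans):
        ll = 0.0
        for prev, cur in zip(words, words[1:]):
            row = trans.get(prev, {})
            total = sum(row.values())
            # Add-one smoothing so unseen transitions are not fatal.
            ll += math.log((row.get(cur, 0) + 1) / (total + 1000))
        return ll

    def predict(self, doc):
        words = doc.lower().split()
        return max(self.counts,
                   key=lambda c: self._loglik(words, self.counts[c]))

clf = MarkovSentimentClassifier()
clf.fit({"pos": ["great product really love it"],
         "neg": ["terrible waste of money"]})
print(clf.predict("really love this great thing"))  # -> "pos"
```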
Abstract:
Background: Visuoperceptual deficits in dementia are common and can reduce quality of life. Testing of visuoperceptual function is often confounded by impairments in other cognitive domains and by motor dysfunction. We aimed to develop, pilot, and test a novel visuocognitive prototype test battery that addresses these issues, suitable for both clinical and functional imaging use. Methods: We recruited 23 participants (14 with dementia, 6 of whom had extrapyramidal motor features, and 9 age-matched controls). The novel Newcastle visual perception prototype battery (NEVIP-B-Prototype) included angle, color, face, motion and form perception tasks, and an adapted response system. It allows for individualized task difficulties. Participants were tested outside and inside the 3T functional magnetic resonance imaging (fMRI) scanner. fMRI data were analyzed using SPM8. Results: All participants successfully completed the task inside and outside the scanner. fMRI analysis showed activation regions corresponding well to the regional specializations of the visual association cortex. In both groups, there was significant activity in the ventral occipital-temporal region in the face and color tasks, whereas the motion task activated the V5 region. In the control group, the angle task activated the occipitoparietal cortex. Patients and controls showed similar levels of activation, except on the angle task, for which occipitoparietal activation was lower in patients than in controls. Conclusion: Distinct visuoperceptual functions can be tested in patients with dementia and extrapyramidal motor features when tests use individualized thresholds, adapted tasks, and specialized response systems.
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine if current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside and barrier specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event present. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes. Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.
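To illustrate the modeling step named above, the sketch below fits a binary logistic regression for second-event occurrence and reports odds ratios. The rows, predictor encodings (stiff_barrier, light_truck, nontracking), and case weights are hypothetical stand-ins, not NASS/CDS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data: one row per barrier crash.
# Columns: barrier lateral stiffness (1 = rigid), vehicle type
# (1 = light truck/van), pre-impact tracking (1 = non-tracking).
X = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 1, 0],
              [0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 0, 1]])
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # did a second event occur?
w = np.array([420.0, 130, 510, 260, 380, 90, 300, 210])  # case weights

model = LogisticRegression().fit(X, y, sample_weight=w)
names = ["stiff_barrier", "light_truck", "nontracking"]
# exp(coefficient) is the odds ratio per predictor (regularized here;
# inference-grade fits would use an unpenalized GLM instead).
print(dict(zip(names, np.exp(model.coef_[0]).round(2))))
```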
Abstract:
Evidence suggests that the social cognition deficits prevalent in autism spectrum disorders (ASDs) are widely distributed in first degree and extended relatives. This "broader autism phenotype" (BAP) can be extended into non-clinical populations and shows wide distributions of social behaviors such as empathy and social responsiveness, with ASDs exhibiting these behaviors at the lower ends of the distributions. Little evidence has previously shown relationships between self-report measures of social cognition and more objective tasks such as face perception in functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). In this study, three specific hypotheses were addressed: a) increased social ability, as measured by an increased Empathy Quotient, decreased Social Responsiveness Scale (SRS-A) score, and increased Social Attribution Task score, will predict increased activation of the fusiform gyrus in response to faces as compared to houses; b) these same measures will predict N170 amplitude and latency, showing decreased latency and increased amplitude for faces as compared to houses with increased social ability; c) increased amygdala volume will predict increased fusiform gyrus activation when viewing faces as compared to houses. Findings supported all of the hypotheses. Empathy scores significantly predicted both right FFG activation [F(1,20) = 4.811, p = .041, β = .450, R² = 0.20] and left FFG activation [F(1,20) = 7.70, p = .012, β = .537, R² = 0.29]. Based on ERP results, increased right-lateralized face-related N170 was significantly predicted by the EQ [F(1,54) = 6.94, p = .011, β = .338, R² = 0.11]. Finally, total amygdala volume significantly predicted right [F(1,20) = 7.217, p = .014, β = .515, R² = 0.27] and left [F(1,20) = 36.77, p < .001, β = .805, R² = 0.65] FFG activation. Consistent with the a priori hypotheses, traits attributed to the BAP can significantly predict neural responses to faces in a non-clinical population. This is consistent with the face processing deficits seen in ASDs. The findings presented here contribute to the extension of the BAP from unaffected relatives of individuals with ASDs to the general population. These findings also give continued evidence in support of a continuous distribution of traits found in psychiatric illnesses in place of a traditional, dichotomous "all-or-nothing" diagnostic framework of neurodevelopmental and neuropsychiatric disorders.
Abstract:
Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over twenty-five years old, the data used in the previous research are no longer representative of the currently installed barriers or the US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine if full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. The analysis included 1,383 (596,331 weighted) real-world barrier midsection impacts selected from thirteen years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS). For each suitable case, the scene diagram and available scene photographs were used to determine roadside and barrier specific variables not available in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of seven compared to cases with no second event present. Twenty-four full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from NCHRP Report 350. It was found that the NCHRP Report 350 exit angle criterion alone was not sufficient to predict second collision occurrence for real-world barrier crashes.
Abstract:
This contribution investigates the evolution of diet in the Pan–Homo and hominin clades. It does this by focusing on 12 variables (nine dental and three mandibular) for which data are available for extant chimpanzees, modern humans and most extinct hominins. Previous analyses of this type have approached the interpretation of dental and gnathic function by focusing on the identification of the food consumed (i.e. fruits, leaves, etc.) rather than on the physical properties (i.e. hardness, toughness, etc.) of those foods, and they have not specifically addressed the role that the physical properties of foods play in determining dental adaptations. We take the available evidence for the 12 variables, and set out what the expression of each of those variables is in extant chimpanzees, the earliest hominins, archaic hominins, megadont archaic hominins, and an inclusive grouping made up of transitional hominins and pre-modern Homo. We then present hypotheses about what the states of these variables would be in the last common ancestor of the Pan–Homo clade and in the stem hominin. We review the physical properties of food and suggest how these physical properties can be used to investigate the functional morphology of the dentition. We show which aspects of anterior tooth morphology are critical for food preparation (e.g. peeling fruit) prior to its ingestion, which features of the postcanine dentition (e.g. overall and relative size of the crowns) are related to the reduction in the particle size of food, and how information about the macrostructure (e.g. enamel thickness) and microstructure (e.g. extent and location of enamel prism decussation) of the enamel cap might be used to make predictions about the types of foods consumed by extinct hominins. Specifically, we show how thick enamel can protect against the generation and propagation of cracks in the enamel that begin at the enamel–dentine junction and move towards the outer enamel surface.
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
Abstract:
AIMS/HYPOTHESIS: To assess the use of paediatric continuous subcutaneous insulin infusion (CSII) under real-life conditions by analysing data recorded for up to 90 days and relating them to outcome. METHODS: Pump programming data from patients aged 0-18 years treated with CSII in 30 centres from 16 European countries and Israel were recorded during routine clinical visits. HbA1c was measured centrally. RESULTS: A total of 1,041 patients (age: 11.8 ± 4.2 years; diabetes duration: 6.0 ± 3.6 years; average CSII duration: 2.0 ± 1.3 years; HbA1c: 8.0 ± 1.3% [means ± SD]) participated. Glycaemic control was better in preschool (n = 142; 7.5 ± 0.9%) and pre-adolescent (6-11 years, n = 321; 7.7 ± 1.0%) children than in adolescent patients (12-18 years, n = 578; 8.3 ± 1.4%). There was a significant negative correlation between HbA1c and daily bolus number, but not between HbA1c and total daily insulin dose. The use of <6.7 daily boluses was a significant predictor of an HbA1c level >7.5%. The incidence of severe hypoglycaemia and ketoacidosis was 6.63 and 6.26 events per 100 patient-years, respectively. CONCLUSIONS/INTERPRETATION: This large paediatric survey of CSII shows that glycaemic targets can frequently be achieved, particularly in young children, and that the incidence of acute complications is low. Adequate substitution of basal and prandial insulin is associated with a better HbA1c.
Abstract:
Osteoarthritis is thought to be caused by a combination of intrinsic vulnerabilities of the joint, such as anatomic shape and alignment, and environmental factors, such as body weight, injury, and overuse. It has been postulated that much of osteoarthritis is due to anatomic deformities. Advances in surgical techniques such as the periacetabular osteotomy, safe surgical dislocation of the hip, and hip arthroscopy have provided us with effective and safe tools to correct these anatomical problems. The limiting factor in treatment outcome in many mechanically compromised hips is the degree of cartilage damage that has occurred prior to treatment. In this regard, the role of imaging, utilizing plain radiographs in conjunction with magnetic resonance imaging, is becoming vitally important for the detection of these anatomic deformities and pre-radiographic arthritis. In this article, we will outline the plain radiographic features of hip deformities that can cause instability or impingement. Additionally, we will illustrate the use of MR imaging to detect subtle anatomic abnormalities, as well as the use of biochemical imaging techniques such as dGEMRIC to guide clinical decision making.
Abstract:
The emissions, filtration and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied using computational models. Experimental data needed to calibrate the models were obtained from characterization experiments with raw exhaust sampling on a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60 and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF. This was done to study the effect of temperature and CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation characteristics of HCs, CO and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280-465°C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec. The downstream HC, CO and NO concentrations were predicted by the DOC model to within ±3 ppm. The HC and CO oxidation kinetics in the temperature range of 280-465°C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec can be represented by one 'apparent' activation energy and pre-exponential factor. The NO oxidation kinetics over the same temperature and exhaust flow rate range can be represented by 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted to within 0.5 kPa by the model. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model to simulate the oxidation of particulate inside the filter wall was developed. A particulate cake layer filtration model, which describes particle filtration in terms of more fundamental parameters, was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model was developed to take into account the NO2 produced in the washcoat of the CPF. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT® by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II and the filter wall, and by the means of oxidation, i.e., by O2, by NO2 entering the filter and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280-465°C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, which is in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 kg/m3 and 3.95 kg/m3 for all the loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 kg/m3 to 134 kg/m3 and from 0.42E-14 m2 to 2.00E-14 m2, respectively, and are in agreement with the Peclet number correlations in the literature.
Particulate cake layer porosities determined from the particulate cake layer filtration model ranged from 0.841 to 0.814, decreasing with load, which is about 0.1 lower than experimental values and more complex discrete-particle simulations in the literature. The thickness of layer I was kept constant at 20 µm. The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. It was determined that the thermal and NO2 kinetic parameters do not change with temperature, exhaust flow rate or NO2 concentration. However, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model, along with the particulate cake filtration model, was developed to model the sudden and rapid decreases in pressure drop across the CPF. The particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98-99% being predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentrations because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be the dominant means and the dominant physical location of particulate oxidation, respectively. However, at temperatures of 280°C, NO2 is not a significant oxidizer of particulate matter, which is in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load, respectively, in the CCRT® configuration. In the CPF-only configuration at the same loads, the model showed that after 5 hours, 4.4 and 64.8% of the inlet particulate matter was oxidized. The increase in NO2 concentrations across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, in the range of 4.5-8.3%, in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressures.
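The 'apparent' activation energies and pre-exponential factors above enter the models through the Arrhenius relation k = A exp(-Ea/RT). The snippet below simply evaluates that relation; the parameter values are illustrative placeholders, not the calibrated DOC/CPF kinetics.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A: float, Ea: float, T_celsius: float) -> float:
    """Rate constant k = A * exp(-Ea / (R * T)) at temperature T in C,
    for pre-exponential factor A and activation energy Ea in J/mol."""
    return A * math.exp(-Ea / (R * (T_celsius + 273.15)))

# Example: rate amplification across the 280-465 C calibration window,
# using placeholder values A = 1e10 and Ea = 100 kJ/mol.
for T in (280, 465):
    print(T, arrhenius(A=1.0e10, Ea=100e3, T_celsius=T))
```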
Abstract:
In recent years, growing attention has been devoted to the use of lignocellulosic biomass as a feedstock to produce renewable carbohydrates as a source of energy products, including liquid alternatives to fossil fuels. The benefits of developing woody biomass-to-ethanol technology are to increase long-term national energy security, reduce fossil energy consumption, lower greenhouse gas emissions, use renewable rather than depletable resources, and create local jobs. Currently, research is driven by the need to reduce the cost of biomass-ethanol production. One of the preferred methods is to thermochemically pretreat the biomass material and subsequently, enzymatically hydrolyze the pretreated material to fermentable sugars that can then be converted to ethanol using specialized microorganisms. The goals of pretreatment are to separate the hemicellulose fraction from the other biomass components, reduce bioconversion time, enhance enzymatic conversion of the cellulose fraction, and, hopefully, obtain a higher ethanol yield. The primary goal of this research is to obtain detailed kinetic data on the dilute acid hydrolysis of several timber species from the Upper Peninsula of Michigan and of switchgrass. These results will be used to identify optimum reaction conditions that maximize production of fermentable sugars and minimize production of non-fermentable byproducts. The structural carbohydrate analysis of the biomass species used in this project was performed using the procedure proposed by the National Renewable Energy Laboratory (NREL). Subsequently, dilute acid-catalyzed hydrolysis of biomass, including aspen, basswood, balsam, red maple, and switchgrass, was studied at various temperatures, acid concentrations, and particle sizes in a 1-L well-mixed batch reactor (Parr Instruments, Model 4571). 25 g of biomass and 500 mL of dilute acid solution were added to a 1-L glass liner, which was then placed in the reactor. During each experiment, 5 mL samples were taken starting at 100°C at 3 min intervals until the targeted temperature (160, 175, or 190°C) was reached, followed by 4 samples after achieving the desired temperature. The collected samples were immediately cooled in an ice bath to stop the reaction. The cooled samples were filtered using a 0.2 μm MILLIPORE membrane filter to remove suspended solids. The filtered samples were then analyzed using High Performance Liquid Chromatography (HPLC) with a Bio-Rad Aminex HPX-87P column and refractive index detection to measure monomeric and polymeric sugars plus degradation byproducts. A first-order reaction model was assumed, and kinetic parameters such as the activation energy and pre-exponential factor of the Arrhenius equation were obtained by matching the model to the experimental data. The reaction temperature increased linearly over the roughly 40-minute heat-up period of each experiment. Xylose and other sugars were formed from hemicellulose hydrolysis over this heat-up period until a maximum concentration was reached near the time the targeted temperature was attained. However, negligible amounts of xylose byproducts and only small concentrations of other soluble sugars, such as mannose, arabinose, and galactose, were detected during this initial heat-up period. Very little cellulose hydrolysis yielding glucose was observed during the initial heat-up period. On the other hand, later in the reaction, during the constant-temperature period, xylose was degraded to furfural.
Glucose production from cellulose increased during this constant-temperature period, at later time points in the reaction. The kinetic coefficients governing the generation of xylose from hemicellulose and the generation of furfural from xylose showed a coherent dependence on both temperature and acid concentration. However, no effect of particle size was observed. Three types of biomass were used in this project: hardwoods (aspen, basswood, and red maple), a softwood (balsam), and a herbaceous crop (switchgrass). For the xylose formation model, the activation energies and pre-exponential factors of the timber species and switchgrass were in the ranges of 49 to 180 kJ/mol and 7.5x10^4 to 2.6x10^20 min^-1, respectively. In addition, for xylose degradation, the activation energies and pre-exponential factors ranged from 130 to 170 kJ/mol and from 6.8x10^13 to 3.7x10^17 min^-1, respectively. The results compare favorably with the literature values given by Ranganathan et al., 1985. Overall, up to 92% of the xylose could be generated by the dilute acid hydrolysis in this project.
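The Arrhenius parameter extraction described above reduces to a linear fit in Arrhenius coordinates: regressing ln k on 1/T gives a slope of -Ea/R and an intercept of ln A. The first-order rate constants in the sketch below are hypothetical placeholders chosen to fall inside the reported ranges, not thesis data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
T = np.array([160.0, 175.0, 190.0]) + 273.15  # target temperatures, K
k = np.array([0.005, 0.020, 0.070])           # first-order k, 1/min (made up)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor, 1/min
print(f"Ea = {Ea / 1000:.0f} kJ/mol, A = {A:.2e} min^-1")
```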
Abstract:
One of two active volcanoes in the western branch of the East African Rift, Nyamuragira (1.408°S, 29.20°E; 3058 m) is located in the D.R. Congo. Nyamuragira emits large amounts of SO2 (up to ~1 Mt/day) and erupts low-silica, alkalic lavas, which achieve flow rates of up to ~20 km/hr. The source of the large SO2 emissions and the pre-eruptive magma conditions were unknown prior to this study, and the 1994-2010 lava volumes were only recently mapped via satellite imagery, mainly due to the region's political instability. In this study, new olivine-hosted melt inclusion volatile (H2O, CO2, S, Cl, F) and major element data from five historic Nyamuragira eruptions (1912, 1938, 1948, 1986, 2006) are presented. Melt compositions derived from the 1986 and 2006 tephra samples best represent pre-eruptive volatile compositions because these samples contain naturally glassy inclusions that underwent less post-entrapment modification than crystallized inclusions. The total amounts of SO2 released by the 1986 (0.04 Mt) and 2006 (0.06 Mt) eruptions are derived using the petrologic method, whereby S contents in melt inclusions are scaled to erupted lava volumes. These amounts are significantly less than satellite-based SO2 emissions for the same eruptions (1986 = ~1 Mt; 2006 = ~2 Mt). Potential explanations for this observation are: 1) accumulation of a vapor phase within the magmatic system that is only released during eruptions, and/or 2) syn-eruptive gas release from unerupted magma. Post-1994 Nyamuragira lava volumes were not available at the beginning of this study. These flows (along with others since 1967) are mapped with Landsat MSS, TM, and ETM+, Hyperion, and ALI satellite data and combined with published flow thicknesses to derive volumes. Satellite remote sensing data were also used to evaluate Nyamuragira SO2 emissions. The results show that the most recent Nyamuragira eruptions injected SO2 into the atmosphere at heights between 15 km (2006 eruption) and 5 km (2010 eruption). This suggests that past effusive basaltic eruptions (e.g., Laki 1783) were capable of producing similar plume heights that reached the upper troposphere or tropopause, allowing SO2 and the resultant aerosols to remain longer in the atmosphere, travel farther around the globe, and affect global climate.
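The petrologic method referenced above is at heart a mass balance: the S lost from the melt is scaled by the erupted melt mass and converted from S to SO2 by molar mass. The sketch below uses round illustrative numbers; the melt density and the S loss are assumptions, not the study's measurements.

```python
def petrologic_so2_mt(delta_s_ppm: float, lava_volume_km3: float,
                      melt_density_kg_m3: float = 2700.0) -> float:
    """SO2 release (Mt) from S lost by degassing melt (illustrative)."""
    melt_mass_kg = lava_volume_km3 * 1e9 * melt_density_kg_m3  # km^3 -> m^3
    s_mass_kg = melt_mass_kg * delta_s_ppm * 1e-6              # ppm by mass
    so2_mass_kg = s_mass_kg * (64.066 / 32.065)  # SO2/S molar mass ratio
    return so2_mass_kg / 1e9                     # kg -> Mt

# Example: 1000 ppm S lost from 0.01 km^3 of erupted lava.
print(f"{petrologic_so2_mt(1000.0, 0.01):.3f} Mt SO2")  # ~0.054 Mt
```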