872 results for Large-scale experiments

Relevance: 90.00%

Abstract:

The West African Monsoon (WAM) and its representation in numerical models are strongly influenced by the Saharan Heat Low (SHL), a low-pressure system driven by radiative heating over the central Sahara and ventilated by cold and moist inflow from the adjacent oceans. It has recently been shown that a significant part of the southerly moisture flux into the SHL originates from convective cold pools over the Sahel. These density currents, driven by the evaporation of rain, are largely absent in models with parameterized convection. This deficiency has been hypothesized to contribute to the inability of many climate models to reproduce the variability of the WAM. Here, the role of convective cold pools approaching the SHL from the Atlas Mountains, which are a strong orographic trigger for deep convection in Northwest Africa, is analyzed. Knowledge of the frequency of these events, as well as of their impact on the large-scale dynamics, is required to understand their contribution to the variability of the SHL and to known model uncertainties. The first aspect is addressed through the development of an objective and automated method for generating multi-year climatologies that were not previously available. The algorithm combines freely available standard surface observations with satellite microwave data. The representativeness of the stations and the influence of their spatial density are assessed by comparison with a satellite-only climatology. Applying this algorithm to data from automated weather stations and manned synoptic stations in and south of the Atlas Mountains reveals the frequent occurrence of such events: on the order of 6 events per month are detected from May to September, when the SHL is in its northernmost position. The events tend to cluster into convectively active periods lasting several days, often with strong events on consecutive days. This study is the first to diagnose the dynamical impacts of such periods on the SHL, based on simulations of two example cases using the Weather Research and Forecasting (WRF) model at convection-permitting resolution. Sensitivity experiments with artificially removed cold pools, as well as with different resolutions and parameterizations, are conducted. Results indicate increases in surface pressure of more than 1 hPa and significant moisture transport into the desert over several days. This moisture affects radiative heating and thus the energy balance of the SHL. Even though cold pool events north of the SHL are less frequent than their Sahelian counterparts, it is shown that they gain importance through their temporal clustering on synoptic timescales. Together with studies focusing on the Sahel, this work emphasizes the need for improved parameterization schemes for deep convection in order to produce more reliable climate projections for the WAM.
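
To make the station-based part of such a detection concrete, the sketch below flags cold-pool passages in a surface-station time series from a simultaneous temperature drop and pressure rise; the thresholds, window length, and data are illustrative only, and the actual algorithm in the study additionally uses satellite microwave data.

```python
import numpy as np

def detect_cold_pool_passages(time_h, temp_c, pres_hpa,
                              dt_drop=-4.0, dp_rise=1.0, window_h=1.0):
    """Flag cold-pool passages in a surface-station time series.

    A passage is flagged at time t0 when, within the following `window_h`
    hours, the temperature drops by at least |dt_drop| K while the surface
    pressure rises by at least dp_rise hPa (illustrative thresholds only).
    Consecutive flags belonging to the same passage would be merged in a
    real climatology.
    """
    time_h = np.asarray(time_h, dtype=float)
    temp_c = np.asarray(temp_c, dtype=float)
    pres_hpa = np.asarray(pres_hpa, dtype=float)
    events = []
    for i, t0 in enumerate(time_h):
        in_win = (time_h > t0) & (time_h <= t0 + window_h)
        if not in_win.any():
            continue
        if (temp_c[in_win].min() - temp_c[i] <= dt_drop and
                pres_hpa[in_win].max() - pres_hpa[i] >= dp_rise):
            events.append(t0)
    return events

# Synthetic day of observations: a 6 K drop and a 1.5 hPa jump shortly after hour 12
t = np.arange(0.0, 24.0, 0.25)
T = 38.0 - 0.1 * t - 6.0 * (t > 12)
P = 1008.0 + 1.5 * (t > 12)
print(detect_cold_pool_passages(t, T, P))
```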

Relevance: 90.00%

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs, as well as the continuous stream of information produced by the entities in these graphs, make them dynamic in nature. Examples include social networks where users post status updates, images, and videos; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and also allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. Queries in my system are expressed using a set of active structural primitives, which allows the query evaluator to apply a set of novel optimization techniques and thereby achieve high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
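
As an illustration of the bookkeeping behind such a policy, the sketch below tracks per-node read/write counts and chooses between no replication, lazy replication, and eager replication; the class name and thresholds are hypothetical and stand in for the dissertation's actual system.

```python
from collections import defaultdict

class HybridReplicationPolicy:
    """Decide per node whether to replicate its data, and eagerly or lazily.

    Heuristic sketch: replicate a node's data at readers' partitions when reads
    dominate writes; push updates eagerly only when the write rate is low enough
    that synchronous propagation stays cheap.
    """

    def __init__(self, replicate_ratio=2.0, eager_write_rate=5.0):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.replicate_ratio = replicate_ratio      # reads per write needed to replicate
        self.eager_write_rate = eager_write_rate    # max writes per time unit for eager mode

    def record_read(self, node):
        self.reads[node] += 1

    def record_write(self, node):
        self.writes[node] += 1

    def decide(self, node, elapsed_time):
        r, w = self.reads[node], self.writes[node]
        if w == 0 or r / w >= self.replicate_ratio:
            mode = "eager" if w / max(elapsed_time, 1e-9) < self.eager_write_rate else "lazy"
            return ("replicate", mode)
        return ("no-replication", None)

policy = HybridReplicationPolicy()
for _ in range(20):
    policy.record_read("user42")
policy.record_write("user42")
print(policy.decide("user42", elapsed_time=10.0))   # ('replicate', 'eager')
```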

Relevance: 90.00%

Abstract:

Reconstructing Northern Hemisphere ice-sheet oscillations and meltwater routing to the ocean is important for better understanding the mechanisms behind abrupt climate changes. To date, research efforts have mainly focused on the North American (Laurentide) Ice Sheet (LIS), leaving the potential role of the European Ice Sheet (EIS), and of the Scandinavian Ice Sheet (SIS) in particular, largely unexplored. Using neodymium isotopes in detrital sediments deposited off the Channel River, we provide a continuous and well-dated record of the evolution of the EIS southern margin through the end of the last glacial period and during the deglaciation. Our results reveal that the evolution of the EIS margins was accompanied by substantial ice recession (especially of the SIS) and the simultaneous release of meltwater to the North Atlantic. These events occurred both during the advance of the EIS to its LGM position (i.e., during Heinrich Stadials (HS) 3 and 2; ∼31–29 ka and ∼26–23 ka, respectively) and during the deglaciation (i.e., at ∼22 ka, ∼20–19 ka, and from 18.2 ± 0.2 to 16.7 ± 0.2 ka, which corresponds to the first part of HS1). The deglaciation was discontinuous in character and similar in timing to that of the southern LIS margin, with moderate ice-sheet retreat (from 22.5 ± 0.2 ka in the Baltic lowlands) beginning as soon as northern summer insolation increased (from ∼23 ka), followed by an acceleration of the margin retreat thereafter (from ∼20 ka). Importantly, our results show that EIS retreat events and the release of meltwater to the North Atlantic during the deglaciation coincide with destabilisation of the Atlantic Meridional Overturning Circulation (AMOC) and interhemispheric climate changes. They thus suggest that the EIS, together with the LIS, could have played a critical role in the climatic reorganization that accompanied the last deglaciation. Finally, our data suggest that meltwater discharges to the North Atlantic produced by large-scale recession of continental parts of the Northern Hemisphere ice sheets during Heinrich Stadials could have been a source of the oceanic perturbations (i.e., AMOC shutdown) responsible for the marine-based ice-stream purge cycles, the so-called Heinrich Events (HEs), that punctuate the last glacial period.

Relevance: 90.00%

Abstract:

This thesis is devoted to the development, synthesis, properties, and applications of nanomaterials for critical technologies, covering three areas. (1) Microbial contamination of drinking water is a serious problem of global significance: about 51% of the waterborne disease outbreaks in the United States can be attributed to contaminated groundwater. The development of metal oxide nanoparticles as viricidal materials is of technological and fundamental scientific importance. Nanoparticles with high surface areas and ultra-small particle sizes offer dramatically enhanced efficiency and capacity for virus inactivation that cannot be achieved by their bulk counterparts. A series of metal oxide nanoparticles, such as iron oxide, zinc oxide, and iron oxide-silver nanoparticles, coated on fiber substrates, was developed in this research for evaluation of their viricidal activity. These nanoparticles were also characterized by XRD, TEM, SEM, XPS, surface area measurements, and zeta potential measurements. MS2 virus inactivation experiments showed that these metal oxide nanoparticle-coated fibers are extremely powerful viricidal materials. Results from this research suggest that zinc oxide nanoparticles with a diameter of 3.5 nm and an isoelectric point (IEP) of 9.0 were well dispersed on fiberglass. These fibers offer an increase in capacity by orders of magnitude over all other materials. Compared to iron oxide nanoparticles, zinc oxide nanoparticles did not show an improvement in inactivation kinetics, but their inactivation capacity increased by two orders of magnitude, to 99.99%. Furthermore, zinc oxide nanoparticles have a higher affinity for viruses than iron oxide nanoparticles in the presence of competing ions. The advantages of zinc oxide derive from its high surface charge density, small nanoparticle size, and ability to generate reactive oxygen species. At its present stage of development, this research appears to offer the best avenue for removing viruses from water. Requiring no additional chemicals or energy input, this system can be implemented both at the point of use (POU) and in large-scale water treatment, which will have a significant impact on the water purification industry.

(2) A new family of aliphatic polyester lubricants has been developed for use in micro-electromechanical systems (MEMS), specifically for hard disk drives that operate at high spindle speeds (>15,000 rpm). Our program was initiated to address current problems with spin-off of perfluoropolyether (PFPE) lubricants. The new polyester lubricant appears to alleviate spin-off problems while at the same time improving chemical and thermal stability. The new system provides a low-cost alternative to PFPE along with improved adhesion to the substrates. In addition, it displays a much lower viscosity, which may be important for stiction-related problems. The synthetic route is readily scalable should additional interest emerge in other areas, including small motors.

(3) The demand for increased signal transmission speed and device density in the next generation of multilevel integrated circuits has placed stringent demands on materials performance. Currently, integration of ultra-low-k materials in dual-damascene processing requires chemical mechanical polishing (CMP) to planarize the copper. Unfortunately, none of the commercially proposed dielectric candidates display the mechanical and thermal properties required for successful CMP. A new polydiacetylene thermosetting polymer (DEB-TEB), which displays a low dielectric constant (low-k) of 2.7, was recently developed. This novel material appears to offer the only avenue for designing an ultra-low-k dielectric (k = 1.85) that can still display the modulus (7.7 GPa) and hardness (2.0 GPa) required to withstand the CMP process. We focused on further characterization of the thermal properties of spin-on poly(DEB-TEB) ultra-thin films, including the coefficient of thermal expansion (CTE), biaxial thermal stress, and thermal conductivity. The CTE is 2.0 × 10⁻⁵ K⁻¹ in the perpendicular direction and 8.0 × 10⁻⁶ K⁻¹ in the planar direction. The low CTE provides a better match to the Si substrate, which minimizes interfacial stress and greatly enhances the reliability of the microprocessors. Initial experiments with oxygen plasma etching suggest a high probability of success in achieving vertical profiles.

Relevance: 90.00%

Abstract:

Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages, using an analysis of the text around each image and of the image appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text feature of an unannotated image from the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, whereas this text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers and is particularly effective when the training dataset is small. With more and more training data collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one at a time in a sequence, so memory cost is no longer the bottleneck for processing large-scale datasets. This dissertation applies the approach to train classifiers for Flickr groups, each with many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation therefore proposes an approach that uses comparative object similarity. The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm to use this category-dependent similarity regularization. Experiments on hundreds of categories show that our method can make significant improvements for categories with few or even no positive examples.
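
The tag-based text feature can be illustrated with a small sketch: the feature of an unannotated image is a normalized tag histogram accumulated from its k visually nearest neighbors in an auxiliary tagged collection. The Euclidean distance and the toy data below are stand-ins for the actual descriptors and datasets.

```python
import numpy as np

def knn_tag_feature(query_desc, aux_descs, aux_tags, vocab, k=3):
    """Build a text feature for an unannotated image.

    The feature is a normalized histogram over a tag vocabulary, accumulated
    from the tags of the k visually nearest neighbors in an auxiliary
    collection of tagged images (Euclidean distance on visual descriptors
    is used here purely for illustration).
    """
    dists = np.linalg.norm(aux_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    index = {tag: i for i, tag in enumerate(vocab)}
    feat = np.zeros(len(vocab))
    for j in nearest:
        for tag in aux_tags[j]:
            if tag in index:
                feat[index[tag]] += 1.0
    total = feat.sum()
    return feat / total if total > 0 else feat

# Toy auxiliary collection: 4 images with 2-D visual descriptors and tags
aux_descs = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
aux_tags = [["cat", "pet"], ["cat"], ["car", "road"], ["car"]]
vocab = ["cat", "pet", "car", "road"]
print(knn_tag_feature(np.array([0.05, 0.0]), aux_descs, aux_tags, vocab, k=2))
```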

Relevance: 90.00%

Abstract:

The opportunity to produce microalgal biomass has attracted interest because of the many uses it can have, whether in bioenergy production, as a food source, or as a product of carbon dioxide biofixation. In general, large-scale production of cyanobacteria and microalgae is monitored through offline physicochemical analyses. In this context, the objective of this work was to monitor cell concentration in a raceway photobioreactor for microalgal biomass production using digital data acquisition and process control techniques, through inline acquisition of illuminance, biomass concentration, temperature, and pH data. To this end, it was necessary to build a software-based sensor capable of determining microalgal biomass concentration from optical measurements of scattered monochromatic radiation intensity, and to develop a mathematical model of microalgal biomass production on the microcontroller, using a natural computing algorithm to fit the model. An autonomous system for recording cultivation data was designed, built, and tested during outdoor pilot-scale cultivations of Spirulina sp. LEB 18. A biomass concentration sensor based on measuring transmitted radiation was tested. In a second stage, an optical sensor for the biomass concentration of Spirulina sp. LEB 18, based on measuring the intensity of radiation scattered by the cyanobacterial suspension, was conceived, built, and tested in a laboratory experiment under controlled conditions of illumination, temperature, and biomass suspension flow. From the light-scattering measurements, a neuro-fuzzy inference system was built, which serves as a software sensor of the biomass concentration in the culture. Finally, based on the culture biomass concentrations over time, the use of the Arduino platform for empirical modelling of the growth kinetics with the Verhulst equation was explored. The measurements from the optical sensor based on the intensity of monochromatic radiation transmitted through the suspension, used under outdoor conditions, showed a low correlation between biomass concentration and radiation, even for concentrations below 0.6 g/L. When the optical scattering by the culture suspension was investigated, the monochromatic radiation at 530 nm showed, for scattering angles of 45° and 90°, a linearly increasing behavior with concentration, with a coefficient of determination of 0.95 in both cases. It was possible to build a software-based biomass concentration sensor using the combined information from the radiation intensities scattered at 45° and 135°, with a coefficient of determination of 0.99. It is feasible to perform, simultaneously on an Arduino microcontroller, the inline determination of Spirulina cultivation process variables and the empirical kinetic modelling of the microorganism's growth using the Verhulst equation.
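
The empirical growth modelling step can be illustrated by fitting the Verhulst (logistic) equation to biomass concentration measurements; the sketch below uses synthetic data and SciPy, whereas on the Arduino the same model would be fitted with a lighter-weight routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, x0, xmax, mu):
    """Verhulst (logistic) growth: X(t) = Xmax / (1 + ((Xmax - X0)/X0) * exp(-mu*t))."""
    return xmax / (1.0 + ((xmax - x0) / x0) * np.exp(-mu * t))

# Synthetic cultivation data: biomass concentration (g/L) sampled daily
t_days = np.arange(0, 15)
x_obs = verhulst(t_days, 0.15, 1.8, 0.45) \
    + np.random.default_rng(0).normal(0.0, 0.02, t_days.size)

# Fit X0, Xmax and mu to the observations
popt, _ = curve_fit(verhulst, t_days, x_obs, p0=[0.1, 2.0, 0.3])
x0_fit, xmax_fit, mu_fit = popt
print(f"X0 = {x0_fit:.3f} g/L, Xmax = {xmax_fit:.3f} g/L, mu = {mu_fit:.3f} 1/d")
```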

Relevance: 90.00%

Abstract:

Methanol is an important and versatile compound with various uses as a fuel and as a feedstock chemical. Methanol is also a potential chemical energy carrier. Due to the fluctuating nature of renewable energy sources such as wind or solar, energy storage is required to balance the varying supply and demand. Excess electrical energy generated at peak periods can be stored by using it in the production of chemical compounds. The conventional industrial production of methanol is based on gas-phase synthesis from synthesis gas generated from fossil sources, primarily natural gas. Methanol can also be produced by hydrogenation of CO2. The production of methanol from CO2 captured from emission sources, or even directly from the atmosphere, would allow sustainable production based on a nearly limitless carbon source, while helping to reduce the increasing CO2 concentration in the atmosphere. Hydrogen for the synthesis can be produced by electrolysis of water utilizing renewable electricity. A new liquid-phase methanol synthesis process has been proposed. In this process, a conventional methanol synthesis catalyst is mixed in suspension with a liquid alcohol solvent. The alcohol acts as a catalytic solvent by enabling a new reaction route, potentially allowing the synthesis of methanol at lower temperatures and pressures than conventional processes. For this thesis, the alcohol-promoted liquid-phase methanol synthesis process was tested at laboratory scale. Batch and semibatch reaction experiments were performed in an autoclave reactor, using a conventional Cu/ZnO catalyst with ethanol and 2-butanol as the alcoholic solvents. Experiments were performed in the pressure range of 30-60 bar and at temperatures of 160-200 °C. The productivity of methanol was found to increase with increasing pressure and temperature. Under the studied process conditions, a maximum volumetric productivity of 1.9 g of methanol per liter of solvent per hour was obtained, while the maximum catalyst-specific productivity was 40.2 g of methanol per kg of catalyst per hour. These productivity values are low compared to both industrial synthesis and gas-phase synthesis from CO2; however, the reaction temperatures and pressures employed were also lower than in gas-phase processes. While the productivity is not high enough for large-scale industrial operation, the milder reaction conditions and simple operation could prove useful for small-scale operations. Finally, a preliminary design for an alcohol-promoted, liquid-phase methanol synthesis process was created using the data obtained from the experiments. The demonstration-scale process was sized for an electrolyzer unit producing 1 Nm3 of hydrogen per hour. This Master's thesis is closely connected to the LUT REFLEX platform.
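
As a rough illustration of what these figures imply at the demonstration scale, the sketch below converts 1 Nm3/h of hydrogen into an upper bound on methanol output (assuming complete conversion via CO2 + 3 H2 -> CH3OH + H2O and a molar volume of 22.414 L/mol at 0 °C and 1 atm) and sizes the solvent volume and catalyst mass from the reported productivities. These numbers are indicative only and are not the thesis' design figures.

```python
# Back-of-the-envelope sizing for a demonstration unit fed by 1 Nm3/h of H2.
# Stoichiometry CO2 + 3 H2 -> CH3OH + H2O; complete conversion is assumed here
# only to bound the numbers, the real process converts far less per pass.
MOLAR_VOLUME_NTP = 22.414      # L/mol at 0 degC, 1 atm (assumed reference state)
M_METHANOL = 32.04             # g/mol

h2_mol_per_h = 1000.0 / MOLAR_VOLUME_NTP          # ~44.6 mol/h of H2
meoh_mol_per_h = h2_mol_per_h / 3.0               # 3 mol H2 per mol CH3OH
meoh_g_per_h = meoh_mol_per_h * M_METHANOL        # ~476 g/h upper bound

# Productivities reported in the thesis experiments
prod_per_solvent = 1.9          # g MeOH / (L solvent * h)
prod_per_catalyst = 40.2        # g MeOH / (kg catalyst * h)

print(f"Maximum methanol output: {meoh_g_per_h:.0f} g/h")
print(f"Solvent volume needed at 1.9 g/(L h): {meoh_g_per_h / prod_per_solvent:.0f} L")
print(f"Catalyst mass needed at 40.2 g/(kg h): {meoh_g_per_h / prod_per_catalyst:.1f} kg")
```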

Relevance: 90.00%

Abstract:

The focus of this research is to explore applications of the finite-difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to addressing the issues that arise in very large networks, such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle the various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large-scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
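
The leapfrog idea at the heart of the LIM can be illustrated on a uniform, lossless LC ladder: node voltages and branch currents are staggered in time and updated explicitly, so no matrix factorization is needed. The element values below are hypothetical, and the full method additionally handles resistances, conductances, sources, and the insertion of latency where it is missing.

```python
import numpy as np

n_nodes = 200
L, C = 2.5e-10, 1.0e-13        # per-segment inductance (H) and capacitance (F)
dt = 0.5 * np.sqrt(L * C)      # safely below the leapfrog stability limit sqrt(L*C)

v = np.zeros(n_nodes)          # node voltages at integer time steps
i_br = np.zeros(n_nodes - 1)   # branch currents at half time steps
peak_end = 0.0

for step in range(800):
    # Hard voltage source at node 0: a Gaussian pulse centered at step 40
    v[0] = np.exp(-((step - 40) / 10.0) ** 2)
    # Branch current update: dI/dt = (V_left - V_right) / L
    i_br += (dt / L) * (v[:-1] - v[1:])
    # Node voltage update: dV/dt = (I_in - I_out) / C
    v[1:-1] += (dt / C) * (i_br[:-1] - i_br[1:])
    v[-1] += (dt / C) * i_br[-1]          # open-circuited far end
    peak_end = max(peak_end, v[-1])

print(f"Peak voltage observed at the open far end: {peak_end:.2f} V")
```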

Relevance: 90.00%

Abstract:

The size of online image datasets is constantly increasing. For an image dataset with millions of images, retrieval becomes a seemingly intractable problem for exhaustive similarity-search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm using the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases to which new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, yielding higher-quality codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating the hash functions to reach the same retrieval performance as hashing from scratch.
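
As a generic illustration of hashing-based retrieval, the sketch below uses random hyperplane hash functions (an LSH-style stand-in for the learned hash functions described above) to encode descriptors into binary codes and rank a database by Hamming distance.

```python
import numpy as np

def train_random_hash(dim, n_bits, rng):
    """Random hyperplane hash functions (LSH-style stand-in for learned hashing)."""
    return rng.normal(size=(n_bits, dim))

def encode(X, W):
    """Binary codes: sign of linear projections, stored as 0/1 uint8 arrays."""
    return (X @ W.T > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists), dists

rng = np.random.default_rng(0)
X_db = rng.normal(size=(10_000, 128))        # database descriptors
W = train_random_hash(128, 64, rng)          # 64-bit codes
codes = encode(X_db, W)

query = X_db[123] + 0.05 * rng.normal(size=128)   # a slightly perturbed copy of item 123
order, dists = hamming_rank(encode(query[None, :], W)[0], codes)
print("Top-5 retrieved indices:", order[:5], "| distance of item 123:", dists[123])
```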

Relevance: 90.00%

Abstract:

Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper-ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focused on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability or on variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper-ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via singular spectrum analysis, considering sea surface salinity (SSS), the Brunt-Väisälä frequency (N2), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N2 over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier-layer formation, and the destabilizing role of salinity in the upper-ocean stratification of the subtropical regions, where large-scale density compensation typically occurs.
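
The stratification partitioning can be illustrated with a simplified sketch that computes N2 from a linear equation of state with constant expansion and contraction coefficients and isolates the haline contribution. The idealized profile below is illustrative only; the study itself works with full model and reanalysis fields.

```python
import numpy as np

g = 9.81         # gravitational acceleration (m s^-2)
alpha = 2.0e-4   # thermal expansion coefficient (1/K), assumed constant
beta = 7.6e-4    # haline contraction coefficient (1/psu), assumed constant

# Idealized tropical profile over the upper 500 m: warm, fresh water overlying
# colder, saltier water (z is negative downward).
z = np.linspace(0.0, -500.0, 51)     # depth (m)
T = 28.0 + 0.02 * z                  # temperature (degC), warmer near the surface
S = 34.0 - 0.004 * z                 # salinity (psu), fresher near the surface

dTdz = np.gradient(T, z)
dSdz = np.gradient(S, z)

# Linear equation of state: N^2 = g * (alpha * dT/dz - beta * dS/dz)
N2_total = g * (alpha * dTdz - beta * dSdz)   # s^-2
N2_haline = -g * beta * dSdz                  # stabilizing when fresh water overlies salty water

print(f"Mean N^2 over the upper 500 m: {N2_total.mean():.2e} s^-2")
print(f"Mean haline contribution: {N2_haline.mean():.2e} s^-2 "
      f"({100.0 * N2_haline.mean() / N2_total.mean():.0f}% of the total)")
```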

Relevance: 90.00%

Abstract:

The global concentration of CO2 in the atmosphere is increasing rapidly. CO2 emissions have a huge impact on global climate change. Therefore, efficient CO2 emission abatement strategies such as Carbon Capture and Storage (CCS) are required to combat this phenomenon. There are three major approaches to CCS:
- Post-combustion capture;
- Pre-combustion capture;
- Oxyfuel process.
Post-combustion capture offers some advantages in terms of cost, as existing combustion technologies can still be used without radical changes. This makes post-combustion capture easier to implement as a retrofit option than the other two approaches. Therefore, post-combustion capture is probably the first technology that will be deployed on a large scale. The aim of this work is to study the adsorption equilibrium of CO2, CH4, and N2 in zeolite 5A at 40 °C. To this end, experiments were performed to determine the adsorption isotherms of CO2, CH4, and N2 near 40 °C under the conditions of post-combustion capture processes. It was found that zeolite 5A adsorbs a significant quantity of CO2, about 5 mol/kg at a pressure of 5 bar.
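
Such an isotherm is often summarized with a Langmuir fit; the sketch below fits synthetic points that are merely consistent with the reported loading of about 5 mol/kg at 5 bar and should not be read as the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p_bar, q_max, b):
    """Langmuir isotherm: q = q_max * b * P / (1 + b * P)."""
    return q_max * b * p_bar / (1.0 + b * p_bar)

# Synthetic CO2-on-zeolite-5A points near 40 degC, roughly consistent with the
# reported loading of ~5 mol/kg at 5 bar (illustrative, not the measured data).
p = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 5.0])     # pressure (bar)
q = np.array([1.2, 2.1, 2.9, 3.7, 4.4, 4.7, 5.0])      # loading (mol/kg)

popt, _ = curve_fit(langmuir, p, q, p0=[6.0, 1.0])
q_max, b = popt
print(f"q_max = {q_max:.2f} mol/kg, b = {b:.2f} 1/bar, "
      f"predicted q(5 bar) = {langmuir(5.0, q_max, b):.2f} mol/kg")
```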

Relevance: 90.00%

Abstract:

The purpose of this dissertation is to evaluate the potential downstream influence of the Indian Ocean (IO) on El Niño/Southern Oscillation (ENSO) forecasts through the oceanic pathway of the Indonesian Throughflow (ITF), atmospheric teleconnections between the IO and the Pacific, and assimilation of IO observations. The impact of sea surface salinity (SSS) in the Indo-Pacific region is also assessed, in an attempt to address known problems with operational coupled-model precipitation forecasts. The ITF normally drains warm, fresh water from the Pacific, reducing mixed layer depths (MLD). A shallower MLD amplifies large-scale oceanic Kelvin/Rossby waves, giving a ~10% larger response and more realistic ENSO sea surface temperature (SST) variability relative to observations when the ITF is open. To isolate the impact of IO-sector atmospheric teleconnections on ENSO, experiments that selectively couple or decouple the interannual forcing in the IO are contrasted. The interannual variability of IO SST forcing is responsible for widespread downwelling in the Pacific at a 3-month lag, assisted by off-equatorial curl, leading to a warmer NINO3 SST anomaly and improved ENSO validation (significant from 3-9 months). Isolating the impact of observations in the IO sector using regional assimilation identifies large-scale warming in the IO that acts to intensify the easterlies of the Walker circulation and increase pervasive upwelling across the Pacific, cooling the eastern Pacific and improving ENSO validation (r ~ 0.05, RMS ~ 0.08 °C). Lastly, the positive impact of more accurate freshwater forcing is demonstrated to address inadequate precipitation forecasts in operational coupled models. Aquarius SSS assimilation improves the mixed layer density and enhances mixing, setting off upwelling that eventually cools the eastern Pacific after 6 months, counteracting the pervasive warming of most coupled models and significantly improving ENSO validation from 5-11 months. In summary, the ITF oceanic pathway, the atmospheric teleconnections, the impact of observations in the IO, and improved Indo-Pacific SSS are all responsible for ENSO forecast improvements, and each aspect of this study thus contributes to a better overall understanding of ENSO. Therefore, the upstream influence of the IO should be thought of as integral to the functioning of the ENSO phenomenon.
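
The quoted validation gains correspond to standard forecast skill metrics; the sketch below computes the anomaly correlation and RMS error for hypothetical NINO3 SST anomaly series, purely to illustrate how such numbers are obtained.

```python
import numpy as np

def anomaly_correlation(forecast, observed):
    """Pearson correlation between forecast and observed anomaly series."""
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return float(np.sum(f * o) / np.sqrt(np.sum(f**2) * np.sum(o**2)))

def rmse(forecast, observed):
    """Root-mean-square error of the forecast anomalies (degC)."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

# Hypothetical NINO3 SST anomaly series (degC) over a set of verification months
rng = np.random.default_rng(1)
obs = rng.normal(0.0, 0.8, 120)
fcst_control = obs + rng.normal(0.0, 0.6, 120)    # stand-in for a control experiment
fcst_improved = obs + rng.normal(0.0, 0.5, 120)   # stand-in for an improved experiment

for name, f in [("control", fcst_control), ("improved", fcst_improved)]:
    print(f"{name}: r = {anomaly_correlation(f, obs):.2f}, RMSE = {rmse(f, obs):.2f} degC")
```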

Relevance: 90.00%

Abstract:

Terephthalic acid (PTA) is one of the monomers used for the synthesis of the polyester polyethylene terephthalate (PET), which is used for the large-scale manufacture of synthetic fibers and plastic bottles. PTA is largely produced by the liquid-phase oxidation of petroleum-derived p-xylene (PX). However, there are now ongoing worldwide efforts exploring alternative routes for producing PTA from renewable biomass resources.

In this thesis, I present a new route to PTA starting from the biomass-derived platform chemical, 5-hydroxymethylfurfural (HMF). This route utilizes new, selective Diels-Alder-dehydration reactions involving ethylene and is advantageous over the previously proposed Diels-Alder-dehydration route to PTA from HMF via 2,5-dimethylfuran (DMF) since the H2 reduction of HMF to DMF is avoided. Specifically, oxidized derivatives of HMF are reacted as is, or after etherification-esterification with methanol, with ethylene over solid Lewis acid catalysts that do not contain strong Brønsted acids in order to synthesize intermediates of PTA and its equally important diester, dimethyl terephthalate (DMT). The partially oxidized HMF, 5-(hydroxymethyl)furoic acid (HMFA) is reacted with high pressure ethylene over a pure-silica molecular sieve catalyst containing framework tin (Sn-Beta) to produce the Diels-Alder-dehydration product, 4-(hydroxymethyl)benzoic acid (HMBA), with ~30% selectivity at ~20% yield. If HMFA is protected with methanol to form methyl 5-(methoxymethyl)furan-2-carboxylate (MMFC), MMFC can react with ethylene in the presence of a pure-silica molecular sieve containing framework zirconium (Zr-Beta) to produce methyl 4-(methoxymethyl)benzenecarboxylate (MMBC) with >70% selectivity at >20% yield. HMBA and MMBC can then be oxidized to produce PTA and DMT, respectively. When Lewis acid containing mesoporous silica (MCM-41) and amorphous silica, or Brønsted acid containing zeolites (Al-Beta), are used as catalysts, a significant decrease in selectivity/yield of the Diels-Alder-dehydration product is observed.

An investigation to elucidate the reaction network and side products in the conversion of MMFC to MMBC was performed, and the main side products are found to be methyl 4-formylcyclohexa-1,3-diene-1-carboxylate and the ethylene Diels-Alder adduct of this cyclohexadiene. These products presumably form by a different dehydration pathway of the MMFC/ethylene Diels-Alder adduct and should be included when determining the overall selectivity to PTA or DMT since, like MMBC, these compounds are precursors to PTA or DMT.

Fundamental physical and chemical information on the ethylene Diels-Alder-dehydration reactions catalyzed by the Lewis acid-containing molecular sieves was obtained. Madon-Boudart experiments using Zr-Beta as catalyst show that the reaction rates are limited by chemical kinetics only (physical transport limitations are not present), all the Zr4+ centers are incorporated into the framework of the molecular sieve, and the whole molecular sieve crystal is accessible for catalysis. Apparent activation energies using Zr-Beta are low, suggesting that the overall activation energy of the system may be determined by a collection of terms and is not the true activation energy of a single chemical step.

Relevance: 90.00%

Abstract:

Racism continues to thrive on the Internet. Yet, little is known about racism in online settings and its potential consequences. The purpose of this study was to develop the Perceived Online Racism Scale (PORS), the first measure to assess people's perceived experiences of online racism as they interact with others and consume information on the Internet. Items were developed through a multi-stage process based on a literature review, focus groups, and qualitative data collection. Based on a racially diverse large-scale sample (N = 1023), exploratory and confirmatory factor analyses provided support for a 30-item bifactor model with the following three factors: (a) the 14-item PORS-IP (personal experiences of racism in online interactions), (b) the 5-item PORS-V (observations of other racial/ethnic minorities being offended), and (c) the 11-item PORS-I (consumption of online content and information denigrating racial/ethnic minorities and highlighting racial injustice in society). Initial construct validity examinations suggest that PORS is significantly linked to psychological distress.

Relevance: 90.00%

Abstract:

Discovery of microRNAs (miRNAs) relies on predictive models of characteristic features of miRNA precursors (pre-miRNAs). The short length of miRNA genes and the lack of pronounced sequence features complicate this task. To accommodate the peculiarities of plant and animal miRNA systems, tools for the two systems have evolved differently. However, these tools are biased towards the species for which they were primarily developed and, consequently, their predictive performance on data sets from other species of the same kingdom might be lower. While these biases are intrinsic to the species, their characterization can lead to computational approaches capable of diminishing their negative effect on the accuracy of pre-miRNA predictive models. In this study, we investigate how 45 predictive models, each induced for a data set from one of 45 species distributed across eight subphyla/classes, perform when applied to species different from the one used in their induction. Results: Our computational experiments show that the separability of pre-miRNA and pseudo pre-miRNA instances is species-dependent and that no feature set performs well for all species, even within the same subphylum/class. Mitigating this species dependency, we show that an ensemble of classifiers reduces the classification errors for all 45 species. As the ensemble members were obtained using meaningful yet computationally viable feature sets, the ensembles also have a lower computational cost than individual classifiers that rely on energy stability parameters, which are prohibitively expensive to compute in large-scale applications. Conclusion: In this study, the combination of multiple pre-miRNA feature sets and multiple learning biases enhanced the predictive accuracy of pre-miRNA classifiers for 45 species. This is certainly a promising approach to be incorporated into miRNA discovery tools, making them more accurate and less species-dependent.
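
The ensemble idea can be illustrated with a minimal sketch that combines classifiers with different learning biases by soft voting; the scikit-learn estimators and synthetic features below stand in for the study's species-specific models and feature sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for pre-miRNA vs pseudo pre-miRNA examples: each row is a
# feature vector (e.g. sequence/structure statistics), label 1 = real pre-miRNA.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)

# Members with different learning biases; in the study each member would also
# be trained on a different, computationally cheap feature set.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="soft",
)
print("Ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```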