933 results for Organic domain (fine), edge-to-edge grain crushing
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the widespread adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on such data. Previous work has addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents that introduces lazy behavior in both the pre-parsing and progressive-parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents. This goal also had two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
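For readers unfamiliar with lazy XML processing, the sketch below shows one generic way to get lazy behavior out of a standard pull parser: parsing stops as soon as the sought element is found, so the remainder of a potentially huge document is never materialized. This is only an illustration of the general idea, not the dissertation's Double-Lazy Parser; the function name and sample document are invented.

```python
import io
import xml.etree.ElementTree as ET

def find_title(xml_bytes: bytes, wanted: str):
    """Progressively parse the document and return as soon as the wanted
    <title> element is seen, so later parts of the document are never parsed."""
    for _event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("end",)):
        if elem.tag == "title" and elem.text == wanted:
            return elem
        elem.clear()  # discard subtrees already processed, keeping memory flat
    return None

doc = b"<library><book><title>XQuery</title></book><book><title>XPath</title></book></library>"
print(find_title(doc, "XQuery") is not None)  # True; the second <book> is never fully examined
```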
Abstract:
This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges, and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate, with errors ranging from 30% to 50%. This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator can develop a cost estimation model for a chosen ETO industry in a more expeditious, systematic, and accurate manner. The development of the proposed methodology followed the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates were significantly more accurate than the estimates originally made for those projects, with a mean absolute error rate of 17.3%. This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that a methodology is developed that can be quickly customized into a cost estimation model for a chosen application domain. In addition to more accurate estimation, its major contributions are its transferability to other users and its applicability to different ETO operations.
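For context on the accuracy figure, a mean absolute error rate of this kind is conventionally computed as the mean absolute percentage deviation of estimates from realized costs. A minimal sketch follows; the cost figures are invented for illustration and are not from the dissertation's case studies.

```python
def mean_absolute_error_rate(estimates, actuals):
    """Mean absolute percentage error between estimated and realized costs."""
    errors = [abs(e - a) / a for e, a in zip(estimates, actuals)]
    return 100.0 * sum(errors) / len(errors)

# Illustrative numbers only (not from the dissertation):
estimated = [1.20e6, 0.85e6, 2.40e6]   # model's cost estimates
realized  = [1.00e6, 0.80e6, 2.75e6]   # actual project costs
print(f"{mean_absolute_error_rate(estimated, realized):.1f}%")  # ~13.0%
```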
Abstract:
Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), languages with focused expressiveness that target a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), in which models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because its unique semantics are contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters uses a reusable framework loosely coupled to the DSK through swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
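The decoupling described above, a generic model of execution parameterized by swappable domain-specific knowledge, can be pictured with the hypothetical interface below. The class and method names are illustrative only and do not reflect the CVM's actual API.

```python
from abc import ABC, abstractmethod

class DomainSpecificKnowledge(ABC):
    """Swappable extension point: everything the synthesis engine must know
    about a particular domain lives behind this interface."""

    @abstractmethod
    def delta(self, old_model: dict, new_model: dict) -> list:
        """Compute domain-level changes between two model snapshots."""

    @abstractmethod
    def to_script(self, change) -> str:
        """Translate one domain-level change into a script for the layer below."""

class GenericSynthesisEngine:
    """Generic model of execution: reacts to model changes at runtime and
    emits scripts, with no domain knowledge of its own."""

    def __init__(self, dsk: DomainSpecificKnowledge):
        self.dsk = dsk
        self.current_model: dict = {}

    def on_model_change(self, new_model: dict) -> list:
        changes = self.dsk.delta(self.current_model, new_model)
        self.current_model = new_model
        return [self.dsk.to_script(c) for c in changes]

class MicrogridDSK(DomainSpecificKnowledge):
    """Toy domain extension for demand-side energy management."""

    def delta(self, old_model, new_model):
        return [(dev, kw) for dev, kw in new_model.items() if old_model.get(dev) != kw]

    def to_script(self, change):
        device, kw = change
        return f"setLoad {device} {kw}kW"

engine = GenericSynthesisEngine(MicrogridDSK())
print(engine.on_model_change({"hvac": 3.5, "ev_charger": 7.2}))
```

In this pattern, supporting a new domain means writing only a new DomainSpecificKnowledge subclass; the generic engine is reused unchanged.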
Abstract:
The Everglades is a sub-tropical coastal wetland characterized, among other features, by its hydrological regime and its deposits of peat. The formation and preservation of organic matter in soils and sediments in this wetland ecosystem is critical to its sustainability, and hydrological processes are important drivers of the origin, transport, and fate of that organic matter. With this in mind, organic matter dynamics in the greater Florida Everglades were studied through various organic geochemistry techniques, especially biomarkers and bulk and compound-specific δ13C and δD isotope analysis. The main objectives focused on how different hydrological regimes in this ecosystem control organic matter dynamics, such as the mobilization of particulate organic matter (POM) in freshwater marshes and estuaries, and on how organic geochemistry techniques can be applied to reconstruct Everglades paleo-hydrology. For this purpose, organic matter in typical vegetation, floc, surface soils, soil cores, and estuarine suspended particulates was characterized in samples selected along hydrological gradients in Water Conservation Area 3, Shark River Slough, and Taylor Slough. This research focused on three general themes: (1) assessment of the environmental dynamics and source-specific particulate organic carbon export in a mangrove-dominated estuary; (2) assessment of the origin, transport, and fate of organic matter in freshwater marshes; and (3) assessment of historical changes in hydrological conditions in the Everglades (paleo-hydrology) through biomarkers and compound-specific isotope analyses. This study reports the first estimate of particulate organic carbon loss from mangrove ecosystems in the Everglades, provides evidence for particulate organic matter transport relevant to the formation of ridge-and-slough landscapes in the Everglades, and demonstrates the applicability of the combined biomarker and compound-specific stable isotope approach as a means to generate paleo-hydrological data in wetlands. The data suggest that: (1) carbon loss from mangrove estuaries is roughly split 50/50 between dissolved and particulate carbon; (2) hydrological remobilization of particulate organic matter from slough to ridge environments may play an important role in the maintenance of the Everglades freshwater landscape; and (3) historical changes in hydrology have resulted in significant vegetation shifts from historical slough-type vegetation to present ridge-type vegetation.
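For reference, the δ notation used above follows the standard convention (not specific to this dissertation): values are reported in per mil relative to a standard, conventionally VPDB for carbon and VSMOW for hydrogen, with δD defined analogously using R = D/H.

```latex
% Standard isotope delta notation (per mil); delta-D is analogous with R = D/H.
\[
  \delta^{13}\mathrm{C} \;=\;
  \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000,
  \qquad R = {}^{13}\mathrm{C} / {}^{12}\mathrm{C}
\]
```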
Abstract:
The feasibility of using corn cobs to obtain a polymer matrix composite was studied. To obtain the particulate filler, the corn cobs were dried in a solar dryer, then ground in a forage grinder, and the different particle sizes were obtained by sieving. Three grain-size fractions were used: fine particles (PF), between 0.10 and 2.00 mm; medium particles (PM), between 2.10 and 3.35 mm; and large particles (PG), between 3.45 and 4.10 mm. Using 20% of residue relative to the resin, test samples were produced to characterize the composite in terms of thermal and mechanical parameters. The main advantage of the proposed composite is its low density, below that of the neat resin, about 1.06 g/cm³ for the PG fraction. The composite showed lower mechanical performance than the neat resin for all grain sizes and formulations studied; the best results were obtained in bending, reaching 25.3 MPa for the PG fraction. The composite also proved feasible for thermal applications, with a thermal conductivity below 0.21 W/(m·K), classifying it as a thermal insulator. In terms of homogeneity of the mixture, the most viable grain size is the PF, which also showed better aesthetics and better processability. This composite can be used to make structures that do not require significant mechanical strength, such as tables, chairs, and planks, and solar and wind prototypes such as ovens, cookers, and turbine blades.
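A minimal sketch of how a composite density below that of the neat resin can arise at 20% filler loading, using the inverse rule of mixtures for mass fractions; the filler and resin densities below are assumed for illustration and are not values reported in the abstract (real composites also contain voids, which lower the density further).

```python
def composite_density(w_filler: float, rho_filler: float, rho_resin: float) -> float:
    """Inverse rule of mixtures for mass fractions:
    1/rho_c = w_f/rho_f + (1 - w_f)/rho_m."""
    return 1.0 / (w_filler / rho_filler + (1.0 - w_filler) / rho_resin)

# Assumed illustrative densities (g/cm^3): ground corn cob ~0.5, polyester resin ~1.2
print(round(composite_density(0.20, 0.5, 1.2), 2))  # ~0.94 g/cm^3, below the neat resin
```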
Abstract:
Brazil is among the largest cashew nut producers in the world. However, the roasting process is still carried out artisanally, especially in the Brazilian semiarid region. In view of this occupational problem, the aim of this study was to perform a physical-chemical characterization of the particulate matter (PM) emitted by the roasting of cashew nuts and to determine the associated occupational risk and molecular mechanisms. The most evident PM characteristics were the prevalence of fine particles, typical biomass-burning morphologies such as tar balls, and the presence of the elements K, Cl, S, Ca, and Fe. In addition, atmospheric modeling analyses suggest that these particles can reach regions neighboring the emission source. Polycyclic aromatic hydrocarbons (PAHs) with carcinogenic potential, such as benzo[a]pyrene, dibenz[a,h]anthracene, benzo[a]anthracene, benzo[b]fluoranthene, chrysene, benzo[k]fluoranthene, indeno[1,2,3-c,d]pyrene, and benzo[j]fluoranthene, were the most abundant PAHs found in the two air monitoring campaigns. Among the identified oxy-PAHs, benzanthrone (7H-benz[d,e]anthracen-7-one) had the highest concentration, and the evaluation of lifetime cancer risk showed an increase of 12 to 37 cases of cancer for every 10,000 exposed people. Chemical analysis of roasted cashew nuts identified the PAHs phenanthrene, benzo[g,h,i]perylene, pyrene, and benzo[a]pyrene, as well as the allergen 3-pentadecylphenol (an urushiol analogue), as prevalent. Occupational exposure to PAHs was confirmed by increased urinary 1-hydroxypyrene levels, and genotoxic effects were evidenced by increased micronucleus and nuclear bud frequencies in exfoliated buccal mucosa cells among the exposed workers. Other effect biomarkers, such as karyorrhexis, pyknosis, karyolysis, condensed chromatin, and binucleated cells, also showed increased frequencies compared with an unexposed control group. Investigation of the molecular mechanisms associated with the PM organic extract showed cytotoxicity in a human lung cell line (A549) at concentrations ≥ 4 nM BaPeq. At non-cytotoxic doses, the extract was able to activate proteins involved in the DNA damage response pathway (Chk1 and p53). Moreover, analysis of the specific contributions of the four most representative PAHs in the cashew nut roasting sample showed that benzo[a]pyrene was the most efficient at activating Chk1 and p53. Finally, the organic extract persistently increased the expression of mRNAs involved in PAH metabolism (CYP1A1 and CYP1B1), inflammatory response (IL-8 and TNF-α), cell cycle arrest (CDKN1A), and DNA repair (DDB2). The high PM concentrations and their associated biological effects warn of the serious harm of artisanal cashew nut roasting, and urgent actions should be taken toward the sustainable development of this activity.
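The BaPeq unit used above refers to benzo[a]pyrene toxic equivalents, conventionally computed by weighting each PAH concentration with a toxic equivalency factor (TEF). A minimal sketch follows; the TEF values and concentrations are generic illustrative figures, not the factors or measurements used in this study.

```python
# Illustrative TEFs (the study's own factors are not given in the abstract).
TEF = {"benzo[a]pyrene": 1.0, "benzo[a]anthracene": 0.1, "chrysene": 0.01, "pyrene": 0.001}

def bap_equivalents(concentrations_ng_m3: dict) -> float:
    """BaPeq = sum over PAHs of concentration_i * TEF_i."""
    return sum(c * TEF[pah] for pah, c in concentrations_ng_m3.items())

sample = {"benzo[a]pyrene": 2.0, "benzo[a]anthracene": 5.0, "chrysene": 8.0, "pyrene": 20.0}
print(bap_equivalents(sample))  # 2.0 + 0.5 + 0.08 + 0.02 = 2.6 ng/m^3 BaPeq
```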
Abstract:
FtsZ, a bacterial tubulin homologue, is a cytoskeletal protein that plays key roles in the cytokinesis of almost all prokaryotes. FtsZ assembles into protofilaments (pfs), one subunit thick, and these pfs assemble further to form a “Z ring” at the center of prokaryotic cells. The Z ring generates a constriction force on the inner membrane and also serves as a scaffold to recruit cell-wall remodeling proteins for complete cell division in vivo. FtsZ can be subdivided into three main functional regions: the globular domain, the C-terminal (Ct) linker, and the Ct peptide. The globular domain binds GTP to assemble the pfs. The extreme Ct peptide binds membrane proteins, allowing cytoplasmic FtsZ to function at the inner membrane. The Ct linker connects the globular domain and the Ct peptide. In the present studies, we used genetic and structural approaches to investigate the function of Escherichia coli (E. coli) FtsZ. We sought to examine three questions: (1) Are lateral bonds between pfs essential for the Z ring? (2) Can we improve direct visualization of FtsZ in vivo by engineering an FtsZ-FP fusion that can function as the sole source of FtsZ for cell division? (3) Is the divergent Ct linker of FtsZ an intrinsically disordered peptide (IDP)?
One model of the Z ring proposes that pfs associate via lateral bonds to form ribbons; however, lateral bonds are still only hypothetical. To explore potential lateral bonding sites, we probed the surface of E. coli FtsZ by inserting either small peptides or whole FPs. Of the four lateral surfaces on FtsZ pfs, we obtained inserts on the front and back surfaces that were functional for cell division, and we concluded that these faces are not sites of essential interactions. Inserts at two sites, G124 and R174, located on the left and right surfaces, completely blocked function and were identified as possible sites of essential lateral interactions. Another goal was to find a location within FtsZ that supported fusion of FP reporter proteins while allowing the FtsZ-FP to function as the sole source of FtsZ. We discovered one internal site, G55-Q56, where several different FPs could be inserted without impairing function. These FtsZ-FPs may provide advances for imaging Z-ring structure by super-resolution techniques.
The Ct linker is the most divergent region of FtsZ in both sequence and length. In E. coli FtsZ the Ct linker is 50 amino acids (aa), but in other FtsZ proteins it can be as short as 37 aa or as long as 250 aa. The Ct linker has been hypothesized to be an IDP. In the present study, circular dichroism confirmed that the isolated Ct linkers of E. coli (50 aa) and C. crescentus (175 aa) are IDPs. Limited trypsin proteolysis followed by mass spectrometry (LC-MS/MS) confirmed that the Ct linkers of E. coli (50 aa) and B. subtilis (47 aa) are IDPs even when still attached to the globular domain. In addition, we made chimeras, swapping the E. coli Ct linker for other peptides and proteins. Most chimeras allowed normal cell division in E. coli, suggesting that IDPs with lengths of 43 to 95 aa are tolerated, that sequence has little importance, and that electrostatic charge is unimportant. Several chimeras were purified to confirm their effect on pf assembly. We concluded that the Ct linker functions as a flexible tether, allowing force to be transferred from the FtsZ pf to the membrane to constrict the septum for division.
Abstract:
The presenilins are the catalytic component of the γ-secretase protease complex, which is involved in the regulated intramembrane proteolysis of numerous type-1 transmembrane proteins, including amyloid precursor protein (APP) and Notch. In addition to their role in the γ-secretase complex, the presenilins are involved in a number of γ-secretase-independent functions such as calcium homeostasis, apoptosis, inflammation, and protein trafficking. Presenilin function is known to be regulated through post-translational modifications such as endoproteolysis, phosphorylation, and ubiquitination. Using a bioinformatics and protein sequence analysis approach, this lab has identified a putative ubiquitin-binding CUE domain in the presenilins. The aim of this project was to characterise the function of the presenilin CUE domains. Firstly, the presenilins are shown to contain a functional ubiquitin-binding CUE domain that preferentially binds K63-linked polyubiquitin chains. The PS1 CUE domain is shown to be dispensable for PS1 endoproteolysis and for γ-secretase-mediated cleavage of APP, Notch, and IL-1R1, suggesting that the PS1 CUE domain is involved in a γ-secretase-independent PS1 function. Our hypothesis is that the PS1 CUE domain regulates PS1's intermolecular protein-protein interactions or intramolecular PS1:PS1 interactions. Here the PS1 CUE domain is shown to be dispensable for the interaction of PS1 with the K63-linked polyubiquitinated PS1-interacting proteins P75NTR, IL-1R1, TRAF6, TRAF2, and RIP1. To further investigate PS1 CUE domain function, a mass spectrometry proteomics-based approach is used to identify PS1 CUE domain-interacting proteins. This proteomics approach demonstrated that the PS1 CUE domain is not required for PS1 dimerization. Instead, a number of proteins that interact with the PS1 CUE domain are identified, as well as proteins whose interaction with PS1 is downregulated by the presence of the PS1 CUE domain. Bioinformatic analysis of these proteins suggests possible roles for the PS1 CUE domain in regulating cell signalling, ubiquitination, or cellular trafficking.
Abstract:
The Amazon Basin plays a key role in atmospheric chemistry, biodiversity, and climate change. In this study we applied nanoelectrospray (nanoESI) ultra-high-resolution mass spectrometry (UHRMS) to the analysis of the organic fraction of PM2.5 aerosol samples collected during the dry and wet seasons at a site in central Amazonia that receives background air masses, biomass burning, and urban pollution. Comprehensive mass spectral data evaluation methods (e.g. Kendrick mass defect, Van Krevelen diagrams, carbon oxidation state, and aromaticity equivalent) were used to identify compound classes and mass distributions of the detected species. Nitrogen- and/or sulfur-containing organic species contributed up to 60 % of the total identified number of formulae. A large number of molecular formulae in organic aerosol (OA) were attributed to later-generation nitrogen- and sulfur-containing oxidation products, suggesting that OA composition is affected by biomass burning and other, potentially anthropogenic, sources. An isoprene-derived organosulfate (IEPOX-OS) was found to be the most dominant ion in most of the analysed samples and closely followed the concentration trends of the gas-phase anthropogenic tracers, confirming its mixed anthropogenic–biogenic origin. The presence of oxidised aromatic and nitro-aromatic compounds in the samples suggested a strong influence from biomass burning, especially during the dry period. Aerosol samples from the dry period and from enhanced biomass burning conditions contained a larger number of molecules with a high carbon oxidation state and an increased number of aromatic compounds compared with those from the wet period. The results of this work demonstrate that the studied site is influenced not only by biogenic emissions from the forest but also by biomass burning and potentially other anthropogenic emissions from the neighbouring urban environments.
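As a brief illustration of two of the mass spectral metrics named above, the sketch below computes the CH2-based Kendrick mass defect and the approximate mean carbon oxidation state (OSc ≈ 2·O/C − H/C) for a molecular formula. The helper functions and the example formula are generic illustrations of these standard definitions, not data or code from the paper.

```python
def kendrick_mass_defect(exact_mass: float) -> float:
    """CH2-based Kendrick mass defect: rescale so the CH2 repeat unit has a
    nominal mass of exactly 14, then take (nominal - Kendrick mass)."""
    kendrick_mass = exact_mass * (14.0 / 14.01565)
    return round(kendrick_mass) - kendrick_mass

def carbon_oxidation_state(c: int, h: int, o: int) -> float:
    """Approximate mean carbon oxidation state, OSc ~= 2*O/C - H/C
    (valid when N and S contributions are negligible)."""
    return 2.0 * o / c - h / c

# Illustrative formula C5H12O4 (an isoprene-derived tetrol), exact mass ~136.0736 Da
print(round(kendrick_mass_defect(136.0736), 4))      # ~0.0783
print(round(carbon_oxidation_state(5, 12, 4), 2))    # 2*4/5 - 12/5 = -0.8
```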
Abstract:
The oxygen minimum zone (OMZ) of the late Quaternary California margin experienced abrupt and dramatic changes in strength and depth in response to changes in intermediate-water ventilation, ocean productivity, and climate at orbital through millennial time scales. Expansion and contraction of the OMZ is exhibited at high temporal resolution (107–126 years) by quantitative benthic foraminiferal assemblage changes in two piston cores forming a vertical profile in Santa Barbara Basin (569 m, basin floor; 481 m, near sill depth) and extending back to 34 and 24 ka, respectively. Variation in the OMZ is quantified by new benthic foraminiferal groupings and a new dissolved-oxygen index based on documented relations between species and water-mass oxygen concentrations. Foraminiferal-based paleoenvironmental assessments are integrated with principal component analysis, bioturbation, grain size, CaCO3, total organic carbon, and δ13C to reconstruct the basin's oxygenation history. Fauna responded similarly at the two sites, although with somewhat different magnitude and taxonomic expression. During cool episodes (Younger Dryas and stadials), the water column was well oxygenated, most strongly near the end of the glacial episode (17–16 ka; Heinrich 1). In contrast, the OMZ was strong during warm episodes (Bølling/Allerød, interstadials, and Pre-Boreal). During the Bølling/Allerød, the OMZ shoaled to <360 m below contemporaneous sea level, its greatest vertical expansion of the last glacial cycle. Assemblages were then dominated by Bolivina tumida, reflecting high concentrations of dissolved methane in bottom waters. Short decadal intervals were so severely oxygen depleted that no benthic foraminifera were present. The middle to late Holocene (6–0 ka) was less dysoxic than the early Holocene.
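The abstract does not spell out how the dissolved-oxygen index is calculated; a generic abundance-weighted version of such an index, with made-up faunal groups and oxygen preferenda, might look like the following sketch. It is only meant to convey the idea of translating assemblage counts into an oxygenation estimate, not to reproduce the study's calibration.

```python
# Hypothetical oxygen preferenda (ml/L) for faunal groups; the study's actual
# groupings and calibration values are not given in the abstract.
GROUP_O2 = {"oxic_indicators": 1.5, "dysoxic_tolerant": 0.5, "severe_dysoxia": 0.1}

def dissolved_oxygen_index(counts: dict) -> float:
    """Relative-abundance-weighted mean of each group's oxygen preferendum."""
    total = sum(counts.values())
    return sum(GROUP_O2[g] * n / total for g, n in counts.items())

sample_counts = {"oxic_indicators": 12, "dysoxic_tolerant": 55, "severe_dysoxia": 33}
print(round(dissolved_oxygen_index(sample_counts), 2))  # ~0.49 ml/L
```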
Abstract:
Composition and accumulation rates of organic carbon in Holocene sediments provided data to calculate an organic carbon budget for the Laptev Sea continental margin. Mean Holocene accumulation rates in the inner Laptev Sea vary between 0.14 and 2.7 g C/cm²/kyr; maximum values occur close to the Lena River delta. Seawards, the mean accumulation rates decrease from 0.43 to 0.02 g C/cm²/kyr. The organic matter is predominantly of terrigenous origin. About 0.9 × 10⁶ t of organic carbon are buried per year in the Laptev Sea, and about 0.25 × 10⁶ t per year on the continental slope. Between about 8.5 and 9 ka, major changes in the supply of terrigenous and marine organic carbon occurred, related to changes in coastal erosion, Siberian river discharge, and/or Atlantic water inflow along the Eurasian continental margin.
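As a rough plausibility check on how area-normalized accumulation rates scale up to a basin-wide burial flux, the sketch below converts an assumed mean rate and shelf area into tonnes of carbon per year. The mean rate and area are assumptions chosen for illustration, not figures from the study.

```python
def burial_flux_t_per_yr(rate_gC_per_cm2_per_kyr: float, area_km2: float) -> float:
    """Convert an area-normalized accumulation rate into a basin-wide burial flux.
    1 km^2 = 1e10 cm^2; 1 kyr = 1000 yr; 1 t = 1e6 g."""
    grams_per_year = rate_gC_per_cm2_per_kyr * (area_km2 * 1e10) / 1000.0
    return grams_per_year / 1e6

# Assumed illustrative values: mean rate 0.2 g C/cm^2/kyr over a 5.0e5 km^2 shelf area
print(f"{burial_flux_t_per_yr(0.2, 5.0e5):.2e} t C per year")  # ~1.0e6 t/yr, the same order as reported
```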