989 results for Programming, Linear, utilization
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
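To make the firing semantics described above concrete, the following is a minimal sketch of a dataflow actor with an explicit firing rule. It is illustrative Python, not RVC-CAL syntax; the Actor structure and all names are assumptions made for this example only.

```python
from collections import deque

class Actor:
    """A dataflow node: fires when its firing rule is satisfied, consuming
    tokens from input queues and producing tokens on output queues."""
    def __init__(self, inputs, outputs, rule, action):
        self.inputs = inputs    # port name -> deque; FIFO queues are the only communication
        self.outputs = outputs  # port name -> deque
        self.rule = rule        # firing rule: tokens required on each input port
        self.action = action    # maps consumed tokens to produced tokens

    def can_fire(self):
        # A fully dynamic scheduler evaluates this test before every firing;
        # quasi-static scheduling pre-computes firing sequences so that only
        # a few such run-time tests remain.
        return all(len(self.inputs[p]) >= n for p, n in self.rule.items())

    def fire(self):
        consumed = {p: [self.inputs[p].popleft() for _ in range(n)]
                    for p, n in self.rule.items()}
        for port, tokens in self.action(consumed).items():
            self.outputs[port].extend(tokens)

# Example: an adder that fires once one token is available on each input.
a, b, out = deque([1, 2]), deque([10, 20]), deque()
adder = Actor({"a": a, "b": b}, {"out": out},
              rule={"a": 1, "b": 1},
              action=lambda c: {"out": [c["a"][0] + c["b"][0]]})
while adder.can_fire():
    adder.fire()
print(list(out))  # [11, 22]
```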
Abstract:
The steel industry produces, besides steel, also solid mineral by-products or slags, while it emits large quantities of carbon dioxide (CO2). Slags consist of various silicates and oxides which are formed in chemical reactions between the iron ore and the fluxing agents during the high-temperature processing at the steel plant. Currently, these materials are recycled in the ironmaking processes, used as aggregates in construction, or landfilled as waste. The utilization rate of steel slags can be increased by selectively extracting components from the mineral matrix. As an example, aqueous solutions of ammonium salts such as ammonium acetate, chloride and nitrate extract calcium quite selectively already at ambient temperature and pressure conditions. After the residual solids have been separated from the solution, calcium carbonate can be precipitated by feeding a CO2 flow through the solution. Precipitated calcium carbonate (PCC) is used in different applications as a filler material. Its largest consumer is the papermaking industry, which utilizes PCC because it enhances the optical properties of paper at a relatively low cost. Traditionally, PCC is manufactured from limestone, which is first calcined to calcium oxide, then slaked with water to calcium hydroxide and finally carbonated to PCC. This process emits large amounts of CO2, mainly because of the energy-intensive calcination step. This thesis presents research work on the scale-up of the above-mentioned ammonium salt based calcium extraction and carbonation method, named Slag2PCC. Extending the scope of the earlier studies, it is now shown that the parameters which mainly affect the calcium utilization efficiency are the solid-to-liquid ratio of steel slag and the ammonium salt solvent solution during extraction, the mean diameter of the slag particles, and the slag composition, especially the fractions of total calcium, silicon, vanadium and iron as well as the fraction of free calcium oxide. Regarding extraction kinetics, slag particle size, solid-to-liquid ratio and molar concentration of the solvent solution have the largest effect on the reaction rate. Solvent solution concentrations above 1 mol/L NH4Cl cause leaching of other elements besides calcium. Some of these, such as iron and manganese, result in solution coloring, which can be disadvantageous for the quality of the PCC product. Based on chemical composition analysis of the produced PCC samples, however, the product quality is largely similar to that of commercial products. Increasing the novelty of the work, other important parameters related to the assessment of PCC quality, such as particle size distribution and crystal morphology, are studied as well. As in the traditional PCC precipitation process, the ratio of calcium and carbonate ions controls the particle shape; a higher value of [Ca2+]/[CO32-] favors precipitation of the calcite polymorph, while vaterite forms when carbon species are present in excess. The third main polymorph, aragonite, is only formed at elevated temperatures, above 40-50 °C. In general, longer precipitation times cause transformation of vaterite to calcite or aragonite, but also result in particle agglomeration. The chemical equilibrium of ammonium and calcium ions and dissolved ammonia, which controls the solution pH, affects the particle sizes, too. An initial pH of 12-13 during the carbonation favors non-agglomerated particles with a diameter of 1 μm and smaller, while pH values of 9-10 generate more agglomerates of 10-20 μm.
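The process chemistry described above can be summarized by the following simplified overall reactions (NH4Cl is used as the example solvent; the actual aqueous speciation is more complex):

```latex
% Traditional PCC route from limestone:
\mathrm{CaCO_3} \xrightarrow{\Delta} \mathrm{CaO + CO_2} \quad \text{(calcination)}
\mathrm{CaO + H_2O \rightarrow Ca(OH)_2} \quad \text{(slaking)}
\mathrm{Ca(OH)_2 + CO_2 \rightarrow CaCO_3 + H_2O} \quad \text{(carbonation)}

% Slag2PCC route, simplified:
\mathrm{CaO_{(slag)} + 2\,NH_4Cl_{(aq)} \rightarrow CaCl_{2(aq)} + 2\,NH_{3(aq)} + H_2O} \quad \text{(extraction)}
\mathrm{CaCl_2 + 2\,NH_3 + CO_2 + H_2O \rightarrow CaCO_3\!\downarrow + 2\,NH_4Cl} \quad \text{(carbonation)}
```

Note that the ammonium salt is regenerated in the carbonation step, which is what allows the solvent solution to be recycled between extraction batches.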
As a part of the research work, these findings are implemented in demonstration-scale experimental process setups. For the first time, the Slag2PCC technology is tested at a scale of ~70 liters instead of laboratory scale only. Additionally, the design of a setup of several hundred liters is discussed. For these purposes, various process units such as inclined settlers and filters for solids separation, pumps and stirrers for material transfer and mixing, as well as gas feeding equipment are dimensioned and developed. With overall emissions reduction of the current industrial processes and good product quality as the main targets, and based on the performed partial life cycle assessment (LCA), it is most beneficial to utilize low-concentration ammonium salt solutions for the Slag2PCC process. In this manner the post-treatment of the products does not require extensive use of washing and drying equipment, which would otherwise increase the CO2 emissions of the process. The low solvent concentration Slag2PCC process causes negative CO2 emissions; thus, it can be seen as a carbon capture and utilization (CCU) method which actually reduces anthropogenic CO2 emissions compared to the alternative of not using the technology. Even if the amount of steel slag is too small for any substantial mitigation of global warming, the process can have both financial and environmental significance for individual steel manufacturers as a means to reduce the amounts of emitted CO2 and landfilled steel slag. Alternatively, it is possible to introduce the carbon dioxide directly into the mixture of steel slag and ammonium salt solution. The process would generate a 60-75% pure calcium carbonate mixture, with the remaining 25-40% consisting of the residual steel slag. This calcium-rich material could be re-used in ironmaking as a fluxing agent instead of natural limestone. Even though this process option would require less process equipment compared to the Slag2PCC process, it still needs further studies regarding the practical usefulness of the products. Nevertheless, compared to several other CO2 emission reduction methods studied around the world, the processes developed and studied within this thesis have the advantage of existing markets for the produced materials, thus also giving a financial incentive for applying the technology in practice.
Abstract:
We studied the community structure and habitat occupation of epiphytes to understand how these plants cope with a supposedly stressful habitat: i) how epiphytes in general occupy tree trunks, ii) how epiphytic bromeliads occupy their supporting trees, and iii) how CAM bromeliads are spatially distributed. The study was carried out in the dry forest of Jacarepiá, State of Rio de Janeiro. Data collection on epiphytes, phorophytes, and trees was based on the point-centered quarter method. The photosynthetic pathway of the bromeliad species was determined using isotope ratio mass spectrometry. The presence of Gesneriaceae, Araceae, and Cactaceae indicates that some humidity is present in the area, allowing the presence of supposedly less-specialized epiphytes. There was no correlation between epiphyte abundance and phorophyte diameter, and phorophytes were larger than trees that did not host epiphytes. There was a correlation between tree diameter and bromeliad abundance, but no correlation between diameter and bromeliad richness. Only one species was typical of the understorey and one was typical of the canopy, while intermediate heights were occupied by different species. The only C3 bromeliad species (Vriesea procera (Mart. ex Schult.f.) Wittm.) was significantly more exposed than the other species. If CAM occurrence is related to water economy, the fact that a C3 species is subjected to more exposed conditions is remarkable. Further comments are presented on the proportion between CAM bromeliad species and their abundance in the dry forest. Regarding life forms, holoepiphytes, as opposed to hemiepiphytes, were shown not to be restricted by phorophyte diameter, suggesting a more successful establishment of this life form.
Abstract:
Data on corn ear production (kg/ha) of 196 half-sib progenies (HSP) of the maize population CMS-39, obtained from experiments carried out in four environments, were used to adapt and assess the BLP method (best linear predictor) in comparison with selection among and within half-sib progenies (SAWHSP). The 196 HSP of the CMS-39 population, developed by the National Center for Maize and Sorghum Research (CNPMS-EMBRAPA), were related through their pedigree to the recombined progenies of the previous selection cycle. The two methodologies used for the selection of the twenty best half-sib progenies, BLP and SAWHSP, led to similar expected genetic gains. The BLP methodology tended to select a greater number of progenies related through the previous generation (pedigree) than the other method, which implies that greater care with the effective size of the population must be taken with this method. The SAWHSP methodology was efficient in isolating the additive genetic variance component from the phenotypic component. The pedigree system, although unnecessary for the routine use of the SAWHSP methodology, allowed the prediction of an increase in the inbreeding of the population in long-term SAWHSP selection when recombination is simultaneous with the creation of new progenies.
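For context, the predictor underlying the BLP method has the standard textbook form below; this is the general definition, not the exact model fitted in the study:

```latex
\hat{g} = \mu_g + \operatorname{Cov}(g, \mathbf{y})\, \operatorname{Var}(\mathbf{y})^{-1} (\mathbf{y} - \boldsymbol{\mu}_y)
```

Pedigree information enters through the covariance term, which is why BLP tends to favor progenies related to previously selected material, with the consequences for effective population size noted above.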
Abstract:
We present an ultrastructural study of the utilization of human amniotic membrane in the treatment of congenital absence of the vagina in 10 patients. All patients were surgically treated with the application of an amniotic membrane graft using the modified McIndoe and Bannister technique. Sixty days after surgery, samples of the vaginal neo-epithelium were collected for transmission electron microscopy analysis. The ultrastructural findings consisted of a lining of mature squamous epithelium, indicating the occurrence of metaplasia of the amniotic epithelium into vaginal epithelium. The cells were arranged in layers as in the normal vaginal epithelium, i.e., superficial, intermediate and deep layers. There were desmosomes and cytoplasmic intermediate cytokeratin filaments, as well as some remnant features of the previous amniotic epithelium. These findings suggest that human amniotic membrane is able to complete metaplasia into squamous cells, but the mechanism of this cellular transformation is unknown.
Abstract:
Two different pathogenetic mechanisms are proposed for colorectal cancers. The first, the so-called "classic pathway", is the most common and depends on multiple additive mutational events (germline and/or somatic) in tumor suppressor genes and oncogenes, frequently involving chromosomal deletions in key genomic regions. Methodologically, this pathway is recognizable by the phenomenon of loss of heterozygosity. The "mutator pathway", on the other hand, depends on early mutational loss of the mismatch repair system (germline and/or somatic), leading to accelerated accumulation of gene mutations in critical target genes and progression to malignancy. Methodologically, this second pathway is recognizable by the phenomenon of microsatellite instability. The distinction between these pathways seems to be more than academic, since there is evidence that tumors emerging from the mutator pathway have a better prognosis. We report here a very simple methodology, based on a set of tri-, tetra- and pentanucleotide repeat microsatellites, allowing the simultaneous study of microsatellite instability and loss of heterozygosity, which could allocate 70% of colorectal tumors to the classic or the mutator pathway. The ease of execution of the methodology makes it suitable for routine clinical typing.
Abstract:
Environmental issues, including global warming, are serious challenges recognized worldwide, and they have become particularly important for iron and steel manufacturers during the last decades. Many sites have been shut down in developed countries due to environmental regulation and pollution prevention, while a large number of production plants have been established in developing countries, which has changed the economy of this business. Sustainable development is a concept which today affects economic growth, environmental protection, and social progress in setting up the basis for the future ecosystem. A sustainable approach may attempt to preserve natural resources, recycle and reuse materials, prevent pollution, enhance yield and increase profitability. To achieve these objectives, numerous alternatives should be examined in sustainable process design. Conventional engineering work cannot address all of these alternatives effectively and efficiently to find an optimal processing route. A systematic framework is needed as a tool to guide designers in making decisions based on an overall view of the system, identifying the key bottlenecks and opportunities which lead to an optimal design and operation of the system. Since the 1980s, researchers have made great efforts to develop tools for what today is referred to as Process Integration. Advanced mathematics has been used in simulation models to evaluate the various available alternatives considering physical, economic and environmental constraints. Improvements in feed material and operation, a competitive energy market, environmental restrictions and the role of Nordic steelworks as an energy supplier (electricity and district heat) provide strong motivation for integration among industries toward more sustainable operation, which could increase overall energy efficiency and decrease environmental impacts. In this study, a model is developed in several steps for primary steelmaking, with the Finnish steel sector as a reference, to evaluate future operation concepts of a steelmaking site with regard to sustainability. The research started with a potential study on increasing energy efficiency and reducing carbon dioxide emissions through the integration of steelworks with chemical plants for possible utilization of the available off-gases in the system as chemical products. These off-gases from the blast furnace, basic oxygen furnace and coke oven consist mainly of carbon monoxide, carbon dioxide, hydrogen, nitrogen and partially methane (in coke oven gas); they have a relatively low heating value and are currently used as fuel within these industries. A nonlinear optimization technique is used to assess integration with a methanol plant under novel blast furnace technologies and (partial) substitution of coal with other reducing agents and fuels such as heavy oil, natural gas and biomass in the system. The technical aspects of integration and its effect on blast furnace operation, regardless of the capital expenditure of new operational units, are studied to evaluate the feasibility of the idea behind the research. Later on, the concept of a polygeneration system was added and a superstructure was generated with alternative routes for off-gas pretreatment and further utilization in a polygeneration system producing electricity, district heat and methanol.
(Vacuum) pressure swing adsorption, membrane technology and chemical absorption for gas separation; partial oxidation, carbon dioxide reforming and steam reforming for methane conversion; and gas- and liquid-phase methanol synthesis are the main alternative process units considered in the superstructure. Due to the high degree of integration in process synthesis and the optimization techniques involved, equation-oriented modeling is chosen as an effective alternative to the previous sequential modeling strategy for the process analysis of the suggested superstructure. A mixed-integer nonlinear programming (MINLP) model is developed to study the behavior of the integrated system under different economic and environmental scenarios. Net present value and specific carbon dioxide emissions are taken to compare the economic and environmental aspects of the integrated system, respectively, for different fuel systems, alternative blast furnace reductants, implementation of new blast furnace technologies, and carbon dioxide emission penalties. Sensitivity analysis, carbon distribution and the effect of external seasonal energy demand are investigated with different optimization techniques. This tool can provide useful information concerning techno-environmental and economic aspects for decision-making and can estimate the optimal operational conditions of current and future primary steelmaking under alternative scenarios. The results of the work demonstrate that it is possible to develop steelmaking towards more sustainable operation in the future.
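The net present value used as the economic criterion follows its standard definition,

```latex
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}
```

where CF_t is the net cash flow of the integrated system in year t, r is the discount rate and T is the evaluation horizon.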
Abstract:
This article reports on the design and characteristics of substrate mimetics in protease-catalyzed reactions. Firstly, the basis of protease-catalyzed peptide synthesis and the general advantages of substrate mimetics over common acyl donor components are described. The binding behavior of these artificial substrates and the mechanism of catalysis are further discussed on the basis of hydrolysis, acyl transfer, protein-ligand docking, and molecular dynamics studies on the trypsin model. The general validity of the substrate mimetic concept is illustrated by the expansion of this strategy to trypsin-like, glutamic acid-specific, and hydrophobic amino acid-specific proteases. Finally, opportunities for combining the substrate mimetic strategy with chemical solid-phase peptide synthesis and for using substrate mimetics in non-peptide organic amide synthesis are presented.
Abstract:
Within the complex cellular arrangement found in the bone marrow stroma there exists a subset of nonhematopoietic cells referred to as mesenchymal progenitor cells (MPC). These cells can be expanded ex vivo and induced, either in vitro or in vivo, to terminally differentiate into at least seven types of cells: osteocytes, chondrocytes, adipocytes, tenocytes, myotubes, astrocytes and hematopoietic-supporting stroma. This broad multipotentiality, the feasibility of obtaining MPC from bone marrow, cord and peripheral blood, and their transplantability support the impact that the use of MPC will have in clinical settings. However, a number of fundamental questions about the cellular and molecular biology of MPC still need to be resolved before these cells can be used for safe and effective cell and gene therapies intended to replace, repair or enhance the physiological function of the mesenchymal and/or hematopoietic systems.
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as other fields of science including linguistics and the behavioral sciences, are also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication: natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model, one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which are the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of the representation of the meaning of linguistic terms, computations with these representations, and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support, particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies: first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes, particularly the integration of peer review into the evaluation of R&D outputs.
In the context of HR management, we present a fuzzy rule based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications on which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the issues of the practical applications considered in the second part of the thesis.
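As a minimal sketch of the kind of meaning representation and linguistic approximation discussed in the first part of the thesis, the following Python fragment uses standard triangular fuzzy sets; the linguistic scale and all of its parameters are purely illustrative assumptions.

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# A hypothetical linguistic scale for evaluations on [0, 100]:
terms = {
    "poor":    triangular(0, 20, 50),
    "average": triangular(30, 50, 70),
    "good":    triangular(50, 80, 100),
}

# Naive linguistic approximation: retranslate a crisp model output into
# the linguistic term whose meaning matches it best.
score = 72.0
best = max(terms, key=lambda t: terms[t](score))
print(best, round(terms[best](score), 3))  # good 0.733
```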
Abstract:
Hydrolysis of D-valyl-L-leucyl-L-arginine p-nitroanilide (7.5-90.0 µM) by human tissue kallikrein (hK1) (4.58-5.27 nM) at pH 9.0 and 37 °C was studied in the absence and in the presence of increasing concentrations of 4-aminobenzamidine (96-576 µM), benzamidine (1.27-7.62 mM), 4-nitroaniline (16.5-66 µM) and aniline (20-50 mM). The kinetic parameters determined in the absence of inhibitors were: Km = 12.0 ± 0.8 µM and kcat = 48.4 ± 1.0 min-1. The data indicate that the inhibition of hK1 by 4-aminobenzamidine and benzamidine is linear competitive, while the inhibition by 4-nitroaniline and aniline is linear mixed, with the inhibitor being able to bind both to the free enzyme, with a dissociation constant Ki, yielding an EI complex, and to the ES complex, with a dissociation constant Ki', yielding an ESI complex. The calculated Ki values for 4-aminobenzamidine, benzamidine, 4-nitroaniline and aniline were 146 ± 10, 1,098 ± 91, 38.6 ± 5.2 and 37,340 ± 5,400 µM, respectively. The calculated Ki' values for 4-nitroaniline and aniline were 289.3 ± 92.8 and 310,500 ± 38,600 µM, respectively. The fact that Ki' > Ki indicates that 4-nitroaniline and aniline bind to a second binding site in the enzyme with lower affinity than they bind to the active site. The data on the inhibition of hK1 by 4-aminobenzamidine and benzamidine help to explain previous observations that esters, anilides or chloromethyl ketone derivatives of Nα-substituted arginine are more sensitive substrates or inhibitors of hK1 than the corresponding lysine compounds.
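The linear mixed inhibition reported for 4-nitroaniline and aniline corresponds to the standard rate equation below; linear competitive inhibition (observed for 4-aminobenzamidine and benzamidine) is the limiting case Ki' → ∞:

```latex
v = \frac{V_{\max}\,[S]}{K_m\left(1 + \dfrac{[I]}{K_i}\right) + [S]\left(1 + \dfrac{[I]}{K_i'}\right)}
```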
Abstract:
Concentrated solar power (CSP) is a renewable energy technology which could contribute to overcoming global problems related to pollutant emissions and increasing energy demand. CSP utilizes solar irradiation, which is a variable source of energy. In order to utilize CSP technology in energy production and to reliably operate a solar field including a thermal energy storage system, dynamic simulation tools are needed to study the dynamics of the solar field, optimize production and develop control systems. The objective of this Master's Thesis is to compare different concentrated solar power technologies and to configure a dynamic solar field model of one selected CSP field design in the dynamic simulation program Apros, owned by VTT and Fortum. The configured model is based on the German company Novatec Solar's linear Fresnel reflector design. Solar collector components, including dimensions and performance calculation, were developed, as well as a simple solar field control system. The preliminary results of two simulation cases under clear sky conditions were good; the desired, stable superheated steam conditions were maintained in both cases, while, as expected, the amount of steam produced was reduced in the case with lower irradiation. As a result of the model development process, it can be concluded that the configured model works successfully and that Apros is a very capable and flexible tool for configuring new solar field models and control systems and for simulating solar field dynamic behaviour.
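Although the Apros collector components are considerably more detailed, a line-focusing collector performance calculation is commonly built around a first-order balance of the following kind; this is a simplification for illustration, not the model actually implemented in the thesis:

```latex
\dot{Q}_{\mathrm{useful}} = \eta_{\mathrm{opt}} \cdot \mathrm{DNI} \cdot A_{\mathrm{aperture}} - U_{\mathrm{loss}} \, A_{\mathrm{receiver}} \, (T_{\mathrm{m}} - T_{\mathrm{amb}})
```

where η_opt lumps together optical efficiency and incidence-angle effects, DNI is the direct normal irradiance, and the loss term approximates receiver heat losses at the mean fluid temperature T_m.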
Abstract:
Trehalose biosynthesis and its hydrolysis have been extensively studied in yeast, but few reports have addressed the catabolism of exogenously supplied trehalose. Here we report the catabolism of exogenous trehalose by Candida utilis. In contrast to the biphasic growth on glucose, the growth of C. utilis in a mineral medium with trehalose as the sole carbon and energy source is aerobic and exhibits the Kluyver effect. Trehalose is transported into the cell by an inducible trehalose transporter (KM of 8 mM and VMAX of 1.8 µmol trehalose min-1 mg cell (dry weight)-1). The activity of the trehalose transporter is high in cells growing in media containing trehalose or maltose and very low or absent during growth in glucose or glycerol. Similarly, total trehalase activity increased from about 1.0 mU/mg protein in cells growing in glucose to 39.0 and 56.2 mU/mg protein in cells growing in maltose and trehalose, respectively. Acidic and neutral trehalase activities increased during growth in trehalose, with neutral trehalase contributing about 70% of the total activity. In addition to the increased activities of the trehalose transporter and trehalases, growth in trehalose promoted an increase in the activity of alpha-glucosidase and the maltose transporter. These results clearly indicate that maltose and trehalose promote the increase of the enzymatic activities necessary for their catabolism but are also able to stimulate each other's catabolism, as reported to occur in Escherichia coli. We show here for the first time that trehalose induces the catabolism of maltose in yeast.
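The reported transporter constants follow Michaelis-Menten kinetics,

```latex
v = \frac{V_{\max}\,[S]}{K_M + [S]}, \qquad K_M = 8\ \mathrm{mM}, \quad V_{\max} = 1.8\ \mu\mathrm{mol\;min^{-1}\,(mg\ cell\ dry\ weight)^{-1}}
```

so at an external trehalose concentration of 8 mM the transport rate is half-maximal, about 0.9 µmol min-1 mg-1.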
Abstract:
This thesis studied the performance of advanced metering infrastructure (AMI) systems in a challenging Demand Response environment. The aim was to find out what kinds of challenges and bottlenecks could be encountered when utilizing AMI systems in challenging Demand Response tasks. To identify the challenges and bottlenecks, a multilayered Demand Response service concept was formed. The service consists of seven different market layers, comprising the Nordic electricity market and the reserve markets of Fingrid. In the simulations, the AMI systems were benchmarked against these seven market layers. It was found that the current generation of AMI systems is capable of delivering Demand Response on the most challenging market layers when observed from a time-critical viewpoint. Additionally, it was found that to enable wide-scale Demand Response, three major challenges must be acknowledged. The challenges hindering the utilization of wide-scale Demand Response were related to the poor standardization of the systems in use, possible problems in data connectivity solutions and the current electricity market regulation model.