875 results for Linear Multi-step Formulae
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare.

Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative.

In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions.

I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. Difficulties arise because the good can be of low or high quality and the quality of the good is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one in equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which substantially reduces efficiency. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions.

Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information; these findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.

In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free-ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
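To make the Chapter 3 environment concrete, the following is a minimal formal sketch of the partition-function and Core notions mentioned above; the definitions are standard in cooperative game theory, but the notation is mine rather than the thesis's.

```latex
% Partition function: assigns a worth to each embedded coalition (S, P),
% i.e. a coalition S inside a coalition structure (partition) P of the
% player set N. Externalities mean the worth of S depends on how the
% remaining players are organized.
v(S, P) \in \mathbb{R}, \qquad P \text{ a partition of } N,\; S \in P.

% Without externalities this collapses to a characteristic function v(S),
% and the Core is the set of efficient payoff vectors that no coalition
% can improve upon:
C(v) = \left\{ x \in \mathbb{R}^N \;:\; \sum_{i \in N} x_i = v(N),\;\;
\sum_{i \in S} x_i \ge v(S) \;\; \forall\, S \subseteq N \right\}.
```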
Abstract:
The human choriocarcinoma cell line JEG-3 is heterozygous at the adenosine deaminase (ADA) gene locus. Both allelic genes are under strong but incomplete repression, causing very low-level expression of the gene locus. Because cytotoxic adenosine analogues such as 9-β-D-arabinofuranosyladenine (ara-A) and 9-β-D-xylofuranosyladenine (xyl-A) can be specifically detoxified by the action of ADA, these analogues were used to select for JEG-3-derived cells with increased ADA expression. When JEG-3 cells were subjected to a multi-step, successively increasing dosage of either ara-A or xyl-A, resistant cells with increased ADA expression were generated. The increased ADA expression in the resistant cells was unstable, so that when the selective pressure was removed, cellular ADA expression decreased. Subclone analysis of xyl-A-resistant cells revealed that, compared to parental JEG-3 cells, individual resistant cells had either elevated ADA levels, decreased adenosine kinase (ADK) levels, or both. The altered ADA and ADK expression in the resistant cells were found to be independent events. Because of high endogenous tissue conversion factor (TCF) expression in the JEG-3 cells, the allelic nature of the increased ADA expression in most of the resistant cells could not be determined. However, several resistant subcloned cells were found to have lost TCF expression. These TCF-negative cells expressed only the ADA*2 allelic gene product. Cell fusion experiments demonstrated that the ADA*1 allelic gene was intact and functional in the A3-1A7 cell line. Chromosomal analysis of the A3-1A7 cells showed that they had no double minutes or homogeneously staining chromosomal regions, although a pair of new chromosomes was found in these cells. Segregation analysis of the hybrid cells indicated that an ADA*2 allelic gene was probably located on this new chromosome. The analysis of the A3-1A7 cell line suggested that the expression of only ADA*2 in these cells was the result of a possible cis-deregulation of the ADA gene locus or, more probably, an amplification of the ADA*2 allelic gene. Two effective positive selection systems for ADA-positive cells were also developed and tested. These selection systems should eventually lead to the isolation of the ADA gene.
Abstract:
Recent data suggest that the generation of new lymphatic vessels (i.e. lymphangiogenesis) may be a rate-limiting step in the dissemination of tumor cells to regional lymph nodes. However, efforts to study the cellular and molecular interactions that take place between tumor cells and lymphatic endothelial cells have been limited by a lack of lymphatic endothelial cell lines available for study. I have used a microsurgical approach to establish conditionally immortalized lymphatic endothelial cell lines from the afferent mesenteric lymphatic vessels of mice. Characterization of lymphatic endothelial cells and tumor-associated lymphatic vessels revealed high expression levels of VCAM-1, which is known to facilitate adhesion of some tumor cells to vascular endothelial cells. Further investigation revealed that murine melanoma cells selected for high expression of α4, a counter-receptor for VCAM-1, demonstrated enhanced adhesion to lymphatic endothelial cells in vitro, and increased tumorigenicity and lymphatic metastasis in vivo, despite similar lymphatic vessel numbers. Next, I examined the effects of growth factors that regulate lymphangiogenesis, and report that several growth factors are capable of activating survival and proliferation pathways of lymphatic endothelial cells. The dual protein tyrosine kinase inhibitor AEE788 (targeting EGFR and VEGFR-2) inhibited the activation of Akt and MAPK in lymphatic endothelial cells responding to multiple growth factors. Moreover, oral treatment of mice with AEE788 decreased lymphatic vessel density and the production of lymphatic metastasis by human colon cancer cells growing in the cecum of nude mice. In the last set of experiments, I investigated the surgical management of lymphatic metastasis using a novel model of sentinel lymphadenectomy in live mice bearing subcutaneous B16-BL6 melanoma. The data demonstrate that this procedure, when combined with wide excision of the primary melanoma, significantly enhanced survival of syngeneic C57BL/6 mice. Collectively, these results indicate that the production of lymphatic metastasis depends on lymphangiogenesis, tumor cell adhesion to lymphatic endothelial cells, and proliferation of tumor cells in lymph nodes. Thus, lymphatic metastasis is a multi-step, complex, and active process that depends upon multiple interactions between tumor cells and tumor-associated lymphatic endothelial cells.
Abstract:
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive technique for quantitative assessment of the integrity of the blood-brain barrier and the blood-spinal cord barrier (BSCB) in the presence of central nervous system pathologies. However, the results of DCE-MRI show substantial variability. This high variability can be caused by a number of factors, including inaccurate T1 estimation, insufficient temporal resolution and poor contrast-to-noise ratio. My thesis work develops improved methods to reduce the variability of DCE-MRI results. To obtain a fast and accurate T1 map, the Look-Locker acquisition technique was implemented with a novel and truly centric k-space segmentation scheme. In addition, an original multi-step curve fitting procedure was developed to increase the accuracy of T1 estimation. A view-sharing acquisition method was implemented to increase temporal resolution, and a novel normalization method was introduced to reduce image artifacts. Finally, a new clustering algorithm was developed to reduce apparent noise in the DCE-MRI data. The performance of the proposed methods was verified by simulations and phantom studies. As part of this work, the proposed techniques were applied to an in vivo DCE-MRI study of experimental spinal cord injury (SCI). These methods have shown robust results and allow quantitative assessment of regions with very low vascular permeability. In conclusion, application of the improved DCE-MRI acquisition and analysis methods developed in this thesis can improve the accuracy of DCE-MRI results.
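As an illustration of the kind of multi-step curve fitting used with Look-Locker T1 mapping, here is a hedged Python sketch. The three-parameter signal model and the Deichmann-Haase correction are standard; the thesis's actual fitting procedure, parameter ranges and step structure are not specified here, so the names (`ll_model`, `fit_t1`) and numbers below are assumptions.

```python
# A minimal sketch of multi-step T1 fitting for Look-Locker data
# (illustrative only; the thesis's actual procedure may differ).
import numpy as np
from scipy.optimize import curve_fit

def ll_model(t, A, B, T1_star):
    """Three-parameter Look-Locker signal model: S(t) = A - B*exp(-t/T1*)."""
    return A - B * np.exp(-t / T1_star)

def fit_t1(t, signal):
    # Step 1: coarse grid search over T1* for a robust starting point,
    # solving the linear parameters (A, B) in closed form for each trial T1*.
    best = None
    for T1_star in np.linspace(100, 4000, 40):          # ms, assumed range
        X = np.column_stack([np.ones_like(t), -np.exp(-t / T1_star)])
        sol, *_ = np.linalg.lstsq(X, signal, rcond=None)
        sse = np.sum((signal - X @ sol) ** 2)
        if best is None or sse < best[0]:
            best = (sse, sol[0], sol[1], T1_star)

    # Step 2: refine all three parameters with nonlinear least squares.
    _, A0, B0, T10 = best
    popt, _ = curve_fit(ll_model, t, signal, p0=[A0, B0, T10], maxfev=5000)
    A, B, T1_star = popt

    # Step 3: correct the apparent T1* for readout-driven saturation:
    # T1 = T1* (B/A - 1)  (Deichmann-Haase correction).
    return T1_star * (B / A - 1.0)

# Example with synthetic data:
t = np.linspace(50, 3000, 30)                           # inversion times, ms
signal = ll_model(t, 1.0, 1.9, 600.0) + np.random.normal(0, 0.01, t.size)
print(f"estimated T1 ~ {fit_t1(t, signal):.0f} ms")
```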
Abstract:
It is well accepted that tumorigenesis is a multi-step process involving aberrant functioning of genes regulating cell proliferation, differentiation, apoptosis, genome stability, angiogenesis and motility. To obtain a full understanding of tumorigenesis, it is necessary to collect information on all aspects of cell activity. Recent advances in high-throughput technologies allow biologists to generate massive amounts of data, more than might have been imagined decades ago. These advances have made it possible to launch comprehensive projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), which systematically characterize the molecular fingerprints of cancer cells using gene expression, methylation, copy number, microRNA and SNP microarrays, as well as next-generation sequencing assays interrogating somatic mutation, insertion, deletion, translocation and structural rearrangements. Given the massive amount of data, a major challenge is to integrate information from multiple sources and formulate testable hypotheses. This thesis focuses on developing methodologies for integrative analyses of genomic assays profiled on the same set of samples. We have developed several novel methods for integrative biomarker identification and cancer classification. We introduce a regression-based approach to identify biomarkers predictive of therapy response or survival by integrating multiple assays, including gene expression, methylation and copy number data, through penalized regression. To identify key cancer-specific genes accounting for multiple mechanisms of regulation, we have developed the integIRTy software, which provides robust and reliable inferences about gene alteration by automatically adjusting for sample heterogeneity as well as technical artifacts using Item Response Theory. To cope with the increasing need for accurate cancer diagnosis and individualized therapy, we have developed a robust and powerful algorithm called SIBER to systematically identify bimodally expressed genes using next-generation RNA-seq data. We have shown that prediction models built from these bimodal genes have the same accuracy as models built from all genes. Further, prediction models with dichotomized gene expression measurements based on their bimodal shapes still perform well. The effectiveness of outcome prediction using discretized signals paves the way for more accurate and interpretable cancer classification by integrating signals from multiple sources.
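The penalized-regression integration idea can be sketched as follows: concatenate features from several assays measured on the same samples and let an L1 penalty select biomarkers across platforms. The feature sizes, the lasso penalty choice and the toy response below are illustrative assumptions, not the thesis's exact model.

```python
# A minimal sketch of integrative biomarker selection via penalized regression
# (hypothetical data shapes; the thesis's actual model may differ).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 100                                   # samples
expr = rng.normal(size=(n, 500))          # gene expression features
meth = rng.normal(size=(n, 300))          # methylation features
cnv  = rng.normal(size=(n, 200))          # copy number features
y = expr[:, 0] - 0.5 * meth[:, 3] + rng.normal(0, 0.1, n)  # toy response

# Stack assays column-wise; standardize so the penalty treats
# the platforms on a comparable scale.
X = StandardScaler().fit_transform(np.hstack([expr, meth, cnv]))

model = LassoCV(cv=5).fit(X, y)           # cross-validated penalty strength
selected = np.flatnonzero(model.coef_)    # indices of selected biomarkers
print(f"{selected.size} features selected, e.g. {selected[:5]}")
```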
Abstract:
Infection by human immunodeficiency virus type 1 (HIV-1) is a multi-step process, and detailed analyses of the various events critical for productive infection are necessary to clearly understand the infection process and to identify novel targets for therapeutic interventions. Evidence from this study reveals binding of the viral envelope protein to host cell glycosphingolipids (GSLs) as a novel event necessary for the orderly progression of host-cell entry and productive infection by HIV-1. Data obtained from co-immunoprecipitation analyses and confocal microscopy showed that the ability of the viral envelope to interact with the co-receptor CXCR4 and productive infection by HIV-1 were inhibited in cells rendered GSL-deficient, while both these activities were restored after reconstitution of the cells with specific GSLs such as GM3. Furthermore, evidence was obtained using peptide inhibitors of HIV-1 infection to show that binding of a specific region within the V3 loop of the envelope protein gp120 to the host cell GSLs is the trigger necessary for the CD4-bound gp120 to recruit the CXCR4 co-receptor. The infection-inhibitory activity of the V3 peptides was compromised in GSL-deficient cells, but could be restored by reconstitution of GSLs. Based on these findings, a revised model for HIV-1 infection is proposed that accounts for the established interactions between the viral envelope and host cell receptors while highlighting the importance of the new findings, which fill a gap in the current knowledge of the sequential events of HIV-1 entry. According to this model, post-CD4 binding of the HIV-1 envelope surface protein gp120 to host cell GSLs, mediated by the gp120 V3 region, enables formation of the gp120-CD4-GSL-CXCR4 complex and productive infection. The identification of cellular GSLs as an additional class of co-factors necessary for HIV-1 infection enhances basic knowledge of HIV-1 entry that can be exploited for developing novel antiviral therapeutic strategies.
Abstract:
This project develops an electronic system for varying the geometry of the engine of a single-seater car competing in Formula SAE. Formula SAE is a car design competition for students, organized by the Society of Automotive Engineers (SAE). The competition encourages technological innovation in the automotive field and involves students in a real project whose objective is to obtain competitive results while complying with a set of requirements. Varying the engine geometry of a vehicle improves the car's performance by raising the engine's output torque, and in a competition setting any improvement to the vehicle can be decisive for the outcome. The goal of the project is to achieve this variation by controlling the length of the air-intake pipes ("runners") of the combustion engine with a linear stepper motor. The length of the runners is controlled using the information provided by the engine speed sensor and the throttle position sensor. The system is integrated into the vehicle's CAN bus so that the measured information is shared with the other modules. To this end, a preliminary study clarifies the general aspects of the project, the available implementation options, and the background knowledge needed for its development. The proposed solution is based on controlling the linear stepper motor with the PIC32MX795F512-L, a 32-bit microcontroller from Microchip that provides an integrated CAN module and the peripherals used to read the sensors and to drive the stepper motor through the Texas Instruments DRV8805 driver. The work therefore proceeds along two lines: a software part, programming the control system with Microchip's MPLABX IDE, and a hardware part, designing a PCB and the signal-conditioning circuits that connect the microcontroller to the sensors, the driver, the stepper motor and the CAN bus. The PCB was designed with Orcad9.2/Layout. The Microchip development kit MCP2515 CAN Bus Monitor Demo Board is used to evaluate the sensor measurements and to verify the CAN bus, since it allows the traffic on the bus to be inspected and new frames to be injected.
Abstract:
We demonstrate generating complete and playable card games using evolutionary algorithms. Card games are represented in a previously devised card game description language, defined by a context-free grammar. The syntax of this language allows us to use grammar-guided genetic programming. Candidate card games are evaluated through a cascading evaluation function, a multi-step process in which games with undesired properties are progressively weeded out. Three representative examples of generated games are analysed. We observe that while these games are reasonably balanced and have skill elements, they are not yet entertaining for human players. The particular shortcomings of the examples are discussed in regard to how the generative process could be improved to produce quality games.
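A cascading evaluation function of the kind described can be sketched as follows; the stages, thresholds and scores are hypothetical placeholders, not the paper's actual criteria, and `Game`, `Stage` and `cascade` are names introduced only for this illustration.

```python
# A minimal sketch of a cascading (multi-step) evaluation function for
# evolved card games: cheap checks run first, expensive ones only for
# games that survive the earlier stages.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Game:
    genome: str           # e.g. a derivation from the card-game grammar

# Each stage returns a score in [0, 1], or None to reject outright.
Stage = Callable[[Game], Optional[float]]

def playable(game: Game) -> Optional[float]:
    # Cheap syntactic/simulation check: can the game be played at all?
    return 1.0 if game.genome else None

def balanced(game: Game) -> Optional[float]:
    # More expensive: simulate random-vs-random play and check win rates.
    win_rate = 0.5                      # placeholder for simulation results
    return 1.0 - abs(win_rate - 0.5) * 2

def skill(game: Game) -> Optional[float]:
    # Most expensive: does a smarter agent beat a random one?
    smart_win_rate = 0.7                # placeholder for simulation results
    return smart_win_rate

def cascade(game: Game, stages: List[Stage]) -> float:
    """Run stages cheapest-first; bail out as soon as one rejects."""
    total = 0.0
    for stage in stages:
        score = stage(game)
        if score is None or score < 0.2:    # undesired property: weed out
            return total                    # partial credit for stages passed
        total += score
    return total

fitness = cascade(Game(genome="deal 5; play trick"), [playable, balanced, skill])
print(f"fitness = {fitness:.2f}")
```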
Abstract:
This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
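One way to picture partial decision-making with partially described modules is the sketch below; the `Module` interface, the coarse format sets and the breadth-first planner are hypothetical illustrations of the idea, not the paper's actual architecture or API.

```python
# A minimal sketch of planning over pluggable conversion modules that
# advertise only coarse capabilities; each module makes its own
# fine-grained decisions that the planner never sees.
from collections import deque
from typing import List, Optional

class Module:
    """A pluggable converter with a partial capability description."""
    def __init__(self, name: str, accepts: set, produces: set):
        self.name, self.accepts, self.produces = name, accepts, produces

    def convert(self, fmt: str, target: str) -> str:
        # Internal decisions (codec settings, bitrate, resolution)
        # happen here, outside the planner's knowledge.
        return target

def plan(src: str, goal: str, modules: List[Module]) -> Optional[List[Module]]:
    """Breadth-first search over coarse formats; shortest module chain wins."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, path = queue.popleft()
        if fmt == goal:
            return path
        for m in modules:
            if fmt in m.accepts:
                for out in m.produces - seen:
                    seen.add(out)
                    queue.append((out, path + [m]))
    return None                              # no adaptation plan exists

mods = [Module("demux", {"avi"}, {"mpeg2"}),
        Module("transcode", {"mpeg2"}, {"h264", "vp9"})]
steps = plan("avi", "h264", mods)
print(" -> ".join(m.name for m in steps))    # demux -> transcode
```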
Abstract:
Molecular analysis of invasive breast cancer and its precursors has furthered our understanding of breast cancer progression. In the past few years, new multi-step pathways of breast cancer progression have been delineated through genotypic-phenotypic correlations. Nuclear grade, more than any other pathological feature, is strongly associated with the number and pattern of molecular genetic abnormalities in breast cancer cells. Thus, there are two distinct major pathways to the evolution of low- and high-grade invasive carcinomas: whilst the former consistently show oestrogen receptor (ER) and progesterone receptor (PgR) positivity and 16q loss, the latter are usually ER/PgR-negative and show Her-2 over-expression/amplification and complex karyotypes. The boundaries between the evolutionary pathways of well-differentiated/low-grade ductal and lobular carcinomas have been blurred, with changes in E-cadherin expression being one of the few distinguishing features between the two. In addition, lesions long thought to be precursors of breast carcinomas, such as hyperplasia of usual type, are currently considered mere risk indicators, whilst columnar cell lesions are now implicated as non-obligate precursors of atypical ductal hyperplasia (ADH) and well-differentiated ductal carcinoma in situ (DCIS). However, only through the combination of comprehensive morphological analysis and cutting-edge molecular tools can this knowledge be translated into clinical practice and patient management. Copyright © 2005 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Abstract:
A novel one-pot process has been developed for the preparation of PbS nanocrystals in the conjugated polymer poly[2-methoxy-5-(2-ethylhexyloxy)-p-phenylenevinylene] (MEH-PPV). Current techniques for making such composite materials rely upon synthesizing the nanocrystals and the conducting polymer separately, and subsequently mixing them. This multi-step technique has two serious drawbacks: the templating surfactant must be removed before mixing, and co-solvent incompatibility causes aggregation. In our method, we eliminate the need for an initial surfactant by using the conducting polymer to terminate and template nanocrystal growth. Additionally, the final product is soluble in a single solvent. We present materials analysis showing that PbS nanocrystals can be grown directly in a conducting polymer, that the resulting composite is highly ordered, and that the nanocrystal size can be controlled.
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and features. We measure proximity in an average fashion to address possible non-uniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only compare these modes but also illustrate the efficiency of the algorithm.
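The multi-step (filter-and-refine) paradigm with lower and upper proximity bounds can be sketched as follows. The bounding-box bounds and the plain average-distance measure are simplified stand-ins for the paper's weighted proximity relationships, and the function names are mine.

```python
# A minimal sketch of multi-step top-k feature ranking: cheap lower/upper
# bounds filter candidates, the expensive exact computation refines only
# the survivors. Bounds here are illustrative assumptions.
import heapq
import math

def exact_avg_dist(feature, cluster_pts):
    """Expensive step: exact average distance from a feature to the cluster."""
    return sum(math.dist(feature, p) for p in cluster_pts) / len(cluster_pts)

def bounds(feature, bbox):
    """Cheap step: lower/upper distance bounds via the cluster's bounding box."""
    (x0, y0), (x1, y1) = bbox
    fx, fy = feature
    dx = max(x0 - fx, 0, fx - x1); dy = max(y0 - fy, 0, fy - y1)
    lower = math.hypot(dx, dy)                      # distance to the box
    upper = max(math.dist(feature, c) for c in
                [(x0, y0), (x0, y1), (x1, y0), (x1, y1)])
    return lower, upper

def top_k(features, cluster_pts, k):
    xs, ys = zip(*cluster_pts)
    bbox = ((min(xs), min(ys)), (max(xs), max(ys)))
    bl = {f: bounds(f, bbox) for f in features}
    # Filter: the k-th smallest upper bound caps the answer set, so any
    # feature whose lower bound exceeds it is pruned without refinement.
    threshold = sorted(up for _, up in bl.values())[k - 1]
    cands = sorted((f for f in features if bl[f][0] <= threshold),
                   key=lambda f: bl[f][0])
    best = []                                       # size-k max-heap via negation
    for f in cands:
        if len(best) == k and bl[f][0] >= -best[0][0]:
            break                                   # no remaining candidate qualifies
        d = exact_avg_dist(f, cluster_pts)          # refine
        heapq.heappush(best, (-d, f))
        if len(best) > k:
            heapq.heappop(best)
    return sorted((-d, f) for d, f in best)

pts = [(0, 0), (1, 1), (0, 1), (1, 0)]
feats = [(5, 5), (2, 2), (0.5, 0.5), (10, 0)]
print(top_k(feats, pts, k=2))
```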
Abstract:
Algae are a potential new biomass source for energy production, but there is limited information on their pyrolysis and its kinetics. The main aim of this thesis is to investigate the pyrolytic behaviour and kinetics of Chlorella vulgaris, a green microalga. Under pyrolysis conditions, this microalga shows capabilities comparable to terrestrial biomass for energy and chemicals production. Also, evidence from a preliminary pyrolysis in an intermediate pilot-scale reactor supports the applicability of this microalga in existing pyrolysis reactors. Thermal decomposition of Chlorella vulgaris occurs over a wide temperature range (200-550°C) via multi-step reactions. To evaluate the kinetic parameters of the pyrolysis process, two approaches, isothermal and non-isothermal experiments, are applied in this work. A newly developed Pyrolysis-Mass Spectrometry (Py-MS) technique has the potential for isothermal measurements with a short run time and a small sample size requirement. The equipment and procedure are assessed by kinetic evaluation of the thermal decomposition of polyethylene and lignocellulose-derived materials (cellulose, hemicellulose, and lignin). For the non-isothermal experiments, a Thermogravimetry-Mass Spectrometry (TG-MS) technique is used. Evolved gas analysis provides information on the evolution of volatiles, and these data lead to a multi-component model. The kinetic triplets (apparent activation energy, pre-exponential factor, and apparent reaction order) from the isothermal experiments are 57 kJ/mol, 5.32 (logA, min-1) and 1.21-1.45 for the low temperature region; 9 kJ/mol, 1.75 (logA, min-1) and 1.45 for the middle temperature region; and 40 kJ/mol, 3.88 (logA, min-1) and 1.45-1.15 for the high temperature region. The kinetic parameters from the non-isothermal experiments vary with the different fractions in the algal biomass: apparent activation energies range over 73-207 kJ/mol, pre-exponential factors over 5-16 (logA, min-1), and apparent reaction orders over 1.32-2.00. The kinetic procedures reported in this thesis can be applied to other kinds of biomass and algae in future work.
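For reference, the kinetic triplet reported above parameterizes the standard n-th order rate law, and a multi-component model sums one such term per biomass fraction. The weighted form below is a generic sketch; the weights c_i are an assumption of this illustration, not necessarily the exact model used in the thesis.

```latex
% Single-step n-th order rate law underlying the kinetic triplet (E_a, A, n):
\frac{d\alpha}{dt} = A \exp\!\left(-\frac{E_a}{RT}\right)(1-\alpha)^{n},
\qquad \alpha = \text{conversion}.

% Multi-component form: one term per biomass fraction i (protein,
% carbohydrate, lipid, ...), weighted by its mass fraction c_i:
\frac{d\alpha}{dt} = \sum_{i} c_i\, A_i
\exp\!\left(-\frac{E_{a,i}}{RT}\right)(1-\alpha_i)^{n_i}.
```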
Abstract:
Biomass-To-Liquid (BTL) is one of the most promising low-carbon processes available to support the expanding transportation sector. This multi-step process produces hydrocarbon fuels from biomass: the so-called “second generation biofuels” that, unlike first generation biofuels, can make use of a wider range of biomass feedstock than just plant oils and sugar/starch components. A BTL process based on gasification has yet to be commercialized. This work focuses on the techno-economic feasibility of nine BTL plants. The scope was limited to hydrocarbon products, as these can be readily incorporated and integrated into conventional markets and supply chains. The evaluated BTL systems were based on pressurised oxygen gasification of wood biomass or bio-oil, and they were characterised by different fuel synthesis processes including Fischer-Tropsch synthesis, the Methanol to Gasoline (MTG) process and the Topsoe Integrated Gasoline (TIGAS) synthesis. This was the first time that these three fuel synthesis technologies were compared in a single, consistent evaluation. The selected process concepts were modelled using the process simulation software IPSEpro to determine mass balances, energy balances and product distributions. For each BTL concept, a cost model was developed in MS Excel to estimate capital, operating and production costs. An uncertainty analysis, based on the Monte Carlo statistical method, was also carried out to examine how uncertainty in the input parameters of the cost model could affect its output (i.e. production cost). This was the first time that an uncertainty analysis was included in a published techno-economic assessment study of BTL systems. It was found that bio-oil gasification cannot currently compete with solid biomass gasification due to the lower efficiencies and higher costs associated with the additional thermal conversion step of fast pyrolysis. Fischer-Tropsch synthesis was the most promising fuel synthesis technology for commercial production of liquid hydrocarbon fuels, since it achieved higher efficiencies and lower costs than TIGAS and MTG. None of the BTL systems were competitive with conventional fossil fuel plants. However, if the government tax take were reduced by approximately 33% or a subsidy of £55/t dry biomass were available, transport biofuels could be competitive with conventional fuels. Large-scale biofuel production may be possible in the long term through subsidies, fuel price rises and legislation.
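The Monte Carlo uncertainty analysis can be illustrated with a short sketch: sample the uncertain cost-model inputs from assumed distributions, propagate them through the cost model, and read percentiles off the resulting production-cost distribution. All distributions and the toy cost formula below are assumptions of this sketch, not the thesis's MS Excel model.

```python
# A minimal sketch of Monte Carlo uncertainty analysis on a BTL
# production-cost model (all numbers are illustrative placeholders).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                        # Monte Carlo samples

# Uncertain inputs, sampled from assumed distributions:
capex = rng.triangular(150e6, 200e6, 300e6, N)     # plant capital cost, GBP
biomass_price = rng.normal(60, 10, N)              # GBP per dry tonne
availability = rng.uniform(0.85, 0.95, N)          # plant uptime fraction
fuel_yield = rng.normal(0.18, 0.02, N)             # t fuel per t dry biomass

feed = 2000 * 330 * availability                   # t dry biomass per year
fuel = feed * fuel_yield                           # t fuel per year
annualized_capex = capex * 0.13                    # simple capital charge
opex = feed * biomass_price + 0.04 * capex         # feedstock + fixed O&M

cost_per_tonne = (annualized_capex + opex) / fuel  # GBP per t fuel

p5, p50, p95 = np.percentile(cost_per_tonne, [5, 50, 95])
print(f"production cost: median {p50:.0f} GBP/t "
      f"(90% interval {p5:.0f}-{p95:.0f} GBP/t)")
```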
Abstract:
A continuous multi-step synthesis of 1,2-diphenylethane was performed sequentially in a structured compact reactor. The process involved a Heck C-C coupling reaction followed by the addition of hydrogen to reduce the intermediate obtained in the first step. Both reactions were catalysed by microspherical carbon-supported Pd catalysts. Owing to the integration of the micro heat exchanger, the static mixer and the mesoscale packed-bed reaction channel, the compact reactor proved to be an intensified tool for promoting the reactions. In comparison with a batch reactor, the flow process in the compact reactor was more efficient because: (i) the reaction time was significantly reduced (ca. 7 min versus several hours), (ii) no additional ligands were used, and (iii) the reaction was run at lower operating pressure and temperature. Pd leached in the Heck reaction step was shown to be effectively recovered in the subsequent hydrogenation section, and the catalytic activity of the system could be largely retained by reverse-flow operation. © 2009 Elsevier Inc. All rights reserved.