991 results for Flat industrial modeling
Abstract:
This paper presents new estimates of total factor productivity growth in Britain for the period 1770-1860. We use a dual technique recently popularized by Hsieh (1999), and argue that the estimates we derive from factor prices are of similar quality to quantity-based calculations. Our results provide further evidence, derived from this independent set of sources, that productivity growth during the British Industrial Revolution was relatively slow. During the years 1770-1800, TFP growth was close to zero, according to our estimates. The period 1800-1830 experienced an acceleration of productivity growth. The Crafts-Harley view of the Industrial Revolution is thus reinforced. We also consider alternative explanations of slow productivity growth, and reject the interpretation that focuses on the introduction of steam as a general purpose technology.
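The dual (price-side) approach mentioned above infers TFP growth from factor prices rather than quantities: share-weighted growth of factor rental prices minus output-price growth. The sketch below illustrates the arithmetic with hypothetical growth rates, not the paper's actual estimates.

```python
# Illustrative sketch of the dual (price-side) TFP growth calculation:
#   TFP growth = s_K * r_hat + s_L * w_hat - p_hat
# i.e., share-weighted growth of factor prices minus output-price growth.
# All numbers below are hypothetical, not the paper's estimates.

def dual_tfp_growth(s_k, r_hat, s_l, w_hat, p_hat):
    """Share-weighted factor-price growth minus output-price growth."""
    assert abs(s_k + s_l - 1.0) < 1e-9, "factor shares should sum to one"
    return s_k * r_hat + s_l * w_hat - p_hat

# Hypothetical annual growth rates (decimal fractions):
g = dual_tfp_growth(s_k=0.4, r_hat=0.005, s_l=0.6, w_hat=0.01, p_hat=0.006)
print(round(g, 4))  # 0.002, i.e. ~0.2% per year
```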
Abstract:
The past four decades have witnessed an explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, and it offers ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of facilities or services in discrete space or on networks that are related to the public sector, such as emergency services (ambulances, fire stations, and police units), school systems and postal facilities. The paper is structured as follows: first, we will focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section will examine models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section will examine new trends in public sector facility location modeling.
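The P-Median problem mentioned above locates p facilities so as to minimize total demand-weighted distance to the nearest open facility. The following minimal greedy heuristic on a toy distance matrix is only illustrative; practical planning models typically use exact integer-programming formulations or stronger heuristics.

```python
# Minimal greedy heuristic for the P-Median problem: open p facility sites
# one at a time, each time picking the site that most reduces total
# demand-weighted distance to the nearest open facility. Toy data only.

def greedy_p_median(dist, demand, p):
    """dist[i][j]: distance from demand node i to candidate site j."""
    n_sites = len(dist[0])
    open_sites = []
    for _ in range(p):
        best_site, best_cost = None, float("inf")
        for j in range(n_sites):
            if j in open_sites:
                continue
            trial = open_sites + [j]
            cost = sum(w * min(row[s] for s in trial)
                       for w, row in zip(demand, dist))
            if cost < best_cost:
                best_site, best_cost = j, cost
        open_sites.append(best_site)
    return open_sites, best_cost

# Four demand nodes, three candidate sites, p = 2:
dist = [[2, 9, 8], [9, 2, 8], [8, 7, 1], [7, 8, 2]]
demand = [10, 10, 5, 5]
sites, cost = greedy_p_median(dist, demand, 2)
print(sites, cost)  # [2, 0] 115
```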
Abstract:
This paper discusses the implications of using genetically modified crops to biomanufacture pharmaceuticals and industrial compounds from the perspective of their co-existence with conventional agriculture. Such plant-made pharmaceuticals and plant-made industrial products rely on exciting scientific and technological breakthroughs and promise new opportunities for the agricultural sector, but they also entail novel risks. The management of the externalities and of the possible unintended economic effects that arise in this context is critical and poses difficult questions for regulators.
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To avoid overly short time steps, caused by the small grid spacing at the top and bottom of the model that results from the Chebyshev operator, the mesh is stretched in the vertical direction. A major benefit of the Chebyshev operator is that it allows for an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach. The characteristic variables are evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment for sealed-pore boundary conditions, where we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and surface waves in general.
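The Fourier part of the pseudo-spectral scheme computes spatial derivatives along a periodic direction by multiplying by ik in wavenumber space. The sketch below demonstrates only that kernel on a smooth test function; it is not the paper's full Biot solver, and the non-periodic Chebyshev derivative used vertically works analogously on a stretched grid.

```python
# Sketch of the Fourier pseudo-spectral derivative used along a periodic
# (horizontal) direction: transform, multiply by i*k, transform back.
# Illustrative only; not the full poroelastic modeling algorithm.
import numpy as np

def fourier_derivative(f, L):
    """Spectral d/dx of samples f on a periodic domain of length L."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Verify on f(x) = sin(x): the derivative should be cos(x) to
# spectral (near machine-precision) accuracy.
n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
df = fourier_derivative(np.sin(x), L)
print(np.max(np.abs(df - np.cos(x))) < 1e-10)  # True
```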
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods, when the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are highly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning, using digital elevation models in semi-supervised kernel methods. The range of tools and methodological issues discussed in the study includes feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
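Under the manifold assumption, semi-supervised methods let a few labelled points propagate their labels to unlabelled neighbours along an affinity graph. The toy sketch below uses a generic graph-based label-spreading scheme on an RBF affinity matrix; it is not the study's actual algorithm or feature set, only an illustration of the idea.

```python
# Toy graph-based semi-supervised labeling in the spirit of the manifold
# assumption: labels spread over an RBF affinity graph so that nearby
# (on-manifold) points receive similar labels. Illustrative data only.
import numpy as np

def label_spreading(X, y, alpha=0.9, gamma=1.0):
    """y: +1/-1 for labelled points, 0 for unlabelled ones."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                 # RBF affinities
    np.fill_diagonal(W, 0.0)
    D = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = D @ W @ D                           # normalized affinity
    F = np.linalg.solve(np.eye(len(X)) - alpha * S, y)
    return np.sign(F)

# Two clusters, one labelled point each; the rest are unlabelled (0):
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],
              [3.0, 3.0], [3.1, 3.0], [2.9, 3.1]])
y = np.array([1.0, 0, 0, -1.0, 0, 0])
print(label_spreading(X, y))  # cluster 1 -> +1, cluster 2 -> -1
```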
Abstract:
This work was carried out at a manufacturer of household cleaning (LD) and personal care (HP) products that operates an industrial wastewater treatment plant (ETARI). During the study, the biodegradability of some detergent base substances (anionic surfactants), namely LAS and SLES, was monitored and evaluated, as was the industrial effluent at the different stages of the treatment implemented at the ETARI. Throughout the treatment of the streams that make up the industrial wastewater, we sought to establish relationships between key parameters for assessing organic matter content (Chemical Oxygen Demand, COD, and Biochemical Oxygen Demand, BOD) and the aforementioned surfactants. The characterization of the industrial effluent at the different treatment stages shows COD removal efficiencies of about 20% in the bio-oxidation process and 78% in the physico-chemical coagulation/flocculation process, the overall efficiency of the integrated treatment being about 82%. Regarding the anionic surfactant content, removal efficiencies of about 16% and 94% were achieved for the oxidation and physico-chemical processes, respectively, and of 95% when the whole implemented treatment process is considered.
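The stage and overall COD removal figures are consistent with efficiencies of sequential stages combining multiplicatively on the remaining load, as this small check illustrates.

```python
# Removal efficiencies of sequential treatment stages combine
# multiplicatively on the remaining load: overall = 1 - prod(1 - e_i).
# The figures below are the COD efficiencies reported in the abstract.

def overall_efficiency(stage_efficiencies):
    remaining = 1.0
    for e in stage_efficiencies:
        remaining *= (1.0 - e)
    return 1.0 - remaining

# Bio-oxidation ~20%, then coagulation/flocculation ~78%:
print(round(overall_efficiency([0.20, 0.78]) * 100))  # 82, matching the ~82% overall COD removal
```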
Abstract:
The late Early Triassic sedimentary-facies evolution and carbonate carbon-isotope marine record (delta(13)C(carb)) of ammonoid-rich, outer platform settings show striking similarities between the South China Block (SCB) and the widely distant Northern Indian Margin (NIM). The studied sections are located within the Triassic Tethys Himalayan belt (Losar section, Himachal Pradesh, India) and the Nanpanjiang Basin in the South China Block (Jinya section, Guangxi Province), respectively. Carbon isotopes from the studied sections confirm the previously observed carbon cycle perturbations at a time of major paleoceanographic changes in the wake of the end-Permian biotic crisis. This study documents the coincidence between a sharp increase in the carbon isotope composition and the worldwide ammonoid evolutionary turnover (extinction followed by a radiation) occurring around the Smithian-Spathian boundary. Based on recent modeling studies on ammonoid paleobiogeography and taxonomic diversity, we demonstrate that the late Early Triassic (Smithian and Spathian) was a time of major climate change. More precisely, the end-Smithian climate can be characterized as warm and equable, underpinned by a flat, pole-to-equator, sea surface temperature (SST) gradient, while the steep Spathian SST gradient suggests latitudinally differentiated climatic conditions. Moreover, sedimentary evidence suggests a transition from a humid and hot climate during the Smithian to a drier climate from the Spathian onwards. By analogy with comparable carbon isotope perturbations in the Late Devonian, Jurassic and Cretaceous, we propose that high atmospheric CO(2) levels could have been responsible for the observed carbon cycle disturbance at the Smithian-Spathian boundary. 
We suggest that the end-Smithian ammonoid extinction was essentially caused by a warm and equable climate related to an increased CO(2) flux, possibly originating from a short eruptive event of the Siberian igneous province. This increase in atmospheric CO(2) concentrations could have additionally reduced the marine calcium carbonate oversaturation and weakened the calcification potential of marine organisms, including ammonoids, in late Smithian oceans.
Abstract:
The methodology for generating a homology model of the T1 TCR-PbCS-K(d) major histocompatibility complex (MHC) class I complex is presented. The resulting model provides a qualitative explanation of the effect of over 50 different mutations in the region of the complementarity determining region (CDR) loops of the T cell receptor (TCR), the peptide, and the MHC's alpha(1)/alpha(2) helices. The peptide is modified by an azido benzoic acid photoreactive group, which is part of the epitope recognized by the TCR. The construction of the model makes use of closely related homologs (the A6 TCR-Tax-HLA A2 complex, the 2C TCR, the 14.3.d TCR Vbeta chain, the 1934.4 TCR Valpha chain, and the H-2 K(b)-ovalbumin peptide complex), ab initio sampling of CDR loop conformations, and experimental data to select from the set of possibilities. The model shows a complex arrangement of the CDR3alpha, CDR1beta, CDR2beta and CDR3beta loops that leads to the highly specific recognition of the photoreactive group. The protocol can be applied systematically to a series of related sequences, permitting analysis at the structural level of the large TCR repertoire specific for a given peptide-MHC complex.
Abstract:
Synaptic plasticity involves a complex molecular machinery with various protein interactions, but it is not yet clear how its components give rise to the different aspects of synaptic plasticity. Here we ask whether it is possible to mathematically model synaptic plasticity using known substances only. We present a model of a multistable biochemical reaction system and use it to simulate the plasticity of synaptic transmission in long-term potentiation (LTP) or long-term depression (LTD) after repeated excitation of the synapse. According to our model, two phases can be distinguished: first, a "viscosity" phase after the first excitation, whose effects, such as the activation of NMDA receptors and CaMKII, fade out in the absence of further excitations; second, a "plasticity" phase actuated by an identical subsequent excitation that follows after a short time interval and causes the temporarily altered concentrations of AMPA subunits in the postsynaptic membrane to be stabilized. We show that positive feedback is the crucial element in the core chemical reaction, i.e. the activation of the short-tail AMPA subunit by NEM-sensitive factor, which allows multiple stable equilibria to be generated. Three stable equilibria are related to LTP, LTD and a third unfixed state called ACTIVE. Our mathematical approach shows that modeling synaptic multistability is possible using known substances such as NMDA and AMPA receptors, NEM-sensitive factor, glutamate, CaMKII and brain-derived neurotrophic factor. Furthermore, we show that the heteromeric combination of short- and long-tail AMPA receptor subunits fulfills the function of a memory tag.
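The role of positive feedback in generating multiple stable equilibria can be illustrated with a one-variable caricature: a species that activates its own production through a Hill-type term settles into a low (LTD-like) or high (LTP-like) steady state depending on its starting concentration. The equation and parameters below are hypothetical, not the paper's reaction system.

```python
# Toy illustration of multistability from positive feedback:
#   dx/dt = basal + feedback(x) - decay,
# with a Hill-type self-activation term. Two initial states relax to
# distinct stable equilibria. Parameters are hypothetical.

def simulate(x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        # basal production + positive feedback (Hill) - first-order decay
        dx = 0.05 + 2.0 * x**2 / (1.0 + x**2) - x
        x += dt * dx
    return x

low = simulate(0.0)   # settles near the low (LTD-like) state
high = simulate(1.0)  # settles near the high (LTP-like) state
print(round(low, 2), round(high, 2))
```

Without the feedback term the decay makes the single equilibrium globally attracting; it is the sigmoidal self-activation that creates the second stable state.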
Abstract:
Four standard radiation qualities (from RQA 3 to RQA 9) were used to compare the imaging performance of a computed radiography (CR) system (general-purpose and high-resolution phosphor plates of a Kodak CR 9000 system), a selenium-based direct flat panel detector (Kodak Direct View DR 9000), and a conventional screen-film system (Kodak T-MAT L/RA film with a 3M Trimax Regular screen of speed 400) in conventional radiography. Reference exposure levels were chosen according to the manufacturers' recommendations to be representative of clinical practice (exposure index of 1700 for digital systems and a film optical density of 1.4). With the exception of the RQA 3 beam quality, the exposure levels needed to produce a mean digital signal of 1700 were higher than those needed to obtain a mean film optical density of 1.4. In spite of intense developments in the field of digital detectors, screen-film systems are still very efficient detectors for most of the beam qualities used in radiology. An important outcome of this study is the behavior of the detective quantum efficiency of the digital radiography (DR) system as a function of beam energy. The practice of users to increase beam energy when switching from a screen-film system to a CR system, in order to improve the compromise between patient dose and image quality, might not be appropriate when switching from screen-film to selenium-based DR systems.
Abstract:
Audit report on America’s Agricultural Industrial Heritage Landscape, Inc., d/b/a Silos and Smokestacks National Heritage Area (Silos and Smokestacks), in Waterloo, Iowa, for the years ended December 31, 2006 and 2005
Abstract:
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions and, in this way, produce a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e., Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made about the unknown parameters of this problem.
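One classic wording of the two-trace scenario can be evaluated by direct enumeration: two traces are recovered, one of which matches the suspect's profile, and the likelihood ratio compares "the suspect left one of the two traces" against "he left neither". The sketch below works through that enumeration with hypothetical profile frequencies; it stands in for one branch of the Bayesian-network analysis, not for the full set of solutions discussed in the paper.

```python
# Hedged sketch of a two-trace likelihood-ratio computation by enumeration.
# Evidence E: trace 1 bears the suspect's profile, trace 2 a different one.
# Hp: the suspect left one of the two traces; Hd: he left neither.
# gamma / gamma2 are hypothetical population frequencies of the two profiles.

def two_trace_lr(gamma, gamma2):
    # P(E | Hp): average over which trace the suspect left. If he left
    # trace 1, it matches for certain and trace 2 shows the other profile
    # with probability gamma2; if he left trace 2, the evidence (trace 2
    # bearing a different profile) is impossible.
    p_e_hp = 0.5 * (1.0 * gamma2) + 0.5 * 0.0
    # P(E | Hd): both traces come from unknown donors.
    p_e_hd = gamma * gamma2
    return p_e_hp / p_e_hd

print(round(two_trace_lr(gamma=0.01, gamma2=0.05), 6))  # 50.0, i.e. 1/(2*gamma)
```

Note how gamma2 cancels, so under this wording the value of the evidence reduces to 1/(2*gamma); other wordings of the propositions lead to different expressions, which is precisely the ambiguity the paper examines.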