982 results for DETERMINES


Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the 1975 base level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
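To make the structure of such a least-cost calculation concrete, here is a minimal sketch of a linear program of this kind in Python with scipy; the control measures, their costs, and the reduction figures are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical control measures: annualized cost ($M/yr) and the
# RHC / NOx reductions (tons/day) each delivers at full deployment.
costs     = np.array([12.0, 30.0, 8.0, 22.0])      # objective coefficients
rhc_reduc = np.array([120.0, 40.0, 60.0, 150.0])   # tons/day RHC removed
nox_reduc = np.array([10.0, 180.0, 30.0, 90.0])    # tons/day NOx removed

base_rhc, base_nox     = 670.0, 790.0              # 1975 base emissions
target_rhc, target_nox = 400.0, 600.0              # desired emission levels

# Minimize total cost subject to achieving the required reductions; each
# control's deployment level x_i lies in [0, 1] (fraction of full deployment).
res = linprog(
    c=costs,
    A_ub=np.vstack([-rhc_reduc, -nox_reduc]),
    b_ub=[-(base_rhc - target_rhc), -(base_nox - target_nox)],
    bounds=[(0, 1)] * len(costs),
)
print(res.x, res.fun)   # deployment fractions and minimum annualized cost
```

Sweeping the target emission levels and re-solving traces out the cost-versus-emissions curve that the study combines with the air quality models.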

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Abstract:

This thesis has two major parts. The first part describes a high energy cosmic ray detector, the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large area (0.25 m² sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.

HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack and the energy loss (ΔE) in the stack. Since the total energy of an isotope is given by E = γM (in units where c = 1), the mass M can be determined as M = ΔE/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
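A toy numerical check of this mass relation (in units where c = 1 and energy is expressed in amu); the Lorentz factors below are illustrative, not instrument data.

```python
# Toy check of M = dE/dgamma, where E = gamma * M (c = 1, energy in amu).
# Illustrative numbers only: a 56Fe-like nucleus slowing in the NaI stack.
M_true = 55.93                       # amu
gamma_in, gamma_out = 3.10, 2.40     # Lorentz factor before / after the stack
E_in, E_out = gamma_in * M_true, gamma_out * M_true
dE, dgamma = E_in - E_out, gamma_in - gamma_out
print(dE / dgamma)                   # recovers ~55.93 amu
```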

The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set, also made at the Lawrence Berkeley Laboratory Bevalac using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~0.25 amu, through the use of the Heavy Isotope Spectrometer Telescope (HIST), which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.

Abstract:

Transcription factor p53 is the most commonly altered gene in human cancer. As a redox-active protein in direct contact with DNA, p53 can directly sense oxidative stress through DNA-mediated charge transport. Electron hole transport occurs with a shallow distance dependence over long distances through the π-stacked DNA bases, leading to the oxidation and dissociation of DNA-bound p53. The extent of p53 dissociation depends upon the redox potential of the response element DNA in direct contact with each p53 monomer. The DNA sequence dependence of p53 oxidative dissociation was examined by electrophoretic mobility shift assays using radiolabeled oligonucleotides containing both synthetic and human p53 response elements with an appended anthraquinone photooxidant. Greater p53 dissociation is observed from DNA sequences containing low redox potential purine regions, particularly guanine triplets, within the p53 response element. Using denaturing polyacrylamide gel electrophoresis of irradiated anthraquinone-modified DNA, the DNA damage sites, which correspond to locations of preferred electron hole localization, were determined. The resulting DNA damage preferentially localizes to guanine doublets and triplets within the response element. Oxidative DNA damage is inhibited in the presence of p53, but only at DNA sites within the response element, and therefore in direct contact with p53. From these data, predictions about the sensitivity of human p53-binding sites to oxidative stress, as well as possible biological implications, have been made. On the basis of our data, the guanine pattern within the purine region of each p53-binding site determines the response of p53 to DNA-mediated oxidation, yielding for some sequences the oxidative dissociation of p53 from a distance and thereby providing another potential role for DNA charge transport chemistry within the cell.

To determine whether the change in p53 response element occupancy observed in vitro also occurs in cellulo, chromatin immunoprecipitation (ChIP) and quantitative PCR (qPCR) were used to directly quantify p53 binding to certain response elements in HCT116N cells. The HCT116N cells, which contain wild type p53, were treated with the photooxidant [Rh(phi)2bpy]3+ and with Nutlin-3 to upregulate p53, and were subsequently irradiated to induce oxidative genomic stress. To covalently tether p53 interacting with DNA, the cells were fixed with disuccinimidyl glutarate and formaldehyde. The nuclei of the harvested cells were isolated, sonicated, and immunoprecipitated using magnetic beads conjugated with a monoclonal p53 antibody. The purified immunoprecipitated DNA was then quantified via qPCR and genomic sequencing. Overall, the ChIP results varied significantly over ten experimental trials, but one trend is observed throughout: greater variation in p53 occupancy is observed for response elements from which oxidative dissociation would be expected, while significantly less change in p53 occupancy occurs for response elements from which oxidative dissociation would not be anticipated.

The chemical oxidation of transcription factor p53 via DNA CT was also investigated at the amino acid level of the protein. Transcription factor p53 plays a critical role in the cellular response to stress stimuli, which may be modulated through the redox modulation of conserved cysteine residues within the DNA-binding domain. The residues within p53 that enable oxidative dissociation are investigated here. Of the 8 mutants studied by electrophoretic mobility shift assay (EMSA), only the C275S mutation significantly decreased the protein's affinity (KD) for the Gadd45 response element. EMSAs of p53 oxidative dissociation promoted by photoexcitation of anthraquinone-tethered Gadd45 oligonucleotides were used to determine the influence of p53 mutations on oxidative dissociation; mutation to C275S severely attenuates oxidative dissociation, while C277S substantially attenuates dissociation. Differential thiol labeling was used to determine the oxidation states of cysteine residues within p53 after DNA-mediated oxidation. Reduced cysteines were iodoacetamide labeled, while oxidized cysteines participating in disulfide bonds were 13C2D2-iodoacetamide labeled. Intensities of the respective iodoacetamide-modified peptide fragments were analyzed using a QTRAP 6500 LC-MS/MS system, quantified with Skyline, and directly compared. A distinct shift in peptide labeling toward 13C2D2-iodoacetamide-labeled cysteines is observed in oxidized samples as compared to the respective controls. All of the observable cysteine residues trend toward the heavy label under conditions of DNA CT, indicating the formation of multiple disulfide bonds, potentially among C124, C135, C141, C182, C275, and C277. Based on these data, it is proposed that disulfide formation involving C275 is critical for inducing oxidative dissociation of p53 from DNA.

Abstract:

Oxygenic photosynthesis fundamentally transformed our planet by releasing molecular oxygen and altering major biogeochemical cycles, and this exceptional metabolism relies on a redox-active cubane cluster of four manganese atoms. Not only is manganese essential for producing oxygen; it is also oxidized only by oxygen and oxygen-derived species. Thus the history of manganese oxidation provides a valuable perspective on our planet's environmental past, the ancient availability of oxygen, and the evolution of oxygenic photosynthesis. Broadly, the geologic record of manganese deposition is a chronicle of ancient manganese oxidation: manganese is introduced into the fluid Earth as Mn(II), and it remains only a trace component in sedimentary rocks until it is oxidized, forming insoluble Mn(III,IV) precipitates that become concentrated in the rock record. Because these manganese oxides are highly favorable electron acceptors, they often undergo reduction in sediments through anaerobic respiration and abiotic reaction pathways.

The following dissertation presents five chapters investigating manganese cycling, both by examining ancient examples of manganese enrichments in the geologic record and by exploring the mineralogical products of various pathways of manganese oxide reduction that may occur in sediments. The first chapter explores the mineralogical record of manganese and reports abundant manganese reduction recorded in six representative manganese-enriched sedimentary sequences. This is followed by a second chapter that further analyzes the earliest significant manganese deposit, 2.4 billion years ago, and determines that it predates the origin of oxygenic photosynthesis, thereby supporting manganese-oxidizing photosynthesis as an evolutionary precursor to oxygenic photosynthesis. The lack of oxygen during this early manganese deposition was partially established using oxygen-sensitive detrital grains, and so a third chapter delves into what these grains mean for oxygen constraints using a mathematical model. The fourth chapter returns to processes affecting manganese post-deposition, and explores the relationships between manganese mineral products and (bio)geochemical reduction processes to understand how various manganese minerals can reveal ancient environmental conditions and biological metabolisms. Finally, a fifth chapter considers whether manganese can be mobilized and enriched in sedimentary rocks after deposition and determines that manganese was concentrated secondarily in a 2.5 billion-year-old example from South Africa. Overall, this thesis demonstrates how microbial processes, namely photosynthesis and metal oxide-reducing metabolisms, are linked to and recorded in the rich complexity of the manganese mineralogical record.

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, an optimal training distribution that depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
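For concreteness, here is a minimal sketch of the underlying weighting step: importance weights w(x) = p_test(x)/p_train(x) applied to a fixed training sample. The Gaussian densities and the weighted polynomial fit are illustrative assumptions of this sketch, not the thesis's Targeted Weighting criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed training sample drawn from p_train; the test inputs follow p_test.
x_train = rng.normal(0.0, 1.0, size=500)               # p_train = N(0, 1)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=500)

def density(x, mu, sigma):
    """Gaussian density, used here for both p_train and p_test."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights make the fixed sample mimic p_test = N(1, 1).
w = density(x_train, 1.0, 1.0) / density(x_train, 0.0, 1.0)

# Weighted least-squares fit of a cubic; np.polyfit squares its weight
# argument, so passing sqrt(w) weights each squared error by w.
coef = np.polyfit(x_train, y_train, deg=3, w=np.sqrt(w))
print(coef)
```

The effect the thesis studies is visible here: points rare under p_train but common under p_test receive large weights, which reduces the effective sample size and can hurt out-of-sample error, hence the need for a criterion that decides whether a given set of weights helps.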

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is the main source of their advantage over previous algorithms proposed in the covariate shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method both summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing a means to discriminate groups of animals, for example according to their genetic line.

Abstract:

The study of the psychosocial dimensions of work has grown in importance in recent decades, owing to the new global political and economic context of globalization, which drives changes in the world of work and exposes workers to occupational risk factors, among them stress. The professional category of the Community Health Agent (ACS), created in the context of the health reforms that Brazil has undergone since the 1980s, has as one of its main purposes to act in the reorganization of the country's health system. A specific prerequisite of the ACS role is that the agent must live in the area served by the Family Health Team, a fact responsible for a unique aspect within studies in the field of occupational health. In this setting the nurse plays a leadership role and has a distinctive characteristic, namely the maintenance of constant contact with the community, carrying out activities of close interaction with the ACS, and should prevent or minimize stressors and possible health harms within the scope of occupational health. The object of the present study is the work of the ACS as a generator of occupational stress in the Family Health Program (PSF). Its general objective is to discuss occupational stress as perceived by the ACS in the PSF, in a Programmatic Area of the municipality of Rio de Janeiro. This is a descriptive study with a qualitative approach. The study settings were Family Health Units of the municipality of Rio de Janeiro, and the subjects were 32 ACS working in three PSF modules. Data were collected through semi-structured individual interviews, organized and analyzed using the methodology of Content Analysis, from which the following categories were identified: frustration, ACS work, representation of work, work process, stress, and the work-health relationship. The results identify low recognition interfering with productivity and self-esteem, excessive intensity and pace of work, an emphasis on bureaucracy in carrying out the work, and violence as a factor of insecurity, and they acknowledge the interference of stress with both physical and mental health. The analysis of the work of the ACS in the PSF points to aspects that hinder their full performance, and their practice extends beyond the standardized concepts contained in the Ordinances (Portarias) and other instruments that regulate their duties. Real work represents a richer and more complex universe than prescribed work, and in this study it emerged as a source of tension, illness, and malaise, expressed in voiced complaints.

Abstract:

The rainfall regime and the karstic nature of the subsoil determine the alternation of a period of flow and a period of drought for a large number of Mediterranean streams. Within this type of stream it is possible to distinguish temporary streams, characterised by a period of flow lasting several months that permits the establishment of the principal groups of aquatic insects, and ephemeral streams, whose very brief period of flow permits the establishment of a community reduced to a few species of Diptera. This paper aims to study the structure of the communities which colonise this particular type of stream and the ecology of the principal species which constitute these communities. Four French temporary streams were examined, and temperature regimes, dissolved oxygen, calcium, and magnesium were measured. Samples of fauna were taken regularly and the biotic composition established. The similarities among the temporary streams are analysed and their communities compared with those of permanent streams.

Abstract:

This project aims to serve as a starting point for the study of the acoustic behavior of perforated sheets as a façade cladding solution. To that end, two candidate façade models are presented and analyzed with the SoundFlow software, which determines their absorption coefficient. In order to find the most suitable solution, we focus on the following variables: the separation between the sheet and the wall (d), the diameter of the sheet's holes (Ø), and the percentage of perforated sheet area, or porosity (p). Beforehand, the main sources of noise pollution and their noise spectra are studied in order to determine the frequency range on which our efforts to increase the absorption coefficient should concentrate.
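As a rough indication of how d, Ø, and p set the frequency band of interest, here is a sketch of the classical perforated-panel (Helmholtz) resonance estimate; the panel thickness and the end-correction factor are assumptions of this illustration, not parameters taken from the project.

```python
import math

def panel_resonance_hz(p, d_cavity, hole_diam, t_panel, c=343.0):
    """Classical perforated-panel absorber resonance estimate.

    p         -- porosity (perforated area fraction, 0..1)
    d_cavity  -- air gap between sheet and wall (d), in metres
    hole_diam -- hole diameter (Ø), in metres
    t_panel   -- sheet thickness, in metres (assumed here)
    """
    # Effective neck length: panel thickness plus an end correction
    # of roughly 0.8 times the hole diameter (common approximation).
    t_eff = t_panel + 0.8 * hole_diam
    return (c / (2 * math.pi)) * math.sqrt(p / (d_cavity * t_eff))

# Example: 20% porosity, 50 mm cavity, 5 mm holes, 2 mm sheet.
print(panel_resonance_hz(0.20, 0.050, 0.005, 0.002))  # ~1.4 kHz here
```

Absorption peaks near this resonance, so shifting d, Ø, or p moves the band where the coefficient is highest, which is why the noise spectra are studied first.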

Abstract:

The three-dimensional coupled wave theory is extended to systematically investigate the diffraction properties of finite-sized anisotropic volume holographic gratings (VHGs) under ultrashort pulsed beam (UPB) readout. The effects of the grating's geometrical size and of the polarizations of the recording and readout beams on the diffraction properties are presented, in particular under the influence of grating material dispersion. The wavelength selectivity of the finite-sized VHG is analyzed; it determines the intensity distributions of the transmitted and diffracted pulsed beams along the output face of the VHG. The distortion and widening of the diffracted pulsed beams differ from point to point on the output face, as is numerically shown for a VHG recorded in a LiNbO3 crystal. The beam quality is analyzed, and the variations of the total diffraction efficiency are shown in relation to the geometrical size of the grating and the temporal width of the readout UPB. In addition, the diffraction properties of the finite-sized and one-dimensional VHGs under pulsed and continuous-wave readout are compared. The study shows the potential application of VHGs in controlling the spatial and temporal features of UPBs simultaneously.
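For orientation, here is a sketch of the textbook one-dimensional continuous-wave baseline against which such results are typically compared: Kogelnik's two-wave diffraction efficiency for a lossless unslanted transmission grating read out at Bragg incidence. The parameter values are illustrative, not the paper's LiNbO3 numbers.

```python
import math

def kogelnik_efficiency(delta_n, thickness, wavelength, theta_bragg):
    """Kogelnik two-wave efficiency, lossless transmission VHG at Bragg incidence.

    delta_n     -- refractive-index modulation amplitude
    thickness   -- grating thickness (same length unit as wavelength)
    theta_bragg -- Bragg angle inside the medium, in radians
    """
    nu = math.pi * delta_n * thickness / (wavelength * math.cos(theta_bragg))
    return math.sin(nu) ** 2

# Illustrative values: 1e-4 index modulation, 2 mm grating, 532 nm readout.
print(kogelnik_efficiency(1e-4, 2e-3, 532e-9, math.radians(10)))
```

The finite-size, anisotropic, pulsed-readout theory of the paper generalizes exactly this kind of scalar CW result.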

Abstract:

Ipseity in Paul Ricoeur's argumentative ethics is the basic reference of the hermeneutics of the self, to which it always returns. It establishes constant reflective mediation in opposition to the supposedly immediate position of the subject. The sameness of the self has the other as its counterpart. In this comparison, sameness is synonymous with idem-identity, in opposition to ipse-identity (ipseity), which includes alterity. This inclusion calls into question the capacity of the self that is constitutive of ethics, and therefore legally and morally responsible in the various injunctions of the other. Ricoeur's ethical project is comprehensible from and within his peculiar methodology, which he calls the dialectic between teleological ethics and deontological morality. This dialectic is grounded in the triad of desire, duty, and practical wisdom in reciprocal activity, privileging the teleological dimension of the desire for the good life with and for the other in just institutions. Argumentative ethics has the function of giving content to the two dialectics through the inclusion of the other in the self, without which reflection on ipseity would lose its meaning. The practical wisdom of ethics and of moral judgment in situation includes discussion, because conflict is insurmountable and determines the argument toward eventual consensus. Our thesis is the affirmation of the capacity of the self to carry out constructive actions. Beyond the critique of ideology and utopia, Ricoeur grounds the dialectic between the hope principle and the responsibility principle by means of the utopian path of the future and the realistic path of concern for the present, in the face of unprecedented cases in which life and the ecosystem are associated. Personal and collective imputation from the past, in the present, and toward the future is owed to responsibility. Ipseity builds the future in the present through ethical decisions.

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
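The following sketch shows the general structure such window-coupling algorithms share: per-subflow congestion windows whose increase is damped through the aggregate rate. It is a generic illustrative scheme, not Balia's actual update law.

```python
# Illustrative fluid-style update for coupled multipath congestion control.
# Generic scheme showing the shared structure of MP-TCP algorithms
# (per-subflow windows, coupling through the aggregate rate); NOT Balia.

def on_ack(windows, rtts, r, alpha=1.0):
    """Increase subflow r's window on an ACK, coupled across subflows."""
    rates = [w / t for w, t in zip(windows, rtts)]   # per-subflow rates (pkt/s)
    total = sum(rates)
    # Coupled increase: a subflow gains less when the aggregate rate is
    # large, which steers traffic toward less-congested paths.
    windows[r] += alpha * rates[r] / (total * windows[r])
    return windows

def on_loss(windows, r):
    """Multiplicative decrease on subflow r after a loss."""
    windows[r] = max(1.0, windows[r] / 2)
    return windows

w = [10.0, 4.0]        # congestion windows of two subflows (packets)
rtt = [0.05, 0.12]     # round-trip times (s)
for _ in range(1000):
    w = on_ack(w, rtt, 0)
    w = on_ack(w, rtt, 1)
print(w)
```

The design tradeoff the thesis formalizes lives in choices like alpha and the coupling term: more coupling improves TCP-friendliness but slows the response of individual subflows.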

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm that is based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose a more efficient algorithm, which incurs a loss in optimality of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semi-definite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and the computation time is reduced by 100x compared with iterative methods.
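A minimal sketch of the ADMM iteration such distributed solvers build on, written for a generic consensus problem min f(x) + g(z) subject to x = z; the quadratic f and box-constraint g here are placeholders, and the closed-form OPF subproblem solutions described above would replace these generic updates.

```python
import numpy as np

# Generic ADMM (scaled form) for: min f(x) + g(z)  s.t.  x - z = 0,
# with f(x) = 0.5*||A x - b||^2 and g enforcing box constraints.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(8, 4)), rng.normal(size=8)
rho, n = 1.0, 4
x = z = u = np.zeros(n)

lhs = A.T @ A + rho * np.eye(n)   # same matrix at every iteration
for _ in range(200):
    x = np.linalg.solve(lhs, A.T @ b + rho * (z - u))   # x-update (closed form)
    z = np.clip(x + u, -0.5, 0.5)                       # z-update (projection)
    u = u + x - z                                       # scaled dual update
print(x)
```

The speedups reported in the thesis come precisely from making the x- and z-updates closed-form (or small constant-size eigenproblems) instead of inner iterative solves.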

Abstract:

Let F = Q(ζ + ζ^(-1)) be the maximal real subfield of the cyclotomic field Q(ζ), where ζ is a primitive qth root of unity and q is an odd rational prime. The numbers u_1 = -1 and u_k = (ζ^k - ζ^(-k))/(ζ - ζ^(-1)), k = 2, ..., p, p = (q-1)/2, are units in F and are called the cyclotomic units. In this thesis the sign distribution of the conjugates in F of the cyclotomic units is studied.

Let G(F/Q) denote the Galois group of F over Q, and let V denote the units in F. For each σ ∈ G(F/Q) and μ ∈ V, define a mapping sgn_σ: V → GF(2) by sgn_σ(μ) = 1 iff σ(μ) < 0 and sgn_σ(μ) = 0 iff σ(μ) > 0. Let {σ_1, ..., σ_p} be a fixed ordering of G(F/Q). The matrix M_q = (sgn_{σ_j}(v_i)), i, j = 1, ..., p, is called the matrix of cyclotomic signatures. The rank of this matrix determines the sign distribution of the conjugates of the cyclotomic units. The matrix of cyclotomic signatures is associated with an ideal in the ring GF(2)[x]/(x^p + 1) in such a way that the rank of the matrix equals the GF(2)-dimension of the ideal. It is shown that if p = (q-1)/2 is a prime and if 2 is a primitive root mod p, then M_q is non-singular. Also, let p be arbitrary, let ℓ be a primitive root mod q, and let L = {i | 0 ≤ i ≤ p-1, the least positive residue of ℓ^i mod q is greater than p}. Let H_q(x) ∈ GF(2)[x] be defined by H_q(x) = gcd((Σ_{i ∈ L} x^i)(x+1) + 1, x^p + 1). It is shown that the rank of M_q equals the difference p - deg H_q(x).
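That rank criterion is mechanical enough to transcribe directly. Here is a sketch in Python using sympy's polynomial arithmetic over GF(2); the function name and structure are mine, and the choice of primitive root ℓ is arbitrary, as in the theorem.

```python
from sympy import symbols, Poly, gcd, GF
from sympy.ntheory import primitive_root

def rank_of_Mq(q):
    """Rank of the matrix of cyclotomic signatures via rank = p - deg H_q."""
    p = (q - 1) // 2
    x = symbols('x')
    ell = primitive_root(q)   # any primitive root mod q works
    # L = { i : 0 <= i <= p-1, least positive residue of ell^i mod q exceeds p }
    L = [i for i in range(p) if pow(ell, i, q) > p]
    S = Poly(sum(x**i for i in L), x, domain=GF(2))
    one = Poly(1, x, domain=GF(2))
    Hq = gcd(S * Poly(x + 1, x, domain=GF(2)) + one,
             Poly(x**p + 1, x, domain=GF(2)))
    return p - Hq.degree()

print(rank_of_Mq(11))   # full rank 5: q = 11 and p = 5 are both prime
```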

Further results are obtained by using the reciprocity theorem of class field theory. The reciprocity maps for a certain abelian extension of F and for the infinite primes in F are associated with the signs of conjugates. The product formula for the reciprocity maps is used to associate the signs of conjugates with the reciprocity maps at the primes which lie above (2). The case when (2) is a prime in F is studied in detail. Let T denote the group of totally positive units in F. Let U be the group generated by the cyclotomic units. Assume that (2) is a prime in F and that p is odd. Let F(2) denote the completion of F at (2) and let V(2) denote the units in F(2). The following statements are shown to be equivalent: 1) the matrix of cyclotomic signatures is non-singular; 2) U ∩ T = U^2; 3) U ∩ F(2)^2 = U^2; 4) V(2)/V(2)^2 = <v_1 V(2)^2> ⊕ ... ⊕ <v_p V(2)^2> ⊕ <3 V(2)^2>.

The rank of M_q was computed for 5 ≤ q ≤ 929 and the results appear in tables. On the basis of these results and additional calculations, the following conjecture is made: if q and p = (q-1)/2 are both primes, then M_q is non-singular.

Abstract:

The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.

Using complex variables, expressions for the drag, arclength, and chord are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters, a magnification factor and a parameter which determines how much of the plate is a free streamline.

Two methods are employed for optimization:

(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1, and minimum drag profiles and drag coefficients are found for all values of the ratio of arclength to chord.

(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum drag problem, and a nonlinear integral equation is derived but not solved.

Abstract:

Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to signify the physical meaning of the kinetic parameters of the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.

Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.

Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.

To determine long term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from current operating data. For a first order deactivation model with a monofunctional catalyst, or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations into ordinary differential equations which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system, or in the case of a monofunctional catalyst subject to general power law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst composition.
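As a toy version of that last step, here is a sketch fitting a first-order deactivation constant from the integrated form ln a(t) = -k_d t by linear regression; the sampling times, the true constant, and the noise level are invented for illustration.

```python
import numpy as np

# First-order deactivation: da/dt = -kd * a  =>  ln a(t) = -kd * t.
# Synthetic "plant" data standing in for activities inferred at a few
# points during the run (kd_true and the noise level are made up).
rng = np.random.default_rng(2)
kd_true = 0.015                         # 1/h
t = np.linspace(0.0, 200.0, 12)         # hours on stream
a_obs = np.exp(-kd_true * t) * (1 + 0.02 * rng.normal(size=t.size))

# Linear regression on the integrated (log) form recovers kd.
slope, _ = np.polyfit(t, np.log(a_obs), 1)
print(-slope)                           # ~0.015
```

Real schemes differ mainly in where a(t) comes from (polynomial approximations of the activity profile along the bed) and in the form of the deactivation law, but the regression step has this shape.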

Abstract:

Spurious oscillations are one of the principal issues faced by microwave and RF circuit designers. The rigorous detection of instabilities and the characterization of measured spurious oscillations remain an ongoing challenge. This project aims to create a new stability analysis CAD program that tackles this challenge. Multiple Input Multiple Output (MIMO) pole-zero identification analysis is introduced in the program as a way to create new methods to automate the stability analysis process and to help designers understand the obtained results and prevent incorrect interpretations. The MIMO nature of the analysis helps eliminate possible losses of controllability and observability, and helps distinguish mathematical quasi-cancellations, which are products of overmodeling, from physical ones. The created program reads Single Input Single Output (SISO) or MIMO frequency response data and determines the corresponding continuous transfer functions with Vector Fitting. Once the transfer function is calculated, the corresponding pole/zero diagram is mapped, enabling designers to analyze the stability of an amplifier. Three data processing methods are introduced: two consist of pole/zero eliminations, and the third determines the critical nodes of an amplifier. The first pole/zero elimination method is based on eliminating non-resonant poles, while the second eliminates poles with small residues, on the assumption that their effect on the dynamics of the system is small or non-existent. Critical node detection is also based on the residues; the node at which the effect of a pole on the dynamics is highest is defined as the critical node. In order to evaluate the efficiency of the created program, it is compared via examples with an existing commercial stability analysis tool (STAN). In this report, the newly created tool is shown to be as rigorous as STAN at detecting instabilities. Additionally, the MIMO analysis proves to be a very profitable addition to stability analysis, since it helps eliminate possible problems of loss of controllability and observability, and of overmodeling.
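A sketch of the core idea behind the second elimination method, dropping small-residue poles from an identified rational model; the pole/residue values and the threshold are placeholders, and a real implementation would take them from the Vector Fitting output.

```python
import numpy as np

# Illustrative post-processing step: discard poles whose residues are
# negligible, on the assumption that they barely influence the identified
# dynamics (e.g. near pole/zero quasi-cancellations from overmodeling).
poles    = np.array([-0.2 + 6.0j, -0.5 + 1.0j, -30.0 + 0.0j])
residues = np.array([ 2.1 + 0.3j,  1e-6 + 0.0j,  0.8 + 0.0j])

def drop_small_residues(poles, residues, rel_tol=1e-4):
    """Keep only poles whose residue magnitude is significant."""
    keep = np.abs(residues) > rel_tol * np.abs(residues).max()
    return poles[keep], residues[keep]

p_kept, r_kept = drop_small_residues(poles, residues)
print(p_kept)   # the quasi-cancelled pole at -0.5 + 1j is removed
```

In the MIMO setting the same test can be run per node: the node where a given pole's residue is largest marks where that resonance is most observable, which is the basis of the critical-node method.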