964 results for Weighted sum
Abstract:
In this note, the authors discuss the contribution that frictional sliding of ice floes (or floe aggregates) past each other and pressure ridging make to the plastic yield curve of sea ice. Using results from a previous study that explicitly modeled the amount of sliding and ridging that occurs for a given global strain rate, it is noted that the relative contribution of sliding and ridging to ice stress depends upon ice thickness. The implication is that the shape and size of the plastic yield curve are dependent upon ice thickness. The yield-curve shape dependence is in addition to the plastic hardening/weakening that relates the size of the yield curve to ice thickness. In most sea ice dynamics models the yield-curve shape is taken to be independent of ice thickness. The authors show that the change of the yield curve due to a change in ice thickness can be taken into account by a weighted sum of two thickness-independent rheologies describing ridging and sliding effects separately. It would be straightforward to implement the thickness-dependent yield-curve shape described here in sea ice models used for global or regional ice prediction.
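In schematic form, the construction described above can be written as a thickness-weighted sum of two fixed-shape rheologies; the notation below is illustrative only (the weights and their thickness dependence are not taken from the paper):

```latex
% Schematic thickness-dependent yield stress as a weighted sum of two
% thickness-independent rheologies (notation is an assumption, not the authors'):
\sigma_{ij}(\dot{\varepsilon}, h) \;=\;
  w_{r}(h)\,\sigma^{\mathrm{ridge}}_{ij}(\dot{\varepsilon})
  \;+\; w_{s}(h)\,\sigma^{\mathrm{slide}}_{ij}(\dot{\varepsilon}),
\qquad w_{r}(h),\, w_{s}(h) \text{ determined by the ice thickness } h .
```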
Abstract:
This master's thesis considers combined convection and thermal radiation heat transfer in the flow of participating gases through circular ducts. Starting from a general methodology, the work focuses mainly on cases typical of small and medium-sized fire-tube steam generators, in which high-temperature gases flow through a tube kept at uniform temperature. The flow is turbulent and the velocity profile is fully developed from the duct inlet. The gas temperature, however, is uniform at the inlet, so the thermally developing region is taken into account. Two gas mixtures are treated, both composed of carbon dioxide, water vapor and nitrogen, corresponding to typical products of the stoichiometric combustion of fuel oil and methane. The physical properties of the gases are assumed uniform throughout the duct and evaluated at the average bulk temperature, while the radiative properties are modeled with the weighted-sum-of-gray-gases model. The gas temperature field is obtained from the solution of the two-dimensional energy conservation equation, with the advective terms discretized by the control-volume method using the Flux-Spline interpolation function; radiative energy exchanges are evaluated with the zonal method, in which each radiation zone corresponds to a control volume. As a first step, the methodology is verified by comparison with results reported in the literature for heat transfer involving convection alone and for combined convection and radiation. Next, some effects of including thermal radiation are discussed, for example on the convective Nusselt number and on the gas bulk temperature. Finally, correlations are proposed for the total Nusselt number, which accounts for both radiation and convection. This step first requires an analysis of the dimensionless groups that govern the radiative process in order to reduce the large number of independent parameters. The correlations, applicable to situations found in small and medium-sized fire-tube steam generators, are validated statistically by comparison with the results of the numerical solution.
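For reference, the weighted-sum-of-gray-gases model mentioned above expresses the total gas emittance as a weighted sum of gray-gas contributions; a standard form is shown below (the weights and absorption coefficients come from published WSGG correlations, not from this abstract):

```latex
% Weighted-sum-of-gray-gases (WSGG) total emittance of the mixture:
% a_i(T) are temperature-dependent weights, \kappa_i the gray-gas absorption
% coefficients, p the partial pressure of the participating species, L the
% path length; the i = 0 term represents the transparent window.
\varepsilon_g(T, pL) \;=\; \sum_{i=1}^{N} a_i(T)\left[\,1 - e^{-\kappa_i p L}\,\right],
\qquad \sum_{i=0}^{N} a_i(T) = 1 .
```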
Abstract:
A comprehensive analysis of electrodisintegration yields of protons on Zr90 is proposed, taking into account the giant dipole resonance, isovector giant quadrupole resonance (IVGQR), and quasideuteron contributions to the total photoabsorption cross section from 10 to 140 MeV. The calculation applies the MCMC intranuclear cascade to address the direct and pre-equilibrium emissions and another Monte Carlo-based algorithm to describe the evaporation step. The final results for the total photoabsorption cross section of Zr90 and the relevant decay channels are obtained by fitting the (e,p) measurements from the National Bureau of Standards and show that multiple proton emissions dominate the photonuclear reactions at higher energies. These results provide a consistent explanation for the unusual, steady increase of the (e,p) yield and also strong evidence of an IVGQR with a strength parameter compatible with the E2 energy-weighted sum rule. The inclusive photoneutron cross sections for Zr90 and natZr, derived from these results and normalized with the (e,p) data, agree within 10% with both the Livermore and Saclay data up to 140 MeV. © 2007 The American Physical Society.
Abstract:
Regression coefficients specify the partial effect of a regressor on the dependent variable. Sometimes the bivariate or limited multivariate relationship of that regressor variable with the dependent variable is known from population-level data. We show here that such population-level data can be used to reduce the variance and bias of estimates of those regression coefficients from sample survey data. The method of constrained MLE is used to achieve these improvements. Its statistical properties are first described. The method constrains the weighted sum of all the covariate-specific associations (partial effects) of the regressors on the dependent variable to equal the overall association of one or more regressors, where the latter is known exactly from the population data. We refer to those regressors whose bivariate or limited multivariate relationships with the dependent variable are constrained by population data as being "directly constrained." Our study investigates the improvements in the estimation of directly constrained variables as well as the improvements in the estimation of other regressor variables that may be correlated with the directly constrained variables, and thus "indirectly constrained" by the population data. The example application is to the marital fertility of black versus white women. The difference between white and black women's rates of marital fertility, available from population-level data, gives the overall association of race with fertility. We show that the constrained MLE technique both provides a far more powerful statistical test of the partial effect of being black and purges the test of a bias that would otherwise distort the estimated magnitude of this effect. We find only trivial reductions, however, in the standard errors of the parameters for indirectly constrained regressors.
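A minimal sketch of the constrained-MLE idea, assuming a logistic regression and an equality constraint that a weighted sum of coefficients equals a value known from population data; the simulated data, constraint vector `w` and value `c` are hypothetical placeholders, not the authors' fertility application:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, y):
    """Negative log-likelihood of a logistic regression."""
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * X[:, 1] - 0.3 * X[:, 2]))))

# Constraint: a weighted sum of the coefficients must equal a quantity c
# known exactly from population-level data (w and c are illustrative only).
w = np.array([0.0, 1.0, 0.4])
c = 0.7
constraint = {"type": "eq", "fun": lambda b: w @ b - c}

fit = minimize(neg_log_lik, x0=np.zeros(3), args=(X, y),
               method="SLSQP", constraints=[constraint])
print(fit.x)   # constrained maximum-likelihood estimates
```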
Abstract:
The enzymatically catalyzed template-directed extension of a ssDNA/primer complex is an important reaction of extraordinary complexity. The DNA polymerase does not merely facilitate the insertion of dNMP, but it also performs rapid screening of substrates to ensure a high degree of fidelity. Several kinetic studies have determined rate constants and equilibrium constants for the elementary steps that make up the overall pathway. The information is used to develop a macroscopic kinetic model, using an approach described by Ninio [Ninio J., 1987. Alternative to the steady-state method: derivation of reaction rates from first-passage times and pathway probabilities. Proc. Natl. Acad. Sci. U.S.A. 84, 663–667]. The principal idea of the Ninio approach is to track a single template/primer complex over time and to identify the expected behavior. The average time to insert a single nucleotide is a weighted sum of several terms, including the actual time to insert a nucleotide plus delays due to polymerase detachment from either the ternary (template-primer-polymerase) or quaternary (+nucleotide) complexes and time delays associated with the identification and ultimate rejection of an incorrect nucleotide from the binding site. The passage times of all events and their probability of occurrence are expressed in terms of the rate constants of the elementary steps of the reaction pathway. The model accounts for variations in the average insertion time with different nucleotides as well as the influence of the G+C content of the sequence in the vicinity of the insertion site. Furthermore, the model provides estimates of error frequencies. If nucleotide extension is recognized as a competition between successful insertions and time-delaying events, it can be described as a binomial process with a probability distribution. The distribution gives the probability of extending a primer/template complex by a certain number of base pairs and, in general, it maps annealed complexes into extension products.
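A toy illustration of the two ideas above, the weighted-sum structure of the mean insertion time and the binomial view of extension; all probabilities and passage times below are made-up placeholders, not the measured constants used in the paper:

```python
import math

# Hypothetical passage times (s) and probabilities of the competing events at
# one template position; in the model these follow from the elementary rate
# constants of the reaction pathway.
events = {
    "correct insertion":         {"time": 0.02, "prob": 0.90},
    "polymerase detachment":     {"time": 0.50, "prob": 0.06},
    "wrong nucleotide rejected": {"time": 0.10, "prob": 0.04},
}

# Mean time per attempted step: a weighted sum of the passage times.
mean_step_time = sum(e["prob"] * e["time"] for e in events.values())

# Probability that an attempt ends in a successful insertion.
p_success = events["correct insertion"]["prob"]

def p_extend(n, k, p=p_success):
    """Binomial probability of adding exactly k bases in n attempted steps."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

print(mean_step_time, p_extend(10, 8))
```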
Abstract:
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
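A minimal sketch of the objective being minimized, evaluated for a given job sequence on a single machine with non-identical ready times and a common due date; the instance data and weights below are arbitrary examples, not benchmark values:

```python
def weighted_earliness_tardiness(sequence, p, r, d, w_e, w_t):
    """Weighted sum of earliness and tardiness on a single machine with a
    common due date d and job-specific ready times r (no preemption)."""
    t, total = 0, 0.0
    for j in sequence:
        t = max(t, r[j]) + p[j]          # a job cannot start before its ready time
        total += w_e[j] * max(0, d - t) + w_t[j] * max(0, t - d)
    return total

# Toy instance: 3 jobs with processing times p, ready times r, common due date d
p, r = [4, 2, 6], [0, 3, 1]
w_e, w_t = [1, 2, 1], [3, 1, 2]
print(weighted_earliness_tardiness([0, 1, 2], p, r, d=10, w_e=w_e, w_t=w_t))
```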
Abstract:
The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56−57Ni and 67−72Ni at approximately 500 AMeV have been generated using projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target. The decay products have been measured in inverse kinematics using various detectors. Neutron-rich 67−69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached considering the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable 58Ni isotope is used as a benchmark to check the accuracy of the present results against previously published data. The measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers, and are consistent within the respective errors. The measured excitation energy distributions of 67−69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength, which cannot be attributed to the GDR alone. The GDR distribution parameters are calculated using well-established semi-empirical systematic models, providing the peak energies and widths. The GDR strength is extracted from the chi-square minimization of the model GDR to the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength. The subtraction of the obtained GDR distribution from the total measured E1 strength provides the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The extraction of the peak energy, width and strength is performed using a Gaussian function. The minimization of trial Gaussian distributions to the data does not converge towards a sharp minimum. Therefore, the results are presented by a chi-square distribution as a function of all three Gaussian parameters. Various predictions of PDR distributions exist, as well as a recent measurement of the 68Ni pygmy dipole resonance obtained by virtual photon scattering, to which the present pygmy dipole resonance distribution is also compared.
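For reference, the classical Thomas-Reiche-Kuhn energy-weighted sum rule invoked above (exchange-term corrections neglected) reads:

```latex
% Thomas-Reiche-Kuhn (TRK) energy-weighted sum rule for E1 photoabsorption,
% with N neutrons, Z protons, A = N + Z and nucleon mass m_N:
\int_0^{\infty} \sigma_{E1}(E)\,\mathrm{d}E
  \;\approx\; \frac{2\pi^{2} e^{2} \hbar}{m_N c}\,\frac{NZ}{A}
  \;\simeq\; 60\,\frac{NZ}{A}\ \mathrm{MeV\,mb}.
```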
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown among several research groups, and the method has seen significant development in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result it has been named the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited. It has been shown to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the very well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be referred to here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
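A small sketch of the DQ idea described above: the first derivative at each node is computed as a weighted sum of the function values at all nodes, with weights obtained here from the standard Lagrange-polynomial (Shu-type) formula; the grid and the test function are arbitrary choices:

```python
import numpy as np

def dq_first_derivative_weights(x):
    """First-derivative Differential Quadrature weighting matrix
    (standard Lagrange-polynomial / Shu-type formula)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # M'(x_i) = prod_{k != i} (x_i - x_k)
    M1 = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                   for i in range(n)])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M1[i] / ((x[i] - x[j]) * M1[j])
        a[i, i] = -a[i].sum()     # diagonal: negative row sum of off-diagonal weights
    return a

# Example: d/dx sin(x) on a Chebyshev-type grid on [-1, 1]
x = np.cos(np.pi * np.arange(11) / 10)
D = dq_first_derivative_weights(x)
approx = D @ np.sin(x)                       # each derivative is a weighted sum of f-values
print(np.max(np.abs(approx - np.cos(x))))    # error is tiny for a smooth function
```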
Abstract:
In this work, we present a multichannel EEG decomposition model based on an adaptive topographic time-frequency approximation technique. It is an extension of the Matching Pursuit algorithm called dependency multichannel matching pursuit (DMMP). It takes the physiologically explainable and statistically observable topographic dependencies between the channels into account, namely the spatial smoothness of neighboring electrodes that is implied by the electric leadfield. DMMP decomposes a multichannel signal as a weighted sum of atoms from a given dictionary, where all channels are represented by exactly the same subset of a complete dictionary. The decomposition is illustrated on topographical EEG data recorded during different physiological conditions using a complete Gabor dictionary. Furthermore, the extension of the single-channel time-frequency distribution to a multichannel time-frequency distribution is given. This can be used for the visualization of the decomposition structure of multichannel EEG. A clustering procedure applied to the topographies, i.e., the vectors of each atom's contribution to the signal in every channel produced by DMMP, leads to an extremely sparse topographic decomposition of the EEG.
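A highly simplified sketch of the shared-atom idea behind DMMP: every channel is forced to use the same atoms but with its own weight, so each channel is a weighted sum of the shared atoms. This toy version omits the topographic dependency/leadfield-smoothness term that DMMP adds, and a random matrix merely stands in for a Gabor dictionary:

```python
import numpy as np

def multichannel_mp(X, D, n_atoms):
    """Simplified multichannel matching pursuit (illustrative only).

    X : (n_samples, n_channels) multichannel signal
    D : (n_samples, n_dict_atoms) dictionary with unit-norm columns
    """
    R = X.copy()                               # per-channel residual
    chosen, weights = [], []
    for _ in range(n_atoms):
        C = D.T @ R                            # correlations: atoms x channels
        k = np.argmax(np.sum(C**2, axis=1))    # atom best explaining all channels jointly
        w = C[k]                               # channel-specific weights ("topography")
        R -= np.outer(D[:, k], w)              # subtract the atom from every channel
        chosen.append(k)
        weights.append(w)
    return chosen, np.array(weights), R

# Toy example: 8 channels built from two dictionary atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)
X = D[:, [3, 40]] @ rng.standard_normal((2, 8))
atoms, topo, resid = multichannel_mp(X, D, 2)
print(sorted(atoms), np.linalg.norm(resid))    # residual is small after two atoms
```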
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and the tapping of unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, railway/road transportation network, water body and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.

Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level.

Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A).
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
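A minimal sketch of the weighted-sum multi-criteria objective mentioned above; the criterion values, normalization scales and weights are placeholders, not results of the proposed models:

```python
def supply_chain_objective(cost, energy, ghg,
                           weights=(0.5, 0.3, 0.2),
                           scales=(1.0e6, 1.0e7, 1.0e4)):
    """Weighted sum of normalized delivered-feedstock cost ($),
    energy consumption (MJ) and GHG emissions (t CO2e)."""
    values = (cost, energy, ghg)
    return sum(w * v / s for w, v, s in zip(weights, values, scales))

# Compare two candidate facility configurations (illustrative numbers only)
print(supply_chain_objective(2.1e6, 3.4e7, 1.8e4))
print(supply_chain_objective(1.8e6, 4.1e7, 2.2e4))
```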
Abstract:
The developmental processes and functions of an organism are controlled by its genes and the proteins derived from these genes. The identification of key genes and the reconstruction of gene networks can provide a model to help us understand the regulatory mechanisms for the initiation and progression of biological processes or functional abnormalities (e.g. diseases) in living organisms. In this dissertation, I have developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and also evaluated some existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, this dissertation has two major parts, together forming six chapters. The first part deals with developing association methods for rare variants using genotype data (chapters 4 and 5). The second part deals with developing and/or evaluating statistical methods to identify genes and TFs involved in biological processes, and with the construction of their regulatory networks using gene expression data (chapters 2, 3, and 6). For the first part, I have developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits. Simulation results showed that the proposed method has improved power over the existing weighted sum method (WS) in most settings. The second method uses multiple phenotypes to select a few top significant genes. It then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to GAW 17 data and was able to find several disease risk genes. For the second part, I have worked on three problems. The first problem involved the evaluation of eight gene association methods. A very comprehensive comparison of these methods with further analysis clearly demonstrates their distinct and common performance characteristics. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks. This algorithm has produced very significant results, and it is the first report to produce such hierarchical networks for these pathways. The third problem dealt with developing another algorithm, called the top-down graphical Gaussian model, that identifies the network governed by a specific TF. The network produced by the algorithm is shown to be of very high accuracy.
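For context, a common form of the weighted sum (WS) approach used as the comparator above scores each individual by a weighted count of rare minor alleles and compares case and control scores with a rank test. The sketch below follows a Madsen-Browning-style weighting on simulated genotypes; it is an illustration of that general approach, not the dissertation's kernel-machine method or its exact comparator:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n_case, n_ctrl, n_var = 200, 200, 30
G_case = rng.binomial(2, 0.02, size=(n_case, n_var))   # minor-allele counts
G_ctrl = rng.binomial(2, 0.01, size=(n_ctrl, n_var))

# Madsen-Browning-style weights estimated from the control sample:
# rarer variants receive larger influence on the score.
q = (G_ctrl.sum(axis=0) + 1) / (2 * n_ctrl + 2)
w = np.sqrt(n_ctrl * q * (1 - q))

# Each individual's genetic score is a weighted sum of variant counts.
score_case = (G_case / w).sum(axis=1)
score_ctrl = (G_ctrl / w).sum(axis=1)

# Rank-based comparison of case vs. control scores.
stat, pval = mannwhitneyu(score_case, score_ctrl, alternative="greater")
print(pval)
```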