990 results for deformed exponential function
Abstract:
An ultra-high-vacuum system capable of attaining pressures of 10⁻¹² mm Hg was used for thermal desorption experiments. The metal chosen was tantalum, both for its suitability for thermal desorption experiments and because relatively little work has been done with it. The gases investigated were carbon monoxide, hydrogen and ethylene. The kinetic and thermodynamic parameters of the desorption reaction were calculated and the values obtained related to the reaction on the surface. Thermal desorption alone could not supply all the information necessary to form a complete picture of the desorption reaction, so further information was obtained by using a quadrupole mass spectrometer to analyse the desorbed species. Identification of the desorbed species, combined with the values of the desorption parameters, meant that possible adatom structures could be postulated. The combination of these two techniques proved a very powerful tool for investigating gas-metal surface reactions and gave realistic values for the measured parameters, such as the surface coverage, order of reaction, activation energy and pre-exponential factor for desorption. Electron microscopy and X-ray diffraction were also used to investigate the effect of the gases on the metal surface.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable, or independent, variables in the cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed.
All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
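The four ordering rules can be sketched as simple decision functions. The following is a minimal illustration; the function names and the Python rendering are assumptions for demonstration, not the thesis's KDF 9 code.

```python
# Minimal sketch of the four reorder rules described above; function
# names and this Python rendering are illustrative assumptions.

def qr_policy(order_cover, Q, R):
    """(Q, R): continuous review; order a fixed Q whenever the order
    cover (stock in hand plus on order) equals or falls below R."""
    return Q if order_cover <= R else 0

def nqrt_policy(order_cover, Q, R):
    """(nQ, R, T): on a periodic review, order the smallest integer
    multiple nQ that lifts the order cover strictly above R."""
    if order_cover > R:
        return 0
    n = 1
    while order_cover + n * Q <= R:
        n += 1
    return n * Q

def mt_policy(order_cover, M):
    """(M, T): on every periodic review, order up to the level M."""
    return max(M - order_cover, 0)

def mrt_policy(order_cover, M, R):
    """(M, R, T): on review, order up to M only if cover is at most R."""
    return M - order_cover if order_cover <= R else 0
```

For example, with Q = 10 and R = 20, a review finding an order cover of 3 under (nQ, R, T) places an order of 20, since a single lot of 10 would still leave the cover at or below the re-order level.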
Abstract:
Mathematics Subject Classification: 74D05, 26A33
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015
Abstract:
The northern Antarctic Peninsula is one of the fastest-changing regions on Earth. The disintegration of the Larsen-A Ice Shelf in 1995 caused tributary glaciers to adjust by speeding up, surface lowering, and overall increased ice-mass discharge. In this study, we investigate the temporal variation of these changes at the Dinsmoor-Bombardier-Edgeworth glacier system by analyzing dense time series from various spaceborne and airborne Earth observation missions, covering precollapse ice shelf conditions and subsequent adjustments through 2014. Our results show a response of the glacier system some months after the breakup, reaching maximum surface velocities at the glacier front of up to 8.8 m/d in 1999 and a subsequent decrease to ~1.5 m/d in 2014. Using a dense time series of interferometrically derived TanDEM-X digital elevation models and photogrammetric data, an exponential function was fitted to the decrease in surface elevation. Elevation changes in areas below 1000 m a.s.l. amounted to at least 130±15 m between 1995 and 2014, with change rates of ~3.15 m/a between 2003 and 2008. Current change rates (2010-2014) are in the range of 1.7 m/a. Mass imbalances were computed with different scenarios of boundary conditions. The most plausible results amount to -40.7±3.9 Gt. The contribution to sea level rise was estimated to be 18.8±1.8 Gt, corresponding to a 0.052±0.005 mm sea level equivalent, for the period 1995-2014. Our analysis and scenario considerations revealed that major uncertainties still exist due to insufficiently accurate ice-thickness information. The second-largest uncertainty in the computations was the glacier surface mass balance, which is still poorly known. Our time series analysis facilitates improved comparison with GRACE data and provides input for modeling of glacio-isostatic uplift in this region.
The study contributed to a better understanding of how glacier systems adjust to ice shelf disintegration.
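An exponential fit of the kind described above can be illustrated on synthetic data. The functional form dh(t) = dh_inf·(1 − e^(−t/τ)) and every number below are assumptions for demonstration, not the study's data or results.

```python
# Illustrative fit of an exponential relaxation to a surface-elevation
# time series; the model form and all numbers are assumed, not the
# study's actual data or fitted parameters.
import numpy as np
from scipy.optimize import curve_fit

def elevation_change(t, dh_inf, tau):
    """Cumulative surface lowering approaching dh_inf with e-folding tau."""
    return dh_inf * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 19.0, 40)           # years since the collapse (synthetic)
rng = np.random.default_rng(0)
obs = elevation_change(t, -130.0, 6.0) + rng.normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(elevation_change, t, obs, p0=(-100.0, 5.0))
dh_inf_fit, tau_fit = popt               # recovered asymptote and time scale
```

The diagonal of `pcov` then gives the variance of the fitted asymptote and time constant, which is how uncertainty bands on such change rates are typically quoted.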
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information from experiments, making the problem an ill-posed one in the terminology of Hadamard. The ill-posed nature of the problem comes from the fact that it has no unique solution. Multiple or even an infinite number of solutions may exist. To avoid the ill-posed nature, the problem needs to be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem was avoided by introducing a continuous distribution model, which enjoys a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least-biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed; consequently, the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
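The maximum-entropy construction sketched above (an exponential function of Lagrange multipliers, with the multipliers found by gradient descent on a convex dual) can be illustrated on a toy discrete problem. The 5-point state space and the target moment below are illustrative assumptions, not the RDC setting.

```python
# Toy sketch of the maximum-entropy step: the solution is an exponential
# family in the Lagrange multipliers, and the multipliers minimize a
# convex dual by gradient descent. Grid and target moment are assumed.
import numpy as np

def maxent_distribution(features, targets, steps=2000, lr=0.2):
    """features: (k, n) feature values on n states; targets: k observed
    moments. Returns the maximum-entropy p and multipliers lam."""
    lam = np.zeros(features.shape[0])
    for _ in range(steps):
        w = np.exp(lam @ features)        # exponential function of lam
        p = w / w.sum()
        grad = features @ p - targets     # gradient of the convex dual
        lam -= lr * grad
    return p, lam

x = np.arange(1.0, 6.0)                   # five states: 1, 2, 3, 4, 5
p, lam = maxent_distribution(x[None, :], np.array([3.2]))
```

The recovered p is the least-biased distribution on the five states whose mean matches the imposed moment 3.2; with more features the same loop enforces several moments at once.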
Abstract:
The print substrate influences the print result in dry toner electrophotography, which is a widely used digital printing method. The influence of the substrate can be seen more easily in color printing, as that is a more complex process compared to monochrome printing. However, the print quality is also affected by the print substrate in grayscale printing. It is thus in the interests of both substrate producers and printing equipment manufacturers to understand the substrate properties that influence the quality of printed images in more detail. In dry toner electrophotography, the image is printed by transferring charged toner particles to the print substrate in the toner transfer nip, utilizing an electric field, in addition to the forces linked to the contact between toner particles and substrate in the nip. The toner transfer and the resulting image quality are thus influenced by the surface texture and the electrical and dielectric properties of the print substrate. In the investigation of the electrical and dielectric properties of the papers and the effects of substrate roughness, in addition to commercial papers, controlled sample sets were made on pilot paper machines and coating machines to exclude uncontrolled variables from the experiments. The electrical and dielectric properties of the papers investigated were electrical resistivity and conductivity, charge acceptance, charge decay, and the dielectric permittivity and losses at different frequencies, including the effect of temperature. The objective was to gain an understanding of how the electrical and dielectric properties are affected by normal variables in papermaking, including basis weight, material density, filler content, ion and moisture contents, and coating. In addition, the dependency of substrate resistivity on the electric field applied was investigated. Local discharging did not inhibit transfer with the paper roughness levels that are normal in electrophotographic color printing. 
The potential decay of paper revealed that the charge decay cannot be accurately described with a single exponential function, since in charge decay there are overlapping mechanisms of conduction and depolarization of paper. The resistivity of the paper depends on the NaCl content and exponentially on the moisture content, although it is also strongly dependent on the applied electric field. This dependency is influenced by the thickness, density, and filler contents of the paper. Furthermore, the Poole-Frenkel model can be applied to the resistivity of uncoated paper. The real part of the dielectric constant ε' increases with NaCl content and relative humidity, but when these materials cannot polarize freely, the increase cannot be explained by summing the effects of their dielectric constants. Dependencies between the dielectric constant and dielectric loss factor and NaCl content, temperature, and frequency show that in the presence of a sufficient amount of moisture and NaCl, new structures with a relaxation time of the order of 10⁻³ s are formed in paper. The ε' of coated papers is influenced by the addition of pigments and other coating additives with polarizable groups and by the increase in density. The charging potential decreases and the electrical conductivity, potential decay rate, and dielectric constant of paper increase with increasing temperature. The dependencies are exponential, and the temperature dependencies and their activation energies are altered by the ion content. The results have been utilized in manufacturing substrates for electrophotographic color printing.
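The point that a single exponential cannot describe a decay with overlapping mechanisms can be illustrated by fitting synthetic decay data with one and two exponentials. The amplitudes and time constants below are illustrative assumptions, not measured paper parameters.

```python
# One vs. two exponentials fitted to a synthetic potential-decay curve:
# the biexponential (overlapping conduction and depolarization) fits,
# the single exponential cannot. All constants are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau):
    return a * np.exp(-t / tau)

def double_exp(t, a1, tau1, a2, tau2):
    """Two overlapping decay mechanisms with distinct time constants."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 10.0, 200)
v = double_exp(t, 0.7, 0.5, 0.3, 5.0)     # synthetic decay curve

p1, _ = curve_fit(single_exp, t, v, p0=(1.0, 1.0))
p2, _ = curve_fit(double_exp, t, v, p0=(0.5, 1.0, 0.5, 4.0))

sse_single = float(np.sum((v - single_exp(t, *p1)) ** 2))
sse_double = float(np.sum((v - double_exp(t, *p2)) ** 2))
```

The residual of the single-exponential fit stays far above that of the biexponential fit, mirroring the conclusion drawn from the potential-decay measurements.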
Abstract:
This essay presents a methodological proposal for the mathematical object Exponential Function that develops interpretative and creative skills with potential meaning for students, based on a didactic sequence structured in light of Guy Brousseau's Theory of Didactic Situations and Duval's Registers of Semiotic Representation, providing interactions among the students, the teacher and a cooperative learning environment in which students feel free to express their own ideas and to suggest their own approaches. The methodology was developed according to the students' prior knowledge, valuing their different ways of registering, which play an important role in the teaching and learning processes. The proposal was applied to first-year high school students at Colégio Estadual José de Anchieta Ensino Fundamental e Médio, located in the town of Dois Vizinhos, Paraná. The research was developed with the methodological tool of Didactic Engineering (Artigue), a methodology designed specifically for research with didactic situations. The main goal was reached: to work on the conceptual part of the Exponential Function, the relation of dependence and its defining characteristic, namely that the variable appears in the exponent. Moreover, without imposition but starting from suitable didactic situations, the students were able to realize that they could solve problems involving the exponential function and, furthermore, create new problems (within their universe) modeled by this kind of function. It is believed that a methodology based on the theory of didactic situations, analysis of students' registers, observation of mistakes and obstacles, and reflection on aspects of the didactic contract is of fundamental importance to teaching practice and determinant in the teaching-learning process.
Abstract:
Foraminiferal data were obtained from 66 box-core samples on the southeastern Brazilian upper margin (between 23.8°-25.9°S and 42.8°-46.13°W) to evaluate the benthic foraminiferal fauna distribution and its relation to selected abiotic parameters. We focused on areas with different primary production regimes on the southern Brazilian margin, which is generally considered an oligotrophic region. The total density (D), richness (R), mean diversity (H̄′), average living depth (ALD_x) and percentages of specimens of different microhabitats (epifauna, shallow infauna, intermediate infauna and deep infauna) were analyzed. The dominant species identified were Uvigerina spp., Globocassidulina subglobosa, Bulimina marginata, Adercotryma wrighti, Islandiella norcrossi, Rhizammina spp. and Brizalina sp. We also established a set of mathematical functions for analyzing the vertical foraminiferal distribution patterns, providing a quantitative tool for correlating the microfaunal density distributions with abiotic factors. In general, the cores that fit pure exponentially decaying functions were related to the oligotrophic conditions prevalent on the Brazilian margin and to the flow of the Brazil Current (BC). Different foraminiferal responses were identified in cores located in higher-productivity zones, such as the northern and southern regions of the study area: high percentages of infauna were encountered in these cores, and the functions used to fit these profiles differ appreciably from a pure exponential function, reflecting the significant living fauna in deeper layers of the sediment. One of the main factors supporting the different foraminiferal assemblage responses may be the differences in primary productivity of the water column and, consequently, in the estimated carbon flux to the sea floor. 
Nevertheless, bottom-water velocities, substrate type and water depth also need to be considered.
Abstract:
We present measurements of J/psi yields in d + Au collisions at √s_NN = 200 GeV recorded by the PHENIX experiment and compare them with yields in p + p collisions at the same energy per nucleon-nucleon collision. The measurements cover a large kinematic range in J/psi rapidity (-2.2 < y < 2.4) with high statistical precision and are compared with two theoretical models: one with nuclear shadowing combined with final state breakup and one with coherent gluon saturation effects. In order to remove model dependent systematic uncertainties we also compare the data to a simple geometric model. The forward rapidity data are inconsistent with nuclear modifications that are linear or exponential in the density weighted longitudinal thickness, such as those from the final state breakup of the bound state.
Abstract:
This paper presents the results of an in-depth study of the Barkhausen effect signal properties of plastically deformed Fe-2%Si samples. The investigated samples were deformed by cold rolling up to a plastic strain ε_p = 8%. The first approach consisted of time-domain-resolved pulse and frequency analysis of the Barkhausen noise signals, whereas the complementary study consisted of time-resolved pulse count analysis as well as a total pulse count. The latter included determination of the time distribution of pulses for different threshold voltage levels as well as the total pulse count as a function of both the amplitude and the duration time of the pulses. The obtained results suggest that the observed increase in the Barkhausen noise signal intensity as a function of deformation level is mainly due to the increase in the number of larger pulses.
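A threshold pulse count of the kind described above can be sketched as follows; the counting function and any signal fed to it are illustrative assumptions, not the authors' instrumentation code.

```python
# Sketch of a threshold pulse count: count excursions of a signal above
# a given threshold and record their durations. Illustrative only, not
# measured Barkhausen data or the authors' analysis code.
import numpy as np

def count_pulses(signal, threshold, dt):
    """Return (number of pulses, array of pulse durations) for
    excursions of `signal` above `threshold`, sampled at spacing dt."""
    above = signal > threshold
    edges = np.diff(above.astype(int))       # +1 rising edge, -1 falling edge
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:                             # pulse already in progress at start
        starts = np.insert(starts, 0, 0)
    if above[-1]:                            # pulse still in progress at end
        ends = np.append(ends, len(signal))
    return len(starts), (ends - starts) * dt
```

Repeating the count over a ladder of threshold levels yields the time distribution of pulses per threshold, as in the complementary analysis described above.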
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Differences between the respiratory chain of the fungus Paracoccidioides brasiliensis and that of its mammalian host are reported. Respiration, membrane potential, and oxidative phosphorylation in mitochondria from P. brasiliensis spheroplasts were evaluated in situ, and the presence of a complete (Complex I-V) functional respiratory chain was demonstrated. In succinate-energized mitochondria, ADP induced a transition from resting to phosphorylating respiration. The presence of an alternative NADH-ubiquinone oxidoreductase was indicated by (i) the ability to oxidize exogenous NADH and (ii) insensitivity to rotenone combined with sensitivity to flavone. Malate/NAD(+)-supported respiration suggested the presence of either a mitochondrial pyridine transporter or a glyoxylate pathway contributing to NADH and/or succinate production. Partial sensitivity of NADH/succinate-supported respiration to antimycin A and cyanide, as well as sensitivity to benzohydroxamic acids, suggested the presence of an alternative oxidase in the yeast form of the fungus. An increase in activity and gene expression of the alternative NADH dehydrogenase throughout the yeast's exponential growth phase was observed. This increase was coupled with a decrease in Complex I activity and in gene expression of its subunit 6. These results support the existence of alternative respiratory chain pathways in addition to Complex I, as well as the utilization of NADH-linked substrates by P. brasiliensis. These specific components of the respiratory chain could be useful for further research and the development of pharmacological agents against the fungus.
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector: either the basis vectors must be kept, or the Lanczos process must be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
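A hedged sketch of the underlying idea: run the Lanczos three-term recurrence and accumulate the projections u·q_j on the fly, so that u^T f(A) v is formed without ever storing the basis. This illustrates the principle under those assumptions, not the paper's exact augmented algorithm.

```python
# Sketch: approximate u^T f(A) v for symmetric A by accumulating the
# scalars u.q_j during the Lanczos recurrence, so no basis is stored.
# Illustrates the idea only, not the paper's augmented algorithm.
import numpy as np

def lanczos_dot(A, u, v, f, k):
    """Approximate u^T f(A) v for symmetric A with k Lanczos steps."""
    beta0 = np.linalg.norm(v)
    q_prev = np.zeros_like(v)
    q = v / beta0
    alphas, offdiag, proj = [], [], []
    beta = 0.0
    for _ in range(k):
        proj.append(u @ q)                  # u^T q_j, accumulated on the fly
        w = A @ q - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta == 0.0:                     # invariant subspace found
            break
        offdiag.append(beta)
        q_prev, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(offdiag[:m - 1], 1) + np.diag(offdiag[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)        # f(T) e1 via the small eigenproblem
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return beta0 * (np.array(proj) @ fT_e1)
```

Only three working vectors and the k accumulated scalars are ever held, which is exactly the storage advantage the abstract refers to.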
Abstract:
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large dimensional problems. It consists of low-rank subblocks leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type, it is defined through the first exponential-integral function and, hence, it is weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
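The low-rank structure exploited by hierarchical matrices can be illustrated with a truncated SVD of a well-separated block of the first exponential-integral kernel; the point sets and the retained rank below are illustrative assumptions, not the paper's discretization.

```python
# Toy illustration of the low-rank idea behind hierarchical matrices:
# a well-separated block of the kernel E1(|y - x|) is compressed by
# truncated SVD with small error. Point sets and rank are assumed.
import numpy as np
from scipy.special import exp1

x = np.linspace(0.0, 1.0, 60)             # source cluster
y = np.linspace(2.0, 3.0, 60)             # well-separated target cluster
K = exp1(np.abs(y[:, None] - x[None, :])) # first exponential-integral kernel

U, s, Vt = np.linalg.svd(K)
r = 15                                    # retained rank
K_r = (U[:, :r] * s[:r]) @ Vt[:r, :]      # low-rank representation of the block
rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
```

Because the block stays away from the kernel's singularity at zero separation, its singular values decay rapidly and a small rank already reproduces it to high accuracy, which is what makes the data-sparse representation cheap in memory and arithmetic.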