963 results for Non-linear programming
Abstract:
OBJECTIVES: Different accelerometer cutpoints used by different researchers often yield vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across studies. The objective of this research is to develop a cutpoint conversion system that standardizes minutes of MVPA for six different sets of published cutpoints. DESIGN: Secondary data analysis. METHODS: Data from the International Children's Accelerometer Database (ICAD; Spring 2014), consisting of 43,112 Actigraph accelerometer data files from 21 worldwide studies (children 3-18 years, 61.5% female), were used to develop prediction equations for six sets of published cutpoints. Linear and non-linear modeling, using a leave-one-out cross-validation technique, was employed to develop equations to convert MVPA from one set of cutpoints into another. Bland-Altman plots illustrate the agreement between actual MVPA and predicted MVPA values. RESULTS: Across the total sample, mean MVPA ranged from 29.7 MVPA min d⁻¹ (Puyau) to 126.1 MVPA min d⁻¹ (Freedson 3 METs). Across conversion equations, median absolute percent error was 12.6% (range: 1.3 to 30.1) and the proportion of variance explained ranged from 66.7% to 99.8%. Mean difference for the best performing prediction equation (VC from EV) was -0.110 min d⁻¹ (limits of agreement (LOA), -2.623 to 2.402). The mean difference for the worst performing prediction equation (FR3 from PY) was 34.76 min d⁻¹ (LOA, -60.392 to 129.910). CONCLUSIONS: For six different sets of published cutpoints, the use of this equating system can assist individuals attempting to synthesize the growing body of literature on Actigraph accelerometry-derived MVPA.
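To illustrate the kind of conversion and agreement analysis described above, the sketch below fits a linear conversion between two hypothetical cutpoint estimates with leave-one-out cross-validation and reports Bland-Altman statistics. The simulated data and coefficients are placeholders, not the ICAD data or the published equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily MVPA (min/day) under two hypothetical cutpoint sets.
mvpa_a = rng.uniform(10, 150, size=200)             # "source" cutpoint estimate
mvpa_b = 0.45 * mvpa_a + 5 + rng.normal(0, 4, 200)  # "target" cutpoint estimate

# Leave-one-out cross-validation of a linear conversion equation.
preds = np.empty_like(mvpa_b)
for i in range(len(mvpa_a)):
    mask = np.ones(len(mvpa_a), dtype=bool)
    mask[i] = False
    slope, intercept = np.polyfit(mvpa_a[mask], mvpa_b[mask], deg=1)
    preds[i] = slope * mvpa_a[i] + intercept

# Bland-Altman agreement between predicted and "actual" target MVPA.
diff = preds - mvpa_b
mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1), mean_diff + 1.96 * diff.std(ddof=1))
mape = np.median(np.abs(diff / mvpa_b) * 100)

print(f"mean difference: {mean_diff:.2f} min/day, LOA: {loa[0]:.2f} to {loa[1]:.2f}")
print(f"median absolute percent error: {mape:.1f}%")
```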
Abstract:
The least squares method is analyzed. The basic aspects of the method are discussed. Emphasis is given to procedures that allow simple memorization of the basic equations associated with the linear and non-linear least squares methods, polynomial regression, and the multilinear method.
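A minimal sketch of the linear least squares machinery the abstract refers to, here for a quadratic polynomial fitted via the normal equations; the data points are made up for illustration.

```python
import numpy as np

# Sample data: fit a quadratic polynomial y = a0 + a1*x + a2*x^2 by least squares.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 1.8, 3.2, 5.0, 7.3, 10.1, 13.4])

# Design matrix for the polynomial model; the same construction covers the
# multilinear case if each column holds a different independent variable.
X = np.vander(x, N=3, increasing=True)

# Normal equations: (X^T X) a = X^T y.
coeffs = np.linalg.solve(X.T @ X, X.T @ y)
print("fitted coefficients a0, a1, a2:", coeffs)
```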
Abstract:
We describe the preparation and some optical properties of a high refractive index TeO2-PbO-TiO2 glass system. Highly homogeneous glasses were obtained by agitating the mixture during the melting process in an alumina crucible. The characterization was done by X-ray diffraction, Raman scattering, light absorption and linear refractive index measurements. The results show a change in the glass structure as the PbO content increases: the TeO4 trigonal bipyramids characteristic of TeO2 glasses transform into TeO3 trigonal pyramids. However, the measured refractive indices are almost independent of the glass composition. We show that third-order nonlinear optical susceptibilities calculated from the measured refractive indices using Lines' theoretical model are also independent of the glass composition.
Abstract:
In this work we describe the synthesis and characterization of a chalcogenide glass (0.3La2S3-0.7Ga2S3) with low phonon frequencies. Several properties were measured, such as the Sellmeier parameters, linear refractive index dispersion, and material dispersion. Samples with the composition above were doped with Dy2S3. The absorption and emission characteristics were measured by electronic spectroscopy and fluorescence spectroscopy, respectively. Raman and infrared spectroscopy show that these glasses present low phonon frequencies and a structure composed of GaS4 tetrahedra. The Lines model was used to calculate the coefficients of the non-linear refractive index.
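The Sellmeier parameters mentioned above define the linear refractive index dispersion through the Sellmeier equation; a minimal sketch evaluating n(λ) is given below, with placeholder coefficients rather than the values measured for this glass.

```python
import numpy as np

def sellmeier_n(wavelength_um, B, C):
    """Refractive index from the Sellmeier equation
    n^2(lambda) = 1 + sum_i B_i * lambda^2 / (lambda^2 - C_i),
    with the wavelength in micrometres and C_i in micrometres squared."""
    lam2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

# Placeholder two-term coefficients for illustration only.
B = [2.80, 0.60]
C = [0.045, 120.0]
for lam in (0.8, 1.0, 1.5, 2.0):
    print(f"lambda = {lam} um -> n = {sellmeier_n(lam, B, C):.4f}")
```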
Abstract:
In this work, telescopic boom profiles were developed for an aerial work platform intended for fire and rescue use. The profiles were made of hot-rolled, ultra-high-strength, weather-resistant structural steel. Based on standards and design guidelines, a calculation template was developed for studying the support reactions, bending and torsional moments, and shear and normal forces of the telescopic boom sections. In the template, the directions of the different loads, the lateral outreach of the telescopic boom, and the lifting angle can be varied. Preliminary dimensioning of the profiles made use of standards and design guidelines that account for local buckling. The properties of different cross-sections were compared, and the profile was selected together with the case company. In connection with the preliminary dimensioning, an auxiliary program was created for the selected cross-section, making it possible to study the effect of different profile variables on, among other things, local buckling and stiffness. An optimization routine was also included in the calculation template to minimize the cross-sectional area and thereby the mass of the profile. The final dimensioning was carried out with the finite element method, in which the local buckling of the preliminarily dimensioned profiles was examined on the basis of linear stability analysis and non-linear analysis. The stresses in the profiles were studied in more detail, for example by varying the loads and by decomposing the normal stresses of the elements. With the telescopic boom developed and analyzed in this master's thesis, the weights of the sections could be reduced by 15-30%. At the same time, the lateral outreach improved by almost 20% and the nominal load increased by 25%.
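As a rough illustration of the kind of static calculation such a template automates, the sketch below computes the outreach, pivot bending moment and vertical reaction for a single rigid boom with a vertical tip load and self-weight. The boom geometry and loads are invented, and section overlap, wind loads and dynamic effects are ignored.

```python
import math

def boom_base_loads(length_m, lift_angle_deg, tip_load_kN, boom_weight_kN):
    """Simplified static loads at the boom pivot for a single rigid boom:
    the tip load acts at the end and the self-weight at mid-length,
    both vertically."""
    horiz = length_m * math.cos(math.radians(lift_angle_deg))  # lateral outreach
    # Bending moment about the pivot from the vertical loads (kN*m).
    moment = tip_load_kN * horiz + boom_weight_kN * horiz / 2.0
    # Vertical support reaction at the pivot (kN).
    vertical_reaction = tip_load_kN + boom_weight_kN
    return horiz, moment, vertical_reaction

outreach, M, R = boom_base_loads(length_m=30.0, lift_angle_deg=40.0,
                                 tip_load_kN=4.0, boom_weight_kN=25.0)
print(f"outreach {outreach:.1f} m, pivot moment {M:.1f} kNm, "
      f"vertical reaction {R:.1f} kN")
```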
Abstract:
Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here the issue of how to mitigate global warming by performing changes in an economy. To this end, we make use of a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
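A minimal sketch of a weighted-sum scalarization of such a bi-criteria linear program, on an invented three-sector input-output economy with assumed technical coefficients A, CO2 intensities f and demand bounds; it is not the model or data used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-sector economy (illustrative numbers only).
A = np.array([[0.10, 0.20, 0.05],     # technical (input-output) coefficients
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])
f = np.array([0.8, 0.3, 1.5])         # life-cycle CO2 intensity per unit output
d_min = np.array([50.0, 40.0, 30.0])  # minimum final demand to satisfy
x_max = np.array([200.0, 200.0, 200.0])

w = 0.5  # weight trading off economic output against emissions

# Weighted-sum scalarization of the bi-criteria LP:
#   minimize  -w * total_output + (1 - w) * total_CO2
c = -w * np.ones(3) + (1 - w) * f

# Leontief balance: (I - A) x >= d_min, written as -(I - A) x <= -d_min.
res = linprog(c, A_ub=-(np.eye(3) - A), b_ub=-d_min,
              bounds=list(zip([0.0] * 3, x_max)), method="highs")

x = res.x
print("sector outputs:", np.round(x, 1))
print("total output:", round(x.sum(), 1), " total CO2:", round(f @ x, 1))
```

Sweeping the weight w between 0 and 1 traces out the kind of output-versus-emissions trade-off curve a bi-criteria formulation is meant to expose.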
Abstract:
One of the main problems in the quantitative analysis of complex samples by X-ray fluorescence is related to interelemental (or matrix) effects. These effects appear as a result of interactions among sample elements, affecting the X-ray emission intensity in a non-linear manner. Basically, two main effects occur: intensity absorption and enhancement. The combination of these effects can lead to serious problems. Many studies have been carried out proposing mathematical methods to correct for these effects. Basic concepts and the main correction methods are discussed here.
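As one illustration of an influence-coefficient correction of the kind referred to above, the sketch below iterates a Lachance-Traill-style equation on invented relative intensities and coefficients; the alpha values are placeholders, not calibrated constants.

```python
import numpy as np

def lachance_traill(R, alpha, iterations=20):
    """Iteratively correct XRF relative intensities R_i for matrix effects
    using influence coefficients alpha[i, j] (Lachance-Traill form):
        C_i = R_i * (1 + sum_{j != i} alpha_ij * C_j)
    R and the returned concentrations are weight fractions."""
    C = np.array(R, dtype=float)  # initial guess: no matrix effect
    for _ in range(iterations):
        C = R * (1.0 + (alpha @ C) - np.diag(alpha) * C)  # exclude the j == i term
    return C

# Hypothetical ternary sample: measured relative intensities and coefficients.
R = np.array([0.30, 0.25, 0.35])
alpha = np.array([[0.00,  0.15, -0.05],
                  [0.10,  0.00,  0.20],
                  [-0.08, 0.12,  0.00]])
print("corrected concentrations:", np.round(lachance_traill(R, alpha), 3))
```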
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in the case of regression, or a class label, as in the case of classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
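A minimal sketch of the regularized least-squares idea applied to pairwise preferences with a Gaussian kernel; it only illustrates the general approach, not the specific algorithms or structured-data kernels proposed in the thesis, and the data are simulated.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))              # training objects
utility = X @ np.array([1.0, -2.0, 0.5])  # hidden utility generating preferences

# Pairwise preferences: object i is preferred over object j.
pairs = [(i, j) for i in range(40) for j in range(40) if utility[i] > utility[j] + 0.5]
P = np.zeros((len(pairs), len(X)))
for row, (i, j) in enumerate(pairs):
    P[row, i], P[row, j] = 1.0, -1.0

# Regularized least-squares ranking: fit dual coefficients a so that
# f(x_i) - f(x_j) is close to 1 for every preferred pair, with f = K a.
K = gaussian_kernel(X, X, gamma=0.5)
lam = 1.0
a = np.linalg.solve(P.T @ P @ K + lam * np.eye(len(X)), P.T @ np.ones(len(pairs)))

scores = K @ a
agreement = np.mean([scores[i] > scores[j] for i, j in pairs])
print(f"training pairs ranked correctly: {agreement:.2%}")
```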
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation reaction step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values, and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
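As a small illustration of the multivariate calibration step, the sketch below builds a PLS model that predicts acid value from simulated spectra using scikit-learn; the spectra, band position and noise level are invented stand-ins for the real MIR/NIR data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Simulated "FTIR" spectra: 60 samples x 200 wavenumbers, where the acid value
# is encoded in a broad band plus noise (a stand-in for real MIR/NIR data).
acid_value = rng.uniform(20, 180, size=60)
band = np.exp(-0.5 * ((np.arange(200) - 80) / 12.0) ** 2)
spectra = acid_value[:, None] * band[None, :] + rng.normal(0, 2.0, size=(60, 200))

# PLS calibration: predict the acid value from the full spectrum.
pls = PLSRegression(n_components=3)
r2 = cross_val_score(pls, spectra, acid_value, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2, 3))
```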
Abstract:
The perception of the young economics student is that practicing exercises is the only thing he or she needs to know. This perception can be changed with Linear Programming, since we unite theory and practice and, at the same time, improve the ability to model economic situations; in addition, we emphasize the use of mathematics as an effective tool for improving one's own activities.
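A classroom-sized linear program of the kind alluded to above, solved with scipy.optimize.linprog; the profit coefficients and resource limits are invented for illustration.

```python
from scipy.optimize import linprog

# A classroom-style production problem: maximize profit 3x + 5y subject to
# labour and material constraints (all numbers are illustrative).
c = [-3.0, -5.0]            # linprog minimizes, so negate the profit
A_ub = [[1.0, 2.0],         # labour hours used per unit of each product
        [3.0, 1.0]]         # material used per unit of each product
b_ub = [40.0, 45.0]         # available labour and material

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal production plan:", res.x, " profit:", -res.fun)
```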
Abstract:
Wavelength division multiplexing (WDM) networks have been adopted as a near-future solution for the broadband Internet. In previous work we proposed a new architecture, named enhanced grooming (G+), that extends the capabilities of traditional optical routes (lightpaths). In this paper, we compare the operational expenditures incurred by routing a set of demands using lightpaths with those incurred using lighttours. The comparison is done by solving an integer linear programming (ILP) problem based on a path formulation. Results show that, under the assumption of single-hop routing, almost 15% of the operational cost can be saved with our architecture. In multi-hop routing the operational cost is reduced by 7.1%, and at the same time the ratio of operational cost to the number of optical-electro-optical conversions is reduced for our architecture. This means that ISPs could provide the same satisfaction in terms of delay to the end-user with a lower investment in the network architecture.
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline), and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., as taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives. However, this article focuses on two objectives: maximizing the network throughput, and minimizing the number of optical-electro-optical conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
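A toy path-formulation ILP in the spirit of the model described above, written with PuLP: it maximizes the number of accepted demands subject to per-link wavelength capacities. The network, candidate paths and capacities are invented, and the grooming and OEO-conversion aspects of the real formulation are omitted.

```python
import pulp

# Toy path-formulation ILP: three unit demands, one candidate route each;
# every fibre link offers 2 wavelengths. Names and numbers are illustrative.
demands = ["d1", "d2", "d3"]
paths = {                      # links used by the single candidate path per demand
    "d1": ["A-B", "B-C"],
    "d2": ["A-B"],
    "d3": ["B-C", "C-D"],
}
capacity = {"A-B": 2, "B-C": 2, "C-D": 2}

prob = pulp.LpProblem("throughput_max", pulp.LpMaximize)
accept = {d: pulp.LpVariable(f"accept_{d}", cat="Binary") for d in demands}

# Objective: maximize the number of accepted demands (network throughput).
prob += pulp.lpSum(accept.values())

# Capacity constraint: accepted paths sharing a link cannot exceed its wavelengths.
for link, cap in capacity.items():
    prob += pulp.lpSum(accept[d] for d in demands if link in paths[d]) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({d: int(accept[d].value()) for d in demands})
```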
Abstract:
The ability of biomolecules to catalyze chemical reactions is due chiefly to their sensitivity to variations of the pH in the surrounding environment. The reason for this is that they are made up of chemical groups whose ionization states are modulated by pH changes that are of the order of 0.4 units. The determination of the protonation states of such chemical groups as a function of the conformation of the biomolecule and the pH of the environment can be useful in the elucidation of important biological processes, from enzymatic catalysis to protein folding and molecular recognition. In the past 15 years, Poisson-Boltzmann theory has been successfully used to estimate the pKa of ionizable sites in proteins, yielding results that may differ by 0.1 unit from the experimental values. In this study, we review Poisson-Boltzmann theory from the perspective of its application to the calculation of pKa in proteins.
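A minimal sketch of how a Poisson-Boltzmann electrostatic free-energy difference is commonly turned into a pKa shift and a protonation fraction; the model pKa, the ddG value and the sign convention noted in the comments are illustrative assumptions, not results from the article.

```python
RT_LN10 = 1.364  # 2.303 * R * T in kcal/mol at ~298 K

def shifted_pka(model_pka, ddG_kcal):
    """pKa of a site in the protein from a model-compound pKa and the
    electrostatic free-energy difference ddG (kcal/mol) between the protein
    and model environments, e.g. from a Poisson-Boltzmann calculation.
    Sign convention assumed here: a ddG that favours the protonated form
    raises the pKa."""
    return model_pka + ddG_kcal / RT_LN10

def protonated_fraction(pka, pH):
    """Henderson-Hasselbalch fraction of the site in its protonated form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pka))

pka = shifted_pka(model_pka=4.0, ddG_kcal=1.0)  # e.g. an aspartate-like site
for pH in (3.0, 4.0, 5.0, 7.0):
    print(f"pH {pH}: pKa {pka:.2f}, "
          f"protonated fraction {protonated_fraction(pka, pH):.2f}")
```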
Abstract:
Dynamic mechanical analysis (DMA) is widely used in materials characterization. In this work, we briefly introduce the main concepts related to this technique, such as linear and non-linear viscoelasticity, relaxation time, and the response of a material when it is subjected to a sinusoidal or other periodic stress. Moreover, the main applications of this technique to polymers and polymer blends are also presented. The discussion includes phase behavior; crystallization; the relaxation spectrum as a function of frequency or temperature; and the correlation between material damping and its acoustic and mechanical properties.
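A minimal sketch of the basic DMA relations in the linear viscoelastic regime: the storage modulus E', loss modulus E'' and damping tan(δ) obtained from the stress and strain amplitudes and their phase lag. The numbers in the example are arbitrary.

```python
import numpy as np

def dynamic_moduli(stress_amplitude, strain_amplitude, phase_lag_deg):
    """Storage modulus E', loss modulus E'' and damping tan(delta) from a
    sinusoidal DMA measurement in which stress leads strain by the phase lag."""
    delta = np.radians(phase_lag_deg)
    e_star = stress_amplitude / strain_amplitude  # magnitude of the complex modulus
    e_storage = e_star * np.cos(delta)
    e_loss = e_star * np.sin(delta)
    return e_storage, e_loss, np.tan(delta)

# Example: 2.0 MPa stress amplitude, 0.1% strain amplitude, 10 degree phase lag.
E1, E2, tan_d = dynamic_moduli(2.0e6, 0.001, 10.0)
print(f"E' = {E1:.3e} Pa, E'' = {E2:.3e} Pa, tan(delta) = {tan_d:.3f}")
```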
Abstract:
It is a well-known phenomenon that the constant amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of experimental formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated. It is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on the use of linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient must be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give satisfactory results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches. The use of linear elastic fracture mechanics therefore becomes questionable.
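A minimal sketch of the statistical size effect argument, assuming a Weibull parent distribution for initiated crack sizes: the characteristic largest crack grows with the number of sampled cracks (taken proportional to the stressed area), and the fatigue limit estimated from linear elastic fracture mechanics drops accordingly. The Weibull parameters, threshold stress intensity range and geometry factor are illustrative values, not those fitted in the thesis.

```python
import numpy as np

def characteristic_largest_crack(weibull_shape, weibull_scale_mm, n_cracks):
    """Characteristic largest crack in a sample of n_cracks initiated cracks
    whose sizes follow a Weibull parent distribution F(a): the size a_n with
    F(a_n) = 1 - 1/n, so that on average one crack in the sample exceeds it."""
    p = 1.0 - 1.0 / n_cracks
    return weibull_scale_mm * (-np.log(1.0 - p)) ** (1.0 / weibull_shape)

def fatigue_limit_MPa(delta_K_th, crack_mm, geometry_factor=0.73):
    """Fatigue limit from LEFM: the stress range at which the largest expected
    crack just reaches the threshold stress intensity range delta_K_th
    (MPa*sqrt(m)). The geometry factor Y depends on the crack shape; 0.73 is
    a typical value for a semicircular surface crack."""
    a_m = crack_mm * 1e-3
    return delta_K_th / (geometry_factor * np.sqrt(np.pi * a_m))

# The number of initiated cracks is assumed proportional to the stressed area,
# so a larger component samples a larger n and contains a larger extreme crack.
for n in (10, 100, 1000):
    a = characteristic_largest_crack(weibull_shape=2.0, weibull_scale_mm=0.05,
                                     n_cracks=n)
    print(f"n = {n:5d}: largest crack ~ {a:.3f} mm, "
          f"fatigue limit ~ {fatigue_limit_MPa(6.0, a):.0f} MPa")
```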