867 results for Incremental discretization
Abstract:
Objective: The aim of this study was to evaluate whether chronic pain in athletes is related to performance, as measured by maximal oxygen consumption, and to the production of hormones and cytokines. Methods: Fifty-five athletes with a mean age of 31.9 +/- 4.2 years, engaged in regular competition and showing no symptoms of acute inflammation, particularly fever, were studied. They were divided into two subgroups according to the occurrence of pain. Plasma concentrations of adrenaline, noradrenaline, cortisol, prolactin, growth hormone and dopamine were measured by radioimmunoassay, and the production of the cytokines interleukin (IL)-1, IL-2, IL-4, IL-6, tumor necrosis factor-alpha, interferon-alpha and prostaglandin E-2 by whole-blood culture. Maximal oxygen consumption was determined during an incremental treadmill test. Results: There was no change in the concentration of stress hormones, but the athletes with chronic pain showed a reduction in maximal oxygen consumption (22%) and in total consumption at the anaerobic threshold (25%), as well as increased cytokine production. Increases of 2.7-, 8.1-, 1.7- and 3.7-fold were observed for IL-1, IL-2, tumor necrosis factor-alpha and interferon-alpha, respectively. Conclusions: Our data show that athletes with chronic pain have enhanced production of proinflammatory cytokines and lipid mediators and reduced performance in the ergospirometric test. Copyright (c) 2008 S. Karger AG, Basel.
Abstract:
Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used or used in a specialised manner, owing to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can use diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems could be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system to construct a set of features using semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction, applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that simply use shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way of making substantial progress in the field of WSD.
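To illustrate the final modelling step described above (a support vector machine trained on ILP-constructed features), the sketch below stands in for the output of the ILP system with a hypothetical clause-coverage matrix; the sizes, labels and feature values are illustrative assumptions, not data from the paper.

# Sketch: SVM classifier over ILP-induced features (hypothetical data).
# Each column of X records whether an ILP-induced clause (feature) covers an example;
# y holds hypothetical sense labels for a single ambiguous word.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_examples, n_ilp_features = 200, 50                      # assumed sizes
X = rng.integers(0, 2, size=(n_examples, n_ilp_features)).astype(float)
y = rng.integers(0, 4, size=n_examples)                   # four hypothetical senses

clf = SVC(kernel="linear")                                # linear kernel on sparse binary features
scores = cross_val_score(clf, X, y, cv=5)                 # estimate predictive accuracy
print("mean cross-validated accuracy:", scores.mean())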
Abstract:
In this paper, the laminar flow of Newtonian and non-Newtonian aqueous solutions in a tubular membrane is studied numerically. The mathematical formulation, with associated initial and boundary conditions in cylindrical coordinates, comprises the mass conservation, momentum conservation and mass transfer equations. These equations are discretized using the finite-difference technique on a staggered grid system. Comparisons of three upwinding schemes for the discretization of the non-linear (convective) terms are presented. The effects of several physical parameters on the concentration profile are investigated. The numerical results compare favorably with experimental data and the analytical solutions. (C) 2011 Elsevier Inc. All rights reserved.
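As a minimal sketch of the kind of upwinding discretization compared in the paper (not the authors' scheme; the grid, velocity field and concentration profile below are illustrative assumptions), a first-order upwind approximation of the convective term u*dc/dx in one dimension could look as follows.

# Sketch: first-order upwind discretization of the convective term u * dc/dx in 1D.
# Backward differences are used where the velocity is positive, forward differences
# where it is negative; boundary nodes are left untouched for brevity.
import numpy as np

nx, dx = 101, 0.01
x = np.linspace(0.0, 1.0, nx)
c = np.exp(-((x - 0.3) / 0.05) ** 2)           # illustrative concentration profile
u = np.full(nx, 0.5)                           # illustrative axial velocity

dcdx = np.zeros(nx)
dcdx[1:-1] = np.where(u[1:-1] > 0.0,
                      (c[1:-1] - c[:-2]) / dx,  # backward difference (u > 0)
                      (c[2:] - c[1:-1]) / dx)   # forward difference  (u < 0)
convective_term = u * dcdx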
Abstract:
In this series of papers, we study issues related to the synchronization of two coupled chaotic discrete systems arising from secured communication. The first part deals with uniform dissipativeness with respect to parameter variation via the Liapunov direct method. We obtain uniform estimates of the global attractor for a general discrete nonautonomous system, which yield a uniform invariance principle in the autonomous case. The Liapunov function is allowed to have a positive derivative along solutions of the system inside a bounded set, and this substantially reduces the difficulty of constructing a Liapunov function for a given system. In particular, we develop an approach that incorporates the classical Lagrange multiplier into the Liapunov function method to naturally extend Liapunov functions from continuous dynamical systems to their discretizations, so that the corresponding uniform dissipativeness results remain valid when the step size of the discretization is small. Applications to the discretized Lorenz system and to the discretization of a time-periodic chaotic system are given to illustrate the general results. We also show how to obtain uniform estimates of attractors for parametrized linear stable systems with nonlinear perturbation.
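As a hedged illustration of the kind of discretization referred to above, an explicit Euler discretization of the Lorenz system with the classical parameter values is sketched below; the scheme and step size are illustrative choices, since the abstract does not specify them.

# Sketch: explicit Euler discretization of the Lorenz system, x_{n+1} = x_n + h*f(x_n),
# with the classical parameters sigma = 10, rho = 28, beta = 8/3. The step size h is an
# illustrative choice; the dissipativeness results concern the regime of small h.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
h = 1e-3

def euler_step(state):
    x, y, z = state
    return state + h * np.array([sigma * (y - x),
                                 x * (rho - z) - y,
                                 x * y - beta * z])

state = np.array([1.0, 1.0, 1.0])
for _ in range(10_000):
    state = euler_step(state)
print("state after 10,000 steps:", state)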
Abstract:
We consider incompressible Stokes flow with an internal interface at which the pressure is discontinuous, as happens for example in problems involving surface tension. We assume that the mesh does not follow the interface, which causes classical interpolation spaces to yield suboptimal convergence rates (typically, the interpolation error in the L^2(Omega) norm is of order h^(1/2)). We propose a modification of the P1-conforming space that accommodates discontinuities at the interface without introducing additional degrees of freedom or modifying the sparsity pattern of the linear system. The unknowns are the pressure values at the vertices of the mesh, and the basis functions are computed locally at each element, so that the implementation of the proposed space into existing codes is straightforward. With this modification, numerical tests show that the interpolation order improves to O(h^(3/2)). The new pressure space is implemented for the stable P1+/P1 mini-element discretization and for the stabilized equal-order P1/P1 discretization. Assessment is carried out for Poiseuille flow with a forcing surface and for a static bubble. In all cases the proposed pressure space leads to improved convergence orders and to more accurate results than the standard P1 space. In addition, two Navier-Stokes simulations with moving interfaces (Rayleigh-Taylor instability and merging bubbles) are reported to show that the proposed space is robust enough to carry out realistic simulations. (c) 2009 Elsevier B.V. All rights reserved.
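Restating the interpolation orders quoted above in standard notation (with I_h denoting the interpolant onto the respective pressure space, notation introduced here only for brevity), the reported behaviour is:

\[
\| p - I_h p \|_{L^2(\Omega)} = O(h^{1/2}) \ \text{(standard } P_1 \text{ space)},
\qquad
\| p - I_h p \|_{L^2(\Omega)} = O(h^{3/2}) \ \text{(modified } P_1 \text{ space)}.
\]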
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads, when the interfacial force is linear, to a linear system of equations for the interface configuration at the future time. However, this linear system is large and dense, and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, for which we obtain a rigorous estimate. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solvers are needed per time-step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative, phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Energy efficiency and renewable energy use are two main priorities leading to industrial sustainability nowadays, according to the European Steel Technology Platform (ESTP). Modernization efforts can be made by industries to improve the energy consumption of their production lines. At present, steel-making industrial applications are energy and emission intensive. It has been estimated that over the past years energy consumption and the corresponding CO2 generation have increased steadily, reaching approximately 338.15 parts per million in August 2010 [1]. Facts and statistics of this kind point to considerable room for improvement in energy efficiency in industrial applications through modernization and the use of renewable energy sources such as solar photovoltaic (PV) systems. The purpose of this thesis work is to make a preliminary design and simulation of a solar photovoltaic system that would attempt to cover the energy demand of the initial part of the pickling line hydraulic system at the SSAB steel plant. For this purpose, the energy consumption of this hydraulic system would be studied and evaluated, and a general analysis of the performance of the hydraulic and control components would be carried out, yielding a set of guidelines contributing towards future energy savings. The results of the energy efficiency analysis showed that the initial part of the pickling line hydraulic system worked with a low efficiency of 3.3%. The results of the general analysis showed that 650-litre hydraulic accumulators should be used in the initial part of the pickling line system, in combination with a single pump delivering 100 l/min. Based on this, one PV system can deliver energy to an AC motor-pump set, covering 17.6% of the total energy, and another PV system can supply a DC hydraulic pump, substituting 26.7% of the demand. The first system used 290 m2 of roof area and was sized at 40 kWp; the second used 109 m2 and was sized at 15.2 kWp. It was concluded that the reason for the low efficiency was the oversized design of the system. Incremental modernization efforts could help to improve the energy efficiency of the hydraulic system and make the design of the solar photovoltaic system realistically possible. Two types of PV system were analyzed in the thesis work. A method was developed for calculating the load simulation sequence, based on the energy efficiency studies, to support the PV system simulations. The hydraulic accumulators integrated into the pickling line also worked as energy storage when charged by the PV system.
Abstract:
The main objective of the thesis “Conceptual Product Development in Small Corporations” is, by means of a case study, to test the MFD™ method (Erixon G., 1998) combined with PMM in a product development project (henceforth called the MFD™/PMM method). The MFD™/PMM method, used for documenting and controlling a product development project, has been applied in several industries and projects since it was introduced. The method has proved to be a good way of working with the early stages of product development; however, almost all documented projects have been carried out in large industries, which means that there are very few references to how the MFD™/PMM method works in a small corporation. Therefore, the case study in the thesis “Conceptual Product Development in Small Corporations” was carried out in a small corporation to find out whether the MFD™/PMM method can also be applied and used in such a corporation. The PMM was proposed in a paper presented at Delft University of Technology in Holland in 1998 by the author and Gunnar Erixon. (See appended paper C: The chart of modular function deployment.) The title “The chart of modular function deployment” was later renamed PMM, Product Management Map (Sweden PreCAD AB, 2000). The PMM consists of a QFD matrix linked to a MIM (Module Indication Matrix) via a coupling matrix, which makes it possible to form an unbroken chain from the customer domain to the designed product/modules. The PMM makes it easy to correct omissions made in creating new products and modules. In the thesis “Conceptual Product Development in Small Corporations” the universal MFD™/PMM method has been adapted by the author to three models of product development: original, evolutionary and incremental development. The evolutionarily adapted MFD™/PMM method was tested in a case study at Atlings AB in the municipality of Ockelbo. Atlings AB is a small corporation with a total of 50 employees and an annual turnover of 9 million €. The product studied at the corporation was a steady rest for supporting long shafts in turning. The project team consisted of the managing director, a sales promoter, a production engineer, a design engineer and a workshop technician, with the author as team leader and a colleague from Dalarna University as discussion partner. The project team held six meetings. The project team managed to use MFD™ and to produce a complete PMM of the studied product. No real problems occurred in the project work; on the contrary, the team members worked very well in the group and had ideas for how to improve the product. Instead, the challenge for a small company is how to work with the MFD™/PMM method in the long run! If the MFD™/PMM method is to be a useful tool for the company it needs to be used continuously, and that requires financial and personnel resources. One way for the company to overcome the probable lack of resources regarding capital and personnel is to establish good cooperation with a regional university or a development centre.
Abstract:
The Sustainability revolution: A societal paradigm shift – ethos, innovation, governance transformation. This paper identifies several key mechanisms that underlie major paradigm shifts. After identifying four such mechanisms, the article focuses on one type of transformation which has a prominent place in the sustainability revolution that, the article argues, is now taking place. The transformation is piecemeal, incremental, diffuse – in earlier writings referred to as “organic”. This is a more encompassing notion than grassroots, since the innovation and transformation processes may be launched and developed at multiple levels through diverse mechanisms of discovery and development. Major features of the sustainability revolution are identified and comparisons are made to the industrial revolution.
Abstract:
The aim of this study was 1) to validate the 0.5 body-mass exponent for maximal oxygen uptake (V̇O2max) as the optimal predictor of performance in a 15 km classical-technique skiing competition among elite male cross-country skiers, and 2) to evaluate the influence of distance covered on the body-mass exponent for V̇O2max among elite male skiers. Twenty-four elite male skiers (age: 21.4±3.3 years [mean ± standard deviation]) completed an incremental treadmill roller-skiing test to determine their V̇O2max. Performance data were collected from a 15 km classical-technique cross-country skiing competition performed on a 5 km course. Power-function modeling (i.e., an allometric scaling approach) was used to establish the optimal body-mass exponent for V̇O2max to predict skiing performance. The optimal power-function models were found to be race speed = 8.83·(V̇O2max·m^(-0.53))^0.66 and lap speed = 5.89·(V̇O2max·m^(-(0.49+0.018·lap)))^0.43·e^(0.010·age), which explained 69% and 81% of the variance in skiing speed, respectively. All the variables contributed to the models. Based on the validation results, it may be recommended that V̇O2max divided by the square root of body mass (mL·min^(-1)·kg^(-0.5)) should be used when elite male skiers’ performance capability in 15 km classical-technique races is evaluated. Moreover, the body-mass exponent for V̇O2max was demonstrated to be influenced by the distance covered, indicating that heavier skiers have a more pronounced positive pacing profile (i.e., race speed gradually decreasing throughout the race) compared to that of lighter skiers.
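As a small worked example of the two models above (coefficients copied from the abstract; the input values for V̇O2max, body mass, lap and age are hypothetical, and the predicted speeds are in whatever units the original models were fitted in):

# Sketch: evaluate the reported power-function models with hypothetical inputs.
# Coefficients are taken verbatim from the abstract.
import math

def race_speed(vo2max, body_mass):
    # race speed = 8.83 * (VO2max * m^-0.53) ^ 0.66
    return 8.83 * (vo2max * body_mass ** -0.53) ** 0.66

def lap_speed(vo2max, body_mass, lap, age):
    # lap speed = 5.89 * (VO2max * m^-(0.49 + 0.018*lap)) ^ 0.43 * e^(0.010*age)
    return 5.89 * (vo2max * body_mass ** -(0.49 + 0.018 * lap)) ** 0.43 * math.exp(0.010 * age)

# Hypothetical skier: VO2max of 5000 mL/min, body mass 75 kg, age 25, second lap.
print(race_speed(5000.0, 75.0))
print(lap_speed(5000.0, 75.0, lap=2, age=25))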
Abstract:
This licentiate thesis sets out to analyse how a retail price decision frame can be understood. It is argued that it is possible to view price determination within retailing by determining the level of rationality and using behavioural theories. In this way, it is possible to use assumptions derived from economics and marketing to establish a decision frame. By taking a management perspective, it is possible to take into consideration how it is assumed that the retailer should strategically manage price decisions, which decisions might be assumed to be price decisions, and which decisions can be assumed to be under the control of the retailer. Theoretically, this licentiate thesis has its foundations in different assumptions about decision frames regarding the level of information collected, the goal of the decisions, and the outcomes of the decisions. Since the concepts analysed within this thesis are price decisions, the latter part of the theory discusses price decisions specifically: sequential price decisions, the point of the decision, and trade-offs when making a decision. Here, it is evident that a conceptual decision frame intended to illustrate price decisions includes several aspects: several decision alternatives and the assumptions of rationality that can be made in relation to the decision frame. A semi-structured literature review was conducted. As a result, it became apparent that two important things in the decision frame were unclear: the time assumptions regarding the decisions and the amount of information assumed in relation to the different decision alternatives. Using the same articles that were used to adjust the decision frame, a topical study was made in order to determine the time-specific assumptions, as well as the analytical level based on the information assumed necessary for individual decision alternatives. This, together with an experimental study, was necessary to be able to discuss the consequences of the rationality assumption. When the retail literature is analysed for the level of rationality and the consequences of certain assumptions of rationality, three main things become apparent. First, the level of rationality or the assumptions of rationality are seldom stated or accounted for in the literature. In fact, there are indications that perfect and bounded rationality assumptions are used simultaneously within studies. Second, although bounded rationality is a recognised theoretical perspective, very few articles seem to use these assumptions. Third, since the outcome of a price decision seems to provide no incremental sales, it is questionable which assumptions of rationality should be used. It might even be the case that no assumptions of rationality at all should be used. In a broader perspective, the findings from this licentiate thesis show that the assumptions of rationality within retail research are unclear. There is an imbalance between the perspectives used, where the main assumptions seem to be concentrated on perfect rationality. However, it is suggested that clarifying which assumptions of rationality are used, and applying bounded rationality assumptions within research, would result in a clearer picture of the multifaceted price decisions that can be assumed within retailing. The theoretical contribution of this thesis mainly surrounds the identification of how the level of rationality imposes limiting assumptions within retail research. Furthermore, since indications show that learning might not occur within this specific context, it is questioned whether the basic learning assumption within bounded rationality should be used in this context.