910 results for Cosine and Sine Trigonometric Functions
Abstract:
We report a measurement of the top quark mass $M_t$ in the dilepton decay channel $t\bar{t}\to b\ell'^{+}\nu_{\ell'}\bar{b}\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in top quark mass using neuroevolution, a technique modeled on biological evolution. The top quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb$^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2 \pm 2.7(\textrm{stat.}) \pm 2.9(\textrm{syst.})\ \mathrm{GeV}/c^2$.
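Schematically, the construction described here takes the standard matrix-element-method form; the following is a sketch in our own notation, not a formula quoted from the paper:

```latex
% Sketch (our notation): per-event density as a convolution of the
% leading-order matrix element with detector resolution (transfer)
% functions, and the joint probability as the product over the
% N = 344 candidate events.
\[
  P_i(x_i \mid M_t) \propto \int \bigl|\mathcal{M}(y; M_t)\bigr|^2 \, W(x_i, y)\, \mathrm{d}y,
  \qquad
  L(M_t) = \prod_{i=1}^{N} P_i(x_i \mid M_t).
\]
```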
Abstract:
The study seeks to determine whether the real burden of personal taxation has increased or decreased. To this end, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, it has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden when the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms.

The main factors influencing the amount of income taxes an individual must pay are as follows:
- Gross income (income subject to central and local government taxes).
- Deductions from gross income and from taxes calculated according to the tax schedules.
- The central government income tax schedule (progressive income taxation).
- The rates for local taxes and for social security payments (proportional taxation).

In the study we investigate how much a certain group of taxpayers would have paid in taxes under the actual tax regulations prevailing in different years if their income were kept constant in real terms. Other factors affecting tax liability are held strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much tax a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income under the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim.

Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has, on average, increased or decreased from one year to the next. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (the differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed under the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between taxes determined by the new tax scales and the old tax scales. The formula answers the question: how much more or less will be paid in taxes under the new tax scales than under the old ones when the real income situation corresponds to the old situation?

In real terms the central government tax burden declined steadily from its high post-war level up until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was cut drastically, and since the mid-1990s tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years. Aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio; a change in it depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, and from the beginning of the 1960s to the mid-1970s it nearly doubled. Since the mid-1990s the real income tax ratio has fallen by about 35 percent.
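The Laspeyres-type comparison described above can be written compactly. The following formulation is our own sketch of the idea; the symbols ($T_0$, $T_1$ for the old and new tax functions, $y_i$ for base-year real incomes, $w_i$ for weights) are not taken from the study:

```latex
% Sketch (our notation): Laspeyres-type index of the change in tax scales.
% T_0 and T_1 are the tax functions of the old and new regulations,
% y_i are base-year real incomes, and w_i are weights (taxable income
% in the study). I_L > 1 means the new scales tax the same real income
% more heavily; I_L < 1 means taxation has become lighter.
\[
  I_L = \frac{\sum_i w_i \, T_1(y_i)}{\sum_i w_i \, T_0(y_i)}
\]
```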
Abstract:
The study examines the personnel training and research activities carried out by the Organization and Methods Division of the Ministry of Finance and how they became part and parcel of the state administration in 1943-1971. The study is a combination of institutional and ideological historical research in the recent history of adult education, using a constructionist approach. Material salient to the study comes from the files of the Organization and Methods Division in the National Archives, parliamentary documents, committee reports, and magazines. The concentrated training and research activities arranged by the Organization and Methods Division became part and parcel of the state administration in the midst of controversial challenges and opportunities. They served to solve the social problems that beset the state administration, the contextual challenges besetting rationalization measures, and organizational challenges. The activities were also affected by a dependence on decision-makers, administrative units, and civil servants' organizations, by different views on rationalization and the holistic nature of reforms, and by the formal theories that served as resources. The Division chose long-term projects which extended onto the turf of the political decision-makers and administrative units and which were intended to reform the structures of the state administration and to rationalize the practices of the administrative units. The crucial questions emerged as opposing pairs (a constitutional state vs. the ideology of an administratively governed state, a system of national boards vs. a system of government through ministries, efficiency of work vs. pleasantness of work, centralized vs. decentralized rationalization activities) which were not solvable problems but impossible questions with no ultimate answers. The aim of the rationalization of the state administration (the reform of the central, provincial, and local governments) was to facilitate integrated management and to render a greater amount of work by approaching management procedures scientifically and by clarifying administrative instances and their responsibilities with regard to each other. The means resorted to were organizational studies and committee work. In the rationalization of office work and financial control, the idea was to effect savings in administrative costs and to pare those costs down, as well as to rationalize and heighten office functions, by developing the institution of work study practitioners in order to coordinate employer and employee relationships and benefits (the training of work study practitioners, work study, and a two-tier work study practitioner organization). A major part of the training meant teaching and implementing leadership skills in practice, which, in turn, meant that the learning environment was the genuine work community and the efforts to change it. In office rationalization, the solution for regulating the relations between the employer and the employees was the co-existence of technical and biological rationalization, human resource administration, and accounting and planning systems at the turn of the 1960s and 1970s. The former were based on the schools of scientific management and human relations, the latter on systems thinking, which was a combination of the former two.
In the rationalization of the state administration, efforts were made to find solutions to stabilize management ideologies and to arrange the relationships of administrative systems within administrative science - among other things, in the Hoover Committee and Simon's decision-making theory and, in the 1960s, in systems thinking. Despite the development-related vocabulary, the practical work amounted to advanced rationalization. It was said that the practical activities of both the state administration and the administrative units depended on professional managers who saw to production results and human relations. The pedagogic experts hired to develop training came up with a training system based on the training-technological model, in which training was made a function of its own. The State Training Center was established, and the training office of the Organization and Methods Division became the leader and coordinator of personnel training.
Abstract:
Compression of a rough turned cylinder between two hard, smooth, flat plates has been analysed with the aid of a mathematical model based on statistical analysis. It is assumed that the asperity peak heights follow Gaussian (normal) or beta distribution functions and that the loaded asperities comply as though they were completely isolated from neighbouring ones. Equations have been developed for the load-compliance relation of the real surface using a simplified relation of the form $W_0 = K_1\delta^n$ for the load-compliance of a single asperity. The parameters $K_1$ and $n$ have considerable influence on the load-compliance curve; they depend on the material, the tip angle of the asperity, the standard deviation of the asperity peak height distribution, and the density of the asperities.
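For orientation, in statistical asperity models of this general type the surface load-compliance relation follows by integrating the single-asperity law over the peak-height distribution; the following is a sketch in our own notation, not an equation quoted from the paper:

```latex
% Sketch (our notation): total load W on the rough surface at plate
% separation d, for asperity peak heights z with probability density
% g(z) (Gaussian or beta), areal asperity density eta, and nominal
% contact area A. Each asperity compressed by delta = z - d carries
% the single-asperity load W_0 = K_1 * delta^n.
\[
  W(d) = \eta A \int_{d}^{\infty} K_1 (z - d)^{n} \, g(z)\, \mathrm{d}z
\]
```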
Abstract:
The neuronal cell adhesion molecule ICAM-5 (telencephalin) belongs to the intercellular adhesion molecule (ICAM) subgroup of the immunoglobulin superfamily (IgSF). ICAMs participate in leukocyte adhesion and adhesion-dependent functions in the central nervous system (CNS) through interactions with the leukocyte-specific β2 integrins. ICAM-5 is found in the mammalian forebrain, appears at the time of birth, and is located at the cell soma and neuronal dendrites. Recent studies also show that it is important for the regulation of immune functions in the brain and for the development and maturation of neuronal synapses. The clinical importance of ICAM-5 is still under investigation; it may have a role in the development of Alzheimer's disease (AD). In this study, the role of ICAM-5 in neuronal differentiation and its associations with α-actinin and N-methyl-D-aspartic acid (NMDA) receptors were examined. NMDA receptors (NMDARs) are known to be involved in many neuronal functions, including the passage of information from one neuron to another, and it was thus considered important to study their role in relation to ICAM-5. The results suggested that ICAM-5 is able to induce dendritic outgrowth through homophilic adhesion (an ICAM-5 monomer binds to another ICAM-5 monomer on the same or a neighbouring cell), and the homophilic binding activity appeared to be regulated by monomer/multimer transition. Moreover, ICAM-5 binding to α-actinin was shown to be important for neuritic outgrowth. It was also examined whether matrix metalloproteinases (MMPs) are the main enzymes involved in ICAM-5 ectodomain cleavage. The results showed that stimulation of NMDARs leads to MMP activation and cleavage of ICAM-5, accompanied by dendritic spine maturation. The findings also indicated that ICAM-5 and NMDA receptor subunit 1 (NR1) compete for binding to α-actinin, and that ICAM-5 may regulate the association of NR1 with the actin cytoskeleton. It is thus concluded that ICAM-5 is a crucial cell adhesion molecule in the development of neuronal synapses, especially in the regulation of dendritic spine development, and its functions may also be involved in memory formation and learning.
Abstract:
For the past decades, reflection has been the buzzword of adult and higher education. Reflection is facilitated in many practices and there is abundant research on the issue. Despite the popularity of the concept, the reasons why bringing about reflection in educational practices is difficult remain unclear. The prevailing theories describe the process in its ideal form; to a great extent, however, they fail to offer conceptual tools for understanding and working with the actualities of reflection. The aim of the doctoral thesis was to explore the challenges and prerequisites of reflection in order to theorize the nature of reflection. By the term reflection we here refer to becoming aware of and questioning the assumptions that orient our thinking, feelings, and actions. The doctoral thesis consists of five studies that approach these questions from different viewpoints and within different contexts. The methods involve both a philosophical and an empirical approach. This multifaceted approach embodies the aim both of gaining a more thorough grasp of the phenomenon and of developing the methodology of researching reflection. The theory building is based on conceptual analysis and rational reconstruction (see Davia 1998; Habermas 1979; Rorty 1984) of Mezirow's (1981; 1991; 2000; 2009) theory of transformative learning. In order to explore the aspects which, based on the analysis, appeared insufficiently considered within Mezirow's theory, Damasio's (1994; 1999; 2003; 2010) theory of emotions and consciousness as well as Clausewitz's (1985) view on friction are used as complementary theories. Empirical analyses are used in dialogue with the theoretical ones in order to challenge and refine the emerging theorization. Reflection is examined in three different contexts: university teachers' pedagogical growth, involuntarily childless women recovering from a life-event crisis, and soldiers preparing to act in the chaotic situations of the battlefield as well as recovering from them. The choice of these contexts is based on Mezirow's notion of a disorienting dilemma as a trigger for reflection. This notion indicates that reflection may emerge more naturally in association with life-event crises or other cumulative sets of instances which bring our worldview and beliefs into question. Nevertheless, reflection is often promoted in educational contexts in which the trigger conditions may not readily prevail. These contextual issues, as well as the differences between facilitated and non-facilitated contexts, have not, however, been considered in detail within the research on reflection (or transformative learning). The doctoral thesis offers a new perspective on reflection which, as a further development of Mezirow's transformative learning theory, theorizes the nature of reflection. The developed theory explicates the prerequisites of and challenges to reflection. The theory suggests that the challenges of reflection are fundamentally connected to the way the biological life-support system affects our thinking through emotions. While depicting the mechanisms that function as a counterforce to reflection, the developed theory also opens a perspective for considering the possibilities for carrying out reflection, and suggests ways to locate and deal with the assumptions to be reflected on.
The basic dynamic of the challenges to reflection was explicated by conceptually bridging the gap between Mezirow's and Damasio's theories, through exploring the connections between the meaning perspective and the biological functions of emotions. The concepts of comfort zone and edge-emotions were formed to depict the emotional orientation of our thinking, as part of the explanation of the nature of reflection.
Abstract:
A linear optimization model was used to calculate seven wood procurement scenarios for the years 1990, 2000, and 2010. Productivity and cost functions for seven cutting methods, five terrain transport methods, three long-distance transport methods, and various work supervision and scaling methods were calculated from available work study reports. All methods are based on the Nordic cut-to-length system. Finland was divided into three parts for the description of harvesting conditions. Twenty imaginary wood processing points and their wood procurement areas were created for these areas. The procurement systems, which consist of the harvesting conditions and work productivity functions, were described as a simulation model. In the LP model the wood procurement system has to fulfil the volume and wood assortment requirements of the processing points while minimizing the procurement cost. The model consists of 862 variables and 560 restrictions. Results show that it is economical to increase the share of mechanized work in harvesting. Cost increment alternatives have only a small effect on the profitability of manual work. The areas of later thinnings and of seed-tree and shelterwood cuttings increase at the cost of first thinnings. In mechanized work one method, the 10-tonne single-grip harvester with forwarder, is gaining an advantage over the other methods. The working hours of the forwarder are decreasing, in contrast to those of the harvester. There is only a small need to increase the number of harvesters and trucks or their drivers from today's level. Quite large fluctuations in procurement level and cost can be handled with a constant number of machines, by varying the number of seasonal workers and by running machines in two shifts. This is possible if some environmental problems of large-scale summertime harvesting can be solved.
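As a toy illustration of the LP structure described above (minimizing procurement cost subject to the volume requirements of processing points), here is a minimal sketch; all numbers and the scipy-based formulation are ours, invented for illustration, while the actual model has 862 variables and 560 restrictions:

```python
# Toy version of the procurement LP: choose harvest volumes x[i, j]
# shipped from supply area i to processing point j so that each point's
# volume requirement is met at minimum total procurement cost.
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import linprog

n_areas, n_points = 3, 2
# cost[i, j]: unit cost (cutting + terrain and long-distance transport)
cost = np.array([[12.0, 15.0],
                 [14.0, 11.0],
                 [16.0, 13.0]])
supply = np.array([500.0, 400.0, 300.0])   # max volume per area
demand = np.array([450.0, 350.0])          # requirement per point

c = cost.ravel()                           # row-major: x[i, j] -> index i*n_points + j
A_sup = np.kron(np.eye(n_areas), np.ones((1, n_points)))   # sum_j x[i, j] <= supply[i]
A_dem = np.kron(np.ones((1, n_areas)), np.eye(n_points))   # sum_i x[i, j] >= demand[j]
res = linprog(c,
              A_ub=np.vstack([A_sup, -A_dem]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None))
print(res.x.reshape(n_areas, n_points))    # optimal volumes
print(res.fun)                             # minimum procurement cost
```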
Abstract:
A boundary layer analysis of mixed convective motion over a hot horizontal flat plate is performed under the conditions of steady flow and low speed. Use of the Howarth-Dorodnytsyn transformation makes it possible to dispense with the usual Boussinesq approximation, and variable gas properties are accounted for via the assumption that dynamic viscosity and thermal conductivity are proportional to the absolute temperature. The formulation presented enables the entire mixed convection regime to be described by a single set of equations. Finite difference solutions for a Prandtl number of 0.72 are obtained over the entire range of the mixed convection parameter ξ from 0 (free convection) to 1 (forced convection) and for heating parameter Δ values from 2 to 12. The effects of both ξ and Δ on the velocity profiles, the temperature profiles, and the variation of the skin friction and heat transfer functions are clearly illustrated in tables and graphs.
Abstract:
Goals. Specific language impairment (SLI) has a negative impact on a child's speech and language development and interaction. The disorder may be associated with a wide range of comorbid problems. In clinical speech therapy it is important to see the child as a whole so that rehabilitation can be targeted properly. The aim of this study was to describe the linguistic-cognitive and comorbid symptoms of children with SLI at the age of five, as well as to provide an overview of the developmental disorders in their families. The study is part of a larger research project which will examine the developmental paths and quality of life of children with SLI as young adults. Methods. The data consisted of the patient documents of 100 five-year-old children who were examined at Lastenlinna, mainly in 1998. The majority of the subjects were boys, and the children's primary diagnosis was either F80.1 or F80.2, which was the most common, or both. The diagnoses, the information about linguistic-cognitive status and comorbid symptoms, and mentions related to familiality were collected from the reports of medical doctors and experts in other fields. Linguistic-cognitive symptoms were divided into the subclasses of speech motor functions, processing of language, comprehension of language, and use of language. Comorbid symptoms were divided into the subclasses of interaction, activity and attention, emotional and behavioral problems, and neurological problems. Statistical analyses were based mainly on Pearson's chi-square test. Results and conclusions. Problems in language processing and speech motor functions were the most common of the linguistic-cognitive symptoms. Most of the children had symptoms from two or three symptom classes, and it seemed that girls had more symptoms than boys. Usually the children did not have any comorbid symptoms, or had them from one to three symptom classes. Of the comorbid symptoms the most prevalent were problems in activity and attention and neurological symptoms, which consisted mostly of motor and visuomotor symptoms. The most common comorbid diagnosis was F82, specific developmental disorder of motor function. According to the literature, children with SLI may have mental health problems, but the results of this study did not confirm that. Children with diagnosis F80.2 had more linguistic-cognitive and comorbid symptoms than children with diagnosis F80.1. A cluster analysis based on all the symptoms revealed four subgroups among the subjects. Of the subjects, 85 percent had a positive family history of developmental disorders, and the most prevalent problem in the families was delayed speech development. This study outlined the symptom profile of children with SLI and laid a foundation for a future longitudinal study. The results suggested that there are differences between the linguistic-cognitive symptoms of boys and girls, which is important to notice especially when assessing and diagnosing children with SLI.
Abstract:
We propose and develop here a phenomenological Ginzburg-Landau-like theory of cuprate high-temperature superconductivity. The free energy of a cuprate superconductor is expressed as a functional $F$ of the complex spin-singlet pair amplitude $\psi_{ij} \equiv \psi_m = \Delta_m \exp(i\phi_m)$, where $i$ and $j$ are nearest-neighbor sites of the square planar Cu lattice in which the superconductivity is believed to primarily reside, and $m$ labels the site located at the center of the bond between $i$ and $j$. The system is modeled as a weakly coupled stack of such planes. We hypothesize a simple form $F[\Delta,\phi] = \sum_m [A\,\Delta_m^2 + (B/2)\,\Delta_m^4] + C \sum_{\langle mn\rangle} \Delta_m \Delta_n \cos(\phi_m - \phi_n)$ for the functional, where $m$ and $n$ are nearest-neighbor sites on the bond-center lattice. This form is analogous to the original continuum Ginzburg-Landau free-energy functional; the coefficients $A$, $B$, and $C$ are determined from comparison with experiments. A combination of analytic approximations, numerical minimization, and Monte Carlo simulations is used to work out a number of consequences of the proposed functional for specific choices of $A$, $B$, and $C$ as functions of hole density $x$ and temperature $T$. There can be a rapid crossover of
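As an illustration of how Monte Carlo simulation can be applied to a lattice functional of this form, here is a minimal Metropolis sketch; the parameter values are invented and are not the paper's calibrated $A$, $B$, and $C$:

```python
# Toy Metropolis Monte Carlo for a lattice free energy of the form
#   F = sum_m [A*D_m^2 + (B/2)*D_m^4] + C * sum_<mn> D_m*D_n*cos(p_m - p_n)
# with amplitude D_m >= 0 and phase p_m on each site of an L x L grid
# with periodic boundaries. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
L, A, B, C, T = 16, -1.0, 1.0, -0.5, 0.3
D = rng.uniform(0.5, 1.0, (L, L))        # pair amplitudes Delta_m
p = rng.uniform(0.0, 2 * np.pi, (L, L))  # phases phi_m

def local_energy(i, j):
    """On-site terms plus coupling to the four nearest neighbours of (i, j)."""
    e = A * D[i, j] ** 2 + 0.5 * B * D[i, j] ** 4
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = (i + di) % L, (j + dj) % L
        e += C * D[i, j] * D[ni, nj] * np.cos(p[i, j] - p[ni, nj])
    return e

for sweep in range(1000):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old_e, old_D, old_p = local_energy(i, j), D[i, j], p[i, j]
        D[i, j] = max(0.0, old_D + rng.normal(0.0, 0.1))        # propose amplitude
        p[i, j] = (old_p + rng.normal(0.0, 0.5)) % (2 * np.pi)  # propose phase
        dE = local_energy(i, j) - old_e
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            D[i, j], p[i, j] = old_D, old_p                     # reject: restore
```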
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both the gradient and the Hessian of it. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
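For context, the two-simulation structure mentioned above is characteristic of SPSA-style gradient estimation. The following is a generic sketch, not the paper's algorithm; the objective and step-size constants are stand-ins:

```python
# Generic two-simulation SPSA gradient estimate: perturb all parameters
# simultaneously with independent +/-1 Bernoulli components delta, run one
# simulation at theta + c*delta and one at theta - c*delta, and divide the
# difference elementwise by 2*c*delta. The quadratic objective below is a
# stand-in for a long-run-average simulation cost.
import numpy as np

rng = np.random.default_rng(1)

def spsa_gradient(f, theta, c=0.1):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + c * delta) - f(theta - c * delta)) / (2.0 * c * delta)

# Usage: stochastic gradient descent on a noisy quadratic with minimum at 2.
f = lambda th: np.sum((th - 2.0) ** 2) + rng.normal(0.0, 0.01)
theta = np.zeros(3)
for k in range(1, 2001):
    theta -= (0.05 / k ** 0.602) * spsa_gradient(f, theta)  # decaying step size
print(theta)  # approaches [2, 2, 2]
```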
Abstract:
On the basis of Monte Carlo calculations of 2,2-dimethylpropane (neopentane), n-pentane, and 2,2-dimethylbutane (neohexane) at several temperatures, thermodynamic properties and radial distribution functions as well as dimerization and bonding energy distribution functions are reported for both liquid and glassy states. Changes in the radial distribution functions on cooling depend on whether the groups are accessible (peripheral) or inaccessible. Peaks in the radial distribution functions corresponding to peripheral groups do not shift to lower distances on cooling and at times display a large increase in the intensity of the first peak. The peaks due to inaccessible groups, on the other hand, shift to lower distances on cooling. The magnitude of the reorientational contribution in determining the resulting structure of the glass is estimated for the different hydrocarbon molecules investigated. The reorientational contribution is highest for neopentane (26%) followed by isopentane (24%), neohexane (22%), and n-pentane (0%). It appears that molecular geometry has an important role in determining the magnitude of the reorientational contribution to the structure of the glass.
Abstract:
We develop in this article the first actor-critic reinforcement learning algorithm with function approximation for a problem of control under multiple inequality constraints. We consider the infinite horizon discounted cost framework in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample path functions. We apply the Lagrange multiplier method to handle the inequality constraints. Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that makes a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal policy.
Abstract:
Many physical problems can be modeled by scalar, first-order, nonlinear, hyperbolic, partial differential equations (PDEs). The solutions to these PDEs often contain shock and rarefaction waves, where the solution becomes discontinuous or has a discontinuous derivative, and one can encounter difficulties using traditional finite difference methods to solve these equations. In this paper, we introduce a numerical method for solving first-order scalar wave equations. The method involves solving ordinary differential equations (ODEs) to advance the solution along the characteristics and to propagate the characteristics in time. Shocks are created when characteristics cross, and the shocks are then propagated by applying analytical jump conditions. New characteristics are inserted in spreading rarefaction fans. New characteristics are also inserted when values on adjacent characteristics lie on opposite sides of an inflection point of a nonconvex flux function. Solutions along characteristics are propagated using a standard fourth-order Runge-Kutta ODE solver. Shock waves are kept perfectly sharp. In addition, shock locations and velocities are determined without analyzing smeared profiles or taking numerical derivatives. In order to test the numerical method, we study analytically a particular class of nonlinear hyperbolic PDEs, deriving closed-form solutions for certain special initial data. We also find bounded, smooth, self-similar solutions using group theoretic methods. The numerical method is validated against these analytical results. In addition, we compare the errors in our method with those of the Lax-Wendroff method for both convex and nonconvex flux functions. Finally, we apply the method to solve a PDE with a convex flux function describing the development of a thin liquid film on a horizontally rotating disk, and a PDE with a nonconvex flux function arising in a problem concerning flow in an underground reservoir.
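To make the characteristic picture concrete, here is a minimal sketch for the inviscid Burgers equation (a standard convex-flux example; this bare-bones code is ours and is not the paper's method):

```python
# Characteristics for the inviscid Burgers equation u_t + u*u_x = 0
# (convex flux f(u) = u**2/2): each characteristic carries the constant
# value u0(x0) and moves at speed f'(u) = u, so x(t) = x0 + t*u0(x0).
# Characteristics first cross (a shock forms) at t* = -1 / min_x u0'(x).
import numpy as np

x0 = np.linspace(-3.0, 3.0, 401)
u0 = np.exp(-x0 ** 2)               # smooth initial data

du0 = np.gradient(u0, x0)           # numerical u0'(x)
t_shock = -1.0 / du0.min()          # shock formation time
print(f"first shock forms at t* ~= {t_shock:.3f}")

# Before t*, the exact solution is recovered along the characteristics:
t = 0.5 * t_shock
x_t = x0 + t * u0                   # characteristic positions at time t
# (x_t, u0) samples u(x, t); x_t is still monotone because t < t*.
assert np.all(np.diff(x_t) > 0)
```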