935 results for General obesity
Abstract:
In this paper, we re-examine the relationship between overweight and labour market success, using indicators of individual body composition along with the body mass index (BMI). We use a Finnish dataset in which weight, height, fat mass and waist circumference are not self-reported, but obtained as part of an overall health examination. We find that waist circumference, but not weight or fat mass, has a negative effect on women's wages, whereas all measures of obesity have negative effects on women's employment probabilities. For men, the only obesity measure significantly associated with employment probability is fat mass. One interpretation of these findings is that the negative effect of overweight on wages runs through the discrimination channel, whereas its negative effect on employment has more to do with ill health. All in all, measures of body composition provide a more refined picture of the effects of obesity on wages and employment.
Abstract:
The Shannon cipher system is studied in the context of general sources using a notion of computational secrecy introduced by Merhav and Arikan. Bounds are derived on the limiting exponents of guessing moments for general sources. The bounds are shown to be tight for iid, Markov, and unifilar sources, thus recovering some known results. A close relationship is established between error exponents and correct-decoding exponents for fixed-rate source compression on the one hand, and exponents of guessing moments on the other.
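For context, the classical iid special case recovered by such bounds is Arikan's guessing-moment exponent, stated here as background (the cipher-system version studied in the paper additionally involves the key rate):

    % Optimal guessing function G*, Rényi entropy of order 1/(1+rho):
    \lim_{n\to\infty} \frac{1}{n} \log \mathbb{E}\!\left[ G^{*}(X^{n})^{\rho} \right]
      = \rho\, H_{\frac{1}{1+\rho}}(X), \qquad \rho > 0 .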
Abstract:
In this article, a general definition of the process average temperature is developed, and the impact of the various dissipative mechanisms on the 1/COP of the chiller is evaluated. The present component-by-component black-box analysis removes the assumptions regarding the generator outlet temperature(s) and the component effective thermal conductances. Mass transfer resistance is also incorporated into the absorber analysis to arrive at a more realistic upper limit to the cooling capacity. Finally, the theoretical foundation for the absorption chiller T-s diagram is derived. This diagrammatic approach requires only the inlet and outlet conditions of the chiller components and can be employed as a practical tool for system analysis and comparison. (C) 2000 Elsevier Science Ltd and IIR. All rights reserved.
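One common form of such a process average temperature is the entropic mean, shown below as an illustrative assumption (it need not be the paper's exact construction):

    % Entropic average temperature of a heat-exchange process;
    % the second equality holds when dh = T ds (constant pressure, no work):
    \bar{T} \;=\; \frac{\int T\,\mathrm{d}s}{\int \mathrm{d}s}
          \;=\; \frac{\Delta h}{\Delta s} .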
Abstract:
A general method is described for the generation of base pairs in a curved DNA structure for any prescribed values of the helical parameters: unit rise (h), unit twist (θ), wedge roll (θ_R), wedge tilt (θ_T), propeller twist (θ_p) and displacement (D). Its application to the generation of uniform as well as curved structures is illustrated with some representative examples. An interesting relationship is observed between the helical twist (θ), the base-pair parameters θ_x and θ_y, and the wedge parameters θ_R and θ_T, which has important consequences for the description and estimation of DNA curvature.
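A minimal sketch of the frame-stacking idea in Python: each base-pair coordinate frame is obtained from the previous one by tilt, roll and twist rotations followed by a rise translation. Axis order and sign conventions vary between schemes, so this follows one common choice rather than the paper's exact parameterization (propeller twist and displacement are omitted):

    import numpy as np

    def rot(axis, deg):
        # Rotation matrix about a unit axis (Rodrigues' formula).
        a = np.asarray(axis, float) / np.linalg.norm(axis)
        K = np.array([[0, -a[2], a[1]],
                      [a[2], 0, -a[0]],
                      [-a[1], a[0], 0]])
        t = np.radians(deg)
        return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

    def generate_frames(n_bp, rise=3.4, twist=36.0, roll=0.0, tilt=0.0):
        # Stack n_bp base-pair frames: per step, tilt about x, roll about y,
        # twist about z, then translate by the rise along the local z axis.
        step = rot([0, 0, 1], twist) @ rot([0, 1, 0], roll) @ rot([1, 0, 0], tilt)
        frames, origins = [np.eye(3)], [np.zeros(3)]
        for _ in range(n_bp - 1):
            frames.append(frames[-1] @ step)
            origins.append(origins[-1] + frames[-1][:, 2] * rise)
        return origins, frames

    # A constant nonzero roll (e.g. roll=5.0) bends the helix axis,
    # producing a uniformly curved structure.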
Abstract:
Large networks occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear while the objective function is nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints arising from the abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

A number of common features pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (of power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple the variables associated with nodes. Invariably, the variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables x and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It modifies the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve such optimization problems. Two types of algorithm have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. They are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The aim was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
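A minimal sketch of the two-stage structure in Python, for an equality-constrained problem min f(x) subject to A x = b, with A a node incidence matrix. The quartic arc costs and all names here are illustrative, not from the paper; note how the same Jacobian H serves both stages:

    import numpy as np

    def two_stage_solve(grad, hess, A, b, x0, iters=50, tol=1e-10):
        # Newton iteration on the optimality conditions of
        #   min f(x)  subject to  A x = b.
        # Stage one computes the primal step dx, stage two the multiplier
        # step dlam; both reuse the same Jacobian H.
        x, lam = np.asarray(x0, float), np.zeros(A.shape[0])
        for _ in range(iters):
            g, H = grad(x), hess(x)
            r_d, r_p = g + A.T @ lam, A @ x - b
            S = A @ np.linalg.solve(H, A.T)                # Schur complement of H
            dlam = np.linalg.solve(S, r_p - A @ np.linalg.solve(H, r_d))
            dx = -np.linalg.solve(H, r_d + A.T @ dlam)
            x, lam = x + dx, lam + dlam
            if max(np.abs(dx).max(), np.abs(r_p).max()) < tol:
                break
        return x, lam

    # Toy network: 3 nodes, 3 arcs, convex arc cost f = sum(x**2/2 + x**4/4).
    A = np.array([[1., 0., -1.],
                  [-1., 1., 0.]])          # incidence rows for nodes 1 and 2
    b = np.array([1., -1.])                # node 1 injects, node 2 withdraws
    x_opt, lam_opt = two_stage_solve(lambda x: x + x**3,
                                     lambda x: np.diag(1 + 3 * x**2),
                                     A, b, np.zeros(3))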
Abstract:
Forecasting of daily flow in the general planning of municipal water and sewage works.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the sub-areas so formed. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours, with nearness measured as Euclidean distance. With all methods the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the variation among the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), and thus offers the best potential for further studies. CART could even be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
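A minimal sketch of the residual-kriging localization in Python: the global model's prediction at a target point is corrected by ordinary kriging of its residuals over the nearest neighbours. The exponential variogram and all parameter values are illustrative assumptions, not the study's fitted model:

    import numpy as np

    def kriged_residual(coords, resid, target, n_neighbors=30,
                        sill=1.0, rng=30.0):
        # Ordinary kriging of global-model residuals at a target point,
        # using the n nearest observations (Euclidean distance).
        d = np.linalg.norm(coords - target, axis=1)
        idx = np.argsort(d)[:n_neighbors]
        pts, r, n = coords[idx], resid[idx], len(idx)
        gamma = lambda h: sill * (1.0 - np.exp(-h / rng))    # semivariogram
        H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        G = np.ones((n + 1, n + 1))
        G[:n, :n], G[-1, -1] = gamma(H), 0.0
        g = np.append(gamma(d[idx]), 1.0)    # unbiasedness: weights sum to 1
        w = np.linalg.solve(G, g)[:n]
        return w @ r

    # Localized estimate = global_model(target) + kriged_residual(...).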
Abstract:
Stability results are given for a class of feedback systems arising from the regulation of time-varying discrete-time systems using optimal infinite-horizon and moving-horizon feedback laws. The class is characterized by joint constraints on the state and the control, a general nonlinear cost function, and nonlinear equations of motion possessing two special properties. It is shown that weak conditions on the cost function and the constraints are sufficient to guarantee uniform asymptotic stability of both the optimal infinite-horizon and the moving-horizon feedback systems. The infinite-horizon cost associated with the moving-horizon feedback law approaches the optimal infinite-horizon cost as the moving horizon is extended.
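To illustrate the receding-horizon structure only, a Python sketch for the unconstrained linear-quadratic special case (the paper itself treats constrained, nonlinear, time-varying systems); at each step a finite-horizon problem is solved and only the first control is applied:

    import numpy as np

    def moving_horizon_gain(A, B, Q, R, N):
        # Backward Riccati recursion over a horizon of N steps;
        # returns the first-step gain K of the moving-horizon law u = -K x.
        P = Q.copy()
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    # Closed loop: x_{k+1} = (A - B K) x_k, recomputing K as the horizon
    # recedes; for time-invariant A, B the gain is the same at every step.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    K = moving_horizon_gain(A, B, np.eye(2), np.eye(1), N=20)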
Abstract:
The concept of a short-range strong spin-two (f) field, mediated by massive f-mesons and interacting directly with hadrons, was introduced along with the infinite-range (g) field in the early seventies. In the present review of this growing area (often referred to as strong gravity) we give a general relativistic treatment in terms of Einstein-type (non-abelian gauge) field equations with a coupling constant G_f ≃ 10^38 G_N (G_N being the Newtonian constant) and a cosmological term λ_f f_μν (f_μν is the strong-gravity metric and λ_f ∼ 10^28 cm^−2 is related to the f-meson mass). The solutions of the field equations, linearized over a de Sitter (uniformly curved) background, are capable of having connections with the internal symmetries of hadrons and of yielding mass formulae of SU(3) or SU(6) type. The hadrons emerge as de Sitter "microuniverses" intensely curved within (radius of curvature ∼10^−14 cm).

The study of spinor fields in the context of strong gravity has led to Heisenberg's non-linear spinor equation with a fundamental length ∼2 × 10^−14 cm. Furthermore, one finds a repulsive spin-spin interaction when two identical spin-½ particles are in the parallel configuration, and a connection between the weak interaction and strong gravity.

Various other consequences of strong gravity embrace black-hole (solitonic) solutions representing hadronic bags with possible quark confinement, Regge-like relations between spins and masses, connections with monopoles and dyons, quantum geons and friedmons, hadronic temperature, the prevention of gravitational singularities, a physical basis for Dirac's two-metric and large numbers hypotheses, and a projected unification with the other basic interactions through extended supergravity.
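A schematic rendering of the field equations described above (sign and index conventions vary between formulations of strong gravity):

    % Einstein-type strong-gravity equations with cosmological term;
    % f_{\mu\nu} is the strong-gravity metric, \lambda_f \sim 10^{28} cm^{-2}
    % is set by the f-meson mass, as in the text above.
    R_{\mu\nu} - \tfrac{1}{2} f_{\mu\nu} R + \lambda_f\, f_{\mu\nu}
      = -\,8\pi G_f\, T_{\mu\nu},
    \qquad G_f \simeq 10^{38}\, G_N .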
Abstract:
Side chain bromination of aromatic amidomethylated compounds yields aldehydes.
Abstract:
AIMS A low level of high-density lipoprotein cholesterol (HDL-C) is an independent, powerful predictor of coronary heart disease (CHD). Discoidal preβ-HDL particles and large HDL2 particles are the primary cholesterol acceptors in reverse cholesterol transport, a key anti-atherogenic HDL mechanism. The quality of HDL subspecies may therefore provide better markers of HDL functionality than HDL-C alone. We aimed: I) to study whether alterations in the HDL subspecies profile exist in low-HDL-C subjects; II) to explore the relationship of any changes in the HDL subspecies profile to atherosclerosis and the metabolic syndrome; and III) to elucidate the impact of genetics and acquired obesity on the HDL subspecies distribution. SUBJECTS The study consisted of three cohorts: A) Finnish families with low HDL-C and premature CHD (Study I: 67 subjects with familial low HDL-C and 64 controls; Study II: 83 subjects with familial low HDL-C, 65 family members with normal HDL-C, and 133 controls); B) a cohort of 113 low- and 133 high-HDL-C subjects from the Health 2000 Health Examination Survey carried out in Finland (Study III); and C) a Finnish cohort of healthy young adult twins (52 monozygotic and 89 dizygotic pairs) (Study IV). RESULTS AND CONCLUSIONS The subjects with familial low HDL-C had a lower preβ-HDL concentration than did controls, and the low-HDL-C subjects displayed a dramatic reduction (50-70%) in the proportion of large HDL2b particles. The subjects with familial low HDL-C had increased carotid atherosclerosis measured as intima-media thickness (IMT), and HDL2b particles correlated negatively with IMT. The reduction in both key cholesterol acceptors, preβ-HDL and HDL2 particles, supports the concept that impaired reverse cholesterol transport contributes to the higher CHD risk in low-HDL-C subjects. The family members with normal HDL-C and the young adult twins with acquired obesity showed a reduction in large HDL2 particles and an increase in small HDL3 particles, which may be the first changes leading to the lowering of HDL-C. The low-HDL-C subjects had a higher serum apolipoprotein E (apoE) concentration, which correlated positively with the components of the metabolic syndrome (waist circumference, TG, and glucose), highlighting the need for a better understanding of apoE metabolism in human atherosclerosis. In the twin study, the increase in small HDL3b particles was associated with obesity independently of genetic effects. The heritability estimates of 73% for HDL-C and 46 to 63% for the HDL subspecies, however, demonstrated a strong genetic influence. These results suggest that the relationship between obesity and lipoproteins depends on different elements in each subject. Finally, instead of merely elevating HDL-C, large HDL2 particles and discoidal preβ-HDL particles may provide beneficial targets for HDL-targeted therapy.
Abstract:
In rapid parallel magnetic resonance imaging, image reconstruction is a challenging problem. Here, a novel image reconstruction technique for data acquired along any general trajectory, in a neural network framework, called ``Composite Reconstruction And Unaliasing using Neural Networks'' (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of the aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt from acquisitions consisting of densely sampled low frequencies, with neural networks used as the machine learning tool; the learnt transformation then yields the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors are obtained with radial trajectories. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
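The abstract does not give the network architecture; as a deliberately simplified stand-in for the calibration idea, here is a Python sketch in which a linear map (one global set of coil weights, a hypothetical placeholder for the actual neural network) is learnt from the densely sampled low-frequency calibration and then applied to the aliased coil images of an actual acquisition:

    import numpy as np

    def learn_unaliasing(aliased_calib, target_calib):
        # Learn a linear map from aliased coil images (n_coils, H, W) to
        # the alias-free image (H, W), using low-frequency calibration
        # data.  A stand-in for CRAUNN's network, not the method itself.
        X = aliased_calib.reshape(aliased_calib.shape[0], -1).T  # pixels x coils
        w, *_ = np.linalg.lstsq(X, target_calib.ravel(), rcond=None)
        return w

    def apply_unaliasing(aliased, w):
        # Apply the learnt combination to a new aliased acquisition.
        X = aliased.reshape(aliased.shape[0], -1).T
        return (X @ w).reshape(aliased.shape[1:])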
Abstract:
Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide, such as fluorine-18 (18F). Development of a new radiotracer requires an appropriate radiosynthesis method, the most common of which with 18F is nucleophilic substitution with [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride and the structure of the target compound, in addition to the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of a tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out both in polar aprotic and in protic solvents (tertiary alcohols). Assessment of the 18F-fluorinated products by mass spectrometry (MS), in addition to conventional radiochromatographic methods, was studied using the radiosynthesis of 4-[18F]fluoro-N-[2-[1-(2-methoxyphenyl)-1-piperazinyl]ethyl-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. Labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor. Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction with a mass-to-charge ratio (m/z) of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. The ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. Thus, it was concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT by PET.
Abstract:
A Geodesic Constant Method (GCM) is outlined which provides a common approach to ray tracing on quadric cylinders in general and yields, in closed form, all the surface ray-geometric parameters required in the UTD mutual coupling analysis of conformal antenna arrays. The approach permits the incorporation of a shaping parameter, allowing quadric cylindrical surfaces of the desired sharpness/flatness to be modelled with a common set of equations. As an illustration of the applicability of the GCM, the mutual admittance between slots on a general parabolic cylinder is obtained.
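For intuition only, the simplest special case in Python: on a circular cylinder the geodesic between two points is found by developing (unrolling) the surface onto a plane, where it becomes a straight line. The general quadric case treated by the GCM is more involved; this fragment is not the paper's method:

    import numpy as np

    def geodesic_length_circular_cylinder(a, phi1, z1, phi2, z2):
        # Unroll the cylinder of radius a onto a plane: the geodesic maps
        # to a straight line, so its length is a plane distance.
        dphi = (phi2 - phi1 + np.pi) % (2 * np.pi) - np.pi   # shorter wrap
        return np.hypot(a * dphi, z2 - z1)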