939 results for curve


Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE Public health organizations recommend that preschool-aged children accumulate at least 3 h of physical activity (PA) daily. Objective monitoring using pedometers offers an opportunity to measure preschoolers' PA and assess compliance with this recommendation. The purpose of this study was to derive step-based recommendations consistent with the 3 h PA recommendation for preschool-aged children. METHOD The study sample comprised 916 preschool-aged children, aged 3 to 6 years (mean age = 5.0 ± 0.8 years). Children were recruited from kindergartens located in Portugal between 2009 and 2013. Children wore an ActiGraph GT1M accelerometer that measured PA intensity and steps per day simultaneously over a 7-day monitoring period. Receiver operating characteristic (ROC) curve analysis was used to identify the daily step count threshold associated with meeting the daily 3 h PA recommendation. RESULTS A significant correlation was observed between minutes of total PA and steps per day (r = 0.76, p < 0.001). The optimal step count for ≥3 h of total PA was 9099 steps per day (sensitivity 90%, specificity 66%), with an area under the ROC curve of 0.86 (95% CI: 0.84 to 0.88). CONCLUSION Preschool-aged children who accumulate fewer than 9000 steps per day may be considered insufficiently active.
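The cutoff-derivation step described above can be sketched as follows: a toy implementation on made-up data (the study's 916-child sample is not reproduced here). For each candidate step count, sensitivity and specificity for predicting "meets the 3 h recommendation" are computed, and the cutoff maximising Youden's J = sensitivity + specificity − 1 is selected.

```python
def best_step_cutoff(steps, meets_rec):
    """steps: daily step counts; meets_rec: 1 if the child met 3 h of PA."""
    pos = sum(meets_rec)
    neg = len(meets_rec) - pos
    best_j, best_cutoff = -1.0, None
    for c in sorted(set(steps)):
        # Classify "active" as steps >= c, then score the rule.
        tp = sum(1 for s, y in zip(steps, meets_rec) if s >= c and y == 1)
        tn = sum(1 for s, y in zip(steps, meets_rec) if s < c and y == 0)
        j = tp / pos + tn / neg - 1  # Youden's J statistic
        if j > best_j:
            best_j, best_cutoff = j, c
    return best_cutoff, best_j

# Hypothetical data: more active children tend to take more steps.
steps     = [4000, 5500, 7000, 8200, 9100, 9500, 10200, 12000]
meets_rec = [0,    0,    0,    1,    1,    1,    1,     1]
cutoff, j = best_step_cutoff(steps, meets_rec)
```

ROC software additionally reports the area under the curve, which the study used to summarise discriminative ability.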

ac susceptibility and electrical resistivity studies on polycrystalline Fe80-xNixCr20 (21 ≤ x ≤ 30) alloys, with x = 21, 23, 26, and 30, between 4.2 and 80 K, are reported. A previous dc magnetization study indicated the presence of ferro-spin-glass mixed-phase behavior in the x = 23 and 26 alloys, while the alloys with x = 21 and 30 were found to be spin-glass and ferromagnetic, respectively. The present ac susceptibility results support this picture. In the electrical resistivity study, a low-temperature minimum in the resistivity-temperature curve is observed in all the alloys except the ferromagnetic one.

ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) in which a dedicated heavy-ion detector exploits the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. Components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids, and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1%, and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems arising during the learning curve of the project were solved, material problems, such as defective chip cables and sensors, appeared to cause most of the assembly rejections. These problems were typically seen in tests as too many individual channel failures. In contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three sensor manufacturers proved to have lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages usually lie outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.

Masonry under compression is affected by the properties of its constituents and their interfaces. In spite of extensive investigations of the behaviour of masonry under compression, the information in the literature cannot be regarded as comprehensive, owing to ongoing inventions of new-generation products, for example, polymer-modified thin-layer mortared masonry and drystack masonry. As comprehensive experimental studies are very expensive, an analytical model inspired by damage mechanics is developed and applied in this paper to the prediction of the compressive behaviour of masonry. The model incorporates a parabolic, progressively softening stress-strain curve for the units and a progressively stiffening stress-strain curve, up to a threshold strain, for the combined mortar and unit-mortar interfaces. The model simulates the mutual constraints imposed by each of these constituents through their respective tensile and compressive behaviour and volumetric changes. The advantage of the model is that it requires only the properties of the constituents, considers masonry as a continuum, and computes the average properties of composite masonry prisms/wallettes; it does not require discretisation of the prism or wallette as in finite element methods. The capability of the model to capture the phenomenological behaviour of masonry, with appropriate elastic response, stiffness degradation and post-peak softening, is presented through numerical examples. The fitting of the experimental data to the model parameters is demonstrated through calibration against selected test data on units and mortar from the literature; the calibrated model is shown to predict quite well the experimentally determined responses of masonry built using the corresponding units and mortar. Through a series of sensitivity studies, the model is also shown to predict the masonry strength appropriately for changes to the properties of the units and mortar, the mortar joint thickness, and the ratio of unit height to mortar joint thickness. The unit strength is shown to affect the masonry strength significantly. Although the mortar strength has only a marginal effect, reduction in mortar joint thickness is shown to have a profound effect on the masonry strength. The results obtained from the model are compared with the various provisions of the Australian Masonry Structures Standard AS3700 (2011) and Eurocode 6.
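The two constitutive ingredients named above can be sketched with assumed generic forms (the paper's exact expressions, parameters and symbols are not reproduced): a parabolic softening curve for the unit, here a Hognestad-type parabola, and a progressively stiffening curve for the mortar/interface up to a threshold strain.

```python
def unit_stress(eps, f_c, eps0):
    """Parabolic unit response: peak stress f_c at strain eps0, then
    progressive softening (Hognestad-type form, an assumption)."""
    x = eps / eps0
    return f_c * (2.0 * x - x * x)

def mortar_stress(eps, E0, k, eps_t):
    """Progressively stiffening mortar/interface response: the secant
    stiffness grows with strain until the threshold strain eps_t, after
    which the stress is held (a simplifying assumption)."""
    e = min(eps, eps_t)
    return E0 * e * (1.0 + k * e / eps_t)

# Peak and post-peak behaviour of the unit curve:
peak = unit_stress(0.002, 20.0, 0.002)   # stress at eps0 equals f_c
post = unit_stress(0.003, 20.0, 0.002)   # softened: below the peak
```

In the paper's model these two curves act in parallel with compatibility constraints; here they are shown only in isolation.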

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit obtained by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and second papers of this thesis study properties of SLE. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that these will be well described by Loewner evolutions with random driving forces.
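An SLE(κ) trace can be sampled numerically with the standard vertical-slit discretization of the Loewner equation (an assumption for illustration: this generic scheme is not part of the thesis's machinery). The driving function is W_t = √κ · B_t; the tip at each time is recovered by composing the inverse incremental slit maps.

```python
import cmath
import math
import random

def sle_trace(kappa, n_steps=200, dt=0.005, seed=1):
    """Sample points of an SLE(kappa) trace in the upper half-plane."""
    rng = random.Random(seed)
    # Driving function W_t = sqrt(kappa) * Brownian motion on a time grid.
    W = [0.0]
    for _ in range(n_steps):
        W.append(W[-1] + math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0))

    def upper_sqrt(z):
        # Square-root branch taking values in the closed upper half-plane.
        w = cmath.sqrt(z)
        return w if w.imag >= 0 else -w

    trace = []
    for k in range(1, n_steps + 1):
        # Tip at time k*dt: start at the driving value and apply the
        # inverse incremental slit maps back to time zero.
        z = complex(W[k], 0.0)
        for j in range(k, 0, -1):
            z = W[j] + upper_sqrt((z - W[j]) ** 2 - 4.0 * dt)
        trace.append(z)
    return trace

trace = sle_trace(kappa=2.0, n_steps=100)
```

This direct composition is O(n²) in the number of steps; it is adequate for a sketch, while production samplers use faster map evaluations.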

The multiplier ideals of an ideal in a regular local ring form a family of ideals parametrized by non-negative rational numbers. As the rational number increases, the corresponding multiplier ideal remains unchanged until at some point it gets strictly smaller. A rational number where this kind of diminishing occurs is called a jumping number of the ideal. In this manuscript we give an explicit formula for the jumping numbers of a simple complete ideal in a two-dimensional regular local ring. In particular, we obtain a formula for the jumping numbers of an analytically irreducible plane curve. We then show that the jumping numbers determine the equisingularity class of the curve.
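As a concrete illustration, consider the simplest case (the classical formula, used here as an assumption rather than the manuscript's general result): for the irreducible plane curve x^a = y^b with gcd(a, b) = 1 and a single characteristic pair, the jumping numbers in (0, 1] are the values i/a + j/b with integers i, j ≥ 1, together with 1 itself.

```python
from fractions import Fraction

def jumping_numbers_one_pair(a, b):
    """Jumping numbers in (0, 1] for the plane curve x^a = y^b with
    gcd(a, b) = 1 (classical one-characteristic-pair formula, assumed
    here for illustration)."""
    nums = {Fraction(i, a) + Fraction(j, b)
            for i in range(1, a + 1) for j in range(1, b + 1)}
    nums = {q for q in nums if q < 1}
    nums.add(Fraction(1))          # 1 is always a jumping number
    return sorted(nums)

# The ordinary cusp x^2 = y^3: the smallest jumping number (the log
# canonical threshold) is 1/2 + 1/3 = 5/6.
cusp = jumping_numbers_one_pair(2, 3)
```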

In this paper the method of ultraspherical polynomial approximation is applied to study the steady-state response in forced oscillations of a third-order non-linear system. The non-linear function is expanded in ultraspherical polynomials and the expansion is restricted to the linear term. The equation for the response curve is obtained by using the linearized equation and the results are presented graphically. The agreement between the approximate solution and the analog computer solution is satisfactory. The problem of stability is not dealt with in this paper.
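The linearization step can be sketched numerically. Assuming the Chebyshev case (the λ = 0 member of the ultraspherical family) and a cubic nonlinearity f(x) = x³ over an amplitude interval [−A, A] (both are assumptions for illustration; the paper treats a general third-order system), the retained linear term has coefficient c₁/A, where c₁ is the first Chebyshev coefficient, computable by Gauss-Chebyshev quadrature:

```python
import math

def equivalent_stiffness(f, A, n=200):
    """Linear coefficient of the Chebyshev expansion of f over [-A, A]:
    f(x) ~ k_eq * x + higher harmonics (linearization step)."""
    # c1 = (2/pi) * integral of f(A*u) * T1(u) / sqrt(1 - u^2) du over
    # [-1, 1], evaluated with n-node Gauss-Chebyshev quadrature.
    total = 0.0
    for i in range(1, n + 1):
        u = math.cos((2 * i - 1) * math.pi / (2 * n))
        total += f(A * u) * u          # T1(u) = u
    c1 = 2.0 * total / n
    return c1 / A

# For f(x) = x^3 the exact linearized coefficient is 3*A^2/4.
k = equivalent_stiffness(lambda x: x ** 3, A=2.0)
```

The linearized equation then yields the response curve in closed form, which is how the graphical results in the paper are obtained.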

The transport of glycine in vitro into the silk glands of the silkworm has been studied. Glycine accumulates inside the tissue to a concentration higher than that present outside, indicating an active transport mechanism. The kinetics of uptake show a biphasic curve and two apparent Km values for accumulation, 0.33 mM and 5.00 mM. The effect of inhibitors of energy metabolism on glycine transport is inconclusive. Exchange studies indicate the existence of two pools inside the gland, one that is easily removed by exchange and osmotic shock, and another that is not. The results obtained conform to the carrier model of Britten and McClure concerning the amino-acid pool in E. coli.
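The biphasic kinetics can be sketched as the sum of two Michaelis-Menten components, one per apparent Km (the Km values below are the ones reported in the abstract; the Vmax values are made-up placeholders):

```python
def glycine_uptake(S, Vmax1=1.0, Km1=0.33, Vmax2=4.0, Km2=5.0):
    """Biphasic uptake rate at substrate concentration S (mM): a
    high-affinity component (Km1 = 0.33 mM) plus a low-affinity
    component (Km2 = 5.0 mM). Vmax values are illustrative only."""
    return Vmax1 * S / (Km1 + S) + Vmax2 * S / (Km2 + S)

# At saturating concentrations the rate approaches Vmax1 + Vmax2;
# at low concentrations the high-affinity component dominates.
v_low = glycine_uptake(0.1)
v_sat = glycine_uptake(1000.0)
```

On an Eadie-Hofstee or Lineweaver-Burk plot such a sum appears as two linear segments, which is the usual signature of a biphasic curve.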

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas the genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or just to chance. The similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking from the null model two sequences whose best local alignment score is as good or better. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
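For context, the closed-form estimate used in routine homology searches is the Karlin-Altschul formula for gapless alignments (shown here as background; the thesis develops a different, tighter bound). It models the best local alignment score with an extreme-value distribution, with parameters λ and K depending on the scoring system:

```python
import math

def local_alignment_pvalue(score, m, n, lam, K):
    """Karlin-Altschul p-value: probability that two null-model
    sequences of lengths m and n attain a best local alignment score
    >= score. lam and K are scoring-system-dependent parameters."""
    expected_hits = K * m * n * math.exp(-lam * score)   # E-value
    return 1.0 - math.exp(-expected_hits)

# Illustrative parameter values (not fitted to any real scoring matrix):
p_50 = local_alignment_pvalue(50, 300, 300, lam=0.27, K=0.13)
p_60 = local_alignment_pvalue(60, 300, 300, lam=0.27, K=0.13)
```

The p-value decays roughly exponentially in the score, which is why even modest score differences change significance dramatically.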

The analysis of the dispersion equation for surface magnetoplasmons in the Faraday configuration, for the degenerate case in which the decay constants are equal, is given from the point of view of understanding the non-existence of the "degenerate modes". This analysis also shows that there exist well-defined "degenerate points" on the dispersion curve, with electromagnetic fields varying linearly over small distances away from the interface.

This study investigates the relationship between per capita carbon dioxide (CO2) emissions and per capita GDP in Australia, while controlling for the technological state as measured by multifactor productivity and for exports of black coal. Although technological progress seems to play a critical role in achieving the long-term goals of CO2 reduction and economic growth, empirical studies have often used a time trend to proxy technological change. However, as discoveries and the diffusion of new technologies may not progress smoothly with time, the assumption of deterministic technological progress may be incorrect in the long run. The use of multifactor productivity as a measure of the technological state therefore overcomes this limitation and provides practical policy directions. This study uses the recently developed bounds-testing approach, complemented by the Johansen-Juselius maximum likelihood approach and a reasonably large sample size, to investigate the cointegration relationship. Both techniques suggest that a cointegration relationship exists among the variables. The long-run and short-run coefficients of the CO2 emissions function are estimated using the ARDL approach. The empirical findings show evidence of an Environmental Kuznets Curve type relationship for per capita CO2 emissions in the Australian context. Technology as measured by multifactor productivity, however, is not found to be an influencing variable in the emissions-income trajectory.
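The inverted-U (Environmental Kuznets Curve) shape test can be sketched with a quadratic regression of emissions on income, on synthetic data (the study itself applies ARDL/bounds testing to Australian time series, which is not reproduced here). An EKC requires b1 > 0 and b2 < 0, with turning point −b1 / (2·b2).

```python
def ols_quadratic(x, y):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the 3x3 normal
    equations, solved by Gauss-Jordan elimination (no libraries)."""
    sx = [sum(v ** p for v in x) for p in range(5)]          # moments of x
    sy = [sum(yi * xi ** p for xi, yi in zip(x, y)) for p in range(3)]
    A = [[sx[0], sx[1], sx[2], sy[0]],
         [sx[1], sx[2], sx[3], sy[1]],
         [sx[2], sx[3], sx[4], sy[2]]]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for r in range(3):
            if r != i:
                f = A[r][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return A[0][3], A[1][3], A[2][3]

# Synthetic income/emissions data lying exactly on an inverted U.
income = [float(v) for v in range(1, 11)]
emissions = [-(v - 6.0) ** 2 + 40.0 for v in income]
b0, b1, b2 = ols_quadratic(income, emissions)
turning_point = -b1 / (2.0 * b2)
```

In actual EKC studies the regression is run in logs with ARDL dynamics; this static fit only illustrates the sign pattern and turning-point arithmetic.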

Using the same current-time (I-t) curves, electrochemical kinetic parameters are determined by two methods for the reduction of the Cu(II)-CyDTA complex: (a) using the ratio of the current at a given potential to the diffusion-controlled limiting current, and (b) a curve-fitting method. Analysis by method (a) shows that the rate-determining step involves only one electron, although the overall reduction of the complex involves two electrons, thereby suggesting stepwise reduction of the complex. The nature of the I-t curves suggests adsorption of intermediate species at the electrode surface. Under these circumstances, more reliable kinetic parameters can be obtained by method (a) than by method (b). Similar observations are made for the reduction of the Cu(II)-EDTA complex.

Background: Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot, and to determine the urgency of treatment in case of diagnosed infection or a red-hot swollen foot. Materials and Methods: The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. The average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values, using clinical assessment as the reference, and determined the sensitivity and specificity for the most optimal cutoff temperature value. The mean temperature difference between feet was analyzed using the Kruskal-Wallis test. Results: The most optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2°C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The most optimal cutoff skin temperature value for determining the urgency of treatment was a 1.35°C difference between the mean temperatures of the left and right foot (sensitivity, 89%; specificity, 78%). Conclusions: Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic value. The mean temperature difference between the two feet may be an adequate marker for determining the urgency of treatment.
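A minimal sketch of the two decision rules reported above. The cutoff values come from the abstract; the spot layout and the data format are assumptions for illustration.

```python
SPOT_CUTOFF_C = 2.2    # deg C: contralateral spot difference -> complication
MEAN_CUTOFF_C = 1.35   # deg C: whole-foot mean difference -> urgent treatment

def screen_feet(left_spots, right_spots):
    """left_spots, right_spots: temperatures (deg C) at corresponding
    predefined plantar spots of the left and right foot.
    Returns (complication_flag, urgency_flag)."""
    complication = any(abs(l - r) > SPOT_CUTOFF_C
                       for l, r in zip(left_spots, right_spots))
    mean_left = sum(left_spots) / len(left_spots)
    mean_right = sum(right_spots) / len(right_spots)
    urgent = abs(mean_left - mean_right) > MEAN_CUTOFF_C
    return complication, urgent
```

Note how a single hot spot can trip the complication rule without raising the whole-foot mean, which mirrors the study's finding that the two cutoffs answer different questions.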

Objective: To develop the DCDDaily, an instrument for objective and standardized clinical assessment of capacity in activities of daily living (ADL) in children with developmental coordination disorder (DCD), and to investigate its usability, reliability, and validity. Subjects: Five- to eight-year-old children with and without DCD. Main measures: The DCDDaily was developed based on a thorough review of the literature and extensive expert involvement. To investigate the usability (assessment time and feasibility), reliability (internal consistency and repeatability), and validity (concurrent and discriminant validity) of the DCDDaily, children were assessed with the DCDDaily and the Movement Assessment Battery for Children-2 Test, and their parents filled in the Movement Assessment Battery for Children-2 Checklist and the Developmental Coordination Disorder Questionnaire. Results: 459 children were assessed (DCD group, n = 55; normative reference group, n = 404). Assessment was possible within 30 minutes and in any clinical setting. For internal consistency, Cronbach's α = 0.83. The intraclass correlation was 0.87 for test-retest reliability and 0.89 for inter-rater reliability. Concurrent correlations with the Movement Assessment Battery for Children-2 Test and the questionnaires were ρ = −0.494, 0.239, and −0.284 (p < 0.001). Discriminant validity measures showed significantly worse performance in the DCD group than in the control group (mean (SD) score 33 (5.6) versus 26 (4.3), p < 0.001). The area under the receiver operating characteristic curve was 0.872; sensitivity and specificity were 80%. Conclusions: The DCDDaily is a valid and reliable instrument for clinical assessment of capacity in ADL that is feasible for use in clinical practice.
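The internal-consistency statistic reported above (Cronbach's α) can be sketched as follows, on a made-up item-score matrix (the DCDDaily's actual items and scores are not reproduced):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length
    (one score per child)."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per child across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(sample_var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var_sum / sample_var(totals))
```

When the items move together (high inter-item covariance), the total-score variance dominates the summed item variances and α approaches 1.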

In 1956 Whitham gave a nonlinear theory for computing the intensity of an acoustic pulse of arbitrary shape. The theory has been used very successfully in computing the intensity of the sonic bang produced by a supersonic plane. Gubkin [4] derived an approximate quasi-linear equation for the propagation of a short wave in a compressible medium. These two methods are essentially nonlinear approximations of the perturbation equations of the system of gas-dynamic equations in the neighborhood of a bicharacteristic curve (or ray) for weak unsteady disturbances superimposed on a given steady solution. In this paper we derive an approximate quasi-linear equation which approximates the perturbation equations in the neighborhood of a bicharacteristic curve for a weak pulse governed by a general system of first-order quasi-linear partial differential equations in m + 1 independent variables (t, x1, ..., xm), and we recover Gubkin's result as a particular case when the system consists of the equations of unsteady motion of a compressible gas. We also discuss the form of the approximate equation describing waves propagating upstream in an arbitrary multidimensional transonic flow.
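Schematically, theories of this kind reduce, along each ray, to a kinematic wave equation of Burgers type, u_t + u·u_x = 0 (a generic form assumed here for illustration, not the paper's exact equation). Its characteristic solution x = x0 + u0(x0)·t exhibits the wave steepening that produces the sonic bang: characteristics eventually cross and a shock forms.

```python
import math

def characteristic_positions(u0, x0s, t):
    """Positions at time t of characteristics starting at points x0s,
    for u_t + u*u_x = 0 with initial profile u0."""
    return [x + u0(x) * t for x in x0s]

def is_monotonic(xs):
    return all(a < b for a, b in zip(xs, xs[1:]))

u0 = lambda x: -math.sin(x)                    # illustrative pulse profile
x0s = [i * 0.1 for i in range(-30, 31)]
before = characteristic_positions(u0, x0s, 0.5)  # smooth: map still monotone
after = characteristic_positions(u0, x0s, 1.5)   # past breaking time t = 1
```

For this profile the breaking time is t = −1/min(u0′) = 1, so by t = 1.5 the characteristic map is no longer one-to-one.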