Abstract:
The 6-item Kessler Psychological Distress Scale (K6; Kessler et al., 2002) is a screener for psychological distress that has robust psychometric properties among adults. Given that a significant proportion of adolescents experience mental illness, there is a need for measures that accurately and reliably screen for mental disorders in this age group. This study examined the psychometric properties of the K6 in a large general population sample of adolescents (N = 4,434; mean age = 13.5 years; 44.6% male). Factor analyses were conducted to examine the dimensionality of the K6 in adolescents and to investigate sex-based measurement invariance. This study also evaluated the K6 as a predictor of scores on the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1997). The K6 demonstrated high levels of internal consistency, with the 6 items loading primarily on 1 factor. Consistent with previous research, females reported higher mean levels of psychological distress when compared with males. The identification of sex-based measurement noninvariance in the item thresholds indicated that these mean differences most likely represented reporting bias in the K6 items rather than true differences in the underlying psychological distress construct. The K6 was a fair to good predictor of abnormal scores on the SDQ, but predictive utility was relatively low among males. Future research needs to focus on refining and augmenting the K6 scale to maximize its utility in adolescents. (PsycINFO Database Record (c) 2015 APA, all rights reserved)
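The internal-consistency finding above corresponds to a standard reliability computation. As a minimal sketch (not the authors' code; the data below are hypothetical), one common index, Cronbach's alpha, can be computed for a 6-item screener as follows:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical K6-style data: 6 items scored 0-4
    rng = np.random.default_rng(0)
    base = rng.integers(0, 5, size=(100, 1))
    responses = np.clip(base + rng.integers(-1, 2, size=(100, 6)), 0, 4)
    print(f"alpha = {cronbach_alpha(responses):.2f}")

The dimensionality and invariance analyses reported above require a factor-analytic model (e.g., an ordinal CFA with group-specific thresholds), not just a reliability coefficient.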
Abstract:
Objective: To nationally trial the Primary Care Practice Improvement Tool (PC-PIT), an organisational performance improvement tool previously co-created with Australian primary care practices to increase their focus on relevant quality improvement (QI) activities. Design: The study was conducted from March to December 2015 with volunteer general practices from a range of Australian primary care settings. We used a mixed-methods approach in two parts. Part 1 involved staff in Australian primary care practices assessing how well they perceived their practice met (or did not meet) each of the 13 PC-PIT elements of high-performing practices, using a 1–5 Likert scale. In Part 2, two external raters conducted a practice visit to independently and objectively assess the subjective self-assessment from Part 1 against objective indicators for the 13 elements, using the same 1–5 Likert scale. Concordance between the raters was determined by comparing their ratings. In-depth interviews conducted during the practice visits explored practice managers' experiences and their perceived support and resource needs for undertaking organisational improvement in practice. Results: Data were available for 34 general practices participating in Part 1. For Part 2, independent practice visits and the inter-rater comparison were conducted for a purposeful sample of 19 of the 34 practices. Overall concordance between the two raters for each of the assessed elements was excellent. Three practice types across a continuum of higher- to lower-scoring practices were identified, each using the PC-PIT in a unique way. During the in-depth interviews, practice managers identified benefits of having additional QI tools that relate to the PC-PIT elements. Conclusions: The PC-PIT is an organisational performance tool that is acceptable, valid and relevant to our range of partners and the end users (general practices). Work is continuing with our partners and end users to embed the PC-PIT in existing organisational improvement programs.
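The abstract does not name the concordance statistic used; as a hedged sketch of one conventional choice for ordinal 1–5 ratings, quadratically weighted Cohen's kappa alongside exact agreement could be computed like this (the rater data below are hypothetical):

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings by two external raters for one PC-PIT element
    # across 19 practices, each on the 1-5 Likert scale.
    rng = np.random.default_rng(1)
    rater_a = rng.integers(1, 6, size=19)
    rater_b = np.clip(rater_a + rng.integers(-1, 2, size=19), 1, 5)

    exact = np.mean(rater_a == rater_b)
    kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    print(f"exact agreement = {exact:.2f}, weighted kappa = {kappa:.2f}")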
Abstract:
In this paper we have studied the flow of a micropolar fluid, whose constitutive equations were given by Eringen, in two-dimensional plane flow. In two notes, we have discussed the validity of the boundary condition v = aω and its effect on the entire flow field. We have restricted our study to the case in which Stokes' approximation is valid, i.e., slow motion, since it is difficult to uncouple the equations in the most general case.
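For reference, the field equations being uncoupled here are, in the standard Stokes-limit form found in the micropolar literature (background, not equations reproduced from the paper):

    \nabla \cdot \mathbf{v} = 0,
    -\nabla p + (\mu + \kappa)\,\nabla^{2}\mathbf{v} + \kappa\,\nabla \times \boldsymbol{\omega} = 0,
    \gamma\,\nabla^{2}\boldsymbol{\omega} + \kappa\,\nabla \times \mathbf{v} - 2\kappa\,\boldsymbol{\omega} = 0,

where v is the velocity, ω the microrotation, p the pressure, and μ, κ, γ material constants; the boundary condition v = aω discussed in the paper couples the two fields at the wall.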
Abstract:
It has been shown that the conventional practice of designing a compensated hot-wire amplifier with a fixed ceiling-to-floor ratio results in a considerable and unnecessary increase in noise level at compensation settings other than the optimum (which is at the maximum compensation at the highest frequency of interest). The optimum ceiling-to-floor ratio has been estimated to be between 1.5 and 2.0 ω_max M. Applying these considerations to an amplifier in which the ceiling-to-floor ratio is optimized at each compensation setting (for a given amplifier bandwidth) shows the usefulness of the method in improving the signal-to-noise ratio.
Abstract:
Chenodeoxycholic acid-based PET sensors for alkali metal ions have been immobilized on Merrifield resin and on Tentagel. The fluorescence of the sensor beads is enhanced upon binding of the cations. The modular nature of the sensor allows different sensors to be designed based on this concept.
Abstract:
In the (Bi,Pb)-Sr-Cu-O system we have examined many compositions that are either metallic or semiconducting. In the Bi2-xPbx(Ca,Sr)n+1CunO2n+4+δ system, we have established the superconducting properties of the n = 1 to 4 members. Tc increases from n = 1 to 3 and does not increase further at n = 4. In Bi2Ca1-xYxSr2Cu2Oy, Tc decreases with increasing x.
Abstract:
The Shannon cipher system is studied in the context of general sources using a notion of computational secrecy introduced by Merhav and Arikan. Bounds are derived on limiting exponents of guessing moments for general sources. The bounds are shown to be tight for i.i.d., Markov, and unifilar sources, thus recovering some known results. A close relationship is established between error exponents and correct-decoding exponents for fixed-rate source compression on the one hand and exponents of guessing moments on the other.
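For orientation, the benchmark that such guessing-moment exponents recover in the i.i.d. case is Arikan's single-letter characterization (stated here for the plain guessing problem, without the cipher-system constraints of the paper):

    \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\!\left[ G^{*}(X^{n})^{\rho} \right]
        = \rho \, H_{\frac{1}{1+\rho}}(X),
    \qquad
    H_{\alpha}(X) = \frac{1}{1-\alpha} \log \sum_{x} P_{X}(x)^{\alpha},

where G* denotes an optimal guessing order and H_α is the Rényi entropy of order α.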
Abstract:
In this article, a general definition of the process average temperature is developed, and the impact of the various dissipative mechanisms on the 1/COP of the chiller is evaluated. The present component-by-component black-box analysis removes the assumptions regarding the generator outlet temperature(s) and the component effective thermal conductances. Mass transfer resistance is also incorporated into the absorber analysis to arrive at a more realistic upper limit on the cooling capacity. Finally, the theoretical foundation for the absorption chiller T-s diagram is derived. This diagrammatic approach requires only the inlet and outlet conditions of the chiller components and can be employed as a practical tool for system analysis and comparison. (C) 2000 Elsevier Science Ltd and IIR. All rights reserved.
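For context, the reversible three-temperature limit against which the dissipative contributions to 1/COP are usually measured is the textbook bound (not a result derived in the paper):

    \mathrm{COP}_{\mathrm{rev}} \;=\; \left(1 - \frac{T_{0}}{T_{g}}\right) \frac{T_{e}}{T_{0} - T_{e}},

where T_g, T_e and T_0 are the generator, evaporator and heat-rejection (condenser/absorber) temperatures; in a finite-rate analysis of the kind described above, effective process average temperatures take the place of these reservoir temperatures.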
Abstract:
A general method is described for generating base pairs in a curved DNA structure for any prescribed values of the helical parameters: unit rise (h), unit twist (θ), wedge roll (θR), wedge tilt (θT), propeller twist (θP) and displacement (D). Its application to the generation of uniform as well as curved structures is illustrated with some representative examples. An interesting relationship is observed between the helical twist (θ), the base-pair parameters θx and θy, and the wedge parameters θR and θT, which has important consequences for the description and estimation of DNA curvature.
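As an illustrative sketch of such a generator (not the authors' implementation), the uniform case can be built by composing, per step, a twist θ about the local helix axis with a rise h along it; the wedge (θR, θT), propeller (θP) and displacement (D) parameters would enter as additional local rotations and offsets, omitted here:

    import numpy as np

    def rotation_z(angle_deg):
        """Rotation matrix about the (local) helix z-axis."""
        t = np.radians(angle_deg)
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def generate_frames(n_bp, h=3.4, twist=36.0):
        """Stack base-pair frames for a uniform helix: each step applies
        unit twist about z and unit rise h along the local z-axis."""
        origin, frame = np.zeros(3), np.eye(3)
        frames = [(origin.copy(), frame.copy())]
        for _ in range(n_bp - 1):
            frame = frame @ rotation_z(twist)                  # accumulate twist
            origin = origin + frame @ np.array([0.0, 0.0, h])  # rise along local z
            frames.append((origin.copy(), frame.copy()))
        return frames

    for pos, _ in generate_frames(4):
        print(np.round(pos, 2))

Introducing nonzero roll/tilt rotations between steps bends the local axis, which is what produces the curved structures described above.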
Abstract:
There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints, based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (such as power, water, messages or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared with the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has been embedded into the first one. This is called the total residue approach. It modifies the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems (both LAN and WAN) suggests the need for algorithms to solve these optimization problems. Two types of algorithms are proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. They are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
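A minimal sketch of the two-stage iteration on a toy equality-constrained network problem (hypothetical data and a simple dual-ascent multiplier update, not the paper's algorithm):

    import numpy as np

    # Toy problem: minimize f(x) = 0.5 x'Qx (convex injection costs)
    # subject to a flow-balance constraint A x = b.
    Q = np.diag([2.0, 1.0, 4.0])
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([10.0])

    lam = np.zeros(1)                       # Lagrange multipliers
    for _ in range(25):
        # Stage one: solve the stationarity conditions for the
        # problem variables with the multipliers held fixed.
        x = np.linalg.solve(Q, -A.T @ lam)
        # Stage two: update the multipliers from the constraint residual.
        lam += 0.5 * (A @ x - b)

    print("x =", x, "lambda =", lam)

In a Newton variant of stage one, the Jacobian of the stationarity system is formed once and, as noted above, the same matrix can be reused in the multiplier update.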
Abstract:
Forecasting daily flow in the general planning of municipal water and sewer utilities.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible with respect to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
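As a minimal sketch of the neighbourhood correction described above (synthetic data; a plain k-nearest-neighbour residual mean stands in for the variogram-based kriging weights):

    import numpy as np

    def local_correction(coords, residuals, query, k=30):
        """Correct a global-model prediction at `query` with the mean
        residual of its k nearest observations (Euclidean distance)."""
        d = np.linalg.norm(coords - query, axis=1)
        nearest = np.argsort(d)[:k]
        return residuals[nearest].mean()

    # Synthetic example: a spatially varying bias the global model misses
    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(500, 2))
    residuals = 0.05 * coords[:, 0] - 2.5 + rng.normal(0, 0.5, 500)

    query = np.array([80.0, 50.0])
    global_pred = 12.0                  # hypothetical global-model estimate
    print(f"corrected: {global_pred + local_correction(coords, residuals, query):.2f}")

Kriging replaces the unweighted mean with variogram-derived weights, but the role of the neighbourhood size k is the same as the over-30-neighbours threshold noted above.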