863 results for Liver Function Test


Relevance: 20.00%

Abstract:

In a much anticipated judgment, the Federal Circuit has sought to clarify the standards applicable in determining whether a claimed method constitutes patent-eligible subject matter. In Bilski, the Federal Circuit identified a test to determine whether a patentee has made claims that pre-empt the use of a fundamental principle or an abstract idea, or whether those claims cover only a particular application of a fundamental principle or abstract idea. It held that the sole test for determining subject matter eligibility for a claimed process under § 101 is whether the process: (1) is tied to a particular machine or apparatus, or (2) transforms a particular article into a different state or thing. The court termed this the “machine-or-transformation test.” In so doing, it overruled its earlier State Street decision to the extent that it deemed the “useful, concrete and tangible result” test inadequate to determine whether an alleged invention recites patent-eligible subject matter.

Relevance: 20.00%

Abstract:

In the student learning literature, the traditional view holds that when students are faced with a heavy workload, poor teaching, and content they cannot relate to – all important aspects of the learning context – they are more likely to adopt a surface approach to learning, owing to stress, lack of understanding and lack of perceived relevance of the content (Kreber, 2003; Lizzio, Wilson, & Simons, 2002; Ramsden, 1989; Ramsden, 1992; Trigwell & Prosser, 1991; Vermunt, 2005). For example, in studies involving health and medical sciences students, courses that utilised student-centred, problem-based approaches to teaching and learning were found to elicit a deeper approach to learning than the teacher-centred, transmissive approach (Patel, Groen, & Norman, 1991; Sadlo & Richardson, 2003). It is generally accepted that the line of causation runs from the learning context (or rather students’ self-reported data on the learning context) to students’ learning approaches. That is, it is the learning context as revealed by students’ self-reported data that elicits the associated learning behaviour.

However, other studies have found that the same teaching and learning environment can be perceived differently by different students. In a study of students’ perceptions of assessment requirements, Sambell and McDowell (1998) found that students “are active in the reconstruction of the messages and meanings of assessment” (p. 391), and that their interpretations are greatly influenced by their past experiences and motivations. In a qualitative study of Hong Kong tertiary students, Kember (2004) found that students using a surface learning approach reported heavier workloads than students using a deep learning approach. According to Kember, if students learn by extracting meaning from the content and making connections, they are more likely to see the higher-order intentions embodied in the content and the high cognitive abilities being assessed. On the other hand, if they rote-learn for the graded task, they fail to see the hierarchical relationships in the content and to connect the information. These rote-learners tend to see the assessment as requiring memorisation and regurgitation of a large amount of unconnected knowledge, which explains why they experience a high workload. Kember (2004) thus postulates that it is the learning approach that influences how students perceive workload. Campbell and her colleagues made a similar observation in their interview study of secondary students’ perceptions of teaching in the same classroom (Campbell et al., 2001).

The above discussion suggests that students’ learning approaches can influence their perceptions of assessment demands and other aspects of the learning context, such as the relevance of content and teaching effectiveness. In other words, perceptions of elements in the teaching and learning context are endogenously determined. This study investigated the causal relationships at the individual level between learning approaches and perceptions of the learning context in economics education. Students’ learning approaches and their perceptions of the learning context were measured. The elements of the learning context investigated were teaching effectiveness, workload and content. The authors are aware of the existence of other elements of the learning context, such as generic skills, goal clarity and career preparation. These aspects, however, were not within the scope of the present study and were therefore not investigated.

Relevance: 20.00%

Abstract:

Objectives: To explore whether people’s organ donation consent decisions occur via a reasoned and/or a social reaction pathway.

Design: We prospectively examined students’ and community members’ decisions to register consent on a donor register and to discuss organ donation wishes with family.

Method: Participants completed items assessing the theory of planned behaviour (TPB; attitude, subjective norm, perceived behavioural control (PBC)), the prototype/willingness model (PWM; donor prototype favourability/similarity, past behaviour), and proposed additional influences (moral norm, self-identity, recipient prototypes) for registering (N=339) and discussing (N=315) intentions/willingness. Participants self-reported their registering (N=177) and discussing (N=166) behaviour 1 month later. The utility of (1) the TPB, (2) the PWM, (3) the TPB augmented with the PWM, and (4) the TPB augmented with the PWM and extensions was tested using structural equation modelling for registering and discussing intentions/willingness, and logistic regression for behaviour.

Results: While the TPB proved the more parsimonious model, fit indices suggested that the other proposed models offered viable options, explaining greater variance in communication intentions/willingness. The TPB, the augmented TPB with PWM, and the extended augmented TPB with PWM best explained registering and discussing decisions. The proposed and revised PWM also proved an adequate fit for discussing decisions. Respondents with stronger intentions (and PBC for registering) had a higher likelihood of registering and discussing.

Conclusions: People’s decisions to communicate donation wishes may be better explained via a reasoned pathway (especially for registering); however, discussing involves more reactive elements. The roles of moral norm, self-identity, and prototypes as influences on communication decisions were also highlighted.
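For readers unfamiliar with the behaviour-stage analysis, the sketch below shows what a logistic regression of registering behaviour on intention and PBC looks like in Python with statsmodels. It is a minimal illustration only: the simulated scale scores, coefficient values and variable names are hypothetical stand-ins, not the study’s questionnaire data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 177                                   # registering sample size
intention = rng.normal(5.0, 1.5, n)       # hypothetical 1-7 scale scores
pbc = rng.normal(5.0, 1.5, n)
true_logit = -6.0 + 0.8 * intention + 0.4 * pbc   # hypothetical effects
registered = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Fit behaviour ~ intention + PBC and inspect the coefficients.
X = sm.add_constant(np.column_stack([intention, pbc]))
fit = sm.Logit(registered, X).fit(disp=0)
print(fit.summary(xname=["const", "intention", "pbc"]))

Stronger intention and PBC coefficients here correspond to the reported finding that respondents with stronger intentions (and PBC) were more likely to register.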

Relevance: 20.00%

Abstract:

While the IS function has gained widespread attention for over two decades, there is little consensus among information systems (IS) researchers and practitioners on how best to evaluate the IS function's support performance. This paper reports preliminary findings of a larger research effort that proceeds from a central interest in the importance of evaluating the IS function's support in organisations. This study is the first to re-conceptualise the IS function's support as a multi-dimensional formative construct. We argue that a holistic measure for evaluating the IS function's support should consist of dimensions that together assess the variety of the support functions and the quality of the support services provided to end-users. Thus, the proposed model consists of two halves, "Variety" and "Quality", within which reside seven dimensions. The Variety half includes five dimensions: Training; Documentation; Data-related Support; Software-related Support; and Hardware-related Support. The Quality half includes two dimensions: IS Support Staff and Support Services Performance. The proposed model was derived using a directed content analysis of 83 studies from top IS outlets, employing the characteristics of the analytic theory and consistent with formative construct development procedures.
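As an illustration of the model's shape, the following sketch encodes the two halves and seven dimensions as a plain Python data structure. The equal-weight aggregation is hypothetical; the paper proposes the formative construct itself, not a scoring rule.

IS_SUPPORT_MODEL = {
    "Variety": [
        "Training",
        "Documentation",
        "Data-related Support",
        "Software-related Support",
        "Hardware-related Support",
    ],
    "Quality": [
        "IS Support Staff",
        "Support Services Performance",
    ],
}

def formative_score(ratings):
    """Hypothetical equal-weight formative index over the seven dimensions;
    ratings maps each dimension name to a score (e.g. a 1-7 rating)."""
    dims = [d for half in IS_SUPPORT_MODEL.values() for d in half]
    return sum(ratings[d] for d in dims) / len(dims)

# Example: a uniform rating of 5 across all seven dimensions scores 5.0.
print(formative_score({d: 5.0 for half in IS_SUPPORT_MODEL.values() for d in half}))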

Relevance: 20.00%

Abstract:

Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking.

Method: Participants were 6,594 patients (4,854 men, 1,740 women) who had been referred for alcohol-use disorders to a hospital alcohol and drug service and who voluntarily took part in the study.

Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and through associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and the AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system.

Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
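The comparison of scoring systems can be made concrete with a short sketch. The weights below are placeholders, not the published bMAST item weights; the point is only the mechanical difference between a weighted total and a simple unit-weight sum.

import numpy as np

weights = np.array([2, 2, 5, 2, 2, 2, 2, 5, 5, 2])   # placeholder item weights

def weighted_score(responses):
    """Existing-style weighted total; responses is a 0/1 array of 10 items."""
    return int(np.dot(weights, responses))

def simple_score(responses):
    """Proposed simple scoring: unit weight per endorsed item."""
    return int(np.sum(responses))

r = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
print("weighted:", weighted_score(r), "simple:", simple_score(r))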

Relevance: 20.00%

Abstract:

In this paper, both Distributed Generators (DGs) and capacitors are optimally allocated and sized to reduce line losses and improve reliability. The objective function comprises the investment cost of the DGs and capacitors together with the costs of losses and unreliability, all converted into dollar terms. Bus voltages and line currents are treated as constraints that must be satisfied during the optimization procedure. Hybrid Particle Swarm Optimization, a heuristic technique, is used as the optimization method. The IEEE 69-bus test system is modified and employed to evaluate the proposed algorithm. The results show that the lowest-cost plan is found by optimizing both DGs and capacitors in distribution networks.
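A minimal sketch of the optimisation loop is given below, assuming a stand-in cost function. A real implementation would evaluate each candidate plan with a power flow on the modified IEEE 69-bus feeder, add penalties for voltage and current violations, and round bus indices to integers; the paper also uses a hybrid PSO variant rather than the plain PSO shown here.

import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES, N_ITER = 30, 200
DIM = 4  # candidate plan: [DG size (MW), DG bus, capacitor size (MVAr), capacitor bus]
LB = np.array([0.0, 1.0, 0.0, 1.0])
UB = np.array([2.0, 69.0, 1.0, 69.0])

def cost(x):
    """Hypothetical planning cost: investment plus monetised loss and
    reliability costs. A real study would run a power flow here and add
    constraint-violation penalties."""
    dg_mw, _, cap_mvar, _ = x
    investment = 300_000 * dg_mw + 50_000 * cap_mvar
    loss_cost = 1_000_000 / (1.0 + dg_mw + 0.5 * cap_mvar)   # stand-in
    reliability_cost = 500_000 / (1.0 + dg_mw)               # stand-in
    return investment + loss_cost + reliability_cost

pos = rng.uniform(LB, UB, size=(N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration weights
for _ in range(N_ITER):
    r1, r2 = rng.random((2, N_PARTICLES, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LB, UB)     # enforce variable bounds
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best plan:", np.round(gbest, 3), "cost ($):", round(pbest_cost.min()))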

Relevance: 20.00%

Abstract:

Recently it has been shown that the consumption of a diet high in saturated fat is associated with impaired insulin sensitivity and an increased incidence of type 2 diabetes. In contrast, diets high in monounsaturated fatty acids (MUFAs) or polyunsaturated fatty acids (PUFAs), especially very long chain n-3 fatty acids (FAs), are protective against disease. However, the molecular mechanisms by which saturated FAs induce the insulin resistance and hyperglycaemia associated with metabolic syndrome and type 2 diabetes are not clearly defined. It is possible that saturated FAs act through mechanisms different from those of MUFAs and PUFAs to regulate hepatic gene expression and metabolism. It is proposed that, like MUFAs and PUFAs, saturated FAs regulate the transcription of target genes. To test this hypothesis, hepatic gene expression analysis was undertaken in a human hepatoma cell line, Huh-7, after exposure to the saturated FA palmitate. These experiments showed that palmitate is an effective regulator of gene expression for a wide variety of genes. A total of 162 genes were differentially expressed in response to palmitate. These changes not only affected the expression of genes related to nutrient transport and metabolism; they also extended to other cellular functions including cytoskeletal architecture, cell growth, protein synthesis and the oxidative stress response. In addition, this thesis has shown that palmitate exposure altered the expression patterns of several genes that have previously been identified in the literature as markers of risk of disease development, including CVD, hypertension, obesity and type 2 diabetes. The altered gene expression patterns associated with an increased risk of disease include apolipoprotein-B100 (apo-B100), apo-CIII, plasminogen activator inhibitor 1, insulin-like growth factor-I and insulin-like growth factor binding protein 3. This thesis reports the first observation that palmitate directly signals in cultured human hepatocytes to regulate the expression of genes involved in energy metabolism as well as other important genes.

Prolonged exposure to long-chain saturated FAs reduces glucose phosphorylation and glycogen synthesis in the liver. Decreased glucose metabolism leads to elevated rates of lipolysis, resulting in increased release of free FAs. Free FAs have a negative effect on insulin action on the liver, which in turn results in increased gluconeogenesis and systemic dyslipidaemia. It has been postulated that disruption of glucose transport and insulin secretion by prolonged excessive FA availability might be a non-genetic factor that has contributed to the staggering rise in the prevalence of type 2 diabetes. As glucokinase (GK) is a key regulatory enzyme of hepatic glucose metabolism, changes in its activity may alter flux through the glycolytic and de novo lipogenic pathways and result in hyperglycaemia and ultimately insulin resistance. This thesis investigated the effects of saturated FAs on the promoter activity of the glycolytic enzyme GK, and on various transcription factors that may influence the regulation of GK gene expression. These experiments showed that the saturated FA palmitate is capable of decreasing GK promoter activity. In addition, quantitative real-time PCR showed that palmitate incubation may also regulate GK gene expression through a known FA-sensitive transcription factor, sterol regulatory element binding protein-1c (SREBP-1c), which upregulates GK transcription.

To parallel the investigations into the mechanisms of FA molecular signalling, further studies of the effect of FAs on metabolic pathway flux were performed. Although certain FAs reduce SREBP-1c transcription in vitro, it is unclear whether this results in decreased GK activity in vivo, where positive effectors of SREBP-1c such as insulin are also present. Under these conditions, it is uncertain whether the inhibitory effects of FAs would be overcome by insulin. The effects of combinations of FAs, insulin and glucose on glucose phosphorylation and metabolism in cultured primary rat hepatocytes were therefore examined at concentrations that mimic those in the portal circulation after a meal. It was found that total GK activity was unaffected by an increased concentration of insulin, but palmitate and eicosapentaenoic acid significantly lowered total GK activity in the presence of insulin. Although total GK enzyme activity was reduced in response to FA incubation, GK enzyme translocation from the inactive, nuclear-bound state to the active, cytoplasmic state was unaffected. Interestingly, none of the FAs tested inhibited glucose phosphorylation or the rate of glycolysis when insulin was present. These results suggest that in the presence of insulin the levels of active, unbound cytoplasmic GK are sufficient to buffer a slight decrease in GK enzyme activity and the decreased promoter activity caused by FA exposure. Although a high-fat diet has been associated with impaired hepatic glucose metabolism, there is no evidence from this thesis that FAs themselves directly modulate flux through the glycolytic pathway in isolated primary hepatocytes when insulin is also present. Therefore, although FAs affected the expression of a wide range of genes, including GK, this did not affect glycolytic flux in the presence of insulin. However, it is possible that a saturated FA-induced decrease in GK enzyme activity, when combined with the onset of insulin resistance, may promote the dysregulation of glucose homeostasis and the subsequent development of hyperglycaemia, metabolic syndrome and type 2 diabetes.

Relevance: 20.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For an ideally operating power system these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. The new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model determined from the power system under consideration), and a threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or a damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
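Of the four methods, the KID lends itself to a compact illustration. The sketch below, in Python, monitors a simulated noise-driven mode with a nominal AR(2) model (for which the Kalman innovation reduces to the one-step prediction error) and raises an alarm when the normalised innovation spectrum exceeds a chi-squared threshold. All frequencies, damping values and thresholds are illustrative, not those of the thesis.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
FS = 10.0                                  # sample rate in Hz (illustrative)

def ar2_coeffs(f_hz, zeta):
    """AR(2) coefficients of a noise-driven mode at f_hz with damping ratio zeta."""
    r = np.exp(-2 * np.pi * f_hz * zeta / FS)            # pole radius
    th = 2 * np.pi * f_hz * np.sqrt(1 - zeta**2) / FS    # pole angle
    return 2 * r * np.cos(th), -r**2

def simulate(n, f_hz, zeta):
    a1, a2 = ar2_coeffs(f_hz, zeta)
    y = np.zeros(n)
    for k in range(2, n):
        y[k] = a1 * y[k - 1] + a2 * y[k - 2] + rng.normal()
    return y

# Nominal model: a 0.5 Hz mode with 10% damping.
A1, A2 = ar2_coeffs(0.5, 0.10)

def kid_alarm(y, alpha=0.01):
    """Flag a change if the normalised innovation spectrum has a peak
    exceeding a chi-squared threshold (Bonferroni-corrected over bins)."""
    # For this noise-driven model the Kalman innovation reduces to the
    # one-step prediction error of the nominal AR(2) model.
    e = y[2:] - A1 * y[1:-1] - A2 * y[:-2]
    spec = np.abs(np.fft.rfft(e))**2 / (len(e) * e.var())
    # Each interior bin of a white-noise periodogram is ~ chi2(2)/2.
    thresh = chi2.ppf(1 - alpha / len(spec), df=2) / 2
    return bool(np.any(spec[1:-1] > thresh))

print("healthy (10% damping) alarms:", kid_alarm(simulate(600, 0.5, 0.10)))
print("degraded (1% damping) alarms:", kid_alarm(simulate(600, 0.5, 0.01)))

With the nominal model valid, the innovations stay white and no bin exceeds the threshold; when the damping collapses, the innovation spectrum peaks near 0.5 Hz, which both raises the alarm and identifies the affected modal frequency.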

Relevance: 20.00%

Abstract:

This thesis discusses various aspects of the integrity monitoring of GPS applied to civil aircraft navigation in different phases of flight. These flight phases include en route, terminal, non-precision approach and precision approach. The thesis covers four major topics: the probability problem of the GPS navigation service, risk analysis of aircraft precision approach and landing, theoretical analysis of Receiver Autonomous Integrity Monitoring (RAIM) techniques and RAIM availability, and GPS integrity monitoring at a ground reference station. Particular attention is paid to the mathematical aspects of the GPS integrity monitoring system. The research is built upon the stringent integrity requirements defined by the civil aviation community, and concentrates on investigating the capability and performance of practical integrity monitoring systems with rigorous mathematical and statistical concepts and approaches. The major contributions of this research are:

• Rigorous integrity and continuity risk analysis for aircraft precision approach. Based on the joint probability density function of the contributing components, the integrity and continuity risks of aircraft precision approach with DGPS were computed. This advances the conventional method of allocating the risk probability.

• A theoretical study of RAIM test power. This is the first theoretical study of RAIM test power based on probability and statistical theory, resulting in a new set of RAIM criteria.

• Development of a GPS integrity monitoring and DGPS quality control system based on a GPS reference station. A prototype GPS integrity monitoring and DGPS correction prediction system has been developed and tested, based on the AUSNAV GPS base station on the roof of the QUT ITE Building.
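As background to the RAIM discussion, the sketch below implements the classic snapshot residual test: the sum of squared pseudorange residuals from a least-squares fix is compared against a chi-squared threshold with m - 4 degrees of freedom. The geometry matrix and noise figures are illustrative and are not taken from the thesis.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

m = 7                                   # satellites in view
G = np.hstack([rng.normal(size=(m, 3)), np.ones((m, 1))])  # geometry matrix
sigma = 5.0                             # pseudorange noise std dev (m)

def raim_test(bias=0.0, alpha=1e-3):
    """Return True if the residual test flags a fault."""
    err = rng.normal(scale=sigma, size=m)
    err[0] += bias                      # optional fault on satellite 0
    # Least-squares residuals: r = (I - G (G^T G)^{-1} G^T) err
    P = np.eye(m) - G @ np.linalg.pinv(G)
    r = P @ err
    T = r @ r / sigma**2                # ~ chi2(m - 4) when fault-free
    return bool(T > chi2.ppf(1 - alpha, df=m - 4))

print("fault-free flagged:", raim_test(bias=0.0))
print("100 m bias flagged:", raim_test(bias=100.0))

The test power studied in the thesis corresponds to the probability that this statistic exceeds the threshold when a bias of a given size is present.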

Relevance: 20.00%

Abstract:

LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills. The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using a combined cold-forming and electric resistance welding process. OneSteel Australian Tube Mills is promoting the use of LSBs as flexural members in a range of applications, such as floor bearers. When LSBs are used as back-to-back built-up sections, their moment capacity is likely to improve, extending their applications further. However, the structural behaviour of built-up beams is not well understood. Many steel design codes include guidelines for connecting two channels to form a built-up I-section, including the required longitudinal spacing of connections, but these rules were found to be inadequate in some applications. Currently the safe spans of built-up beams are determined based on twice the moment capacity of a single section. Research has shown that these guidelines are conservative. Therefore large scale lateral buckling tests and advanced numerical analyses were undertaken to investigate the flexural behaviour of back-to-back LSBs connected by fasteners (bolts) at various longitudinal spacings under uniform moment conditions.

In this research an experimental investigation was first undertaken to study the flexural behaviour of back-to-back LSBs, including their buckling characteristics. This experimental study included tensile coupon tests, initial geometric imperfection measurements and lateral buckling tests. The initial geometric imperfection measurements taken on several back-to-back LSB specimens showed that the back-to-back bolting process is not likely to alter the imperfections, and the measured imperfections are well below the fabrication tolerance limits. Twelve large scale lateral buckling tests were conducted to investigate the behaviour of back-to-back built-up LSBs with various longitudinal fastener spacings under uniform moment conditions. The tests also included two single LSB specimens. Test results showed that the back-to-back LSBs gave higher moment capacities than single LSBs, and that the fastener spacing influenced the ultimate moment capacities: as the fastener spacing was reduced, the ultimate moment capacities of back-to-back LSBs increased.

Finite element models of back-to-back LSBs with varying fastener spacings were then developed to conduct a detailed parametric study of the flexural behaviour of back-to-back built-up LSBs. Two finite element models were developed, namely experimental and ideal finite element models. The models included the complex contact behaviour between the LSB web elements and the intermittently fastened bolted connections along the web elements. They were validated by comparing their results with experimental results and with numerical results obtained from an established buckling analysis program called THIN-WALL. These comparisons showed that the developed models could accurately predict both the elastic lateral distortional buckling moments and the non-linear ultimate moment capacities of back-to-back LSBs. Therefore the ideal finite element models, incorporating ideal simply supported boundary conditions and uniform moment conditions, were used in a detailed parametric study of the flexural behaviour of back-to-back LSB members. In the detailed parametric study, both elastic buckling and non-linear analyses of back-to-back LSBs were conducted for 13 LSB sections with varying spans and fastener spacings.

The finite element analysis results confirmed that the current design rules in AS/NZS 4600 (SA, 2005) are very conservative, while the new design rules developed by Anapayan and Mahendran (2009a) for single LSB members were also found to be conservative. Thus new member capacity design rules were developed for back-to-back LSB members as a function of non-dimensional member slenderness. New empirical equations were also developed to aid in the calculation of the elastic lateral distortional buckling moments of intermittently fastened back-to-back LSBs. Design guidelines were developed for the maximum fastener spacing of back-to-back LSBs in order to optimise the use of fasteners. A closer fastener spacing of span/6 was recommended for intermediate spans and for some long spans where the influence of fastener spacing was found to be high.

In the last phase of this research, a detailed investigation was conducted into the potential use of different types of connections and stiffeners in improving the flexural strength of back-to-back LSB members. It was found that using transverse web stiffeners was the most cost-effective and simple strengthening method. It is recommended that web stiffeners be used at the supports and at the third points within the span, with a thickness in the range of 3 to 5 mm depending on the size of the LSB section. The use of web stiffeners eliminated most of the lateral distortional buckling effects and hence improved the ultimate moment capacities. A suitable design equation was developed to calculate the elastic lateral buckling moments of back-to-back LSBs with the above recommended web stiffener configuration, while the design rules developed for unstiffened back-to-back LSBs were recommended for calculating the ultimate moment capacities.
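To indicate the general form of such slenderness-based rules, the sketch below computes a member moment capacity from the non-dimensional slenderness λ = √(My/Mod). The piecewise coefficients are placeholders for illustration only; they are not the design rules derived in the thesis, nor those of AS/NZS 4600.

import math

def member_moment_capacity(M_y, M_od):
    """M_y: first-yield moment; M_od: elastic lateral distortional buckling
    moment. Returns an illustrative member moment capacity M_b."""
    lam = math.sqrt(M_y / M_od)   # non-dimensional member slenderness
    if lam <= 0.6:                # stocky range: full yield (placeholder limit)
        return M_y
    elif lam < 1.7:               # intermediate range (placeholder curve)
        return M_y * 0.6 / lam
    else:                         # slender range: elastic buckling governs
        return M_y / lam**2       # equals M_od

# Example with illustrative values: M_y = 100 kNm, M_od = 80 kNm.
print(f"{member_moment_capacity(100.0, 80.0):.1f} kNm (illustrative)")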