808 results for Theoretical basis


Relevance:

70.00%

Publisher:

Abstract:

While organizations strive to leverage the vast information generated daily on social media platforms, and decision makers are keen to identify and exploit its value, the quality of this information remains uncertain. Past research on information quality criteria and evaluation issues in social media is largely disparate, incomparable and lacking any common theoretical basis. To address this gap, this study adapts existing guidelines and exemplars of construct conceptualization in information systems research to deductively define information quality and related criteria in the social media context. Building on a notion of information derived from semiotic theory, this paper suggests a general conceptualization of information quality in the social media context that can be used in future research to develop more context-specific conceptual models.

Relevance:

70.00%

Publisher:

Abstract:

C(13)H(16)Cl(2)Te, M(r) = 370.76, P2(1)/a, a = 8.1833(8), b = 8.4163(8), c = 20.787(2) Å, beta = 99.52(1)°, Z = 4, R(1) = 0.0275. The primary coordination around the Te(IV) atom is consistent with a pseudo-trigonal bipyramidal bond configuration, with two Cl atoms occupying the axial positions while the C atoms and the lone pair of electrons occupy the equatorial positions. The Te(IV) atom is involved in an intermolecular secondary interaction resulting in the self-assembly of a zigzag-chain supramolecular array. In order to determine the theoretical basis set for the Te atom which leads to the best agreement with the experimental data, a large series of geometry optimizations was performed on dichlorodimethyl Te(IV) as a model compound, and the results were compared with the mean distances and angles obtained from 45 X-ray structures. The Ahlrichs basis set plus the Hay & Wadt ECP was selected and used for a series of calculations performed on the title compound.
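
As a rough illustration of the selection procedure described above (a sketch, not the authors' actual workflow), the snippet below scores candidate basis-set/ECP combinations by the deviation of their optimized Te–C and Te–Cl distances and C–Te–C angle from mean experimental values; all numerical values are placeholders, not data from the 45 reference structures or the paper.

```python
import numpy as np

# Hypothetical mean experimental geometry of R2TeCl2 fragments (placeholder values,
# not the means reported in the paper), and one optimized geometry per basis set.
experimental_mean = {"Te-C": 2.12, "Te-Cl": 2.51, "C-Te-C": 98.0}   # Angstrom / degrees

optimized = {
    "Ahlrichs + Hay-Wadt ECP":   {"Te-C": 2.13, "Te-Cl": 2.54, "C-Te-C": 97.2},
    "Other basis (placeholder)": {"Te-C": 2.18, "Te-Cl": 2.62, "C-Te-C": 94.5},
}

def rms_deviation(calc, ref):
    """Root-mean-square deviation over the selected geometric parameters.
    (A real comparison would treat distances and angles separately.)"""
    diffs = [calc[k] - ref[k] for k in ref]
    return float(np.sqrt(np.mean(np.square(diffs))))

# Rank the candidate basis sets by agreement with the experimental means.
for name, geom in sorted(optimized.items(),
                         key=lambda kv: rms_deviation(kv[1], experimental_mean)):
    print(f"{name}: RMSD = {rms_deviation(geom, experimental_mean):.3f}")
```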

Relevance:

70.00%

Publisher:

Abstract:

Background: Blood leukocytes constitute two interchangeable sub-populations, the marginated and circulating pools. These two sub-compartments are found in normal conditions and are potentially affected by non-normal situations, either pathological or physiological. The dynamics between the compartments is governed by rate constants of margination (M) and return to circulation (R). Therefore, estimates of M and R may prove of great importance to a deeper understanding of many conditions. However, there has been a lack of formalism in approaching such estimates. The few attempts to furnish an estimation of M and R neither rely on clearly stated models that specify which rate constant is being estimated nor recognize which factors may influence the estimation. Results: The return of the blood pools to a steady-state value after a perturbation (e.g., epinephrine injection) was modeled by a second-order differential equation. This equation has two eigenvalues, related to a fast and a slow component of the dynamics. The model makes it possible to identify that these components are partitioned into three constants: R, M and SB, where SB is a time-invariant exit-to-tissues rate constant. Three examples of the computations are worked through, and a tentative estimation of R for mouse monocytes is presented. Conclusions: This study establishes a firm theoretical basis for the estimation of the rate constants of the dynamics between the blood sub-compartments of white cells. It shows, for the first time, that the estimation must also take into account the exit-to-tissues rate constant, SB.
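
As an illustration of the kind of model described (a minimal sketch, not the authors' exact equations), a linear two-compartment system with margination rate M, return rate R and an exit-to-tissues rate SB yields second-order dynamics whose two eigenvalues correspond to the fast and slow components. The rate values below are placeholders, and SB is assumed here to drain the marginated pool; both are illustrative assumptions.

```python
import numpy as np

# Illustrative two-compartment model of blood leukocytes (not the paper's exact
# formulation): C = circulating pool, Mg = marginated pool.
#   dC/dt  = -M*C + R*Mg
#   dMg/dt =  M*C - (R + SB)*Mg
# M = margination rate, R = return-to-circulation rate,
# SB = time-invariant exit-to-tissues rate (assumed here to act on the marginated pool).
M, R, SB = 0.8, 0.5, 0.1          # placeholder rate constants, 1/h

A = np.array([[-M,       R       ],
              [ M, -(R + SB)     ]])

fast, slow = np.sort(np.linalg.eigvals(A).real)   # both negative; 'fast' is more negative
print("fast component:", fast, " slow component:", slow)

# After a perturbation (e.g. epinephrine shifting cells between pools), the observable
# counts relax back as a combination of exp(fast*t) and exp(slow*t), which is the
# second-order behaviour referred to in the abstract.
```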

Relevance:

70.00%

Publisher:

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts only for around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, such as in electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target e+(A,Z)->e+(A,Z)+l^+l^- is investigated and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated by existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section of the process is investigated. In the next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as both signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to find predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden photon model by rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to find bounds for the parameter space from existing data and to estimate the reach of future experiments.
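
The resonance search described above scans the invariant mass spectrum of the lepton pair; the following minimal sketch (not code from the thesis, with placeholder kinematics) shows how that invariant mass is formed from measured lepton four-momenta.

```python
import numpy as np

def invariant_mass(p1, p2):
    """Invariant mass of a lepton pair from four-momenta (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    p = np.array(p1[1:]) + np.array(p2[1:])
    m2 = E**2 - np.dot(p, p)
    return np.sqrt(max(m2, 0.0))

# Placeholder l+ l- candidate from a fixed-target event (units: GeV).
lepton_plus  = (0.512,  0.100, -0.030, 0.500)
lepton_minus = (0.498, -0.080,  0.020, 0.490)
print(f"m(l+l-) = {invariant_mass(lepton_plus, lepton_minus):.4f} GeV")

# A hidden-photon signal would appear as a narrow peak in the histogram of this
# quantity over many events, on top of the smooth QED background.
```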

Relevance:

70.00%

Publisher:

Abstract:

Students are now involved in a vastly different textual landscape than many English scholars, one that relies on the “reading” and interpretation of multiple channels of simultaneous information. As a response to these new kinds of literate practices, my dissertation adds to the growing body of research on multimodal literacies, narratology in new media, and rhetoric through an examination of the place of video games in English teaching and research. I describe in this dissertation a hybridized theoretical basis for incorporating video games in English classrooms. This framework for textual analysis includes elements from narrative theory in literary study, rhetorical theory, and literacy theory, and, when these elements are combined to account for the multiple modalities and complexities of gaming, it can provide new insights about those theories and practices across all kinds of media, whether in written texts, films, or video games. In creating this framework, I hope to encourage students to view texts from a meta-level perspective, encompassing textual construction, use, and interpretation. In order to foster meta-level learning in an English course, I use specific theoretical frameworks from the fields of literary studies, narratology, film theory, aural theory, reader-response criticism, game studies, and multiliteracies theory to analyze a particular video game: World of Goo. These theoretical frameworks inform pedagogical practices used in the classroom for textual analysis of multiple media. Examining a video game from these perspectives, I use analytical methods from each, including close reading, explication, textual analysis, and individual elements of multiliteracies theory and pedagogy. In undertaking an in-depth analysis of World of Goo, I demonstrate the possibilities for classroom instruction with a complex blend of theories and pedagogies in English courses. This blend of theories and practices is meant to foster literacy learning across media, helping students develop metaknowledge of their own literate practices in multiple modes. Finally, I outline a design for a multiliteracies course that would allow English scholars to use video games along with other texts to interrogate texts as systems of information. In doing so, students can hopefully view and transform systems in their own lives as audiences, citizens, and workers.

Relevance:

70.00%

Publisher:

Abstract:

The theoretical basis for evaluating shear strength in rock joints is presented and used to derive an equation that governs the relationship between tangential and normal stress on the joint during slippage between the joint faces. The dependent variables include geometric dilatancy, the instantaneous friction angle, and a parameter that accounts for joint surface roughness. The effect of roughness is studied, and the aforementioned formula is used to analyse joints under different conditions. A mathematical expression is deduced that explains Barton's value for the joint roughness coefficient (JRC) according to the roughness geometry. In particular, when the Hoek and Brown failure criterion is used for the rock in contact with the surface roughness plane, it is possible to determine the shear strength of the joint as a function of the ratio of the uniaxial compressive strength of the wall to the normal stress acting on the wall. Finally, theoretical results obtained for the geometry of a three-dimensional joint are compared with those of Barton's formulation.
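
For reference (this is the standard empirical Barton criterion that the abstract's theoretical results are compared against, not the equation derived in the paper), the JRC-based shear strength relation can be written as

```latex
\tau = \sigma_n \tan\!\left(\phi_r + \mathrm{JRC}\,\log_{10}\!\frac{\mathrm{JCS}}{\sigma_n}\right)
```

where \tau is the peak shear strength, \sigma_n the normal stress on the joint, \phi_r the residual friction angle, and JCS the joint wall compressive strength.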

Relevance:

70.00%

Publisher:

Abstract:

Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
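
A minimal sketch of the failure mode referred to above (an illustration using a toy problem, not the formalism developed in the paper): when the inverse mapping is multi-valued, a least-squares regressor approximates the conditional mean and returns an answer lying between the branches, which may not be a valid solution at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward problem: t = x**2 + noise.  The inverse problem (predict x from t) is
# two-valued: both x = +sqrt(t) and x = -sqrt(t) explain the same observation.
x = rng.uniform(-1.0, 1.0, 2000)
t = x**2 + 0.02 * rng.standard_normal(x.size)

# Naive least-squares regression of x on t (polynomial model, sum-of-squares error).
degree = 4
design = np.vander(t, degree + 1)
coeffs, *_ = np.linalg.lstsq(design, x, rcond=None)
predict = lambda q: np.vander(np.atleast_1d(q), degree + 1) @ coeffs

# The least-squares solution approximates the conditional mean of x given t, which
# lies between the two branches: for t = 0.81 it returns roughly 0, even though the
# only consistent answers are near +0.9 and -0.9.
print("least-squares prediction at t=0.81:", float(predict(0.81)[0]))
print("valid inverse solutions:           ", np.sqrt(0.81), -np.sqrt(0.81))
```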

Relevance:

60.00%

Publisher:

Abstract:

By way of response to Professor Duncan's article [1], this article examines the theoretical basis for the implication of contractual terms, particularly the implication of a term at law. In this regard the recent decision of Barrett J in Overlook v Foxtel [2002] NSWSC 17 is considered, to the extent that it provides guidance concerning the implication of an obligation of good faith in the context of a commercial contract. A number of observations are made which may be considered likely to have application to the relationship of commercial landlord and tenant. The conclusion reached is that although the commercial landlord and tenant contractual relationship is highly regulated, this may not deny a remedy to a tenant who is the victim of a landlord's 'bad faith'. Finally, the article concludes by considering the extent to which it may be possible to contractually exclude the implied obligation of good faith.

Relevance:

60.00%

Publisher:

Abstract:

Sets out a system of corporate governance regulation, aimed at combining legal and social methods of governing director behaviour and at creating a framework flexible enough to accommodate different business and ethical cultures. Outlines the theoretical basis of corporate governance and the broad responsibilities of directors, and discusses the extent to which they can and should be regulated. Discusses the constitution of a regulatory framework encompassing law, soft law and best practice, and ethics.

Relevance:

60.00%

Publisher:

Abstract:

The worldwide organ shortage occurs despite people’s positive organ donation attitudes. The discrepancy between attitudes and behaviour is evident in Australia particularly, with widespread public support for organ donation but low donation and communication rates. This problem is compounded further by the paucity of theoretically based research to improve our understanding of people’s organ donation decisions. This program of research contributes to our knowledge of individual decision making processes for three aspects of organ donation: (1) posthumous (upon death) donation, (2) living donation (to a known and unknown recipient), and (3) providing consent for donation by communicating donation wishes on an organ donor consent register (registering) and discussing the donation decision with significant others (discussing). The research program used extended versions of the Theory of Planned Behaviour (TPB) and the Prototype/Willingness Model (PWM), incorporating additional influences (moral norm, self-identity, organ recipient prototypes), to explicate the relationship between people’s positive attitudes and low rates of organ donation behaviours. Adopting the TPB and PWM (and their extensions) as a theoretical basis overcomes several key limitations of the extant organ donation literature including the often atheoretical nature of organ donation research, the focus on individual difference factors to construct organ donor profiles and the omission of important psychosocial influences (e.g., control perceptions, moral values) that may impact on people’s decision-making in this context. In addition, the use of the TPB and PWM adds further to our understanding of the decision making process for communicating organ donation wishes. Specifically, the extent to which people’s registering and discussing decisions may be explained by a reasoned and/or a reactive decision making pathway is examined (Stage 3) with the novel application of the TPB augmented with the social reaction pathway in the PWM. This program of research was conducted in three discrete stages: a qualitative stage (Stage 1), a quantitative stage with extended models (Stage 2), and a quantitative stage with augmented models (Stage 3). The findings of the research program are reported in nine papers which are presented according to the three aspects of organ donation examined (posthumous donation, living donation, and providing consent for donation by registering or discussing the donation preference). Stage One of the research program comprised qualitative focus groups/interviews with university students and community members (N = 54) (Papers 1 and 2). Drawing broadly on the TPB framework (Paper 1), content-analysed responses revealed people’s commonly held beliefs about the advantages and disadvantages (e.g., prolonging/saving life), important people or groups (e.g., family), and barriers and motivators (e.g., a family’s objection to donation), related to living and posthumous organ donation. Guided by a PWM perspective, Paper Two identified people’s commonly held perceptions of organ donors (e.g., altruistic and giving), non-donors (e.g., self-absorbed and unaware), and transplant recipients (e.g., unfortunate, and in some cases responsible/blameworthy for their predicament). Stage Two encompassed quantitative examinations of people’s decision making for living (Papers 3 and 4) and posthumous (Paper 5) organ donation, and for registering and discussing donation wishes (Papers 6 to 8) to test extensions to both the TPB and PWM.
Comparisons of health students’ (N = 487) motivations and willingness for living related and anonymous donation (Paper 3) revealed that a person’s donor identity, attitude, past blood donation, and knowing a posthumous donor were four common determinants of willingness, with the results highlighting students’ identification as a living donor as an important motive. An extended PWM is presented in Papers Four and Five. University students’ (N = 284) willingness for living related and anonymous donation was tested in Paper Four with attitude, subjective norm, donor prototype similarity, and moral norm (but not donor prototype favourability) predicting students’ willingness to donate organs in both living situations. Students’ and community members’ (N = 471) posthumous organ donation willingness was assessed in Paper Five with attitude, subjective norm, past behaviour, moral norm, self-identity, and prior blood donation all significantly directly predicting posthumous donation willingness, with only an indirect role for organ donor prototype evaluations. The results of two studies examining people’s decisions to register and/or discuss their organ donation wishes are reported in Paper Six. People’s (N = 24) commonly held beliefs about communicating their organ donation wishes were explored initially in a TPB based qualitative elicitation study. The TPB belief determinants of intentions to register and discuss the donation preference were then assessed for people who had not previously communicated their donation wishes (N = 123). Behavioural and normative beliefs were important determinants of registering and discussing intentions; however, control beliefs influenced people’s registering intentions only. Paper Seven represented the first empirical test of the role of organ transplant recipient prototypes (i.e., perceptions of organ transplant recipients) in people’s (N = 465) decisions to register consent for organ donation. Two factors, Substance Use and Responsibility, were identified and Responsibility predicted people’s organ donor registration status. Results demonstrated that unregistered respondents were the most likely to evaluate transplant recipients negatively. Paper Eight established the role of organ donor prototype evaluations, within an extended TPB model, in predicting students’ and community members’ registering (n = 359) and discussing (n = 282) decisions. Results supported the utility of an extended TPB and suggested a role for donor prototype evaluations in predicting people’s discussing intentions only. Strong intentions to discuss donation wishes increased the likelihood that respondents reported discussing their decision 1-month later. Stage Three of the research program comprised an examination of augmented models (Paper 9). A test of the TPB augmented with elements from the social reaction pathway in the PWM, and extensions to these models was conducted to explore whether people’s registering (N = 339) and discussing (N = 315) decisions are explained via a reasoned (intention) and/or social reaction (willingness) pathway. Results suggested that people’s decisions to communicate their organ donation wishes may be better explained via the reasoned pathway, particularly for registering consent; however, discussing also involves reactive elements. Overall, the current research program represents an important step toward clarifying the relationship between people’s positive organ donation attitudes but low rates of organ donation and communication behaviours. 
Support has been demonstrated for the use of extensions to two complementary theories, the TPB and PWM, which can inform future research aiming to explicate further the organ donation attitude-behaviour relationship. The focus on a range of organ donation behaviours enables the identification of key targets for future interventions encouraging people’s posthumous and living donation decisions, and communication of their organ donation preference.

Relevance:

60.00%

Publisher:

Abstract:

With increasing pressure to provide environmentally responsible infrastructure products and services, stakeholders are placing significant focus on the early identification of the financial viability and outcome of infrastructure projects. Traditionally, there has been an imbalance between sustainability measures and project budget. On one hand, the industry tends to employ a first-cost mentality and approach to developing infrastructure projects. On the other, environmental experts and technology innovators often push for the ultimately green products and systems without much concern for cost. This situation is quickly changing as the industry is under pressure to continue to return a profit while better adapting to current and emerging global issues of sustainability. For the infrastructure sector to contribute to sustainable development, it will need to increase value and efficiency. Thus, there is a great need for tools that will enable decision makers to evaluate competing initiatives and identify the most sustainable approaches to procuring infrastructure projects. In order to ensure that these objectives are achieved, the concept of life-cycle costing analysis (LCCA) will play a significant role in the economics of an infrastructure project. Recently, a few research initiatives have applied LCCA models to road infrastructure, focusing on the traditional economics of a project. There is little coverage of life-cycle costing as a method to evaluate the criteria and assess the economic implications of pursuing sustainability in road infrastructure projects. To rectify this problem, this paper reviews the theoretical basis of previous LCCA models before discussing their inability to capture sustainability indicators in road infrastructure projects. It then introduces ongoing research aimed at developing a new model to integrate the various new cost elements based on the sustainability indicators with the traditional and proven LCCA approach. It is expected that the research will generate a working model for sustainability-based life-cycle cost analysis.
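
As a point of reference for the discussion above (a generic present-value formulation, not the model being developed in the cited research), a life-cycle cost with an additional sustainability-related cost stream can be written as

```latex
\mathrm{LCC} = C_0 + \sum_{t=1}^{T} \frac{C_{\mathrm{O\&M},t} + C_{\mathrm{rehab},t} + C_{\mathrm{sust},t}}{(1+r)^{t}} - \frac{S_T}{(1+r)^{T}}
```

where C_0 is the initial construction cost; C_{O&M,t}, C_{rehab,t} and C_{sust,t} are operation-and-maintenance, rehabilitation and sustainability-related costs in year t (the last term being the kind of new cost element the research proposes to integrate); S_T is the salvage value; r is the discount rate; and T is the analysis period.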

Relevance:

60.00%

Publisher:

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision-based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
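
As a rough sketch of the two-stage paradigm described above (an illustrative reconstruction, not the thesis implementation; the filter size, transition model, likelihood mapping and threshold are placeholder assumptions), a grey-scale morphological stage highlights small bright features and a simple HMM-style forward recursion accumulates evidence over frames before a detection decision is made.

```python
import numpy as np
from scipy import ndimage

def morphological_stage(frame, background_size=7):
    """Top-hat style enhancement: suppress large-scale background, keep small bright blobs."""
    background = ndimage.grey_opening(frame, size=(background_size, background_size))
    return np.clip(frame - background, 0.0, None)

def tbd_forward_step(prior, likelihood, spread=1.0):
    """One HMM forward-filter step over pixel-position states.

    The transition model is approximated by a Gaussian blur of the prior (targets
    move slowly between frames); the measurement update multiplies by a per-pixel
    likelihood derived from the enhanced frame.
    """
    predicted = ndimage.gaussian_filter(prior, sigma=spread)
    posterior = predicted * likelihood
    return posterior / (posterior.sum() + 1e-12)

# Placeholder sequence of frames (e.g. 64x64 grey-scale images from an onboard camera).
frames = [np.random.rand(64, 64) for _ in range(20)]

posterior = np.full((64, 64), 1.0 / (64 * 64))      # uniform prior over target position
for frame in frames:
    enhanced = morphological_stage(frame)
    likelihood = 1.0 + enhanced                      # crude likelihood: brighter = more target-like
    posterior = tbd_forward_step(posterior, likelihood)

# Declare a detection when the accumulated evidence in one location is strong enough.
if posterior.max() > 10.0 / posterior.size:          # placeholder threshold
    print("candidate target at", np.unravel_index(posterior.argmax(), posterior.shape))
```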

Relevance:

60.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion - significant since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
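
The following is a minimal numpy sketch of the rank transform referred to above (the standard non-parametric form, not the thesis code); the window size, SAD matching cost and toy image pair are illustrative choices.

```python
import numpy as np

def rank_transform(image, radius=2):
    """Replace each pixel by the number of neighbours in a (2*radius+1)^2 window
    whose intensity is less than the centre pixel (the non-parametric rank transform)."""
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros((h, w), dtype=np.int32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            out += (neighbour < image).astype(np.int32)
    return out

def matching_cost(left_rank, right_rank, row, col, disparity, window=3):
    """Sum-of-absolute-differences cost between rank-transformed windows."""
    half = window // 2
    l = left_rank[row - half:row + half + 1, col - half:col + half + 1]
    r = right_rank[row - half:row + half + 1,
                   col - disparity - half:col - disparity + half + 1]
    return np.abs(l.astype(np.int64) - r.astype(np.int64)).sum()

# Because the transform depends only on intensity orderings, a gain/offset change
# between the two images (radiometric distortion) leaves the rank values unchanged.
left = np.random.rand(32, 32)
right = 0.5 * np.roll(left, -4, axis=1) + 0.2   # shifted and radiometrically distorted copy
costs = [matching_cost(rank_transform(left), rank_transform(right), 16, 20, d)
         for d in range(8)]
print("best disparity:", int(np.argmin(costs)))   # recovers the 4-pixel shift
```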

Relevance:

60.00%

Publisher:

Abstract:

Bioelectrical impedance analysis, (BIA), is a method of body composition analysis first investigated in 1962, which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages, (viz: inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor, (volume ∝ length2/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water, (TBW). Several research groups have investigated the application of AC currents at lower frequencies, (e.g. 5 kHz), to predict extracellular water, (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency, (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. The explanation of the theoretical basis for selecting these impedance values (Zc and Ro), to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals, (n=42), in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L2/Zc and L2/Ro, (L=length), yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels, (n=20), and also to a group with an abnormal balance of extra-cellular to intracellular fluids, (n=20). In both cases the equations using L2/Zc and L2/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis, (MFBIA), can accurately predict both TBW and ECW in both normal and abnormal animals, (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects, (male and female, aged between 18 and 45). Whole body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes, (TBW and ECW), and height2/impedance, (for all measured frequencies), were compared. 
The prediction quotients H2/Zc and H2/Ro, (H=height), again yielded the highest correlation with TBW and ECW respectively with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema, (an excess of extracellular fluid), was investigated. The MFBIA technique was demonstrated to be significantly more sensitive, (P<.05), in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of the treatment. The determination of body composition, (viz TBW and ECW), by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition, (viz TBW and ECW), regardless of the individual's state of health.
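
A minimal sketch of the extrapolation step described above (an illustrative reconstruction, not the thesis software; the spectrum below is synthetic Cole-type data, not measurements): a circular arc is fitted to the measured (resistance, reactance) locus, the zero-reactance intercepts give the extrapolated Ro (and the resistance at infinite frequency), and the apex of the arc gives the impedance Zc at the characteristic frequency.

```python
import numpy as np

def fit_impedance_arc(resistance, reactance):
    """Least-squares (Kasa) circle fit to the impedance locus, returning (Ro, Rinf, Zc).

    The circle satisfies R^2 + X^2 = 2*a*R + 2*b*X + c; solving the linear system
    gives the centre (a, b) and radius sqrt(a^2 + b^2 + c).
    """
    R = np.asarray(resistance, dtype=float)
    X = np.asarray(reactance, dtype=float)
    A = np.column_stack([2 * R, 2 * X, np.ones_like(R)])
    (a, b, c), *_ = np.linalg.lstsq(A, R**2 + X**2, rcond=None)
    radius = np.sqrt(a**2 + b**2 + c)
    # Zero-reactance intercepts: the larger is the zero-frequency resistance Ro,
    # the smaller is the resistance at infinite frequency.
    half_chord = np.sqrt(radius**2 - b**2)
    Ro, Rinf = a + half_chord, a - half_chord
    # Apex of the arc corresponds to the characteristic frequency (maximum reactance).
    Zc = np.hypot(a, b + radius)
    return Ro, Rinf, Zc

# Placeholder impedance spectrum generated from a simple Cole-type model (not measured data).
freq = np.logspace(np.log10(4e3), 6, 50)            # 4 kHz to 1 MHz
Ro_true, Rinf_true, fc = 700.0, 450.0, 50e3
Z = Rinf_true + (Ro_true - Rinf_true) / (1 + 1j * freq / fc)
Ro_est, Rinf_est, Zc_est = fit_impedance_arc(Z.real, -Z.imag)
print(f"Ro = {Ro_est:.1f} ohm, Rinf = {Rinf_est:.1f} ohm, Zc = {Zc_est:.1f} ohm")
```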

Relevance:

60.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. It needed to be possible to calibrate the models using data acquired at these locations and to validate their output against data acquired at the same sites. Therefore, the outputs should be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models. Different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled. Some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. 
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delay, which reaches infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration. From these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
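
As an illustrative sketch of the gap-acceptance machinery described above (a simplified Monte Carlo reconstruction, not the thesis simulation model; all parameter values are placeholders rather than calibrated values), major-stream headways are drawn from Cowan's M3 distribution and the number of on-ramp vehicles each gap can absorb is counted using a critical gap and a follow-on time.

```python
import numpy as np

rng = np.random.default_rng(1)

def cowan_m3_headways(n, flow, alpha, delta=1.0):
    """Sample n major-stream headways (s) from Cowan's M3 model: a proportion
    (1 - alpha) of vehicles are bunched at the minimum headway delta, while the
    rest have shifted-exponential headways so the mean headway equals 1/flow."""
    lam = alpha * flow / (1.0 - delta * flow)      # flow in veh/s
    free = rng.random(n) < alpha
    headways = np.full(n, delta)
    headways[free] += rng.exponential(1.0 / lam, free.sum())
    return headways

def absorbed_vehicles(headway, critical_gap, follow_on):
    """Number of minor-stream (on-ramp) vehicles that can merge into one major-stream gap."""
    if headway < critical_gap:
        return 0
    return 1 + int((headway - critical_gap) // follow_on)

# Placeholder parameters: kerb-lane flow 1200 veh/h, 70% free headways,
# critical gap 2.0 s, follow-on time 1.1 s (illustrative, not calibrated).
flow = 1200 / 3600.0
headways = cowan_m3_headways(200_000, flow, alpha=0.7)
merges = sum(absorbed_vehicles(h, 2.0, 1.1) for h in headways)

capacity = merges / headways.sum()                 # merging vehicles per second
print(f"estimated merge capacity: {capacity * 3600:.0f} veh/h")
```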