855 results for quantifying heteroskedasticity
Abstract:
A service-oriented system is composed of independent software units, namely services, that interact with one another exclusively through message exchanges. The proper functioning of such a system depends on whether or not each individual service behaves as the other services expect it to behave. Since services may be developed and operated independently, it is unrealistic to assume that this is always the case. This article addresses the problem of checking and quantifying how much the actual behavior of a service, as recorded in message logs, conforms to the expected behavior as specified in a process model. We consider the case where the expected behavior is defined using the BPEL industry standard (Business Process Execution Language for Web Services). BPEL process definitions are translated into Petri nets and Petri net-based conformance checking techniques are applied to derive two complementary indicators of conformance: fitness and appropriateness. The approach has been implemented in a toolset for business process analysis and mining, namely ProM, and has been tested in an environment comprising multiple Oracle BPEL servers.
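To make the fitness indicator concrete, the sketch below replays logged traces against a toy Petri net and computes the standard token-replay fitness f = ½(1 − m/c) + ½(1 − r/p), where m counts missing tokens, r remaining tokens, c consumed and p produced tokens. The net and log are illustrative assumptions, not the article's BPEL translation or the ProM implementation.

```python
from collections import Counter

# Toy Petri net for a three-step service: each transition consumes tokens
# from its input places and produces tokens on its output places.
# (Hypothetical net, standing in for a net translated from BPEL.)
NET = {
    "receive":  ({"start": 1}, {"p1": 1}),
    "validate": ({"p1": 1},    {"p2": 1}),
    "reply":    ({"p2": 1},    {"end": 1}),
}
INITIAL, FINAL = Counter({"start": 1}), Counter({"end": 1})

def replay_fitness(traces):
    """Token-replay fitness: f = 0.5*(1 - m/c) + 0.5*(1 - r/p)."""
    produced = consumed = missing = remaining = 0
    for trace in traces:
        marking = Counter(INITIAL)
        produced += sum(INITIAL.values())        # initial tokens count as produced
        for event in trace + ["__final__"]:
            ins, outs = (FINAL, {}) if event == "__final__" else NET[event]
            for place, need in ins.items():
                if marking[place] < need:        # force-fire: create missing tokens
                    missing += need - marking[place]
                    marking[place] = need
                marking[place] -= need
                consumed += need
            for place, n in outs.items():
                marking[place] += n
                produced += n
        remaining += sum(marking.values())       # leftover tokens signal deviation
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

print(replay_fitness([["receive", "validate", "reply"]]))  # 1.0: conforming trace
print(replay_fitness([["receive", "reply"]]))              # ~0.67: skipped step
```

A fitness of 1.0 means the log replays without creating or leaving tokens; appropriateness, the complementary indicator, would additionally penalise models that are too generic for the observed behavior.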
Abstract:
With a focus on understanding the overall effect of DFM on human factors, the DFM/DFA literature was systematically searched, reviewed and critically assessed. The influence of DFM on work organization is analysed using examples from the literature, with the aim of quantifying consequences on work performance, job satisfaction and human workload where possible. It is also shown that job enlargement through DFM tasks increases the workload of the Product Designer, who is on the critical path of the engineering process. Without measures to counterbalance this higher workload of the Product Designer, DFM projects in complex engineering environments are likely to fail.
Abstract:
When compared with other arthroplasties, Total Ankle Joint Replacement (TAR) is much less successful. Attempts to remedy this situation by modifying the implant design, for example by making its form more akin to the original ankle anatomy, have largely met with failure. One of the major obstacles is a gap in current knowledge relating to ankle joint force, specifically the lack of reliable data quantifying the forces and moments acting on the ankle in both healthy and diseased joints. The limited data that do exist are thought to be inaccurate [1] and are based on simplistic, outdated two-dimensional discrete techniques.
Abstract:
Purpose. To devise and validate artist-rendered grading scales for contact lens complications. Methods. Each of eight tissue complications of contact lens wear (listed under 'Results') was painted by a skilled ophthalmic artist (Terry R. Tarrant) in five grades of severity: 0 (normal), 1 (trace), 2 (mild), 3 (moderate) and 4 (severe). A representative slit lamp photograph of a tissue response for each of the eight complications was shown to 404 contact lens practitioners who had never before used clinical grading scales. The practitioners were asked to grade each tissue response to the nearest 0.1 grade unit by interpolation. Results. The standard deviation (±s.d.) of the 404 responses for each tissue complication was as follows: corneal staining 0.5; endothelial polymegethism 0.7; epithelial microcysts 0.5; endothelial blebs 0.4; stromal edema (value illegible in source); conjunctival hyperemia 0.4; stromal neovascularization 0.4; papillary conjunctivitis 0.5. The frequency distributions and best-fit normal curves were also plotted. The precision of grading (s.d. × 2) ranged from 0.8 to 1.4, with a mean precision of 1.0. Conclusions. Grading scales afford contact lens practitioners a method of quantifying the severity of adverse tissue responses to contact lens wear. It is noteworthy that the statistically verified precision of grading (1.0 scale unit) concurs precisely with the essential design feature of the grading scales that each grading step of 1.0 corresponds to a clinically significant difference in severity. Thus, as a general rule, a difference or change in grade of > 1.0 can be taken to be both clinically and statistically significant when using these grading scales. Trained observers are likely to achieve even greater grading precision. Supported by Hydron Limited.
Abstract:
The privacy of efficient tree-based RFID authentication protocols depends heavily on the branching factor at the top layer. Indefinitely increasing the branching factor, however, is not a viable option. This paper proposes the alternate-tree walking scheme as well as two protocols to circumvent this problem. The privacy of the resulting protocols is shown to be comparable to that of linear-time protocols, where there is no leakage of information, whilst reducing the computational load of the database by one-third of what is required of tree-based protocols during authentication. We also identify and address a limitation in quantifying privacy in RFID protocols.
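For context on the computational-load comparison: in a balanced tree-based protocol with branching factor b over N tags, the back-end database checks on the order of b·log_b(N) candidate keys per authentication, versus N for linear-time protocols. A hypothetical back-of-envelope sketch (the parameters are illustrative, not the paper's):

```python
import math

def linear_work(num_tags: int) -> int:
    # Linear-time protocols: the database may try every tag's key.
    return num_tags

def tree_work(num_tags: int, branching: int) -> int:
    # Tree-based protocols: at each of ceil(log_b(N)) levels the
    # database tries at most b candidate keys.
    depth = math.ceil(math.log(num_tags) / math.log(branching))
    return branching * depth

N = 2**20  # about a million tags
for b in (2, 32, 1024):
    print(f"b={b:5d}: tree={tree_work(N, b):7d}  linear={linear_work(N)}")
# Larger branching factors cost more per level but, as the paper notes,
# are what the privacy of the top layer depends on.
```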
Abstract:
Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches in the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM.
The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean (±SD) BMI = 34.2±7.0 kg/m²). The treatment effect showed reductions in total fat (1.4% of energy intake), saturated fat (0.9% of energy intake), body weight (0.7 kg) and waist circumference (2.0 cm). In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in the interpretation of results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges experienced when measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol.
The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m²). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam, but forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam.
Modifications to the method were made to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.

Study 3: The development and trial of the NuDAM analysis protocol.
Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m²) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. These findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record.

Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m²). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM.
The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake was evaluated in a sample of 10 adults (6 males; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m²). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with an EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found across energy and macronutrients, except fat (r=0.24), between the two dietary measures.
High agreement was observed between dietitians for estimates of energy and macronutrient intake derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods.

This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
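The EI:TEE ratio reported in Study 4 is a standard plausibility check: with doubly labelled water providing TEE, reported energy intake in weight-stable adults should be close to expenditure. A minimal sketch under that assumption (the cut-offs below are illustrative, not the thesis's criteria):

```python
def ei_tee_ratio(reported_ei_mj: float, tee_mj: float) -> float:
    """Ratio of reported energy intake to measured total energy
    expenditure; ~1.0 is expected in weight-stable adults."""
    return reported_ei_mj / tee_mj

def plausible(ratio: float, low: float = 0.76, high: float = 1.24) -> bool:
    # Illustrative Goldberg-style bounds; published cut-offs are usually
    # derived from subject-specific measurement error.
    return low <= ratio <= high

r = ei_tee_ratio(7.6, 10.0)   # e.g. 7.6 MJ reported vs. 10.0 MJ expended
print(r, plausible(r))        # 0.76 -> borderline under-reporting
```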
Abstract:
Background: The accurate evaluation of physical activity levels amongst youth is critical for quantifying physical activity behaviors and evaluating the effect of physical activity interventions. The purpose of this review is to evaluate contemporary approaches to physical activity evaluation amongst youth. Data sources: The literature from a range of sources was reviewed and synthesized to provide an overview of contemporary approaches for measuring youth physical activity. Results: Five broad categories are described: self-report, instrumental movement detection, biological approaches, direct observation, and combined methods. Emerging technologies and priorities for future research are also identified. Conclusions: There will always be a trade-off between accuracy and available resources when choosing the best approach for measuring physical activity amongst youth. Unfortunately, cost and logistical challenges may prohibit the use of "gold standard" physical activity measurement approaches such as doubly labelled water. Other objective methods such as heart rate monitoring, accelerometry, pedometry, indirect calorimetry, or a combination of measures have the potential to better capture the duration and intensity of physical activity, while self-reported measures are useful for capturing the type and context of activity.
Abstract:
Introduction: Evidence concerning the alteration of knee function during landing suffers from a lack of consensus. This uncertainty can be attributed to methodological flaws, particularly in relation to the statistical analysis of variable human movement data. Aim: The aim of this study was to compare single-subject and group analyses in quantifying alterations in the magnitude and within-participant variability of knee mechanics during a step landing task. Methods: A group of healthy men (N = 12) stepped down from a knee-high platform for 60 consecutive trials, each separated by a 1-minute rest. The magnitude and within-participant variability of sagittal knee stiffness and coordination of the landing leg during the immediate post-impact period were evaluated. Knee coordination was quantified in the sagittal plane by calculating the mean absolute relative phase between sagittal shank and thigh motion (MARP1) and between knee rotation and knee flexion (MARP2). Changes across trials were compared between group and single-subject statistical analyses. Results: The group analysis detected significant reductions in MARP1 magnitude. However, the single-subject analyses detected changes in all dependent variables, including increases in variability with task repetition. Between-individual variation was also present in the timing, size and direction of alterations with task repetition. Conclusion: The results have important implications for the interpretation of existing information regarding the adaptation of knee mechanics to interventions such as fatigue, footwear or landing height. It is proposed that a familiarisation session be incorporated in future experiments on a single-subject basis prior to an intervention.
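The MARP measures used here can be sketched in code. The study's exact phase definition (often built from position-velocity phase portraits) may differ; this version uses the Hilbert transform as one common way to obtain continuous phase angles, with synthetic angle signals standing in for real kinematics:

```python
import numpy as np
from scipy.signal import hilbert

def marp(theta_a: np.ndarray, theta_b: np.ndarray) -> float:
    """Mean absolute relative phase (radians) between two angle
    time series; 0 indicates perfectly in-phase coordination."""
    phi_a = np.unwrap(np.angle(hilbert(theta_a - theta_a.mean())))
    phi_b = np.unwrap(np.angle(hilbert(theta_b - theta_b.mean())))
    return float(np.mean(np.abs(phi_a - phi_b)))

# Synthetic example: thigh motion lags shank motion by 0.3 rad.
t = np.linspace(0, 2 * np.pi, 1000)
shank = np.sin(5 * t)
thigh = np.sin(5 * t - 0.3)
print(marp(shank, thigh))   # close to 0.3
```

Computing such a measure per trial for each participant is what makes the single-subject analysis possible: each individual's 60-trial series can be tested for change without averaging across the group.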
Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage and appropriate retrofitting help prevent failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (energy from the damage source itself is utilised, with no need to supply energy from outside) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment.

In the AE technique, the location of the emission source is usually calculated using the arrival times and velocities of the AE signals recorded by a number of sensors. But complications arise because AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localisation.

A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed for the AE technique, none has achieved universal acceptance or proved suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for suitability for damage quantification in ductile materials such as steel. They were found to give encouraging results for the analysis of laboratory data, extending the possibility of their use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
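The amplitude-distribution idea behind b-value analysis can be sketched as follows. In AE practice the Gutenberg-Richter relation is adapted as log10 N(≥A) = a − b·(A/20), with A the peak amplitude in dB; a falling b-value is commonly read as concentrated macro-crack growth and a rising one as distributed micro-cracking. The binning and fitting choices below are illustrative assumptions, and the improved b-value variant examined in the thesis additionally selects amplitude limits statistically:

```python
import numpy as np

def ae_b_value(amplitudes_db: np.ndarray, bin_width: float = 5.0) -> float:
    """b-value from AE peak amplitudes via a least-squares fit of
    log10(cumulative hit count) against amplitude/20 (dB-to-magnitude)."""
    edges = np.arange(amplitudes_db.min(), amplitudes_db.max(), bin_width)
    cum = np.array([(amplitudes_db >= e).sum() for e in edges])
    keep = cum > 0
    slope, _ = np.polyfit(edges[keep] / 20.0, np.log10(cum[keep]), 1)
    return -slope

# Synthetic AE hits: exponentially distributed amplitudes above a 40 dB
# threshold (scale 10 dB), for which b = 20/(10*ln 10) ~ 0.87.
rng = np.random.default_rng(0)
amps = 40.0 + rng.exponential(scale=10.0, size=5000)
print(ae_b_value(amps))
```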
Abstract:
This study examines the influence of cancer stage, distance to treatment facilities and area disadvantage on spatial inequalities in breast and colorectal cancer survival. We also estimate the number of premature deaths after adjusting for cancer stage to quantify the impact of spatial survival inequalities. This was a population-based descriptive study of residents aged <90 years in Queensland, Australia, diagnosed with primary invasive breast (25,202 females) or colorectal (14,690 males, 11,700 females) cancers during 1996-2007. Bayesian hierarchical models explored relative survival inequalities across 478 regions. Cancer stage and disadvantage explained the spatial inequalities in breast cancer survival; however, spatial inequalities in colorectal cancer survival persisted after adjustment. Of the 6,019 colorectal cancer deaths within 5 years of diagnosis, 470 (8%) were associated with spatial inequalities in non-diagnostic factors, i.e. factors beyond cancer stage at diagnosis. For breast cancer, of 2,412 deaths, 170 (7%) were related to spatial inequalities in non-diagnostic factors. Quantifying premature deaths can increase the incentive for action to reduce these spatial inequalities.
Abstract:
Atmospheric nanoparticles are among the pollutants currently unregulated by ambient air quality standards. The aim of this chapter is to assess the environmental and health impacts of atmospheric nanoparticles in European environments. The chapter begins with conventional information on the origin of atmospheric nanoparticles, followed by their physical and chemical characteristics. A brief overview of recently published review articles on this topic is then presented to guide readers interested in exploring any specific aspect of nanoparticles in greater detail. A further section summarises recently published studies on atmospheric nanoparticles in European cities, covering a total of about 45 sampling locations in 30 different cities within 15 European countries and quantifying levels of roadside and urban background particle number concentrations (PNCs). Average PNCs at roadside and urban background sites were found to be 3.82±3.25 × 10⁴ cm⁻³ and 1.63±0.82 × 10⁴ cm⁻³, respectively, giving a roadside-to-background PNC ratio of ~2.4. Engineered nanoparticles are one of the key emerging categories of airborne nanoparticles, especially in indoor environments, and their ambient concentrations may increase in future due to the widespread use of nanotechnology-integrated products. An evaluation of their sources and probable impacts on air quality and human health is briefly discussed in the following section. Respiratory deposition doses received by the public exposed to roadside PNCs in numerous European locations are then estimated; these were found to lie in the 1.17–7.56 × 10¹⁰ h⁻¹ range over the studied roadside European locations. The following section discusses a potential framework for airborne nanoparticle regulation in Europe and, in addition, the existing control measures to limit nanoparticle emissions at source. The chapter finally concludes with a synthesis of the topic areas covered and highlights important areas for further work.
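Respiratory deposition doses of this kind are typically computed as the product of particle number concentration, a size-dependent deposition fraction and the breathing (minute ventilation) rate. A sketch with illustrative parameter values (the chapter's exact deposition fractions and ventilation rates are not reproduced here):

```python
def deposition_rate(pnc_per_cm3: float,
                    deposition_fraction: float = 0.66,
                    ventilation_m3_per_h: float = 0.54) -> float:
    """Particles deposited in the respiratory tract per hour:
    PNC x DF x ventilation rate. DF ~0.66 for ultrafine particles and
    ~0.54 m3/h ventilation are assumed literature-style values."""
    cm3_per_m3 = 1e6
    return pnc_per_cm3 * deposition_fraction * ventilation_m3_per_h * cm3_per_m3

# Using the mean roadside PNC quoted above (3.82 x 10^4 cm^-3):
print(f"{deposition_rate(3.82e4):.2e} particles/h")   # ~1.4e10, within the
                                                      # quoted 1.17-7.56 x 10^10 range
```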
Abstract:
This article presents a study quantifying the level of occupational access and wage discrimination in Great Britain. The traditional approach to quantifying the level of sex discrimination is to distinguish gender differences in productive characteristics from the unequal treatment of those characteristics according to gender. The role of inter- versus intra-occupational effects in determining the magnitude of wage differences between men and women was examined. The econometric results provided estimates of the wage differential in six broad occupational classifications, together with an aggregate picture of how important the occupational distribution of females is in explaining their lower average wage. In summary, the results of the study suggest that the vast majority of the male/female wage differential arises from intra-occupation effects, providing evidence that occupational segregation is not a major contributor to the observed male/female wage differential.
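The "traditional approach" described here is commonly formalized as an Oaxaca-Blinder decomposition (the abstract does not name the method, so this framing is an assumption): gender-specific wage regressions are estimated, and the mean log-wage gap is split into a part explained by characteristics and an unexplained part attributed to differing returns,

```latex
\bar{W}_m - \bar{W}_f =
  \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained: productive characteristics}}
  \;+\;
  \underbrace{\bar{X}_f'\,(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained: treatment of characteristics}}
```

where \bar{X}_g are mean characteristics and \hat{\beta}_g the estimated returns for gender g. Evaluating such a decomposition within each of the six occupational classifications, and again across them, is what separates intra- from inter-occupational effects.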
Abstract:
Quantifying spatial and/or temporal trends in environmental monitoring data requires that measurements be taken at multiple sites. The number of sites and the duration of measurement at each site must be balanced against the cost of equipment and the availability of trained staff. The split panel design comprises short measurement campaigns at multiple locations and continuous monitoring at reference sites [2]. Here we present a spatio-temporal model of ultrafine particle number concentration (PNC) recorded according to a split panel design. The model describes the temporal trends and background levels at each site. The data were measured as part of the "Ultrafine Particles from Transport Emissions and Child Health" (UPTECH) project, which aims to link air quality measurements, child health outcomes and a questionnaire on each child's history and demographics. The UPTECH project involves measuring aerosol and particle counts and local meteorology at each of 25 primary schools for two weeks and at three long-term monitoring stations, as well as health outcomes for a cohort of students at each school [3].
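One simple reading of "temporal trends and background levels at each site" is an additive decomposition, PNC(site, day) = background(site) + trend(day) + noise, in which the continuously monitored reference sites identify the shared trend that short campaign records alone cannot. The toy sketch below follows that assumption; the actual UPTECH model is considerably richer:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2012-01-01", periods=60, freq="D")
trend = 0.3 * np.sin(np.arange(60) / 9.0)          # shared temporal trend

# Split panel: the reference site runs all 60 days; hypothetical campaign
# sites ("school_A", "school_B") each run for two weeks.
sites = {"reference": (2.0, 0, 60), "school_A": (3.0, 0, 14), "school_B": (1.5, 30, 44)}
rows = []
for site, (background, start, stop) in sites.items():
    pnc = background + trend[start:stop] + rng.normal(0, 0.05, stop - start)
    rows.append(pd.DataFrame({"site": site, "day": days[start:stop], "pnc": pnc}))
panel = pd.concat(rows, ignore_index=True)

# Estimate the shared trend from the reference site, then each site's
# background as its mean deviation from that trend (up to a common constant).
ref = panel[panel.site == "reference"].set_index("day")["pnc"]
shared_trend = ref - ref.mean()
panel["detrended"] = panel["pnc"] - panel["day"].map(shared_trend)
print(panel.groupby("site")["detrended"].mean())
```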
Abstract:
Purpose. Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that they are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods. Every possible combination of 2 to 16 images (where the mean of all 16 was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results. In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images was within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions. The "sample combination analysis" presented here can be used to determine the sample size required for a desired level of accuracy in the quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
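The "sample combination analysis" lends itself to a short sketch: for each sample size k, enumerate the k-image combinations and record how often the combination mean falls within a chosen tolerance of the 16-image true mean. The readings below are hypothetical, and the study's additional exclusion of combinations overlapping by more than 20% is omitted for brevity:

```python
import itertools
import numpy as np

def coverage(values: np.ndarray, k: int, tol: float) -> float:
    """Fraction of all k-image combinations whose mean lies within
    +/- tol (as a proportion) of the all-image 'true' mean."""
    true_mean = values.mean()
    combos = list(itertools.combinations(values, k))
    hits = sum(abs(np.mean(c) - true_mean) <= tol * true_mean for c in combos)
    return hits / len(combos)

# Hypothetical branch-density readings from 16 images of one subject.
rng = np.random.default_rng(42)
images = rng.normal(25.0, 8.0, size=16)
for k in (2, 5, 8, 12):
    print(k, round(coverage(images, k, tol=0.30), 3))  # coverage rises with k
```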
Abstract:
We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min, acquiring a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of the drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow the geometry of the porosity and the drainage architecture to be quantified from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration proceeds efficiently, producing new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
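The square-root fit mentioned above amounts to regressing front position on the square root of elapsed time, x(t) = k·√t, the signature of a diffusion-controlled process. A sketch with hypothetical digitized front positions (the real values come from the nine tomographic time steps):

```python
import numpy as np
from scipy.optimize import curve_fit

def front_position(t, k):
    """Diffusion-style front advance: x(t) = k * sqrt(t)."""
    return k * np.sqrt(t)

# Hypothetical (time in minutes, front advance in micrometres) pairs.
t_min = np.array([35, 70, 105, 140, 175, 210, 245, 280, 310], dtype=float)
x_um = np.array([180, 265, 320, 370, 415, 450, 490, 520, 550], dtype=float)

(k,), _ = curve_fit(front_position, t_min, x_um)
print(f"k = {k:.1f} um per sqrt(minute)")   # a good sqrt-fit supports the
                                            # diffusion interpretation
```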