905 results for Methods: Statistical
Abstract:
Background: Known risk factors for secondary lymphedema only partially explain who develops lymphedema following cancer, suggesting that inherited genetic susceptibility may influence risk. Moreover, identification of molecular signatures could facilitate lymphedema risk prediction prior to surgery or lead to effective drug therapies for prevention or treatment. Recent advances in the molecular biology underlying development of the lymphatic system and related congenital disorders implicate a number of potential candidate genes to explore in relation to secondary lymphedema. Methods and Results: We undertook a nested case-control study, with participants who had developed lymphedema after surgical intervention within the first 18 months of their breast cancer diagnosis serving as cases (n=22) and those without lymphedema serving as controls (n=98), identified from a prospective, population-based cohort study in Queensland, Australia. TagSNPs that covered all known genetic variation in the genes SOX18, VEGFC, VEGFD, VEGFR2, VEGFR3, RORC, FOXC2, LYVE1, ADM and PROX1 were selected for genotyping. Multiple SNPs within three receptor genes, VEGFR2, VEGFR3 and RORC, were associated with lymphedema, defined by statistical significance (p<0.05) or extreme risk estimates (OR<0.5 or >2.0). Conclusions: These provocative, albeit preliminary, findings regarding possible genetic predisposition to secondary lymphedema following breast cancer treatment warrant further attention for potential replication using larger datasets.
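For illustration, a per-SNP screen of this kind can be sketched as a simple allele-count analysis with Fisher's exact test; this is a minimal sketch under assumed counts and SNP labels, not the study's actual analysis or data.

```python
# Minimal sketch of a per-SNP case-control screen of the kind described
# above: for each tagSNP, build a 2x2 allele-count table (cases vs controls,
# minor vs major allele) and compute an odds ratio and p-value with Fisher's
# exact test. SNP labels and counts are hypothetical, not study data.
from scipy.stats import fisher_exact

# Table layout: [[case_minor, case_major], [control_minor, control_major]];
# 44 case alleles (n=22 cases) and 196 control alleles (n=98 controls).
hypothetical_tables = {
    "VEGFR2_tagSNP_A": [[14, 30], [32, 164]],
    "VEGFR3_tagSNP_B": [[9, 35], [45, 151]],
}

for snp, table in hypothetical_tables.items():
    odds_ratio, p_value = fisher_exact(table)
    # Flag SNPs meeting the abstract's criteria: p<0.05, OR<0.5 or OR>2.0.
    flagged = p_value < 0.05 or odds_ratio < 0.5 or odds_ratio > 2.0
    print(f"{snp}: OR={odds_ratio:.2f}, p={p_value:.3f}, flagged={flagged}")
```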
Abstract:
Introduction: Feeding on demand supports an infant’s innate capacity to respond to hunger and satiety cues and may promote later self-regulation of intake. Our aim was to examine whether feeding style (on demand vs to schedule) is associated with weight gain in early life. Methods: Participants were first-time mothers of healthy term infants enrolled in NOURISH, an RCT evaluating an intervention to promote positive early feeding practices. Baseline assessment occurred when infants were aged 2-7 months. Infants who could be clearly categorised as fed on demand or fed to schedule (by maternal self-report) were included in the logistic regression analysis. The model was adjusted for infant gender, breastfeeding and maternal age, education and BMI. Weight gain was defined as a positive difference between baseline and birthweight z-scores (WHO standards), indicating tracking above the birth weight percentile. Results: Data from 356 infants with a mean age of 4.4 (SD 1.0) months were available. Of these, 197 (55%) were fed on demand and 42 (12%) were fed to schedule. There was no statistically significant association between feeding style and weight gain [OR=0.72 (95%CI 0.35-1.46), P=0.36]. Formula-fed infants were three times more likely to be fed to schedule, and formula feeding was independently associated with increased weight gain [OR=2.02 (95%CI 1.11-3.66), P=0.021]. Conclusion: In this preliminary analysis the association between feeding style and weight gain did not reach statistical significance; however, the effect size may be clinically relevant, and future analyses will include the full study sample (N=698).
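A minimal sketch of an adjusted logistic regression of this form, using statsmodels and assuming hypothetical file and column names rather than the actual NOURISH data, might look as follows:

```python
# Sketch of the adjusted logistic regression described above: weight gain
# regressed on feeding style, adjusted for infant gender, breastfeeding and
# maternal age, education and BMI. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nourish_baseline.csv")  # hypothetical data file

model = smf.logit(
    "weight_gain ~ fed_on_demand + male + breastfed"
    " + maternal_age + maternal_education + maternal_bmi",
    data=df,
).fit()

# Report odds ratios with 95% confidence intervals, as in the abstract.
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```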
Abstract:
In this paper, spatially offset Raman spectroscopy (SORS) is demonstrated for non-invasively investigating the composition of drug mixtures inside an opaque plastic container. The mixtures consisted of three components including a target drug (acetaminophen or phenylephrine hydrochloride) and two diluents (glucose and caffeine). The target drug concentrations ranged from 5% to 100%. After conducting SORS analysis to ascertain the Raman spectra of the concealed mixtures, principal component analysis (PCA) was performed on the SORS spectra to reveal trends within the data. Partial least squares (PLS) regression was used to construct models that predicted the concentration of each target drug, in the presence of the other two diluents. The PLS models were able to predict the concentration of acetaminophen in the validation samples with a root-mean-square error of prediction (RMSEP) of 3.8% and the concentration of phenylephrine hydrochloride with an RMSEP of 4.6%. This work demonstrates the potential of SORS, used in conjunction with multivariate statistical techniques, to perform non-invasive, quantitative analysis on mixtures inside opaque containers. This has applications for pharmaceutical analysis, such as monitoring the degradation of pharmaceutical products on the shelf, in forensic investigations of counterfeit drugs, and for the analysis of illicit drug mixtures which may contain multiple components.
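A minimal sketch of this PCA-plus-PLS workflow, using scikit-learn on simulated stand-in spectra rather than real SORS measurements, might look as follows:

```python
# Sketch of the chemometric pipeline described above: PCA to reveal trends in
# the SORS spectra, then a PLS regression model to predict target-drug
# concentration, scored by root-mean-square error of prediction (RMSEP).
# X and y are simulated stand-ins for real SORS spectra and concentrations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.uniform(5, 100, size=60)              # target drug concentration (%)
X = np.outer(y, rng.normal(size=500))         # toy "spectra", 500 channels
X += rng.normal(scale=5.0, size=X.shape)      # measurement noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=3).fit(X_train)        # exploratory trend analysis
print("PCA variance explained:", pca.explained_variance_ratio_.round(2))

pls = PLSRegression(n_components=5).fit(X_train, y_train)
rmsep = np.sqrt(mean_squared_error(y_test, pls.predict(X_test)))
print(f"RMSEP: {rmsep:.1f}%")
```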
Abstract:
Under pressure from both the ever-increasing level of market competition and the global financial crisis, clients in the consumer electronics (CE) industry are keen to understand how to choose the most appropriate procurement method and hence improve their competitiveness. Four rounds of Delphi questionnaire surveys were conducted with 12 experts to identify the most appropriate procurement method in the Hong Kong CE industry. Five key selection criteria in the CE industry are highlighted: product quality, capability, price competition, flexibility and speed. The study also revealed that product quality was the most important criterion for “First type used commercially” and “Major functional improvements” projects. For “Minor functional improvements” projects, price competition was the most crucial factor to consider during procurement method selection. These research findings provide clients with useful insights for selecting procurement strategies.
Abstract:
Research Interests: Are parents complying with the legislation? Is this the same for urban, regional and rural parents? Indigenous parents? What difficulties do parents experience in complying? Do parents understand why the legislation was put in place? Have there been negative consequences for other organisations or sectors of the community?
Abstract:
Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. The emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood; however, typical business process modeling methods do not yet consider this additional contextual information in their process designs. In this chapter, we describe our research towards innovative and advanced process modeling methods that include mechanisms to incorporate relevant contextual drivers, and their impacts on business processes, in process design models. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop these innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application to the claims handling process at the case organization.
Abstract:
This paper reviews the current state of the application of infrared methods, particularly mid-infrared (mid-IR) and near infrared (NIR), for evaluating the structural and functional integrity of articular cartilage. While a considerable amount of research has been conducted on tissue characterization using mid-IR, it is almost certain that full-thickness cartilage assessment is not feasible with this method. In contrast, the considerably greater penetration capacity of NIR makes it a suitable candidate for full-thickness cartilage evaluation. Nevertheless, significant research is still required to improve the specificity and clinical applicability of the method before it can be used to distinguish between functional and dysfunctional cartilage.
Abstract:
Purpose: To compare accuracies of different methods for calculating human lens power when lens thickness is not available. Methods: Lens power was calculated by four methods. Three methods were used with previously published biometry and refraction data of 184 emmetropic and myopic eyes of 184 subjects (age range [18, 63] years, spherical equivalent range [–12.38, +0.75] D). These three methods consist of the Bennett method, which uses lens thickness, our modification of the Stenström method and the Bennett-Rabbetts method, both of which do not require knowledge of lens thickness. These methods include c constants, which represent distances from lens surfaces to principal planes. Lens powers calculated with these methods were compared with those calculated using phakometry data available for a subgroup of 66 emmetropic eyes (66 subjects). Results: Lens powers obtained from the Bennett method corresponded well with those obtained by phakometry for emmetropic eyes, although individual differences up to 3.5 D occurred. Lens powers obtained from the modified Stenström and Bennett-Rabbetts methods deviated significantly from those obtained with either the Bennett method or phakometry. Customizing the c constants improved this agreement, but applying these constants to the entire group gave mean lens power differences of 0.71 ± 0.56 D compared with the Bennett method. By further optimizing the c constants, the agreement with the Bennett method was within ±1 D for 95% of the eyes. Conclusion: With appropriate constants, the modified Stenström and Bennett-Rabbetts methods provide a good approximation of the Bennett lens power in emmetropic and myopic eyes.
Abstract:
Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination, but the methodology is applicable to other design objectives such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model, and the algorithm relies on a convenient estimator of the evidence of each model which is essentially a function of importance sampling weights. Other methods for this task, such as the quadrature approaches often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem-specific tuning. We demonstrate the methodology on three applications, including discriminating between models for the decline in motor neuron numbers in patients suffering from neurological diseases such as motor neuron disease.
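A minimal sketch of the core idea, assuming the standard SMC identity that the evidence is the product of the mean unnormalised incremental importance weights over steps (a schematic illustration, not the authors' code), might look as follows:

```python
# Sketch of the evidence estimate that drives the model-discrimination
# utility: in an SMC algorithm run per model, the marginal likelihood
# (evidence) of the data observed so far is accumulated as the product of
# the mean unnormalised importance weights at each step.
import numpy as np

def smc_log_evidence(log_incremental_weights):
    """log Z estimated from per-step unnormalised log-weights.

    log_incremental_weights: list of length-N arrays, one per SMC step,
    holding log w_t(i) for each of N particles.
    """
    log_z = 0.0
    for log_w in log_incremental_weights:
        # Log of the average incremental weight, computed stably.
        m = log_w.max()
        log_z += m + np.log(np.mean(np.exp(log_w - m)))
    return log_z

def posterior_model_probs(log_evidences, log_priors=None):
    """Posterior model probabilities from per-model log-evidences."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if log_priors is not None:
        log_ev = log_ev + np.asarray(log_priors)
    log_ev -= log_ev.max()
    p = np.exp(log_ev)
    return p / p.sum()

# E.g. two candidate models after several observations (toy numbers):
print(posterior_model_probs([-12.3, -14.1]))
```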
Abstract:
Concerns regarding groundwater contamination with nitrate and the long-term sustainability of groundwater resources have prompted the development of a multi-layered three dimensional (3D) geological model to characterise the aquifer geometry of the Wairau Plain, Marlborough District, New Zealand. The 3D geological model, which consists of eight litho-stratigraphic units, has been subsequently used to synthesise hydrogeological and hydrogeochemical data for different aquifers in an approach that aims to demonstrate how integration of water chemistry data within the physical framework of a 3D geological model can help to better understand and conceptualise groundwater systems in complex geological settings. Multivariate statistical techniques (e.g. Principal Component Analysis and Hierarchical Cluster Analysis) were applied to groundwater chemistry data to identify hydrochemical facies which are characteristic of distinct evolutionary pathways and a common hydrologic history of groundwaters. Principal Component Analysis on hydrochemical data demonstrated that natural water-rock interactions, redox potential and human agricultural impact are the key controls of groundwater quality in the Wairau Plain. Hierarchical Cluster Analysis revealed distinct hydrochemical water quality groups in the Wairau Plain groundwater system. Visualisation of the results of the multivariate statistical analyses and distribution of groundwater nitrate concentrations in the context of aquifer lithology highlighted the link between groundwater chemistry and the lithology of host aquifers. The methodology followed in this study can be applied in a variety of hydrogeological settings to synthesise geological, hydrogeological and hydrochemical data and present them in a format readily understood by a wide range of stakeholders. This enables a more efficient communication of the results of scientific studies to the wider community.
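A minimal sketch of this multivariate workflow, assuming hypothetical analyte names and a placeholder data file, and with scikit-learn/SciPy routines standing in for the tools actually used, might look as follows:

```python
# Sketch of the multivariate workflow described above: standardise major-ion
# chemistry, run PCA to identify the dominant controls, then hierarchical
# (Ward) clustering to group samples into hydrochemical facies. The data
# frame and analyte names are hypothetical placeholders.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("wairau_groundwater_chemistry.csv")   # hypothetical file
analytes = ["Ca", "Mg", "Na", "K", "HCO3", "Cl", "SO4", "NO3"]

X = StandardScaler().fit_transform(df[analytes])

pca = PCA(n_components=3).fit(X)
print("variance explained:", pca.explained_variance_ratio_)
print("loadings (PC1):", dict(zip(analytes, pca.components_[0].round(2))))

Z = linkage(X, method="ward")
df["facies"] = fcluster(Z, t=4, criterion="maxclust")  # e.g. four groups
print(df.groupby("facies")[analytes].mean())
```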
Abstract:
The National Hand Hygiene Initiative, implemented in Australia in 2009, is currently being evaluated for effectiveness and cost-effectiveness by a multidisciplinary team of researchers. Data from a wide range of sources are being harvested to address the research questions. The data are observational and appropriate statistical and economic modelling methods are being used. Decision makers will be provided with new knowledge about how hand hygiene interventions should be organised and what investment decisions are justified. This is novel research and the authors are unaware of any other evaluation of hand hygiene improvement initiatives. This paper describes the evaluation currently underway.
Abstract:
During the course of several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). However, the further study of the uses of Twitter during natural disasters relies on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperkeeper to provide a low-cost, simple and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
Abstract:
Background Although risk of human papillomavirus (HPV)–associated cancers of the anus, cervix, oropharynx, penis, vagina, and vulva is increased among persons with AIDS, the etiologic role of immunosuppression is unclear and incidence trends for these cancers over time, particularly after the introduction of highly active antiretroviral therapy in 1996, are not well described. Methods Data on 499 230 individuals diagnosed with AIDS from January 1, 1980, through December 31, 2004, were linked with cancer registries in 15 US regions. Risk of in situ and invasive HPV-associated cancers, compared with that in the general population, was measured by use of standardized incidence ratios (SIRs) and 95% confidence intervals (CIs). We evaluated the relationship of immunosuppression with incidence during the period of 4–60 months after AIDS onset by use of CD4 T-cell counts measured at AIDS onset. Incidence during the 4–60 months after AIDS onset was compared across three periods (1980–1989, 1990–1995, and 1996–2004). All statistical tests were two-sided. Results Among persons with AIDS, we observed statistically significantly elevated risk of all HPV-associated in situ (SIRs ranged from 8.9, 95% CI = 8.0 to 9.9, for cervical cancer to 68.6, 95% CI = 59.7 to 78.4, for anal cancer among men) and invasive (SIRs ranged from 1.6, 95% CI = 1.2 to 2.1, for oropharyngeal cancer to 34.6, 95% CI = 30.8 to 38.8, for anal cancer among men) cancers. During 1996–2004, low CD4 T-cell count was associated with statistically significantly increased risk of invasive anal cancer among men (relative risk [RR] per decline of 100 CD4 T cells per cubic millimeter = 1.34, 95% CI = 1.08 to 1.66, P = .006) and non–statistically significantly increased risk of in situ vagina or vulva cancer (RR = 1.52, 95% CI = 0.99 to 2.35, P = .055) and of invasive cervical cancer (RR = 1.32, 95% CI = 0.96 to 1.80, P = .077). Among men, incidence (per 100 000 person-years) of in situ and invasive anal cancer was statistically significantly higher during 1996–2004 than during 1990–1995 (61% increase for in situ cancers, 18.3 cases vs 29.5 cases, respectively; RR = 1.71, 95% CI = 1.24 to 2.35, P < .001; and 104% increase for invasive cancers, 20.7 cases vs 42.3 cases, respectively; RR = 2.03, 95% CI = 1.54 to 2.68, P < .001). Incidence of other cancers was stable over time. Conclusions Risk of HPV-associated cancers was elevated among persons with AIDS and increased with increasing immunosuppression. The increasing incidence for anal cancer during 1996–2004 indicates that prolonged survival may be associated with increased risk of certain HPV-associated cancers.
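For reference, a standardised incidence ratio with an exact Poisson confidence interval can be sketched as follows; the observed and expected counts below are made-up illustrations, not study figures.

```python
# Sketch of the standardised incidence ratio (SIR) used above: observed
# cancers among persons with AIDS divided by the number expected from
# general-population rates, with an exact (Garwood) Poisson 95% CI.
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    sir = observed / expected
    # Exact Poisson limits for the observed count, scaled by the expected count.
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return sir, lower, upper

# Hypothetical example: 120 observed cases vs 3.5 expected.
sir, lo, hi = sir_with_ci(observed=120, expected=3.5)
print(f"SIR = {sir:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```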
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population is aged 30 years or below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups are required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One artefact observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. The first aim of this study was to investigate two accurate but simple segmentation methods, intensity thresholding and Canny edge detection, for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an iterative closest point (ICP) based alignment method. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding deviation for the single-threshold method was 0.24 mm, and the difference in accuracy between the two methods was statistically significant. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared with 0.18 mm for CT-based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies caused by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, giving errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations and is therefore a potential alternative to the current gold standard, CT imaging.
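A minimal sketch of the multilevel thresholding step, with Otsu's method standing in for the thesis' threshold selection method and a hypothetical image stack as input, might look as follows:

```python
# Sketch of the multilevel thresholding segmentation described above: instead
# of one global threshold for the whole femur, a separate intensity threshold
# is selected for the proximal, diaphyseal and distal regions of the image
# stack before the binary masks are recombined. Otsu's method stands in here
# for the thesis' threshold selection method, and the file is a placeholder.
import numpy as np
from skimage.filters import threshold_otsu

volume = np.load("femur_ct_stack.npy")          # hypothetical (z, y, x) array
n = volume.shape[0]
regions = [slice(0, n // 3),                    # proximal
           slice(n // 3, 2 * n // 3),           # diaphyseal
           slice(2 * n // 3, n)]                # distal

mask = np.zeros_like(volume, dtype=bool)
for region in regions:
    sub = volume[region]
    mask[region] = sub > threshold_otsu(sub)    # region-specific threshold

np.save("femur_bone_mask.npy", mask)            # input to 3D surface meshing
```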
Abstract:
A wireless sensor network system must be able to tolerate harsh environmental conditions and reduce communication failures. In a typical outdoor situation, the presence of wind can introduce movement in the foliage. This motion of vegetation structures causes large and rapid signal fading in the communication link and must be accounted for when deploying a wireless sensor network system in such conditions. This thesis examines the fading characteristics experienced by wireless sensor nodes due to varying wind speed in a foliage-obstructed transmission path. It presents extensive measurement campaigns at two locations using a typical wireless sensor network configuration. The significance of this research lies in the varied approaches of its experiments, involving a variety of vegetation types and scenarios and the use of different polarisations (vertical and horizontal). The non-line-of-sight (NLoS) scenarios investigate the wind effect for different vegetation densities, including the Acacia tree, the Dogbane tree and tall grass, whereas the line-of-sight (LoS) scenario investigates the effect of wind when swaying grass affects the ground-reflected component of the signal. The vegetation types and scenarios were chosen to simulate real-life operating conditions of wireless sensor network systems in outdoor foliated environments. The results of the measurements are presented as statistical models involving first- and second-order statistics. We found that in most cases the fading amplitude could be approximated by both the lognormal and the Nakagami distribution, whose m parameter was found to depend on received-power fluctuations; the lognormal distribution is known to result from slow fading due to shadowing. This study concludes that fading caused by wind-induced variations in received power in wireless sensor network systems is insignificant: there is no notable difference in Nakagami m values between the low, calm and windy wind-speed categories. The second-order analysis also shows that the durations of the deep fades are very short: 0.1 s for 10 dB attenuation below the RMS level for vertical polarisation and 0.01 s for 10 dB attenuation below the RMS level for horizontal polarisation. Another key finding is that the received signal strength for horizontal polarisation performs more than 3 dB better than vertical polarisation for LoS and near-LoS (thin vegetation) conditions, and up to 10 dB better for denser vegetation conditions.
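A minimal sketch of the first-order fading analysis, fitting Nakagami and lognormal distributions to simulated amplitude samples with SciPy (stand-ins for the measured link data), might look as follows:

```python
# Sketch of the first-order fading analysis described above: fit Nakagami and
# lognormal distributions to received-signal amplitude samples and inspect the
# Nakagami m (shape) parameter. The samples are simulated stand-ins for the
# measured link data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
amplitude = stats.nakagami.rvs(nu=1.8, scale=1.0, size=5000, random_state=rng)

# Nakagami fit: nu is the m parameter; fix loc=0 for a physical amplitude.
m, loc_n, omega = stats.nakagami.fit(amplitude, floc=0)
print(f"Nakagami m = {m:.2f}")

# Lognormal fit for comparison (slow, shadowing-type fading).
sigma, loc_l, scale = stats.lognorm.fit(amplitude, floc=0)
print(f"lognormal sigma = {sigma:.2f}")

# Compare goodness of fit, e.g. via Kolmogorov-Smirnov statistics.
print(stats.kstest(amplitude, "nakagami", args=(m, loc_n, omega)).statistic)
print(stats.kstest(amplitude, "lognorm", args=(sigma, loc_l, scale)).statistic)
```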