Abstract:
Barmah Forest virus (BFV) disease is one of the most widespread mosquito-borne diseases in Australia. The number of outbreaks and the incidence rate of BFV in Australia have attracted growing concern about the spatio-temporal complexity and underlying risk factors of BFV disease. A large number of notifications have been recorded continuously in Queensland since 1992, yet little is known about the spatial and temporal characteristics of the disease. I aim to use notification data to better understand the effects of climatic, demographic, socio-economic and ecological risk factors on the spatial epidemiology of BFV disease transmission, to develop predictive risk models and to forecast future disease risks under climate change scenarios. Computerised data files of daily notifications of BFV disease and climatic variables in Queensland during 1992-2008 were obtained from Queensland Health and the Australian Bureau of Meteorology, respectively. Projections of climate data for the years 2025, 2050 and 2100 were obtained from the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Data on socio-economic, demographic and ecological factors were also obtained from the relevant government departments as follows: 1) socio-economic and demographic data from the Australian Bureau of Statistics; 2) wetlands data from the Department of Environment and Resource Management; and 3) tidal readings from the Queensland Department of Transport and Main Roads. Disease notifications were geocoded, and spatial and temporal patterns of disease were investigated using geostatistics. Visualisation of BFV disease incidence rates through mapping reveals substantial spatio-temporal variation at the statistical local area (SLA) level over time. Results reveal high incidence rates of BFV disease along coastal areas compared with Queensland as a whole. A Mantel-Haenszel chi-square analysis for trend reveals a statistically significant relationship between BFV disease incidence rates and age group (χ² = 7587, p < 0.01). Semi-variogram analysis and smoothed maps created from interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. A cluster analysis was used to detect hot spots/clusters of BFV disease at the SLA level. The most likely spatial and space-time clusters were detected at the same locations across coastal Queensland (p < 0.05). The study demonstrates heterogeneity of disease risk at the SLA level and reveals the spatial and temporal clustering of BFV disease in Queensland. Because the importance of wetlands in the transmission of BFV disease remains unclear, discriminant analysis was employed to establish a link between wetland classes, climate zones and BFV disease. The multivariable discriminant modelling analyses demonstrate that the wetland types saline 1, riverine and saline tidal influence were the most significant risk factors for BFV disease in all climate and buffer zones, while lacustrine, palustrine, estuarine, saline 2 and saline 3 wetlands were less important. The model accuracies were 76%, 98% and 100% for BFV risk in the subtropical, tropical and temperate climate zones, respectively. This study demonstrates that BFV disease risk varied with wetland class and climate zone, and it suggests that wetlands may act as potential breeding habitats for BFV vectors. Multivariable spatial regression models were applied to assess the impact of spatial climatic, socio-economic and tidal factors on BFV disease in Queensland.
Spatial regression models were developed to account for spatial effects and generated superior estimates compared with a traditional regression model. In the spatial regression models, BFV disease incidence shows an inverse relationship with minimum temperature, low tide and distance to coast, and a positive relationship with rainfall in coastal areas, whereas across Queensland as a whole the disease shows an inverse relationship with minimum temperature and high tide and a positive relationship with rainfall. This study determines the most significant spatial risk factors for BFV disease across Queensland. Empirical models were developed to forecast the future risk of BFV disease outbreaks in coastal Queensland using existing climatic, socio-economic and tidal conditions under climate change scenarios. Logistic regression models were developed using BFV disease outbreak data for the baseline period (2000-2008). The most parsimonious model had high sensitivity, specificity and accuracy, and it was used to estimate and forecast BFV disease outbreaks for the years 2025, 2050 and 2100 under climate change scenarios for Australia. Important contributions arising from this research are that: (i) it innovatively identifies high-risk coastal areas by creating buffers based on grid centroids and by using fine-grained spatial units, i.e., mesh blocks; (ii) it uses a spatial regression method to account for spatial dependence and heterogeneity of the data in the study area; (iii) it determines a range of potential spatial risk factors for BFV disease; and (iv) it predicts the future risk of BFV disease outbreaks under climate change scenarios in Queensland, Australia. In conclusion, the thesis demonstrates that the distribution of BFV disease exhibits distinct spatial and temporal variation, influenced by a range of spatial risk factors including climatic, demographic, socio-economic, ecological and tidal variables. The thesis demonstrates that spatial regression methods can be applied to better understand the transmission dynamics of BFV disease and its risk factors. The research findings show that disease notification data can be integrated with multi-factorial risk factor data to develop predictive models and forecast future potential disease risks under climate change scenarios. This thesis may have implications for BFV disease control and prevention programs in Queensland.
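As a minimal illustration of the outbreak-forecasting step described above, the sketch below fits a logistic regression to per-area outbreak indicators from observed covariates and then applies it to projected covariates for a future scenario year. All column and file names (rainfall, min_temp, high_tide, bfv_observed.csv, etc.) are hypothetical stand-ins; the thesis's actual model specification is not reproduced here.

```python
# Sketch: logistic-regression outbreak forecasting (hypothetical columns/files).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

# Baseline period (e.g. 2000-2008): one row per area-season with covariates
# and a binary outbreak indicator.
obs = pd.read_csv("bfv_observed.csv")          # hypothetical file
features = ["rainfall", "min_temp", "high_tide", "dist_to_coast"]

model = LogisticRegression().fit(obs[features], obs["outbreak"])

# In-sample accuracy and sensitivity, analogous to the selection criteria above.
pred = model.predict(obs[features])
print("accuracy:", accuracy_score(obs["outbreak"], pred))
print("sensitivity:", recall_score(obs["outbreak"], pred))

# Forecast outbreak probabilities under projected covariates (e.g. a 2025 scenario).
proj = pd.read_csv("bfv_projected_2025.csv")   # hypothetical file
proj["outbreak_prob"] = model.predict_proba(proj[features])[:, 1]
```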
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli’s law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described that uses closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (with different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
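To make the cross-validation idea concrete, here is a minimal sketch: for each candidate base-group size k, hold out each project as the target in turn, forecast its cost with the mean contract sum of the k most similar remaining projects (here, nearest by size), and score the percentage error. The similarity rule, error measure and synthetic data are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: choosing the base-group size k by cross-validation (illustrative).
import numpy as np

def forecast_error(costs, sizes, k):
    """Mean absolute % error of forecasting each project from the mean
    contract sum of its k nearest neighbours by size (hypothetical rule)."""
    errors = []
    for i in range(len(costs)):
        others = np.delete(np.arange(len(costs)), i)
        # Rank the remaining projects by similarity in size to the target.
        nearest = others[np.argsort(np.abs(sizes[others] - sizes[i]))[:k]]
        forecast = costs[nearest].mean()
        errors.append(abs(forecast - costs[i]) / costs[i])
    return np.mean(errors)

rng = np.random.default_rng(0)
sizes = rng.uniform(1_000, 50_000, 450)            # illustrative floor areas
costs = sizes * rng.normal(2.0, 0.3, 450) * 1_000  # illustrative contract sums

# Larger k reduces sampling noise but pulls in less-similar projects:
for k in (5, 10, 20, 50, 100):
    print(k, round(forecast_error(costs, sizes, k), 3))
```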
Abstract:
The selection of appropriate analogue materials is a central consideration in the design of realistic physical models. We investigate the rheology of highly filled silicone polymers in order to find materials with a power-law strain-rate softening rheology suitable for modelling rock deformation by dislocation creep, and we report the rheological properties of the materials as functions of filler content. The mixtures exhibit strain-rate softening behaviour but become strain-dependent with increasing amounts of filler. For the strain-independent viscous materials, flow laws are presented, while for strain-dependent materials the relative importance of strain and strain-rate softening/hardening is reported. If the stress or strain rate is above a threshold value, some highly filled silicone polymers may be considered linear visco-elastic (strain-independent) and power-law strain-rate softening. The power-law exponent can be raised from 1 to ~3 by using mixtures of high-viscosity silicone and plasticine. However, the need for high shear strain rates to obtain the power-law rheology imposes some restrictions on the use of such materials for geodynamic modelling. Two simple shear experiments are presented that use Newtonian and power-law strain-rate softening materials. The results demonstrate how materials with power-law rheology result in better strain localization in analogue experiments.
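For reference, the generic power-law flow law that such analogue materials are matched against can be written as below; this is the standard dislocation-creep form (stress exponent n), not a fitted law from the paper.

```latex
% Generic power-law flow law: sigma = shear stress, \dot{\gamma} = shear
% strain rate, K = consistency, n = stress exponent (n = 1 is Newtonian;
% the mixtures above raise n to ~3).
\sigma = K\,\dot{\gamma}^{1/n},
\qquad
\eta_{\mathrm{eff}} = \frac{\sigma}{\dot{\gamma}} = K\,\dot{\gamma}^{(1/n)-1}
```

For n > 1 the effective viscosity decreases with increasing strain rate, which is the strain-rate softening behaviour that favours strain localization in the experiments.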
Abstract:
Information privacy requirements of patients and information requirements of healthcare providers (HCPs) are competing concerns. Reaching a balance between these requirements has proven difficult but is crucial for the success of eHealth systems. The traditional approaches to information management have been preventive measures which either allow or deny access to information. We believe that this approach is inappropriate for a domain such as healthcare. We contend that introducing information accountability (IA) to eHealth systems can achieve the aforementioned balance without the need for rigid information control. IA is a fairly new concept in computer science; hence, there are no unambiguously accepted principles as yet, but the concept offers promising advantages for managing information in a robust manner. Accountable-eHealth (AeH) systems are eHealth systems which use IA principles as the measure for privacy and information management. AeH systems face three main impediments: technological, social and ethical, and legal. In this paper, we present the AeH model and focus on the legal aspects of AeH systems in Australia. We investigate the legislation currently available in Australia regarding health information management and identify future legal requirements if AeH systems are to be implemented in Australia.
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment by the use of non-deterministic, readily available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly synchronised, distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement: the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves from the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
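The clock-offset estimation underpinning this result follows the usual IEEE-1588 two-way exchange. As a minimal sketch (simplified from the full PTP state machine, with made-up timestamps), the Slave's offset is recovered from the four timestamps of a Sync/Delay_Req exchange, assuming a symmetric path delay:

```python
# Sketch: IEEE-1588-style offset estimation from a two-way timestamp exchange.
# t1: master sends Sync, t2: slave receives Sync,
# t3: slave sends Delay_Req, t4: master receives Delay_Req.
# Assumes the wireless path delay is symmetric in both directions.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Made-up timestamps in microseconds: the slave clock runs 250 us ahead
# of the master, and the one-way path delay is 40 us.
t1 = 1_000_000.0
t2 = t1 + 40.0 + 250.0   # arrival by the slave clock: delay + offset
t3 = t2 + 100.0
t4 = t3 - 250.0 + 40.0   # arrival by the master clock: delay - offset

print(ptp_offset_and_delay(t1, t2, t3, t4))  # -> (250.0, 40.0)
```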
Abstract:
In the elderly, the risks for protein-energy malnutrition from older age, dementia, depression and living alone have been well documented. Other risk factors, including anorexia, gastrointestinal dysfunction, loss of olfactory and taste senses and early satiety, have also been suggested to contribute to poor nutritional status. In Parkinson’s disease (PD), it has been suggested that the disease symptoms may predispose people with PD to malnutrition. However, the risks for malnutrition in this population are not well understood. The current study’s aim was to determine malnutrition risk factors in community-dwelling adults with PD. Nutritional status was assessed using the Patient-Generated Subjective Global Assessment (PG-SGA). Data about age, time since diagnosis, medications and living situation were collected. Levodopa equivalent doses (LDED) and LDED per kg body weight (mg/kg) were calculated. Depression and anxiety were measured using the Beck Depression Inventory (BDI) and the Spielberger Trait Anxiety questionnaire, respectively. Cognitive function was assessed using the Addenbrooke’s Cognitive Examination (ACE-R). Non-motor symptoms were assessed using the Scales for Outcomes in Parkinson's disease - Autonomic (SCOPA-AUT) and the Modified Constipation Assessment Scale (MCAS). A total of 125 community-dwelling people with PD were included, with an average age of 70.2 ± 9.3 (range 35-92) years and an average time since diagnosis of 7.3 ± 5.9 (range 0-31) years. Average body mass index (BMI) was 26.0 ± 5.5 kg/m². Of these, 15% (n = 19) were malnourished (SGA-B). Multivariate logistic regression analysis revealed that older age (OR = 1.16, CI = 1.02-1.31), more depressive symptoms (OR = 1.26, CI = 1.07-1.48), lower levels of anxiety (OR = 0.90, CI = 0.82-0.99) and higher LDED per kg body weight (OR = 1.57, CI = 1.14-2.15) significantly increased malnutrition risk. Cognitive function, living situation, number of prescription medications, LDED, years since diagnosis and the severity of non-motor symptoms did not significantly influence malnutrition risk. Malnutrition results in poorer health outcomes, and proactively addressing the risk factors can help prevent declines in nutritional status. In the current study, older people with PD with depression and greater amounts of levodopa per body weight were at increased malnutrition risk.
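To make the reported effect sizes concrete: in a multivariate logistic regression, each coefficient β maps to an odds ratio e^β per unit increase in its covariate, so the reported OR of 1.16 for age means the odds of malnutrition multiply by 1.16 for each additional year, other covariates held fixed. This is the standard interpretation of the model, not an additional result from the study:

```latex
% Logistic model for the probability p of malnutrition, and the per-unit
% odds ratio for covariate j:
\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k,
\qquad
\mathrm{OR}_j = e^{\beta_j}
% e.g. for age: OR = 1.16 per year, so over ten years the odds multiply
% by 1.16^{10} \approx 4.4, other covariates held fixed.
```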
Abstract:
Mandatory data breach notification laws are a novel statutory solution for the organizational protection of personal information. They require organizations which have suffered a breach of security involving personal information to notify those persons whose information may have been affected. These laws originated in the state-based legislatures of the United States during the last decade and have subsequently garnered worldwide legislative interest. Despite their perceived utility, mandatory data breach notification laws have several conceptual and practical concerns that limit the scope of their applicability, particularly in relation to existing information privacy law regimes. We outline these concerns, and in doing so we contend that while mandatory data breach notification laws have many useful facets, their utility as an 'add-on' to remedy the failings of current information privacy law frameworks should not necessarily be taken for granted.
Abstract:
This paper reports on a longitudinal study of data modelling across grades 1-3. The activity engaged children in designing, implementing, and analysing a survey about their new playground. Data modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. The core components of data modelling addressed here are children’s structuring and representing of data, with a focus on their display of metarepresentational competence (diSessa, 2004). Such competence includes students’ abilities to invent or design a variety of new representations, explain their creations, understand the role they play, and critique and compare the adequacy of representations. Reported here are the ways in which the children structured and represented their data, the metarepresentational competence displayed, and links between their metarepresentational competence and conceptual competence.
Abstract:
BACKGROUND: There is evidence that children's decisions to smoke are influenced by family and friends. OBJECTIVES: To assess the effectiveness of interventions to help family members to strengthen non-smoking attitudes and promote non-smoking by children and other family members. SEARCH STRATEGY: We searched 14 electronic bibliographic databases, including the Cochrane Tobacco Addiction Group specialized register, MEDLINE, EMBASE, PsycINFO and CINAHL. We also searched unpublished material, and the reference lists of key articles. We performed both free-text Internet searches and targeted searches of appropriate websites, and we hand-searched key journals not available electronically. We also consulted authors and experts in the field. The most recent search was performed in July 2006. SELECTION CRITERIA: Randomized controlled trials (RCTs) of interventions with children (aged 5-12) or adolescents (aged 13-18) and family members to deter the use of tobacco. The primary outcome was the effect of the intervention on the smoking status of children who reported no use of tobacco at baseline. Included trials had to report outcomes measured at least six months from the start of the intervention. DATA COLLECTION AND ANALYSIS: We reviewed all potentially relevant citations and retrieved the full text to determine whether the study was an RCT and matched our inclusion criteria. Two authors independently extracted study data and assessed them for methodological quality. The studies were too limited in number and quality to undertake a formal meta-analysis, and we present a narrative synthesis. MAIN RESULTS: We identified 19 RCTs of family interventions to prevent smoking. We identified five RCTs in Category 1 (minimal risk of bias on all counts); nine in Category 2 (a risk of bias in one or more areas); and five in Category 3 (risks of bias in design and execution such that reliable conclusions cannot be drawn from the study). Considering the fourteen Category 1 and 2 studies together: (1) four of the nine that tested a family intervention against a control group had significant positive effects, but one showed significant negative effects; (2) one of the five RCTs that tested a family intervention against a school intervention had significant positive effects; (3) none of the six that compared the incremental effects of a family plus a school programme to a school programme alone had significant positive effects; (4) the one RCT that tested a family tobacco intervention against a family non-tobacco safety intervention showed no effects; and (5) the one trial that used general risk reduction interventions found the group which received the parent and teen interventions had less smoking than the one that received only the teen intervention (there was no tobacco intervention but tobacco outcomes were measured). For the included trials, the amount of implementer training and the fidelity of implementation are related to positive outcomes, but the number of sessions is not. AUTHORS' CONCLUSIONS: Some well-executed RCTs show family interventions may prevent adolescent smoking, but RCTs which were less well executed had mostly neutral or negative results. There is thus a need for well-designed and executed RCTs in this area.
Abstract:
Background: Cancer outlier profile analysis (COPA) has proven to be an effective approach to analyzing cancer expression data, leading to the discovery of the TMPRSS2 and ETS family gene fusion events in prostate cancer. However, the original COPA algorithm did not identify down-regulated outliers, and the currently available R package implementing the method is similarly restricted to the analysis of over-expressed outliers. Here we present a modified outlier detection method, mCOPA, which contains refinements to the outlier-detection algorithm, identifies both over- and under-expressed outliers, is freely available, and can be applied to any expression dataset. Results: We compare our method to other feature-selection approaches and demonstrate that mCOPA frequently selects more informative features than differential-expression or variance-based feature selection approaches and is able to recover observed clinical subtypes more consistently. We demonstrate the application of mCOPA to prostate cancer expression data, and explore the use of outliers in clustering, pathway analysis, and the identification of tumour suppressors. We analyse the under-expressed outliers to identify known and novel prostate cancer tumour suppressor genes, validating these against data in Oncomine and the Cancer Gene Index. We also demonstrate how a combination of outlier analysis and pathway analysis can identify molecular mechanisms disrupted in individual tumours. Conclusions: We demonstrate that mCOPA offers advantages, compared to differential expression or variance, in selecting outlier features, and that the features so selected are better able to assign samples to clinically annotated subtypes. Further, we show that the biology explored by outlier analysis differs from that uncovered in differential expression or variance analysis. mCOPA is an important new tool for the exploration of cancer datasets and the discovery of new cancer subtypes, and can be combined with pathway and functional analysis approaches to discover mechanisms underpinning heterogeneity in cancers.
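As a rough illustration of the outlier-profile idea (COPA-style; the actual mCOPA algorithm differs in its refinements), each gene's expression profile is median-centred and MAD-scaled, and genes are then scored by their transformed values at an upper percentile for over-expressed outliers, or at a lower percentile for the under-expressed outliers that mCOPA adds:

```python
# Sketch: COPA-style outlier scoring (illustrative, not the exact mCOPA method).
import numpy as np

def copa_scores(expr, q=95):
    """expr: genes x samples matrix. Median-centre and MAD-scale each gene,
    then score it at the q-th (and (100-q)-th) percentile of its profile."""
    med = np.median(expr, axis=1, keepdims=True)
    mad = np.median(np.abs(expr - med), axis=1, keepdims=True)
    z = (expr - med) / (mad + 1e-9)           # robust per-gene transform
    up = np.percentile(z, q, axis=1)          # over-expressed outlier score
    down = np.percentile(z, 100 - q, axis=1)  # under-expressed outlier score
    return up, down

rng = np.random.default_rng(1)
expr = rng.normal(size=(1000, 60))
expr[0, :5] += 8     # gene 0 is over-expressed in a small subset of samples
expr[1, :5] -= 8     # gene 1 is under-expressed in a small subset

up, down = copa_scores(expr)
print(up.argmax(), down.argmin())  # -> 0 1
```

The robust (median/MAD) transform is what lets a gene with a strong signal in only a small subset of tumours stand out, where a differential-expression or variance test averaged over all samples would miss it.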
Abstract:
This paper proposes a technique that supports process participants in making risk-informed decisions, with the aim of reducing process risks. Risk reduction involves decreasing the likelihood that a process fault will occur, as well as its severity. Given a process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process, and whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we prompt the participant with the expected risk that a given fault will occur given that particular input. These risks are predicted by traversing decision trees generated from the logs of past process executions, considering process data, involved resources, task durations and contextual information such as task frequencies. The approach has been implemented in the YAWL system and its effectiveness evaluated. The results show that the process instances executed in the tests complete with substantially fewer faults and with lower fault severities when the recommendations provided by our technique are taken into account.
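A minimal sketch of the prediction step follows; the actual technique traverses decision trees mined from real process logs, whereas the feature set and data here are hypothetical stand-ins. A tree is trained on past executions, and at a decision point the predicted fault likelihood is reported for each input option the participant could choose:

```python
# Sketch: decision-tree fault-risk prediction from past process executions.
# Features and data are hypothetical stand-ins for real process logs.
from sklearn.tree import DecisionTreeClassifier

# Past executions: [next_task (0/1), resource_id, task_duration_min] -> fault?
X = [[0, 1, 30], [0, 2, 45], [1, 1, 20], [1, 3, 60],
     [0, 1, 35], [1, 2, 25], [0, 3, 50], [1, 1, 22]]
y = [0, 1, 0, 1, 0, 0, 1, 0]    # 1 = the execution ended in a fault

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At runtime, score each input option the participant could choose:
for next_task in (0, 1):
    risk = tree.predict_proba([[next_task, 1, 30]])[0][1]
    print(f"option next_task={next_task}: predicted fault risk {risk:.2f}")
```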
Abstract:
Carbon credit markets are in the early stages of development, and media headlines such as these illustrate emerging levels of concern and foreboding over the potential for fraudulent crime within these markets. Australian companies are continuing to venture into the largely unregulated voluntary carbon credit market to offset their emissions and/or give their customers the opportunity to be ‘carbon neutral’. Accordingly, the voluntary market has seen a proliferation of carbon brokers offering carbon offset products tailored to need and taste. With the instigation of the Australian compliance market, and with pressure increasing for political responses to combat climate change, we would expect Australian companies to experience greater exposure to carbon products in both compliance and voluntary markets. This paper examines the risks of carbon fraud in these markets by reviewing cases of actual fraud and by analysing and identifying the contexts in which risks of carbon fraud are most likely.
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods to formulate web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, in order to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
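To illustrate the greedy scheduling idea (a sketch under assumptions; WebPut's actual confidence model and query templates are more involved): repeatedly pick the missing value whose best imputation query currently has the highest confidence, fill it in, and re-score the remaining candidates, since newly filled values can strengthen the queries issued later. The confidence heuristic and records below are hypothetical:

```python
# Sketch: greedy ordering of imputation queries by confidence (illustrative).

def best_query_confidence(record, field):
    """Hypothetical stand-in: score how confidently the best web query could
    fill `field`, given the values currently known in `record`. This toy
    heuristic ignores `field` and just rewards having more context."""
    known = sum(v is not None for v in record.values())
    return known / len(record)

def greedy_impute(records):
    # Collect every (record, field) pair with a missing value.
    missing = [(r, f) for r in records for f, v in r.items() if v is None]
    while missing:
        # Re-score all candidates and impute the most confident one first,
        # so each filled value can raise the confidence of later queries.
        missing.sort(key=lambda rf: best_query_confidence(*rf), reverse=True)
        record, field = missing.pop(0)
        record[field] = f"<value from web query for {field}>"  # placeholder

records = [{"title": "WebPut", "venue": None, "year": None},
           {"title": None, "venue": "WISE", "year": 2012}]
greedy_impute(records)
print(records)
```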