115 results for RANDOM REGULAR GRAPHS
Abstract:
Background Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 from Queensland, Australia. Results Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favored if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
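The discrete-time part of the multilevel approach can be pictured with a small sketch. The code below is a hypothetical illustration only (not the authors' implementation): it expands simulated survival times into person–period records and fits a pooled discrete-time hazard model as a logistic regression; the variable names, the simulated data, and the omission of the area-level random intercept are all assumptions made to keep the example short.

```python
# Minimal sketch (not the study's code): expand simulated survival times into
# person-period records and fit a pooled discrete-time hazard model.
# All variable names and the simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "pid": np.arange(n),
    "area": rng.integers(0, 20, n),    # small-area identifier
    "age": rng.integers(20, 85, n),    # individual-level covariate
    "time": rng.integers(1, 11, n),    # follow-up in yearly intervals
    "event": rng.integers(0, 2, n),    # 1 = died, 0 = censored
})

# One row per person per interval at risk; the event indicator is 1 only in
# the final interval of cases who died.
pp = df.loc[df.index.repeat(df["time"])].reset_index(drop=True)
pp["interval"] = pp.groupby("pid").cumcount() + 1
pp["y"] = ((pp["interval"] == pp["time"]) & (pp["event"] == 1)).astype(int)

# Pooled discrete-time logistic hazard model; a full multilevel fit would add
# an area-level random intercept via a mixed-effects routine.
X = sm.add_constant(pp[["interval", "age"]])
fit = sm.GLM(pp["y"], X, family=sm.families.Binomial()).fit()
print(fit.summary())
```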
Abstract:
In continuum one-dimensional space, a coupled directed continuous-time random walk model is proposed, in which the random walker jumps in one direction only and the waiting time between jumps affects the subsequent jump. In the proposed model, the Laplace–Laplace transform of the probability density function P(x,t) of finding the walker at position x at time t is completely determined by the Laplace transform of the probability density function φ(t) of the waiting time. In terms of the probability density function of the waiting time in the Laplace domain, the limit distribution of the random process and the corresponding evolving equations are derived.
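As a rough illustration of this kind of process (and not the paper's specific model), the sketch below simulates a directed continuous-time random walk in which each jump length is coupled to the preceding waiting time; the exponential waiting-time density and the linear coupling rule are assumptions chosen purely for the example.

```python
# Illustrative sketch of a coupled, directed CTRW (not the paper's exact model):
# the walker only jumps in the positive direction, and each jump length is
# coupled to the waiting time that preceded it. The waiting-time law and the
# coupling rule are assumptions for the purpose of the example.
import numpy as np

rng = np.random.default_rng(0)

def simulate_walker(t_max, rate=1.0, coupling=0.5):
    """Return the walker's position at time t_max for one realisation."""
    t, x = 0.0, 0.0
    while True:
        tau = rng.exponential(1.0 / rate)   # waiting time phi(t) ~ Exp(rate)
        if t + tau > t_max:
            return x                        # no further jump before t_max
        t += tau
        x += coupling * tau                 # jump length coupled to waiting time

# Monte Carlo estimate of the density P(x, t) at a fixed time t_max.
t_max = 10.0
positions = np.array([simulate_walker(t_max) for _ in range(20000)])
hist, edges = np.histogram(positions, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("mode of estimated P(x, t):", centers[hist.argmax()])
print("mean position:", positions.mean(), "variance:", positions.var())
```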
Abstract:
The growing interest in co-created reading experiences in both digital and print formats raises interesting questions for creative writers who work in the space of interactive fiction. This essay argues that writers have not abandoned experiments with co-creation in print narratives in favour of the attractions of the digital environment, as might be assumed from the discourse on digital development. Rather, interactive print narratives, in particular ‘reader-assembled narratives’, demonstrate a rich history of experimentation and continue to engage writers who wish to craft individual reading experiences for readers and to experiment with their own creative process as writers. The reader-assembled narrative has been used for many different reasons: for some writers, such as BS Johnson, it is a method of problem solving; for others, like Robert Coover, it is a way to engage the reader in a more playful sense. Authors such as Marc Saporta, BS Johnson, and Robert Coover have engaged with this type of narrative play. This examination considers the narrative experimentation of these authors as a way of offering insights into creative practice for contemporary creative writers.
Abstract:
Submarine groundwater discharge (SGD) is an integral part of the hydrological cycle and represents an important aspect of land–ocean interactions. We used a numerical model to simulate flow and salt transport in a nearshore groundwater aquifer under varying wave conditions, based on yearlong random wave data sets that included storm surge events. The results showed significant flow asymmetry: influxes across the seabed responded rapidly to the irregular wave conditions, whereas effluxes responded with a delay. While a storm surge immediately intensified seawater influx to the aquifer, the subsequent return of intruded seawater to the sea, as part of an increased SGD, was gradual. Using functional data analysis, we revealed and quantified retarded, cumulative effects of past wave conditions on SGD, including the fresh groundwater and recirculating seawater discharge components. The retardation was characterized well by a gamma distribution function regardless of wave conditions. The relationships between discharge rates and wave parameters were quantifiable by a regression model in a functional form independent of the actual irregular wave conditions. This statistical model provides a useful method for analyzing and predicting SGD from nearshore unconfined aquifers affected by random waves.
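The retarded, cumulative dependence of discharge on past wave conditions can be pictured with a simple convolution sketch. The example below is a hypothetical illustration of the idea only (not the study's model or data): past wave heights are weighted by a gamma-shaped memory kernel to produce a lagged response; the kernel parameters and the synthetic wave series are assumptions.

```python
# Hypothetical illustration of a retarded, cumulative response to wave forcing:
# past wave heights are weighted by a gamma-shaped memory kernel, so the
# response lags and smooths the forcing. Kernel parameters and the synthetic
# wave record are assumptions, not values from the study.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(42)
n_days = 365
waves = 1.0 + 0.5 * rng.standard_normal(n_days)   # synthetic daily wave height (m)
waves[200:205] += 3.0                              # a storm-surge-like spike

# Gamma memory kernel over a 30-day window (shape and scale are assumed).
lags = np.arange(30)
kernel = gamma.pdf(lags, a=2.0, scale=4.0)
kernel /= kernel.sum()

# Lagged, cumulative response: convolution of the forcing with the kernel.
response = np.convolve(waves, kernel, mode="full")[:n_days]
print("forcing peak day:", waves.argmax(), "response peak day:", response.argmax())
```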
Abstract:
Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case, in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as examining the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
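A minimal stochastic version of this crowded-transport setting, for the simplest case of an agent and obstacles of the same size on a square lattice, is sketched below. It estimates the Fickian diffusivity from the mean squared displacement; this is a generic simulation of the standard technique, not the exact calculations reported in the paper, and the lattice size, obstacle density and step counts are assumptions.

```python
# Generic sketch (not the paper's exact analysis): a single motile agent on a
# periodic square lattice populated by immobile single-site obstacles. The
# Fickian diffusivity D is estimated from the mean squared displacement,
# MSD(t) ~ 4 D t in two dimensions. Lattice size, obstacle density and the
# number of realisations are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
L, density, steps, walkers = 100, 0.2, 2000, 500
tau, delta = 1.0, 1.0                       # step duration and lattice spacing

obstacles = rng.random((L, L)) < density    # immobile obstacles
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

msd = np.zeros(steps)
for _ in range(walkers):
    # Start each realisation on a randomly chosen vacant site.
    while True:
        pos = rng.integers(0, L, 2)
        if not obstacles[pos[0], pos[1]]:
            break
    unwrapped = pos.astype(float)           # displacement tracked without wrapping
    start = unwrapped.copy()
    for t in range(steps):
        step = moves[rng.integers(4)]
        target = (pos + step) % L
        if not obstacles[target[0], target[1]]:   # blocked moves are aborted
            pos = target
            unwrapped = unwrapped + step
        msd[t] += np.sum((unwrapped - start) ** 2)
msd /= walkers

# Fit MSD(t) = 4 D t over the later, roughly linear part of the curve.
t = np.arange(1, steps + 1) * tau
D = np.polyfit(t[steps // 2:], msd[steps // 2:], 1)[0] / 4.0
print("estimated Fickian diffusivity D ~", D * delta**2)
```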
Abstract:
• In December 1986 funds were approved to double the intensity of random breath testing (RBT) and provide publicity support for police efforts. These changes were considered necessary to make RBT effective.
• RBT methods were changed in the metropolitan area to enable block testing (pulling over a block of traffic rather than one or two cars), deployment of police to cut off escape routes, and testing by traffic patrols in all police subdivisions. Additional operators were trained for country RBT.
• A publicity campaign was developed, aimed mainly at male drivers aged 18–50. The campaign consisted of the “cardsharp” television commercials, radio commercials, newspaper articles, posters and pamphlets.
• Increased testing and the publicity campaigns were launched on 10 April 1987.
• Police tests increased by 92.5% in May–December 1987, compared with the same period in the previous four years.
• The detection rate for drinking drivers picked up by police who were cutting off escape routes was comparatively high, indicating that drivers were attempting to avoid RBT and that this police method was effective at detecting them.
• A telephone survey indicated that drivers were aware of the messages of the publicity campaign.
• The telephone survey also indicated that the target group had been exposed to high levels of RBT, as planned, and that fear of apprehension was the major factor deterring them from drink driving.
• A roadside survey of driver blood alcohol concentrations (BACs) by the University of Adelaide’s Road Accident Research Unit (RARU) showed that, between 10 p.m. and 3 a.m., the proportion of drivers in Adelaide with a BAC greater than or equal to 0.08 decreased by 42%.
• Drivers under 21 were identified as a possible problem area.
• Fatalities in the twelve-month period commencing May 1987 decreased by 18% in comparison with the previous twelve-month period, and by 13% in comparison with the average of the previous two twelve-month periods (commencing May 1985 and May 1986). There are indications that this trend is continuing.
• It is concluded that the increase in RBT, plus publicity, was successful in achieving its aims of reducing drink driving and accidents.
Abstract:
Random breath testing (RBT) was introduced in South Australia in 1981 with the intention of reducing the incidence of accidents involving alcohol. In April 1985, a Select Committee of the Upper House which had been established to “review the operation of random breath testing in this State and any other associated matters and report accordingly” presented its report. After consideration of this report, the Government introduced extensive amendments to those sections of the Motor Vehicles Act (MVA) and Road Traffic Act (RTA) which deal with RBT and drink driving penalties. The amended section 47da of the RTA requires that: “(5) The Minister shall cause a report to be prepared within three months after the end of each calendar year on the operation and effectiveness of this section and related sections during that calendar year. (6) The Minister shall, within 12 sitting days after receipt of a report under subsection (5), cause copies of the report to be laid before each House of Parliament.” This is the first such report. Whilst it deals with RBT over a full year, the changed procedures and improved flexibility allowed by the revision to the RTA were only introduced late in 1985 and then only to the extent that the existing resources would allow.
Abstract:
Background Child maltreatment has severe short- and long-term consequences for children’s health, development, and wellbeing. Despite the provision of child protection education programs in many countries, few have been rigorously evaluated to determine their effectiveness. We describe the design of a multi-site gold-standard evaluation of an Australian school-based child protection education program. The intervention has been developed by a not-for-profit agency and comprises five 1-hour sessions delivered to first-grade students (aged 5–6 years) in their regular classrooms. It incorporates common attributes of effective programs identified in the literature, and aligns with the Australian education curriculum. Methods/Design A three-site cluster randomised controlled trial (RCT) of Learn to be safe with Emmy and friends™ will be conducted with children in approximately 72 first-grade classrooms in 24 Queensland primary (elementary) schools from three state regions, over a period of 2 years. Entire schools will be randomised, using a computer-generated list of random numbers, to intervention and wait-list control conditions, to prevent contamination effects across students and classes. Data will be collected at baseline (pre-assessment), immediately after the intervention (post-assessment), and at 6, 12, and 18 months (follow-up assessments). Outcome assessors will be blinded to group membership. Primary outcomes assessed are children’s knowledge of program concepts; intentions to use program knowledge, skills, and help-seeking strategies; actual use of program material in a simulated situation; and anxiety arising from program participation. Secondary outcomes include a parent discussion monitor, parent observations of their children’s use of program materials, satisfaction with the program, and parental stress. A process evaluation will be conducted concurrently to assess program performance. Discussion This RCT addresses shortcomings in previous studies and methodologically extends research in this area by randomising at school level to prevent cross-learning between conditions; providing longer-term outcome assessment than any previous study; examining the degree to which parents/guardians discuss intervention content with children at home; assessing potential moderating/mediating effects of family and child demographic variables; testing an in-vivo measure to assess children’s ability to discriminate safe/unsafe situations and disclose to trusted adults; and testing enhancements to existing measures to establish greater internal consistency.
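A small sketch of the school-level (cluster) randomisation step described above follows; it is a generic illustration using a seeded random number generator, not the trial's actual allocation procedure, and the school identifiers and seed are invented.

```python
# Generic illustration of cluster (school-level) randomisation with a
# computer-generated list of random numbers; not the trial's actual procedure.
# School identifiers and the seed are invented for the example.
import numpy as np

rng = np.random.default_rng(2024)
schools = [f"school_{i:02d}" for i in range(1, 25)]   # 24 participating schools

# Assign a random rank to each school and split the ranked list in half:
# first half intervention, second half wait-list control.
order = rng.permutation(len(schools))
allocation = {
    school: ("intervention" if rank < len(schools) // 2 else "wait-list control")
    for school, rank in zip(schools, order)
}
for school, arm in sorted(allocation.items()):
    print(school, "->", arm)
```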
Abstract:
Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One advantage of DCNNs is that their learned representations are robust to object location, which is useful for object recognition tasks. However, this robustness also discards spatial information, which is needed when the task depends on the topological structure of the image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing task by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information into the network. The new strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
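The "deeper and wider" idea of combining predictions from several depths of the network can be sketched as below. This is a hypothetical PyTorch illustration of multi-layer prediction fusion for dense labeling in general, not the architecture from the paper; the tiny backbone, channel sizes and number of classes are assumptions.

```python
# Hypothetical sketch of fusing predictions from multiple depths of a CNN for
# dense (per-pixel) labeling; not the paper's architecture. Channel sizes,
# number of classes and the tiny backbone are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDepthFusion(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # Three convolutional stages standing in for "early", "middle" and "late" layers.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # A 1x1 classifier head per stage produces a per-pixel score map.
        self.head1 = nn.Conv2d(32, num_classes, 1)
        self.head2 = nn.Conv2d(64, num_classes, 1)
        self.head3 = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Upsample each stage's prediction to the input resolution and sum them,
        # so early (fine) and late (coarse) predictions are combined.
        preds = [
            F.interpolate(head(f), size=(h, w), mode="bilinear", align_corners=False)
            for head, f in [(self.head1, f1), (self.head2, f2), (self.head3, f3)]
        ]
        return sum(preds)

scores = MultiDepthFusion()(torch.randn(1, 3, 128, 128))
print(scores.shape)   # (1, num_classes, 128, 128): per-pixel class scores
```

A graphical-model post-processing step, as described in the abstract, would then refine these per-pixel scores using neighbourhood and context relations.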
Abstract:
As an extension to an activity introducing Year 5 students to the practice of statistics, the software TinkerPlots made it possible to collect repeated random samples from a finite population to informally explore students’ capacity to begin reasoning with a distribution of sample statistics. This article provides background for the sampling process and reports on the success of students in making predictions for the population from the collection of simulated samples and in explaining their strategies. The activity provided an application of the numeracy skill of using percentages as the numerical summary of the data, rather than graphing the data, when analysing samples to make decisions on a statistical question. About 70% of students made what were considered at least moderately good predictions of the population percentages for five yes–no questions, and the correlation between predictions and explanations was 0.78.
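The classroom activity of drawing repeated random samples from a finite population and summarising each sample as a percentage can be mimicked in a few lines. The sketch below is a generic illustration (not TinkerPlots itself), and the population size, the true "yes" percentage and the sample size are assumptions.

```python
# Generic illustration of repeated random sampling from a finite population for
# a single yes/no question; not TinkerPlots. Population size, the true "yes"
# percentage and the sample size are assumptions for the example.
import numpy as np

rng = np.random.default_rng(7)
population = rng.random(600) < 0.55          # True = "yes"; true rate is about 55%

# Draw many samples without replacement and record the sample percentage each time.
sample_size, n_samples = 30, 200
sample_pcts = np.array([
    100 * population[rng.choice(population.size, sample_size, replace=False)].mean()
    for _ in range(n_samples)
])

# The spread of this collection of sample percentages is what students reason with.
print("population % yes:", round(100 * population.mean(), 1))
print("mean of sample percentages:", round(sample_pcts.mean(), 1),
      "spread (SD):", round(sample_pcts.std(), 1))
```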