97 results for random projection
Abstract:
Submarine groundwater discharge (SGD) is an integral part of the hydrological cycle and represents an important aspect of land-ocean interactions. We used a numerical model to simulate flow and salt transport in a nearshore groundwater aquifer under varying wave conditions based on yearlong random wave data sets, including storm surge events. The results showed significant flow asymmetry across the seabed, with influxes responding rapidly and effluxes responding with a delay to the irregular wave conditions. While a storm surge immediately intensified seawater influx to the aquifer, the subsequent return of intruded seawater to the sea, as part of an increased SGD, was gradual. Using functional data analysis, we revealed and quantified retarded, cumulative effects of past wave conditions on SGD, including the fresh groundwater and recirculating seawater discharge components. The retardation was characterized well by a gamma distribution function regardless of wave conditions. The relationships between discharge rates and wave parameters were quantifiable by a regression model in a functional form independent of the actual irregular wave conditions. This statistical model provides a useful method for analyzing and predicting SGD from nearshore unconfined aquifers affected by random waves.
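The lagged relationship described above (discharge responding to a weighted history of past wave forcing, with gamma-distributed lag weights) can be illustrated with a minimal sketch. This is not the paper's model or data: the synthetic wave record, the gamma shape and scale, and the regression coefficients below are assumed values chosen only to show the structure of a gamma-kernel functional regression.

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch (not the paper's code): model SGD at time t as a weighted
# sum of past wave heights, with the lag weights taken from a gamma
# probability density function. All parameter values are illustrative.
rng = np.random.default_rng(0)
hours = 24 * 365
wave_height = 1.0 + 0.005 * rng.standard_normal(hours).cumsum()  # synthetic hourly record
wave_height = np.clip(wave_height, 0.2, None)

max_lag = 240                                   # consider up to 10 days of memory
lags = np.arange(max_lag)
kernel = gamma.pdf(lags, a=2.0, scale=24.0)     # assumed retardation kernel
kernel /= kernel.sum()                          # normalize the lag weights

# Convolve the wave record with the kernel to obtain a lag-weighted forcing,
# then map it to a discharge estimate with an assumed linear regression.
weighted_forcing = np.convolve(wave_height, kernel, mode="full")[:hours]
beta0, beta1 = 0.3, 0.8                         # illustrative regression coefficients
sgd_estimate = beta0 + beta1 * weighted_forcing

print(sgd_estimate[:5])
```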
Abstract:
Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
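The central quantity in the approach above is the distance between the empirical distributions of the projected source and target samples, commonly measured with the maximum mean discrepancy (MMD). The sketch below computes an RBF-kernel MMD on synthetic data and uses a crude random search over orthonormal projections as a stand-in for the paper's actual optimization; the data, dimensions, kernel bandwidth and search procedure are all assumptions for illustration.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel (biased estimate)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
d, m = 20, 3                                   # original and latent dimensions (assumed)
Xs = rng.normal(0.0, 1.0, size=(100, d))       # synthetic "source" features
Xt = rng.normal(0.5, 1.2, size=(100, d))       # synthetic, shifted "target" features

# Crude stand-in for the paper's optimization: sample random orthonormal
# projections and keep the one with the smallest projected MMD.
best_W, best_mmd = None, np.inf
for _ in range(200):
    W, _ = np.linalg.qr(rng.normal(size=(d, m)))   # orthonormal d x m projection
    mmd = rbf_mmd2(Xs @ W, Xt @ W)
    if mmd < best_mmd:
        best_W, best_mmd = W, mmd

print(f"smallest projected MMD^2 over 200 random projections: {best_mmd:.4f}")
```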
Abstract:
Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
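Although the analysis above is exact and needs no simulation, the simulation framework it builds on is easy to sketch: a motile agent performs a nearest-neighbour random walk on a periodic lattice and aborts any step into an obstacle site, and the Fickian diffusivity is estimated from the mean squared displacement (MSD = 4Dt in two dimensions). The lattice size, obstacle density and step counts below are illustrative assumptions, and the single-site obstacles correspond only to the simplest case mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100                       # periodic lattice size (assumed)
density = 0.2                 # fraction of sites occupied by single-site obstacles
steps = 2000
walkers = 500
tau = 1.0                     # time per step
delta = 1.0                   # lattice spacing

# Place immobile, agent-sized obstacles on a periodic square lattice.
obstacle = rng.random((L, L)) < density

moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
free_sites = np.argwhere(~obstacle)
pos = free_sites[rng.integers(len(free_sites), size=walkers)]  # start on empty sites
disp = np.zeros((walkers, 2))                                  # unwrapped displacement

for _ in range(steps):
    step = moves[rng.integers(4, size=walkers)]
    trial = (pos + step) % L
    blocked = obstacle[trial[:, 0], trial[:, 1]]
    pos[~blocked] = trial[~blocked]            # moves into obstacle sites are aborted
    disp[~blocked] += step[~blocked]

msd = (disp ** 2).sum(axis=1).mean() * delta ** 2
D = msd / (4.0 * steps * tau)                  # 2D estimate from MSD = 4 D t
print(f"estimated Fickian diffusivity: {D:.4f}")
```

With no obstacles this estimate approaches delta**2 / (4 * tau) = 0.25; crowding reduces it, which is the effect the exact analysis quantifies.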
Abstract:
• In December 1986 funds were approved to double the intensity of random breath testing (RBT) and provide publicity support for police efforts. These changes were considered necessary to make RBT effective.
• RBT methods were changed in the metropolitan area to enable block testing (pulling over a block of traffic rather than one or two cars), deployment of police to cut off escape routes, and testing by traffic patrols in all police subdivisions. Additional operators were trained for country RBT.
• A publicity campaign was developed, aimed mainly at male drivers aged 18-50. The campaign consisted of the "cardsharp" television commercials, radio commercials, newspaper articles, posters and pamphlets.
• Increased testing and the publicity campaigns were launched on 10 April 1987.
• Police tests increased by 92.5% in May – December 1987, compared with the same period in the previous four years.
• The detection rate for drinking drivers picked up by police who were cutting off escape routes was comparatively high, indicating that drivers were attempting to avoid RBT and that this police method was effective at detecting them.
• A telephone survey indicated that drivers were aware of the messages of the publicity campaign.
• The telephone survey also indicated that the target group had been exposed to high levels of RBT, as planned, and that fear of apprehension was the major factor deterring them from drink driving.
• A roadside survey of driver blood alcohol concentrations (BACs) by the University of Adelaide's Road Accident Research Unit (RARU) showed that, between 10 p.m. and 3 a.m., the proportion of drivers in Adelaide with a BAC greater than or equal to 0.08 decreased by 42%.
• Drivers under 21 were identified as a possible problem area.
• Fatalities in the twelve-month period commencing May 1987 decreased by 18% in comparison with the previous twelve-month period, and by 13% in comparison with the average of the previous two twelve-month periods (commencing May 1985 and May 1986). There are indications that this trend is continuing.
• It is concluded that the increase in RBT, plus publicity, was successful in achieving its aims of reducing drink driving and accidents.
Abstract:
Random breath testing (RBT) was introduced in South Australia in 1981 with the intention of reducing the incidence of accidents involving alcohol. In April 1985, a Select Committee of the Upper House which had been established to “review the operation of random breath testing in this State and any other associated matters and report accordingly” presented its report. After consideration of this report, the Government introduced extensive amendments to those sections of the Motor Vehicles Act (MVA) and Road Traffic Act (RTA) which deal with RBT and drink driving penalties. The amended section 47da of the RTA requires that: “(5) The Minister shall cause a report to be prepared within three months after the end of each calendar year on the operation and effectiveness of this section and related sections during that calendar year. (6) The Minister shall, within 12 sitting days after receipt of a report under subsection (5), cause copies of the report to be laid before each House of Parliament.” This is the first such report. Whilst it deals with RBT over a full year, the changed procedures and improved flexibility allowed by the revision to the RTA were only introduced late in 1985 and then only to the extent that the existing resources would allow.
Abstract:
Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One of the advantages of DCNNs is the robustness of their representations to object location, which is useful for object recognition tasks. However, this also discards spatial information, which is useful when dealing with the topological information of an image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing task by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information into the network. The new strategy for a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
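A rough, framework-free sketch of the fusion idea described above (combining per-class score maps from several layers at different resolutions before a final per-pixel decision) is given below. It uses random arrays in place of real DCNN outputs, nearest-neighbour upsampling, and simple summation; the actual architecture, training procedure and graphical-model refinement from the paper are not reproduced here.

```python
import numpy as np

def upsample(scores, out_h, out_w):
    """Nearest-neighbour upsampling of a (classes, h, w) score map (sketch only)."""
    c, h, w = scores.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return scores[:, rows][:, :, cols]

rng = np.random.default_rng(0)
num_classes, H, W = 5, 32, 32

# Stand-ins for per-class score maps taken from early, middle and late layers
# of a DCNN: earlier layers keep more spatial detail, later layers are coarser.
layer_scores = [
    rng.normal(size=(num_classes, 32, 32)),   # early layer, fine resolution
    rng.normal(size=(num_classes, 16, 16)),   # middle layer
    rng.normal(size=(num_classes, 8, 8)),     # late layer, coarse resolution
]

# Fusion sketch: upsample every map to full resolution and sum them
# before taking the per-pixel arg-max label.
fused = sum(upsample(s, H, W) for s in layer_scores)
labels = fused.argmax(axis=0)
print(labels.shape)   # (32, 32) label map; a graphical model could refine it further
```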
Abstract:
As an extension to an activity introducing Year 5 students to the practice of statistics, the software TinkerPlots made it possible to collect repeated random samples from a finite population to informally explore students’ capacity to begin reasoning with a distribution of sample statistics. This article provides background for the sampling process and reports on the success of students in making predictions for the population from the collection of simulated samples and in explaining their strategies. The activity provided an application of the numeracy skill of using percentages, the numerical summary of the data, rather than graphing data in the analysis of samples to make decisions on a statistical question. About 70% of students made what were considered at least moderately good predictions of the population percentages for five yes–no questions, and the correlation between predictions and explanations was 0.78.
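The repeated-sampling computation behind the activity above (drawing many samples from a finite population and summarizing each as a percentage) can be sketched outside TinkerPlots as follows; the population size, true "yes" rate, sample size and number of repetitions are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch: a finite population answering one yes/no question is
# repeatedly sampled to build a distribution of sample percentages.
population = rng.random(600) < 0.4          # assumed: 40% "yes" in a population of 600
sample_size = 30
num_samples = 100

sample_pcts = []
for _ in range(num_samples):
    sample = rng.choice(population, size=sample_size, replace=False)
    sample_pcts.append(100.0 * sample.mean())

sample_pcts = np.array(sample_pcts)
print(f"true population percentage: {100.0 * population.mean():.1f}%")
print(f"mean of {num_samples} sample percentages: {sample_pcts.mean():.1f}%")
print(f"spread (std) of sample percentages: {sample_pcts.std():.1f} points")
```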