899 results for projection onto convex sets
Abstract:
As a precursor to the 2014 G20 Leaders’ Summit held in Brisbane, Australia, the Queensland Government sponsored a program of G20 Cultural Celebrations, designed to showcase the Summit’s host city. The cultural program’s signature event was the Colour Me Brisbane festival, a two-week ‘citywide interactive light and projection installations’ festival that was originally slated to run from 24 October to 9 November, but which was extended due to popular demand to conclude with the G20 Summit itself on 16 November. The Colour Me Brisbane festival comprised a series of projection displays that promoted visions of the city’s past, present, and future at landmark sites and iconic buildings throughout the city’s central business district, and thus transformed key buildings into forms of media architecture. In some instances the media architecture installations were interactive, allowing the public to control aspects of the projections through a computer interface situated in front of the building; however, the majority of the installations were not interactive in this sense. The festival was supported by a website that included information regarding the different visual and interactive displays and links to social media to support public discussion regarding the festival (Queensland Government 2014). Festival-goers were also encouraged to follow a walking-tour map of the projection sites that would take them on a 2.5 kilometre walk from Brisbane’s cultural precinct, through the city centre, concluding at Parliament House. In this paper, we investigate the Colour Me Brisbane festival and the broader G20 Cultural Celebrations as a form of strategic placemaking, designed, on the one hand, to promote Brisbane as a safe, open, and accessible city in line with the City Council’s plan to position Brisbane as a ‘New World City’ (Brisbane City Council 2014), and deployed, on the other, to counteract growing local concerns and tensions over the disruptive and politicised nature of the G20 Summit by engaging the public with the city prior to the heightened security and mobility restrictions of the Summit weekend. Harnessing perspectives from media architecture (Brynskov et al. 2013), urban imaginaries (Cinar & Bender 2007), and social media analysis, we take a critical approach to analysing the government-sponsored projections, which literally projected the city onto itself, and public responses to them via the official, and heavily promoted, social media hashtags (#colourmebrisbane and #g20cultural). Our critical framework extends the concepts of urban phantasmagoria and urban imaginaries into the emerging field of media architecture to scrutinise its potential for increased political and civic engagement. Walter Benjamin’s concept of phantasmagoria (Cohen 1989; Duarte, Firmino, & Crestani 2014) provides an understanding of urban space as spectacular projection, implicated in commodity and techno-culture. The concept of urban imaginaries (Cinar & Bender 2007; Kelley 2013)—that is, the ways in which citizens’ experiences of urban environments are transformed into symbolic representations through the use of imagination—similarly provides a useful framing device for thinking about the Colour Me Brisbane projections and their relation to the construction of place.
Employing these critical frames enables us to examine the ways in which the installations open up the potential for multiple urban imaginaries—in the sense that they encourage civic engagement via a tangible and imaginative experience of urban space—while, at the same time, supporting a particular vision and way of experiencing the city, promoting a commodified, sanctioned form of urban imaginary. This paper aims to dissect the urban imaginaries intrinsic to the Colour Me Brisbane projections and to examine how those imaginaries were strategically deployed as placemaking schemes that choreograph reflection on, and engagement with, the city.
Abstract:
Introduction: Vascular access devices (VADs), such as peripheral or central venous catheters, are vital across all medical and surgical specialties. To allow therapy or haemodynamic monitoring, VADs frequently require administration sets (AS) composed of infusion tubing, fluid containers, pressure-monitoring transducers and/or burettes. While VADs are replaced only when necessary, AS are routinely replaced every 3–4 days in the belief that this reduces infectious complications. Strong evidence supports AS use up to 4 days, but there is less evidence for AS use beyond 4 days. AS replacement twice weekly increases hospital costs and workload.
Methods and analysis: This is a pragmatic, multicentre, randomised controlled trial (RCT) of equivalence design comparing AS replacement at 4 (control) versus 7 (experimental) days. Randomisation is stratified by site and device, centrally allocated and concealed until enrolment. A total of 6554 adult/paediatric patients with a central venous catheter, peripherally inserted central catheter or peripheral arterial catheter will be enrolled over 4 years. The primary outcome is VAD-related bloodstream infection (BSI); secondary outcomes are VAD colonisation, AS colonisation, all-cause BSI, all-cause mortality, number of AS per patient, VAD time in situ and costs. Relative incidence rates of VAD-BSI per 100 devices and hazard rates per 1000 device days (95% CIs) will summarise the impact of 7-day relative to 4-day AS use and test equivalence. Kaplan-Meier survival curves (with log-rank Mantel-Cox test) will compare VAD-BSI over time. Appropriate parametric or non-parametric techniques will be used to compare secondary end points. p values of <0.05 will be considered significant.
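The statistical plan above translates directly into code. Below is a minimal sketch in Python, assuming the lifelines library; the data frame and its column names (arm, days, bsi) are hypothetical placeholders, not trial data.

```python
# Minimal sketch of the planned survival comparison with lifelines.
# Per-device placeholder data: days to VAD-BSI (or censoring) and an
# event indicator, by randomised arm (4-day vs 7-day AS replacement).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "arm":  ["4d", "4d", "4d", "7d", "7d", "7d"],
    "days": [21, 14, 30, 18, 25, 30],
    "bsi":  [0, 1, 0, 0, 1, 0],  # 1 = VAD-BSI observed, 0 = censored
})
g4, g7 = df[df["arm"] == "4d"], df[df["arm"] == "7d"]

# Kaplan-Meier curves for each arm, plotted on shared axes
km4 = KaplanMeierFitter().fit(g4["days"], event_observed=g4["bsi"], label="4-day AS")
km7 = KaplanMeierFitter().fit(g7["days"], event_observed=g7["bsi"], label="7-day AS")
ax = km4.plot_survival_function()
km7.plot_survival_function(ax=ax)

# Log-rank (Mantel-Cox) test comparing VAD-BSI over time
result = logrank_test(g4["days"], g7["days"],
                      event_observed_A=g4["bsi"], event_observed_B=g7["bsi"])
print("log-rank p-value:", result.p_value)

# Crude incidence per 100 devices and per 1000 device-days
for name, g in [("4-day", g4), ("7-day", g7)]:
    print(name,
          "per 100 devices:", 100 * g["bsi"].sum() / len(g),
          "per 1000 device-days:", round(1000 * g["bsi"].sum() / g["days"].sum(), 2))
```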
Abstract:
This paper describes part of an engineering study that was undertaken to demonstrate that a multi-megawatt photovoltaic (PV) generation system could be connected to a rural 11 kV feeder without creating power quality issues for other consumers. The paper concentrates solely on the voltage regulation aspect, as this was the most innovative part of the study. The study was carried out using the time-domain software package PSCAD/EMTDC. The software model included real-time data input of actual measured load and scaled PV generation data, along with real-time substation voltage regulator and PV inverter reactive power control. The model outputs plot real-time voltage, current and power variations across the daily load and PV generation cycles. Other aspects of the study not described in the paper include the analysis of harmonics, voltage flicker, power factor, voltage unbalance and system losses.
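As a back-of-the-envelope companion to the full PSCAD/EMTDC model, the sketch below illustrates the underlying voltage regulation concern: the approximate voltage deviation at the connection point is (PR + QX)/V². All feeder parameters and the load/PV profiles are invented for illustration, not taken from the study.

```python
# Why multi-megawatt PV on an 11 kV feeder affects voltage regulation:
# midday PV export reverses the power flow and raises the remote voltage.
import numpy as np

V_nom = 11e3                 # line-to-line voltage, volts
R, X = 3.0, 4.0              # hypothetical total feeder impedance, ohms

hours = np.arange(24)
load_mw = 1.0 + 0.8 * np.exp(-((hours - 18) ** 2) / 8.0)            # evening peak
pv_mw = np.clip(3.0 * np.sin(np.pi * (hours - 6) / 12.0), 0, None)  # daytime PV

# Net active power from the substation towards the load (W);
# negative values mean reverse flow due to PV export.
p_net = (load_mw - pv_mw) * 1e6
q_net = 0.3 * load_mw * 1e6  # lagging load, PV assumed at unity power factor

# Approximate far-end voltage deviation, percent of nominal
dv_pct = 100 * (p_net * R + q_net * X) / V_nom**2
for h, dv in zip(hours, dv_pct):
    print(f"{h:02d}:00  voltage deviation {dv:+.2f} %")
```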
Abstract:
Rapid advances in sequencing technologies (Next Generation Sequencing or NGS) have led to a vast increase in the quantity of bioinformatics data available, with this increasing scale presenting enormous challenges to researchers seeking to identify complex interactions. This paper is concerned with the domain of transcriptional regulation, and the use of visualisation to identify relationships between specific regulatory proteins (the transcription factors or TFs) and their associated target genes (TGs). We present preliminary work from an ongoing study which aims to determine the effectiveness of different visual representations and large-scale displays in supporting discovery. Following an iterative process of implementation and evaluation, representations were tested by potential users in the bioinformatics domain to determine their efficacy, and to better understand the range of ad hoc practices among bioinformatics-literate users. Results from two rounds of small-scale user studies are considered, with initial findings suggesting that bioinformaticians require richly detailed views of TF data, features to compare TF layouts between organisms quickly, and ways to keep track of interesting data points.
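As a toy illustration of the kind of TF-to-target-gene view under study, the sketch below draws a small regulatory graph with networkx and matplotlib; the TF and gene names are placeholders, and the study's actual representations are considerably richer.

```python
# Draw a small TF -> target-gene network: TFs in red, targets in blue.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("TF_A", "gene1"), ("TF_A", "gene2"), ("TF_A", "gene3"),
    ("TF_B", "gene2"), ("TF_B", "gene4"),
]
g = nx.DiGraph(edges)

tfs = {tf for tf, _ in edges}
colors = ["tab:red" if n in tfs else "tab:blue" for n in g.nodes]

# A simple spring layout; the study compares far richer representations
# and large-display layouts than this default.
pos = nx.spring_layout(g, seed=42)
nx.draw(g, pos, with_labels=True, node_color=colors, node_size=900, font_size=8)
plt.show()
```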
Abstract:
The development of techniques for scaling up classifiers so that they can be applied to problems with large datasets of training examples is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options for reducing dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection subject to a probabilistic length-preserving transformation is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
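A minimal sketch of this preprocessing idea, using scikit-learn's Gaussian random projection (a probabilistic, approximately length-preserving transformation in the Johnson-Lindenstrauss sense) ahead of AdaBoost; the dataset is synthetic and the dimensions are illustrative, not those of the paper's experiments.

```python
# Random projection to cut dimensionality before AdaBoost training.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=2000, n_features=5000,
                           n_informative=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The Gaussian projection approximately preserves pairwise distances with
# high probability, which is what makes it a cheap alternative to PCA here.
model = make_pipeline(
    GaussianRandomProjection(n_components=200, random_state=0),
    AdaBoostClassifier(n_estimators=100, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```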
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and thereby reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF) and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed against different optimality criteria. For the NPF monitoring survey, the efficiency, in terms of variance or mean squared error of the estimated mean abundance index, ranged from 114 to 199% relative to SRS. In the case of a fish ageing study of Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
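The RSS idea can be illustrated with a short simulation. The sketch below compares the mean squared error of RSS (ranking on a correlated concomitant variable) against SRS on a simulated population; the population and correlation structure are invented, not survey data.

```python
# Ranked set sampling vs simple random sampling on a synthetic population.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
concomitant = rng.gamma(2.0, 2.0, N)          # e.g. historical catch rate
target = concomitant + rng.normal(0, 1.0, N)  # correlated variable of interest

def rss_mean(m, cycles):
    """One RSS estimate: in each cycle, for rank i draw a set of m units,
    rank them by the concomitant, and measure only the i-th ranked unit."""
    picks = []
    for _ in range(cycles):
        for i in range(m):
            idx = rng.choice(N, m, replace=False)
            ranked = idx[np.argsort(concomitant[idx])]
            picks.append(target[ranked[i]])
    return np.mean(picks)

def srs_mean(n):
    return np.mean(target[rng.choice(N, n, replace=False)])

m, cycles, reps = 5, 4, 2000                  # sample size n = m * cycles = 20
rss = [rss_mean(m, cycles) for _ in range(reps)]
srs = [srs_mean(m * cycles) for _ in range(reps)]

true_mean = target.mean()
mse = lambda est: np.mean((np.array(est) - true_mean) ** 2)
print("relative efficiency (SRS MSE / RSS MSE):", mse(srs) / mse(rss))
```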
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts that penetrate even into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical scan truncated data. An extension of this technique, known as the windowed linear prediction approach, is also introduced. The efficacy of the two techniques is shown using simulations with standard phantoms. A quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
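A minimal sketch of the row-by-row completion idea, on a synthetic one-dimensional "detector row": linear prediction coefficients are fitted to the un-truncated samples by least squares and then used to extrapolate past the truncation edge. The paper's windowed variant and the helical CT geometry are not reproduced here.

```python
# Extrapolate a laterally truncated projection row with linear prediction.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
row = np.exp(-((t - 0.5) ** 2) / 0.02) + 0.05 * rng.normal(size=t.size)

known = row[:150]            # detector row truncated after sample 150
order, n_extra = 10, 106     # AR order and number of samples to predict

# Fit AR coefficients: known[k] ~ sum_j a[j] * known[k - 1 - j]
rows = np.array([known[k - order:k][::-1] for k in range(order, known.size)])
a, *_ = np.linalg.lstsq(rows, known[order:], rcond=None)

# Recursively extrapolate past the truncation edge
ext = list(known)
for _ in range(n_extra):
    ext.append(np.dot(a, ext[-1:-order - 1:-1]))
completed = np.array(ext)

print("true tail mean:", row[150:].mean(),
      "predicted tail mean:", completed[150:].mean())
```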
Abstract:
Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: an unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
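The objective being minimised can be illustrated compactly. The sketch below computes an RBF-kernel maximum mean discrepancy (MMD) between source and target samples before and after a linear projection; here the projection is a fixed random orthonormal matrix for illustration, whereas the paper optimises it so that this distance is minimised.

```python
# MMD between source and target domains, in the original and projected spaces.
import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 20))   # source features
Xt = rng.normal(0.5, 1.2, (100, 20))   # shifted target features

def mmd2_rbf(A, B, gamma=0.5):
    """Biased estimate of squared MMD with an RBF kernel."""
    def k(U, V):
        d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(A, A).mean() + k(B, B).mean() - 2 * k(A, B).mean()

# An orthonormal projection to d = 5 dimensions (random here; the method
# would search over such projections to minimise the MMD).
W, _ = np.linalg.qr(rng.normal(size=(20, 5)))

print("MMD^2 in original space: ", mmd2_rbf(Xs, Xt))
print("MMD^2 after projection:  ", mmd2_rbf(Xs @ W, Xt @ W))
```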
Abstract:
Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered situations including animal damage to rice and corn, nest locations, active rodent burrows and distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies.
Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake.
Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
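As a concrete example of a basic distance estimator of the kind evaluated above, the sketch below simulates a random (Poisson) event pattern and estimates density from point-to-nearest-event distances; the compound and Kendall-Moran estimators in the study are more involved, and edge effects are ignored here.

```python
# Basic plotless (distance) density estimation on a simulated pattern:
# for n random points with nearest-event distances r_i, a Poisson pattern
# gives the unbiased estimator (n - 1) / (pi * sum(r_i^2)).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
true_density = 50.0                      # events per unit area
n_events = rng.poisson(true_density)     # events on the unit square
events = rng.uniform(0, 1, (n_events, 2))

n = 25                                   # sample size, as in the study
points = rng.uniform(0, 1, (n, 2))
r, _ = cKDTree(events).query(points)     # point-to-nearest-event distances

density_hat = (n - 1) / (np.pi * np.sum(r ** 2))
print("true density:", true_density, "estimated:", round(density_hat, 1))
```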
Abstract:
We evaluated the potential of a formulated diet as a replacement for live and fresh feeds for 7-day post-hatch Panulirus ornatus phyllosomata and also investigated the effect of conditioning phyllosomata for 14-21 days on live feeds prior to weaning onto a 100% formulated diet. In the first trial, the highest survival (>55%) was consistently shown by phyllosomata fed a 50:50 combination of Artemia nauplii and Greenshell mussel, followed by phyllosomata fed 50% Artemia nauplii and 50% formulated diet and, thirdly, by those receiving 100% Artemia nauplii. The second trial assessed the replacement of on-grown Artemia with proportions of formulated diet and Greenshell mussel that differed from those used in trial 1. Phyllosomata fed either 75% formulated diet with 25% on-grown Artemia or 50% on-grown Artemia with 50% Greenshell mussel consistently showed the highest survival (>75%). Combinations of Greenshell mussel and formulated diet resulted in significantly (P < 0.05) reduced survival. In trial 3, phyllosomata were conditioned for 14, 18 or 21 days on Artemia nauplii prior to weaning onto a 100% formulated diet; survival rates were negatively related to the duration of feeding Artemia nauplii. In the final trial, phyllosomata were conditioned for 14 days on live on-grown Artemia prior to weaning onto one of three formulated diets (one diet with 44% crude protein (CP) and two diets with 50%). Phyllosomata fed the 44% CP diet consistently showed the highest survival (>35%) among all treatments, while those fed the 50%-squid CP diet showed a significant (P < 0.05) increase in mortality at day 24. The results of these trials demonstrate that hatcheries can potentially replace 75% of live on-grown Artemia with a formulated diet from 7 days after hatch. The poor performance associated with feeding combinations of Greenshell mussel and formulated diet, or 100% formulated diet, as well as with conditioning phyllosomata for 14-21 days on live feeds prior to weaning onto a formulated diet, highlights the importance of providing Artemia to stimulate feeding.
Abstract:
Background: Sorghum genome mapping based on DNA markers began in the early 1990s and numerous genetic linkage maps of sorghum have been published in the last decade, based initially on RFLP markers, with more recent maps including AFLPs and SSRs and, very recently, Diversity Array Technology (DArT) markers. It is essential to integrate the rapidly growing body of genetic linkage data produced through DArT with the multiple genetic linkage maps for sorghum generated through other marker technologies. Here, we report on the colinearity of six independent sorghum component maps and on the integration of these component maps into a single reference resource that contains commonly utilized SSRs, AFLPs, and high-throughput DArT markers. Results: The six component maps were constructed using the MultiPoint software. The lengths of the resulting maps varied between 910 and 1528 cM. The order of the 498 markers that segregated in more than one population was highly consistent between the six individual mapping data sets. The framework consensus map was constructed using a "Neighbours" approach and contained 251 integrated bridge markers on the 10 sorghum chromosomes, spanning 1355.4 cM with an average density of one marker every 5.4 cM; these bridge markers were used for the projection of the remaining markers. In total, the sorghum consensus map consisted of 1997 markers mapped to 2029 unique loci (1190 DArT loci and 839 other loci), spanning 1603.5 cM with an average marker density of one marker per 0.79 cM. In addition, 35 multicopy markers were identified. On average, each chromosome on the consensus map contained 203 markers, of which 58.6% were DArT markers. Non-random patterns of DNA marker distribution were observed, with some clear marker-dense regions and some marker-rare regions. Conclusion: The final consensus map has allowed us to map a larger number of markers than possible in any individual map, to obtain a more complete coverage of the sorghum genome and to fill a number of gaps on individual maps. In addition to overall general consistency of marker order across individual component maps, good agreement in overall distances between common marker pairs across the component maps used in this study was determined using a difference ratio calculation. The obtained consensus map can be used as a reference resource for genetic studies in different genetic backgrounds, in addition to providing a framework for transferring genetic information between different marker technologies and for integrating DArT markers with other genomic resources. DArT markers represent an affordable, high-throughput marker system with great utility in molecular breeding programs, especially in crops such as sorghum where SNP arrays are not publicly available.
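The projection step mentioned above, placing markers unique to a component map onto the consensus framework, can be illustrated as piecewise-linear interpolation between shared bridge markers. The sketch below uses invented marker names and positions and is not the "Neighbours" algorithm itself.

```python
# Project component-map-only markers onto a consensus map by interpolating
# between bridge markers shared by both maps.
import numpy as np

# Bridge marker positions (cM) on a component map and the consensus map
bridge_component = np.array([0.0, 25.0, 60.0, 90.0])
bridge_consensus = np.array([0.0, 22.0, 55.0, 83.0])

# Markers mapped only on the component map (hypothetical)
unique_markers = {"mk101": 10.0, "mk102": 40.0, "mk103": 75.0}

for name, pos in unique_markers.items():
    projected = np.interp(pos, bridge_component, bridge_consensus)
    print(f"{name}: {pos} cM (component) -> {projected:.1f} cM (consensus)")
```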
Abstract:
Typhoid fever is becoming an ever-increasing threat in developing countries. We have improved considerably upon the existing PCR-based diagnosis method by designing primers against a region that is unique to Salmonella enterica subsp. enterica serovar Typhi and Salmonella enterica subsp. enterica serovar Paratyphi A, corresponding to the STY0312 gene in S. Typhi and its homolog SPA2476 in S. Paratyphi A. An additional set of primers amplifies another region in S. Typhi CT18 and S. Typhi Ty2, corresponding to the region between genes STY0313 and STY0316, which is absent in S. Paratyphi A. The possibility of a false-negative result arising due to mutation in hypervariable genes has been reduced by targeting a gene unique to typhoidal Salmonella serovars as a diagnostic marker. The amplified region has been tested for genomic stability by amplifying the region from clinical isolates of patients from various geographical locations in India, showing that this region is potentially stable. This set of primers can also differentiate between S. Typhi CT18, S. Typhi Ty2, and S. Paratyphi A, which have stable deletions in this specific locus. The PCR assay designed in this study has a sensitivity of 95%, compared to the Widal test, which has a sensitivity of only 63%. In certain cases, the PCR assay was even more sensitive than blood culture, as PCR-based detection can also detect dead bacteria.
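The differentiation logic, two targets of which one is shared by both serovars and one is specific to S. Typhi, can be mimicked with a toy in-silico PCR check. All sequences and primers below are short placeholders, not the study's actual primers or loci.

```python
# Toy in-silico PCR: a target "amplifies" when the forward primer and the
# reverse complement of the reverse primer both occur, in order.
def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplifies(template: str, fwd: str, rev: str) -> bool:
    i, j = template.find(fwd), template.find(revcomp(rev))
    return i != -1 and j != -1 and i < j

genomes = {  # placeholder "genomes", marker 2 region deleted in Paratyphi A
    "S. Typhi":       "TTGACGGATCCAAGCTTGCAATTCGCGTACGTAGCTAGCT",
    "S. Paratyphi A": "TTGACGGATCCAAGCTTGCA",
}
target1 = ("GACGGATCC", "TGCAAGCTT")   # shared region (both serovars)
target2 = ("ATTCGCGTA", "AGCTAGCTA")   # region present only in S. Typhi

for name, genome in genomes.items():
    calls = {f"target{k}": amplifies(genome, f, r)
             for k, (f, r) in enumerate((target1, target2), start=1)}
    print(name, calls)   # Typhi: both True; Paratyphi A: target2 False
```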
Abstract:
A branch and bound type algorithm is presented in this paper for the problem of finding a transportation schedule that minimises the total transportation cost, where the transportation cost over each route is assumed to be a piecewise linear continuous convex function with increasing slopes. The algorithm is an extension of the work of Balachandran and Perry, in which the transportation cost over each route is assumed to be a piecewise linear discontinuous function with decreasing slopes. A numerical example illustrating the algorithm is solved.
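The cost structure can be made concrete with a small example. Because the slopes are increasing (convex costs), each route can be split into capacity-limited segments and a linear program will fill the cheaper segment first; the sketch below sets up such an LP with scipy, using invented data. The paper develops a branch and bound algorithm rather than this plain LP illustration.

```python
# Transportation problem with convex piecewise-linear route costs, modelled
# by splitting each route into two segment variables with increasing rates.
import numpy as np
from scipy.optimize import linprog

supply = [30, 25]          # two sources
demand = [20, 35]          # two sinks (totals balance: 55 = 55)

# Each route (i, j): up to `cap` units at rate c1, further units at the
# steeper rate c2 (c2 > c1, hence convex increasing slopes).
cap = 15
c1 = np.array([[4.0, 6.0], [5.0, 3.0]])
c2 = c1 + 2.0

# Variables: x1[i][j] (first segment, bounded by cap), then x2[i][j].
c = np.concatenate([c1.ravel(), c2.ravel()])
n = c1.size

A_eq, b_eq = [], []
for i in range(2):         # supply: sum_j (x1 + x2)[i, j] = supply[i]
    row = np.zeros(2 * n)
    for j in range(2):
        row[2 * i + j] = row[n + 2 * i + j] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(2):         # demand: sum_i (x1 + x2)[i, j] = demand[j]
    row = np.zeros(2 * n)
    for i in range(2):
        row[2 * i + j] = row[n + 2 * i + j] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

bounds = [(0, cap)] * n + [(0, None)] * n
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2 = res.x[:n].reshape(2, 2), res.x[n:].reshape(2, 2)
print("total cost:", res.fun)
print("shipments:\n", x1 + x2)
```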
Abstract:
The topic of this dissertation lies in the intersection of harmonic analysis and fractal geometry. We particularly consider singular integrals in Euclidean spaces with respect to general measures, and we study how the geometric structure of the measures affects certain analytic properties of the operators. The thesis consists of three research articles and an overview. In the first article we construct singular integral operators on lower-dimensional Sierpinski gaskets associated with homogeneous Calderón-Zygmund kernels. While these operators are bounded, their principal values fail to exist almost everywhere. Conformal iterated function systems generate a broad range of fractal sets. In the second article we prove that many of these limit sets are porous in a very strong sense, by showing that they contain holes spread in every direction. We then connect these results with singular integrals: exploiting the fractal structure of these limit sets, we establish that singular integrals associated with very general kernels converge weakly. Boundedness questions constitute a central topic of investigation in the theory of singular integrals. In the third article we study singular integrals of different measures. We prove a very general boundedness result in the case where the two underlying measures are separated by a Lipschitz graph. As a consequence we show that a certain weak convergence holds for a large class of singular integrals.
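For orientation, the central objects can be written down explicitly. The display below gives the standard truncated singular integral of a measure and its principal value; the thesis's precise hypotheses on the kernel K and the measure mu may differ from this generic setting.

```latex
% Truncated singular integral of a measure \mu with a Calderon-Zygmund
% kernel K, and its principal value (standard formulation, for orientation).
\[
  T_{\varepsilon}\mu(x) = \int_{|x-y|>\varepsilon} K(x-y)\, d\mu(y),
  \qquad
  \operatorname{p.v.}\, T\mu(x) = \lim_{\varepsilon \to 0} T_{\varepsilon}\mu(x).
\]
```

The first article exhibits bounded operators of this type on lower-dimensional Sierpinski gaskets for which the limit above nonetheless fails to exist almost everywhere.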