932 results for Weakly Compact Sets
Abstract:
The use of compact fluorescent lamps (CFLs) in domestic residences has increased rapidly due to their higher energy efficiency and longer life expectancy compared with traditional incandescent light bulbs. Through measurement of illuminance, actual power and apparent power, the actual efficacy and associated power factor of CFLs are studied in this paper. It is found that for an individual CFL, although its power consumption and lighting output (i.e. luminous flux) may be higher or lower than the values stated by the lighting manufacturers, the actual efficacy is most likely to be equal to or better than the efficacy calculated from the manufacturers' rated power and lumen output. The typical power factor for the CFLs was 0.63.
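As a point of reference, the sketch below shows the two quantities compared in this abstract, luminous efficacy and power factor, computed from measured values; the numeric values are placeholders, not measurements reported in the paper.

```python
# Illustrative efficacy and power factor calculation for a lamp.
# All numbers below are hypothetical, not data from the study.

def efficacy_lm_per_w(luminous_flux_lm: float, actual_power_w: float) -> float:
    """Luminous efficacy = luminous flux / real (actual) power."""
    return luminous_flux_lm / actual_power_w

def power_factor(actual_power_w: float, apparent_power_va: float) -> float:
    """Power factor = real power / apparent power."""
    return actual_power_w / apparent_power_va

# Example: a nominal 11 W CFL rated at 600 lm, measured at 640 lm,
# drawing 10.5 W real power and 16.7 VA apparent power.
rated = efficacy_lm_per_w(600, 11)       # efficacy from manufacturer ratings
measured = efficacy_lm_per_w(640, 10.5)  # efficacy from measured flux and power
pf = power_factor(10.5, 16.7)            # ~0.63, the typical value reported
print(f"rated {rated:.1f} lm/W, measured {measured:.1f} lm/W, PF {pf:.2f}")
```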
Abstract:
Compact arrays enable various applications such as antenna beam-forming and multi-input, multi-output (MIMO) schemes on limited-size platforms. The reduced element spacing in compact arrays introduces high levels of mutual coupling which can affect the performance of the adaptive array. This coupling causes a mismatch at the input ports, which disturbs the performance of the individual elements in the array and affects the implementation of beam steering. In this article, a reactive decoupling network for a 3-element monopole array is used to establish port isolation while simultaneously matching the input impedance at each port to the system impedance. The integrated decoupling and matching network is incorporated in the ground plane of the monopole array, providing further development scope for beamforming using phase shifters and power splitters in double-layered circuits.
Abstract:
Introduction: Vascular access devices (VADs), such as peripheral or central venous catheters, are vital across all medical and surgical specialties. To allow therapy or haemodynamic monitoring, VADs frequently require administration sets (AS) composed of infusion tubing, fluid containers, pressure-monitoring transducers and/or burettes. While VADs are replaced only when necessary, AS are routinely replaced every 3–4 days in the belief that this reduces infectious complications. Strong evidence supports AS use up to 4 days, but there is less evidence for AS use beyond 4 days. AS replacement twice weekly increases hospital costs and workload. Methods and analysis: This is a pragmatic, multicentre, randomised controlled trial (RCT) of equivalence design comparing AS replacement at 4 (control) versus 7 (experimental) days. Randomisation is stratified by site and device, centrally allocated and concealed until enrolment. 6554 adult/paediatric patients with a central venous catheter, peripherally inserted central catheter or peripheral arterial catheter will be enrolled over 4 years. The primary outcome is VAD-related bloodstream infection (BSI) and secondary outcomes are VAD colonisation, AS colonisation, all-cause BSI, all-cause mortality, number of AS per patient, VAD time in situ and costs. Relative incidence rates of VAD-BSI per 100 devices and hazard rates per 1000 device days (95% CIs) will summarise the impact of 7-day relative to 4-day AS use and test equivalence. Kaplan-Meier survival curves (with log-rank Mantel-Cox test) will compare VAD-BSI over time. Appropriate parametric or non-parametric techniques will be used to compare secondary end points. p values of <0.05 will be considered significant.
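A minimal sketch of the two summary rates named in the protocol (VAD-BSI per 100 devices and per 1000 device days); the counts used here are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical counts; the formulas mirror the summary measures named in the protocol.

def rate_per_100_devices(n_bsi: int, n_devices: int) -> float:
    """Incidence of VAD-BSI per 100 devices."""
    return 100.0 * n_bsi / n_devices

def rate_per_1000_device_days(n_bsi: int, device_days: float) -> float:
    """Incidence of VAD-BSI per 1000 device days."""
    return 1000.0 * n_bsi / device_days

# Example: 12 infections among 3277 devices observed for 22940 device days.
print(rate_per_100_devices(12, 3277))        # ~0.37 per 100 devices
print(rate_per_1000_device_days(12, 22940))  # ~0.52 per 1000 device days
```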
Abstract:
Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A concentrated plasticity formulation suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in this paper. This formulation, referred to as the refined plastic hinge method, implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
Abstract:
Stability analyses have been widely used to better understand the mechanism of traffic jam formation. In this paper, we consider the impact of cooperative systems (a.k.a. connected vehicles) on traffic dynamics and, more precisely, on flow stability. Cooperative systems are emerging technologies enabling communication between vehicles and/or with the infrastructure. In a distributed communication framework, equipped vehicles are able to send and receive information to/from other equipped vehicles. Here, the effects of cooperative traffic are modeled through a general bilateral multianticipative car-following law that improves cooperative drivers' perception of their surrounding traffic conditions within a given communication range. Linear stability analyses are performed for a broad class of car-following models. They point out different stability conditions in both multianticipative and nonmultianticipative situations. To better understand what happens in unstable conditions, information on the shock wave structure is studied in the weakly nonlinear regime by means of the reductive perturbation method. The shock wave equation is obtained for generic car-following models by deriving the Korteweg-de Vries (KdV) equation. We then derive traffic-state-dependent conditions for the sign of the solitary wave (soliton) amplitude. This analytical result is verified through simulations. Simulation results confirm the validity of the speed estimate. The variation of the soliton amplitude as a function of the communication range is provided. The performed linear and weakly nonlinear analyses help justify the potential benefits of vehicle-integrated communication systems and provide new insights supporting the future implementation of cooperative systems.
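For readers unfamiliar with linear stability analysis of car-following models, the sketch below checks the classical (textbook) string-stability criterion for a generic law dv/dt = f(s, Δv, v), using the optimal velocity model as an example. This is a standard reference condition, not the multianticipative condition derived in the paper.

```python
# Classical linear string-stability check for a car-following law
#   dv/dt = f(s, dv_rel, v),
# where s is the gap, dv_rel = leader speed - follower speed, v is the speed,
# and f_s, f_dv, f_v are the partial derivatives of f at equilibrium.
# Textbook criterion: string stable if  f_v**2 / 2 - f_dv * f_v - f_s >= 0.
# (Reference condition only; the paper derives its own multianticipative result.)

def string_stable(f_s: float, f_dv: float, f_v: float) -> bool:
    return 0.5 * f_v**2 - f_dv * f_v - f_s >= 0.0

# Example: optimal velocity model dv/dt = a*(V(s) - v), so f_v = -a, f_dv = 0,
# f_s = a*V'(s); the criterion reduces to the classical V'(s) <= a/2.
a, V_prime = 1.0, 0.4
print(string_stable(f_s=a * V_prime, f_dv=0.0, f_v=-a))  # True, since 0.4 <= 0.5
```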
Abstract:
Rapid advances in sequencing technologies (Next Generation Sequencing or NGS) have led to a vast increase in the quantity of bioinformatics data available, with this increasing scale presenting enormous challenges to researchers seeking to identify complex interactions. This paper is concerned with the domain of transcriptional regulation, and the use of visualisation to identify relationships between specific regulatory proteins (the transcription factors or TFs) and their associated target genes (TGs). We present preliminary work from an ongoing study which aims to determine the effectiveness of different visual representations and large-scale displays in supporting discovery. Following an iterative process of implementation and evaluation, representations were tested by potential users in the bioinformatics domain to determine their efficacy, and to better understand the range of ad hoc practices among bioinformatics-literate users. Results from two rounds of small-scale user studies are considered, with initial findings suggesting that bioinformaticians require richly detailed views of TF data, features to compare TF layouts between organisms quickly, and ways to keep track of interesting data points.
Abstract:
An accretion flow is necessarily transonic around a black hole. However, around a neutron star it may or may not be transonic, depending on the inner disk boundary conditions influenced by the neutron star. I will discuss the various transonic behaviours of the disk fluid in a general relativistic (or pseudo general relativistic) framework. I will show that there are four types of sonic/critical points that can form in an accretion disk. It will be shown how the fluid properties, including the location of the sonic points, vary with the angular momentum of the compact object, which controls the overall disk dynamics and outflows.
Abstract:
Bioacoustic data can be used for monitoring animal species diversity. The deployment of acoustic sensors enables acoustic monitoring at large temporal and spatial scales. We describe a content-based birdcall retrieval algorithm for the exploration of large databases of acoustic recordings. In the algorithm, an event-based searching scheme and compact features are developed. In detail, ridge events are detected from audio files using event detection on spectral ridges. Then event alignment is used to search through audio files to locate candidate instances. A similarity measure is then applied to dimension-reduced spectral ridge feature vectors. The event-based searching method processes a smaller list of instances for faster retrieval. The experimental results demonstrate that our features achieve a better success rate than existing methods while greatly reducing the feature dimension.
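A minimal sketch of the final matching step described above, applying a similarity measure to dimension-reduced feature vectors of candidate events; cosine similarity and the random feature vectors are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Illustrative matching step: compare a query's dimension-reduced ridge-feature
# vector against candidate event vectors. Cosine similarity is an assumed choice;
# the paper's exact similarity measure and reduction method may differ.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
query = rng.random(16)              # dimension-reduced query features (placeholder)
candidates = rng.random((100, 16))  # features of candidate instances (placeholder)

scores = [cosine_similarity(query, c) for c in candidates]
top5 = np.argsort(scores)[::-1][:5]  # indices of the best-matching candidate events
print(top5)
```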
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and therefore reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF) and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed under different optimality criteria. For the NPF monitoring survey, the efficiency in terms of the variance or mean squared error of the estimated mean abundance index ranged from 114 to 199% compared with SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
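A minimal sketch of balanced ranked set sampling as described above: within each small set, units are ranked by a cheap concomitant variable (e.g. a previous survey's catch rate) and only the unit of the designated rank is actually measured. The population and concomitant values are synthetic placeholders.

```python
import random

# Minimal balanced ranked set sampling (RSS) sketch.
# Each unit carries a cheap concomitant value (e.g. last year's catch rate) used
# only for ranking; the variable of interest is measured on one unit per set.

def ranked_set_sample(population, concomitant, m=3, cycles=5, seed=1):
    rng = random.Random(seed)
    sample = []
    for _ in range(cycles):
        for rank in range(m):                           # one set per rank position
            set_units = rng.sample(range(len(population)), m)
            ordered = sorted(set_units, key=lambda i: concomitant[i])
            sample.append(population[ordered[rank]])    # measure only the rank-th unit
    return sample

# Hypothetical data: true abundance and a correlated prior-survey index.
pop = [random.gauss(50, 10) for _ in range(1000)]
conc = [x + random.gauss(0, 5) for x in pop]
print(ranked_set_sample(pop, conc, m=3, cycles=4))      # 12 measured units
```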
Abstract:
The work is based on the assumption that words with similar syntactic usage have similar meaning, which was proposed by Zellig S. Harris (1954, 1968). We study his assumption from two aspects: firstly, different meanings (word senses) of a word should manifest themselves in different usages (contexts), and secondly, similar usages (contexts) should lead to similar meanings (word senses). If we start with the different meanings of a word, we should be able to find distinct contexts for the meanings in text corpora. We separate the meanings by grouping and labeling contexts in an unsupervised or weakly supervised manner (Publications 1, 2 and 3). We are confronted with the question of how best to represent contexts in order to induce effective classifiers of contexts, because differences in context are the only means we have to separate word senses. If we start with words in similar contexts, we should be able to discover similarities in meaning. We can do this monolingually or multilingually. In the monolingual material, we find synonyms and other related words in an unsupervised way (Publication 4). In the multilingual material, we find translations by supervised learning of transliterations (Publication 5). In both the monolingual and multilingual cases, we first discover words with similar contexts, i.e., synonym or translation lists. In the monolingual case we also aim at finding structure in the lists by discovering groups of similar words, e.g., synonym sets. In this introduction to the publications of the thesis, we consider the larger background issues of how meaning arises, how it is quantized into word senses, and how it is modeled. We also consider how to define, collect and represent contexts. We discuss how to evaluate the trained context classifiers and discovered word sense classifications, and finally we present the word sense discovery and disambiguation methods of the publications. This work supports Harris' hypothesis by implementing three new methods modeled on his hypothesis. The methods have practical consequences for creating thesauruses and translation dictionaries, e.g., for information retrieval and machine translation purposes. Keywords: Word senses, Context, Evaluation, Word sense disambiguation, Word sense discovery.
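To make the "grouping contexts" idea concrete, the sketch below represents each occurrence of an ambiguous word by a bag-of-words vector over its context and clusters the contexts so that, ideally, each cluster corresponds to one sense. The vectorisation and clustering choices here are illustrative assumptions, not the methods of the thesis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Illustrative sense induction: contexts with similar usage should end up in the
# same cluster (Harris' hypothesis). The toy contexts and k-means clustering are
# placeholder choices, not the thesis' own method.

contexts = [
    "deposited money at the bank before noon",
    "the bank raised its interest rates",
    "fished from the grassy bank of the river",
    "the river bank eroded after the flood",
]
X = CountVectorizer(stop_words="english").fit_transform(contexts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # contexts with similar usage should share a cluster label
```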
Abstract:
Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. The data sets covered a wide range of situations including animal damage to rice and corn, nest locations, active rodent burrows and distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used for the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes of less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
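For orientation, the sketch below shows one basic point-to-nearest-event distance estimator of the kind such compound estimators build on, assuming a random (Poisson) spatial pattern; it is a standard textbook form, not the specific compound estimator recommended in the study, and the distances are hypothetical.

```python
import math

# One basic plotless (distance-based) density estimator: the closest-individual /
# ordered-distance form under an assumed random (Poisson) spatial pattern,
#   density ~ (n - 1) / (pi * sum(r_i**2)),
# where r_i is the distance from random point i to its nearest event.
# This is a common building block, not the compound estimator evaluated in the paper.

def closest_individual_density(distances_m):
    n = len(distances_m)
    return (n - 1) / (math.pi * sum(r * r for r in distances_m))

# Hypothetical nearest-event distances (metres) from 25 random points.
r = [1.8, 0.9, 2.4, 1.1, 3.0, 0.7, 1.6, 2.2, 1.3, 0.8,
     2.9, 1.0, 1.9, 2.5, 1.4, 0.6, 2.1, 1.7, 1.2, 2.8,
     0.9, 1.5, 2.0, 1.1, 2.6]
print(closest_individual_density(r))  # events per square metre
```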
An investigation of bond formation in the weakly bound first excited 1Σ and lowest 3Σ states of HeH+
Abstract:
The role of the electronic kinetic energy and its Cartesian components is examined during the formation of the first excited 1Σ and the lowest 3Σ states of HeH+, employing wavefunctions of multi-configuration type with basis orbitals in elliptic coordinates. Results show that the bond formation in these states is preceded primarily by a charge transfer from H to He+ rather than by polarisation of the H-orbital by He+.
Abstract:
Typhoid fever is becoming an ever increasing threat in the developing countries. We have improved considerably upon the existing PCR-based diagnosis method by designing primers against a region that is unique to Salmonella enterica subsp. enterica serovar Typhi and Salmonella enterica subsp. enterica serovar Paratyphi A, corresponding to the STY0312 gene in S. Typhi and its homolog SPA2476 in S. Paratyphi A. An additional set of primers amplifies another region in S. Typhi CT18 and S. Typhi Ty2, corresponding to the region between genes STY0313 and STY0316, which is absent in S. Paratyphi A. The possibility of a false-negative result arising due to mutation in hypervariable genes has been reduced by targeting a gene unique to typhoidal Salmonella serovars as a diagnostic marker. The amplified region has been tested for genomic stability by amplifying it from clinical isolates of patients from various geographical locations in India, thereby showing that this region is potentially stable. This set of primers can also differentiate between S. Typhi CT18, S. Typhi Ty2, and S. Paratyphi A, which have stable deletions in this specific locus. The PCR assay designed in this study has a sensitivity of 95%, compared to the Widal test which has a sensitivity of only 63%. In certain cases, the PCR assay was more sensitive than the blood culture test, as the PCR-based detection could also detect dead bacteria.
Abstract:
Video surveillance infrastructure has been widely installed in public places for security purposes. However, live video feeds are typically monitored by human staff, making it difficult to detect important events as they occur. As such, an expert system that can automatically detect events of interest in surveillance footage is highly desirable. Although a number of approaches have been proposed, they have significant limitations: supervised approaches, which can detect a specific event, ideally require a large number of samples with the event spatially and temporally localised; while unsupervised approaches, which do not require this demanding annotation, can only detect whether an event is abnormal, not its specific type. To overcome these problems, we formulate a weakly supervised approach using Kullback-Leibler (KL) divergence to detect rare events. The proposed approach leverages the sparse nature of the target events to its advantage, and we show that this data imbalance guarantees the existence of a decision boundary to separate samples that contain the target event from those that do not. This trait, combined with the coarse annotation used by weakly supervised learning (which only indicates approximately when an event occurs), greatly reduces the annotation burden while retaining the ability to detect specific events. Furthermore, the proposed classifier requires only a decision threshold, simplifying its use compared to other weakly supervised approaches. We show that the proposed approach outperforms state-of-the-art methods on a popular real-world traffic surveillance dataset, while preserving real-time performance.
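A minimal sketch of the core scoring idea, comparing each clip's feature histogram against a reference distribution with KL divergence and applying a single decision threshold; the histograms, the synthetic "rare" clip and the threshold value are all illustrative placeholders rather than the paper's features or classifier.

```python
import numpy as np

# Illustrative KL-divergence scoring for weakly supervised rare-event detection:
# compare each clip's normalised feature histogram q against a reference histogram p
# built from clips coarsely labelled as not containing the event, and flag clips
# whose divergence exceeds a single decision threshold.

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-10) -> float:
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
reference = rng.random(32) + 10.0     # histogram from "normal" clips (placeholder)
clips = rng.random((10, 32)) + 10.0   # histograms of clips to score (placeholder)
clips[3, :8] += 20.0                  # synthetic "rare event" clip with shifted mass

scores = [kl_divergence(reference, c) for c in clips]
threshold = 0.05                      # single decision threshold, as in the abstract
print([i for i, s in enumerate(scores) if s > threshold])  # flags clip 3
```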