Abstract:
The project aimed to detect exotic Culicoides species recently established in northern Australia and to map the distribution of Culicoides brevitarsis and C. wadai in Western Australia and NT. Between February 1990 and June 1992, collections were made throughout Cape York Peninsula, Northern Territory and northern and central Western Australia. Six previously unreported species were collected. These species are considered unlikely to be recent immigrants and seem to pose little threat as potential arbovirus vectors. C. wadai was restricted to coastal northern Qld, the northernmost areas of NT and the northern Kimberley region in WA. In NT, C. brevitarsis was collected as far south as Katherine. In WA it was collected throughout the Kimberley and in the Pilbara region in an area bounded by Nullagine, Karratha and 300 km north of Carnarvon. C. brevitarsis remains the only Culicoides species of known importance as a vector of livestock arboviruses to extend into major sheep-grazing areas. Generally, Culicoides distributions in northern Australia between 1990 and 1992 were comparable but not identical to those defined in surveys conducted in the 1970s and 1980s. Species distributions were not static and will continue to fluctuate with variation in rainfall.
Abstract:
A possible mechanism is suggested for the resistance minimum in dilute alloys in which the localized impurity states are non-magnetic. The key point is that what is essential to the Kondo-like behaviour is the interaction of the conduction-electron spin s with internal dynamical degrees of freedom of the impurity centre. The necessary internal dynamical degrees of freedom are provided by the dynamical Jahn-Teller effect associated with the degenerate 3d-orbitals of the transition-metal impurities interacting with the surrounding (octahedral) complex of nearest-neighbour atoms. The fictitious spin I characterizing certain low-lying vibronic states of the system is shown to couple with the conduction-electron spin s via s-d mixing and spin-orbit coupling, giving rise to a singular temperature-dependent exchange-like interaction. The resistivity so calculated is in fair agreement with the experimental results of Cape and Hake for Ti containing 0.2 at% of Fe.
Abstract:
Digital and interactive technologies are becoming increasingly embedded in the everyday lives of people around the world. Application of technologies such as real-time, context-aware, and interactive technologies; augmented and immersive realities; social media; and location-based services has been particularly evident in urban environments, where technological and sociocultural infrastructures enable easier deployment and adoption as compared to non-urban areas. There has been growing consumer demand for new forms of experiences and services enabled through these emerging technologies. We call this ambient media, as the media is embedded in the natural human living environment. This workshop focuses on ambient media services, applications, and technologies that promote people’s engagement in creating and recreating liveliness in urban environments, particularly through arts, culture, and gastronomic experiences. The RelCi workshop series is organized in cooperation with the Queensland University of Technology (QUT), in particular the Urban Informatics Lab, and the Tampere University of Technology (TUT), in particular the Entertainment and Media Management (EMMi) Lab. The workshop runs under the umbrella of the International Ambient Media Association (AMEA) (http://www.ambientmediaassociation.org), which hosts the international open access journal “International Journal on Information Systems and Management in Creative eMedia” and the international open access series “International Series on Information Systems and Management in Creative eMedia” (see http://www.tut.fi/emmi/Journal). The RelCi workshop took place for the first time in 2012 in conjunction with ICME 2012 in Melbourne, Australia; this year’s edition took place in conjunction with INTERACT 2013 in Cape Town, South Africa.
In addition, the International Ambient Media Association (AMEA) organizes the Semantic Ambient Media (SAME) workshop series, which took place in 2008 in conjunction with ACM Multimedia 2008 in Vancouver, Canada; in 2009 in conjunction with AmI 2009 in Salzburg, Austria; in 2010 in conjunction with AmI 2010 in Malaga, Spain; in 2011 in conjunction with Communities and Technologies 2011 in Brisbane, Australia; in 2012 in conjunction with Pervasive 2012 in Newcastle, UK; and in 2013 in conjunction with C&T 2013 in Munich, Germany.
Abstract:
Highly structured small peptides are the major toxic constituents of the venom of cone snails, a family of widely distributed predatory marine molluscs. These animals use the venom for rapid prey immobilization. The peptide components in the venom target a wide variety of membrane-bound ion channels and receptors. Many have been found to be highly selective for a diverse range of mammalian ion channels and receptors associated with pain-signaling pathways. Their small size, structural stability, and target specificity make them attractive pharmacologic agents. A select number of laboratories mainly from the United States, Europe, Australia, Israel, and China have been engaged in intense drug discovery programs based on peptides from a few snail species. Coastal India has an estimated 20-30% of the known cone species; however, few serious studies have been reported so far. We have begun a comprehensive program for the identification and characterization of peptides from cone snails found in Indian Coastal waters. This presentation reviews our progress over the last 2 years. As expected from the evolutionary history of these venom components, our search has yielded novel peptides of therapeutic promise from the new species that we have studied.
Abstract:
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
Abstract:
The paper describes the sensitivity of simulated precipitation to changes in the convective relaxation time scale (TAU) of the Zhang and McFarlane (ZM) cumulus parameterization in the NCAR Community Atmosphere Model version 3 (CAM3). In the default configuration of the model, the prescribed value of TAU, the characteristic time scale with which convective available potential energy (CAPE) is removed at an exponential rate by convection, is 1 h. However, some recent observational findings suggest that it is larger by around one order of magnitude. To explore the sensitivity of the model simulation to TAU, two model frameworks have been used, namely aqua-planet and actual-planet configurations. Numerical integrations have been carried out using different values of TAU, and the effect on simulated precipitation has been analyzed. The aqua-planet simulations reveal that as TAU increases, the rate of deep convective precipitation (DCP) decreases, leading to an accumulation of convective instability in the atmosphere. Consequently, the moisture content in the lower and mid-troposphere increases. On the other hand, shallow convective precipitation (SCP) and large-scale precipitation (LSP) intensify, predominantly the SCP, thus capping the accumulation of convective instability in the atmosphere. The total precipitation (TP) remains approximately constant, but the proportion of the three components changes significantly, which in turn alters the vertical distribution of total precipitation production. With increasing TAU, the vertical structure of moist heating changes from a vertically extended profile to a bottom-heavy profile, and the altitude of the maximum vertical velocity shifts from the upper troposphere to the lower troposphere. A similar response was seen in the actual-planet simulations. With an increase in TAU from 1 h to 8 h, there was a significant improvement in the simulation of the seasonal mean precipitation, and the fraction of deep convective precipitation was in much better agreement with satellite observations.
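The exponential CAPE removal governed by TAU can be sketched in a few lines; the CAPE magnitude and elapsed time below are illustrative numbers only, not values from the CAM3 runs.

```python
import math

def cape_remaining(cape0, t_hours, tau_hours):
    """CAPE left after t_hours when convection relaxes it at an
    exponential rate with time scale TAU: CAPE(t) = CAPE0 * exp(-t/TAU)."""
    return cape0 * math.exp(-t_hours / tau_hours)

# With TAU = 1 h (the CAM3 default) CAPE is consumed within a few hours;
# with TAU = 8 h much of it persists, allowing convective instability to
# accumulate, consistent with the behaviour described in the abstract.
for tau_h in (1.0, 8.0):
    print(f"TAU = {tau_h} h -> CAPE left after 4 h: "
          f"{cape_remaining(1000.0, 4.0, tau_h):.1f} J/kg")
```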
Abstract:
High-sensitivity detection techniques are required for indoor navigation using Global Navigation Satellite System (GNSS) receivers, and typically a combination of coherent and non-coherent integration is used as the test statistic for detection. The coherent integration exploits the deterministic part of the signal and is limited by the residual frequency error, navigation data bits and user dynamics, which are not known a priori. Non-coherent integration, which involves squaring the coherent integration output, is therefore used to improve the detection sensitivity. Due to this squaring, it is robust against the artifacts introduced by data bits and/or frequency error. However, it is susceptible to uncertainty in the noise variance, which can lead to fundamental sensitivity limits in detecting weak signals. In this work, the performance of conventional non-coherent integration-based GNSS signal detection is studied in the presence of noise uncertainty. It is shown that the performance of state-of-the-art GNSS receivers is close to the theoretical SNR limit for reliable detection at moderate levels of noise uncertainty. Alternative robust post-coherent detectors are also analyzed and shown to alleviate the noise uncertainty problem. Monte-Carlo simulations are used to confirm the theoretical predictions.
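The coherent-then-squared test statistic described above can be sketched as follows; the code length, signal amplitude and noise level are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def detection_statistic(samples, code, n_coh, n_noncoh):
    """Conventional GNSS detection statistic: coherent correlation over
    n_coh samples, then non-coherent accumulation of squared magnitudes
    over n_noncoh blocks. Squaring makes the statistic insensitive to
    data-bit sign flips between blocks."""
    stat = 0.0
    for k in range(n_noncoh):
        block = samples[k * n_coh:(k + 1) * n_coh]
        coh = np.vdot(code, block)        # coherent integration
        stat += np.abs(coh) ** 2          # non-coherent accumulation
    return stat

# Toy comparison of signal-present vs noise-only hypotheses.
rng = np.random.default_rng(0)
n_coh, n_noncoh = 1000, 10
code = rng.choice([-1.0, 1.0], size=n_coh)
noise = rng.normal(0.0, 1.0, n_coh * n_noncoh)
weak_signal = 0.2 * np.tile(code, n_noncoh)   # well below the noise floor
s1 = detection_statistic(weak_signal + noise, code, n_coh, n_noncoh)
s0 = detection_statistic(noise, code, n_coh, n_noncoh)
print(s1 > s0)
```

Note that the statistic, unlike purely coherent integration, still accumulates signal energy even if the sign of the signal flips from one block to the next.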
Abstract:
In this paper, power management algorithms are proposed for energy harvesting sensors (EHS) that operate purely on energy harvested from the environment. To maintain energy neutrality, EHS nodes schedule their utilization of the harvested power so as to save energy into, or draw energy from, an inefficient battery during peak and low energy harvesting periods, respectively. Under this constraint, one of the key system design goals is to transmit as much data as possible given the energy harvesting profile. For simplicity of implementation, it is assumed that the EHS transmits at a constant data rate with power control, when the channel is sufficiently good. By converting the data rate maximization problem into a convex optimization problem, the optimal load scheduling (power management) algorithm that maximizes the average data rate subject to energy neutrality is derived. The energy storage requirements on the battery for implementing the proposed algorithm are also calculated. Further, robust schemes are proposed that account for insufficient battery storage capacity or errors in the prediction of the harvested power. The superior performance of the proposed algorithms over conventional scheduling schemes is demonstrated through computations using numerical data from solar energy harvesting databases.
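As a toy illustration of the energy-neutrality constraint with an inefficient battery, the sketch below checks whether a load schedule is feasible for a given harvesting profile. The storage efficiency `eta` and all numbers are assumptions for illustration, not values or notation from the paper.

```python
def battery_trace(harvest, load, eta=0.8, b0=0.0):
    """Battery energy over time under a load schedule. Surplus harvested
    energy is stored with efficiency eta < 1 (inefficient battery);
    deficits are drawn from the battery at full value. Returns the
    trace, or None if the battery would go negative (infeasible)."""
    b, trace = b0, []
    for h, u in zip(harvest, load):
        if h >= u:
            b += eta * (h - u)      # save surplus during peak harvesting
        else:
            b -= (u - h)            # draw deficit during low harvesting
        if b < 0:
            return None
        trace.append(b)
    return trace

# Both schedules draw no more than the 10 units harvested in total, but
# the second leans on lossy storage too heavily and is infeasible.
print(battery_trace([5, 5, 0, 0], [3, 3, 1, 1]))   # feasible
print(battery_trace([5, 5, 0, 0], [2, 2, 3, 3]))   # None: infeasible
```

This captures why, with an inefficient battery, it pays to align the load with the harvesting profile rather than simply matching average consumption to average harvest.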
Abstract:
A wireless Energy Harvesting Sensor (EHS) needs to send data packets arriving in its queue over a fading channel at maximum possible throughput while ensuring acceptable packet delays. At the same time, it needs to ensure that energy neutrality is satisfied, i.e., the average energy drawn from a battery should equal the amount of energy deposited in it minus the energy lost due to the inefficiency of the battery. In this work, a framework is developed under which a system designer can optimize the performance of the EHS node using power control based on the current channel state information, when the EHS node employs a single modulation and coding scheme and the channel is Rayleigh fading. Optimal system parameters for throughput optimal, delay optimal and delay-constrained throughput optimal policies that ensure energy neutrality are derived. It is seen that a throughput optimal (maximal) policy is packet delay-unbounded and an average delay optimal (minimal) policy achieves negligibly small throughput. Finally, the influence of the harvested energy profile on the performance of the EHS is illustrated through the example of solar energy harvesting.
Abstract:
This paper considers the problem of spectrum sensing in cognitive radio networks when the primary user is using Orthogonal Frequency Division Multiplexing (OFDM). For this we develop cooperative sequential detection algorithms that use the autocorrelation property of cyclic prefix (CP) used in OFDM systems. We study the effect of timing and frequency offset, IQ-imbalance and uncertainty in noise and transmit power. We also modify the detector to mitigate the effects of these impairments. The performance of the proposed algorithms is studied via simulations. We show that sequential detection can significantly improve the performance over a fixed sample size detector.
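A lag-N autocorrelation statistic exploiting the cyclic prefix can be sketched as below; the FFT size, CP length and SNR are illustrative assumptions, and the sequential and cooperative aspects of the paper's algorithms, as well as the impairment mitigation, are omitted.

```python
import numpy as np

def cp_statistic(x, n_fft):
    """Magnitude of the lag-n_fft autocorrelation of the received
    samples. The cyclic prefix repeats the tail of each OFDM symbol
    n_fft samples earlier, so this statistic is large when an OFDM
    primary-user signal is present."""
    return np.abs(np.vdot(x[:-n_fft], x[n_fft:]))

rng = np.random.default_rng(1)
n_fft, n_cp, n_sym = 64, 16, 50

# Build a toy OFDM burst: random QPSK subcarriers -> IFFT -> prepend CP.
syms = []
for _ in range(n_sym):
    qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
    t = np.fft.ifft(qpsk) * np.sqrt(n_fft)   # unit average power in time
    syms.append(np.concatenate([t[-n_cp:], t]))
ofdm = np.concatenate(syms)

noise = (rng.normal(size=ofdm.shape) + 1j * rng.normal(size=ofdm.shape)) / np.sqrt(2)
print(cp_statistic(ofdm + noise, n_fft) > cp_statistic(noise, n_fft))
```

In a sequential detector, such per-block statistics would be accumulated across cooperating users until a decision threshold is crossed.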
Abstract:
Relay selection combined with buffering of packets of relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays due to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes that enables a receiver to decode a packet from non-contiguous and unordered portions of the received signal. In it, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off with transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Despite encountering analytically intractable G/GI/1 queues in our system, we also gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
Abstract:
Timer-based mechanisms are often used in several wireless systems to help a given (sink) node select the best helper node among many available nodes. Specifically, a node transmits a packet when its timer expires, and the timer value is a function of its local suitability metric. In practice, the best node gets selected successfully only if no other node's timer expires within a `vulnerability' window after its timer expiry. In this paper, we provide a complete closed-form characterization of the optimal metric-to-timer mapping that maximizes the probability of success for any probability distribution function of the metric. The optimal scheme is scalable, distributed, and much better than the popular inverse metric timer mapping. We also develop an asymptotic characterization of the optimal scheme that is elegant and insightful, and accurate even for a small number of nodes.
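The timer mechanism can be illustrated with a Monte-Carlo sketch of the popular inverse-metric mapping the paper compares against; the node count, scale constant and window size below are illustrative assumptions.

```python
import random

def success_prob(n_nodes, c, delta, trials=20000, seed=0):
    """Estimate the probability that the best node is selected under the
    inverse-metric timer mapping t = c / metric. With metrics i.i.d.
    uniform(0, 1), the largest metric gives the earliest timer expiry;
    selection succeeds only if no other timer expires within the
    vulnerability window delta after it."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        timers = sorted(c / rng.random() for _ in range(n_nodes))
        if timers[1] - timers[0] >= delta:   # runner-up outside the window
            wins += 1
    return wins / trials

# Shrinking the vulnerability window can only help the best node win.
print(success_prob(10, 1.0, 0.01), success_prob(10, 1.0, 0.2))
```

The optimal mapping characterized in the paper chooses the metric-to-timer function itself to maximize this success probability, rather than fixing it to the inverse form used here.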
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, in order to correct network-errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval that depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network-error correcting codes (CNECCs) designed for unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received at their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, with the result that some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements used at the sinks. We give an algorithm that employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.