995 results for sensor classification


Relevance: 20.00%

Abstract:

The Indo-West Pacific (IWP), stretching from South Africa in the western Indian Ocean to the western Pacific Ocean, contains some of the most biologically diverse marine habitats on earth, including the greatest biodiversity of chondrichthyan fishes. The region encompasses varying densities of human habitation, leading to contrasts in the levels of exploitation experienced by chondrichthyans, which are targeted for local consumption and export. The zebra shark, Stegostoma fasciatum, a demersal chondrichthyan, is endemic to the IWP and currently holds two regional International Union for Conservation of Nature (IUCN) Red List classifications that reflect differing levels of exploitation: 'Least Concern' and 'Vulnerable'. In this study, we employed mitochondrial ND4 sequence data and 13 microsatellite loci to investigate the population genetic structure of 180 zebra sharks from 13 locations throughout the IWP and to test the concordance of the IUCN zones with demographic units of conservation value. Mitochondrial and microsatellite data sets from samples collected throughout northern Australia and Southeast Asia concord with the regional IUCN classifications. However, we found evidence of genetic subdivision within these regions, including subdivision between locations connected by habitat suitable for migration. Furthermore, parametric FST analyses and Bayesian clustering analyses indicated that the primary genetic break within the IWP is not represented by the IUCN classifications but rather is congruent with the Indonesian throughflow current. Our findings indicate that recruitment of zebra sharks to areas of high exploitation from nearby healthy populations is likely to be minimal, and that severe localized depletions are predicted to occur in zebra shark populations throughout the IWP region.

Relevance: 20.00%

Abstract:

We study sensor networks with energy-harvesting nodes. The energy generated at a node can be stored in a buffer. A sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at the node at that time. For such networks we develop efficient energy management policies. First, for a single node, we obtain policies that are throughput optimal, i.e., under which the data queue stays stable for the largest possible data rate. Next, we obtain energy management policies that minimize the mean delay in the queue. We also compare the performance of several easily implementable suboptimal policies. A greedy policy is identified which, in the low-SNR regime, is throughput optimal and also minimizes mean delay. Finally, using the results for a single node, we develop efficient MAC policies.
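As a toy illustration of the greedy policy discussed above, the following Python sketch simulates a single node in discrete time under an illustrative linear rate-energy model (a stand-in for the low-SNR regime); the harvesting and arrival distributions and the constant `PACKETS_PER_UNIT_ENERGY` are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000                          # number of time slots
harvest = rng.exponential(1.0, T)    # energy harvested in each slot (mean 1.0)
arrivals = rng.poisson(0.8, T)       # packets generated in each slot (mean 0.8)

PACKETS_PER_UNIT_ENERGY = 1.0        # illustrative linear rate-energy model (low SNR)

battery = 0.0
queue = 0
queue_hist = np.empty(T)

for k in range(T):
    battery += harvest[k]
    queue += arrivals[k]
    # Greedy policy: use as much of the stored energy as the backlog allows, right away.
    served = min(queue, int(PACKETS_PER_UNIT_ENERGY * battery))
    battery -= served / PACKETS_PER_UNIT_ENERGY
    queue -= served
    queue_hist[k] = queue

# A finite, stable mean queue length indicates stability at this arrival rate;
# by Little's law, mean delay is proportional to mean queue length.
print(f"mean queue length: {queue_hist.mean():.2f}")
```

With the mean harvested energy per slot exceeding the mean energy needed to serve the arrivals, the simulated queue stays stable, consistent with the throughput-optimality claim for this regime.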

Relevance: 20.00%

Abstract:

Research on the physiological response of crop plants to drying soils and the subsequent water stress has grouped plant behaviours as isohydric or anisohydric. Drying soil conditions, and hence declining soil and root water potentials, cause chemical signals (the most studied being abscisic acid, ABA) and hydraulic signals to be transmitted to the leaf via xylem pathways. Researchers have attempted to classify crops as isohydric or anisohydric. However, different cultivars within a crop, and even the same cultivar grown in different environments or climates, can exhibit both response types. Nevertheless, understanding which behaviours predominate in which crops and circumstances may be beneficial. This paper describes the different physiological water stress responses, attempts to classify vegetable crops according to reported water stress responses, and discusses the implications for irrigation decision-making.

Relevance: 20.00%

Abstract:

Hereditary nonpolyposis colorectal cancer (HNPCC) is the most common known hereditary cause of colorectal and endometrial cancer (CRC and EC). Dominantly inherited mutations in one of the known mismatch repair (MMR) genes predispose to HNPCC. Defective MMR leads to an accumulation of mutations, especially in repeat tracts, which presents as microsatellite instability. Clinically, HNPCC is a very heterogeneous disease: both the age at onset and the target tissue vary, and there are families that fulfill the diagnostic criteria for HNPCC but show no predisposing mutation in MMR genes. Our aim was to evaluate the genetic background of familial CRC and EC. We performed comprehensive molecular and DNA copy number analyses of CRCs fulfilling the diagnostic criteria for HNPCC. We studied the role of five pathways (MMR, Wnt, p53, CIN, PI3K/AKT) and divided the tumors into two groups, one with MMR gene germline mutations and the other without. We observed that MMR-proficient familial CRCs consist of two molecularly distinct groups that differ from MMR-deficient tumors. Group A shows a paucity of the common molecular and chromosomal alterations characteristic of colorectal carcinogenesis, whereas group B shows molecular features similar to classical microsatellite-stable tumors with gross chromosomal alterations. The unique tumor profile of group A suggests the involvement of novel predisposing genes and pathways in colorectal cancer cohorts not linked to MMR gene defects. We also investigated the genetic background of familial ECs. Among 22 families with clustering of EC, two (9%) were due to MMR gene germline mutations. The remaining familial site-specific ECs are largely comparable with HNPCC-associated ECs, the main difference between these groups being MMR proficiency versus deficiency. We further studied the role of the PI3K/AKT pathway in familial ECs and observed that PIK3CA amplifications are characteristic of familial site-specific EC without MMR gene germline mutations. Most of the high-level amplifications occurred in tumors with stable microsatellites, suggesting that these tumors are more likely associated with chromosomal than with microsatellite instability and MMR defects. The existence of site-specific endometrial carcinoma as a separate entity remains equivocal until predisposing genes are identified; it is possible that no single highly penetrant gene for this proposed syndrome exists and that it instead results, for example, from a combination of multiple low-penetrance genes. Despite advances in deciphering the molecular genetic background of HNPCC, it remains poorly understood why certain organs are more susceptible than others to cancer development. We found that important determinants of the HNPCC tumor spectrum are, in addition to the different predisposing germline mutations, organ-specific target genes, different instability profiles, loss of heterozygosity at the MLH1 locus, and MLH1 promoter methylation. This study provides a more precise molecular classification of families with CRC and EC, and our observations on familial CRC and EC are likely to have broader significance extending to sporadic CRC and EC as well.

Relevance: 20.00%

Abstract:

This paper presents a chance-constrained programming approach for constructing maximum-margin classifiers that are robust to interval-valued uncertainty in the training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program (SOCP) by using large-deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions about the underlying uncertainty. Classifiers built using the proposed approach are less conservative and yield higher margins, and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers handle interval-valued uncertainty better than the state of the art.
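To make the geometry concrete, here is a minimal Python/cvxpy sketch of a robust-margin SOCP for box (interval) uncertainty; the safety factor `kappa`, the toy data, and the simple per-example second-order-cone tightening are illustrative assumptions, not the paper's exact Bernstein-based relaxation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 2
# Toy data: interval-valued examples [mu - delta, mu + delta] with labels +/-1.
mu = np.vstack([rng.normal(-2, 1, (n // 2, d)), rng.normal(2, 1, (n // 2, d))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])
delta = 0.3 * np.ones((n, d))        # interval half-widths
kappa = 2.0                          # safety factor set by the chance-constraint level

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)     # slack variables

# Each margin constraint is tightened by a norm term in the interval half-widths,
# which makes the problem a second-order cone program.
cons = [
    y[i] * (mu[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(cp.multiply(delta[i], w), 2)
    for i in range(n)
]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + 1.0 * cp.sum(xi)), cons)
prob.solve()
print("margin width:", 2 / np.linalg.norm(w.value))
```

Larger `kappa` (a stricter chance-constraint level) shrinks the feasible region and trades margin width for robustness to the uncertain examples.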

Relevance: 20.00%

Abstract:

Laboratory confirmation methods are important in the diagnosis of bovine cysticercosis because other pathologies can produce morphologically similar lesions, resulting in false identifications. We developed a probe-based real-time PCR assay to identify Taenia saginata in suspect cysts encountered at meat inspection and compared it with the traditional identification method, histology, as well as with a published nested PCR. The assay simultaneously detects T. saginata DNA and a bovine internal control using the cytochrome c oxidase subunit 1 gene of each species, and shows specificity against parasites causing lesions morphologically similar to those of T. saginata. The assay was sufficiently sensitive to detect 1 fg (Ct 35.09 ± 0.95) of target DNA using serially diluted plasmid DNA in reactions spiked with bovine DNA, as well as in all viable and caseated positive control cysts. A loss in PCR sensitivity was observed with increasing cyst degeneration, as has been seen with other molecular methods. In comparison with histology, the assay offered greater sensitivity and accuracy, with 10/19 (53%) T. saginata positives detected by real-time PCR and none by histology. When the results were compared with the reference PCR, the assay was less sensitive but offered the advantages of faster turnaround times and reduced contamination risk. Estimates of the assay's repeatability and reproducibility showed it to be highly reliable, with reliability coefficients greater than 0.94.

Relevance: 20.00%

Abstract:

A new fibre-optic sensor, based on the speckle phenomenon, for the measurement of current is described. The technique has the advantages of simplicity and sensitivity, but requires a two-step measurement procedure.

Relevance: 20.00%

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted, and it is shown that detector performance is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is addressed first. A neural network supervised learner is designed to determine the weights of individual IDSs according to their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit that performs the weighted aggregation in order to make an appropriate decision. The paper theoretically models the fusion of IDSs to demonstrate the improvement in performance, supplemented with empirical evaluation.
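The following Python sketch shows how a Chebyshev-style fusion threshold can be set: the one-sided Chebyshev (Cantelli) bound P(S >= mu + k*sigma) <= 1/(1 + k^2) caps the false-alarm rate of the fused score. The weights, score statistics, and target rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-event anomaly scores from three hypothetical IDSs on benign traffic.
benign_scores = rng.normal(0.0, 1.0, (10_000, 3))

# Weights that a supervised learner might assign by per-attack reliability
# (fixed here purely for illustration).
weights = np.array([0.5, 0.3, 0.2])

fused = benign_scores @ weights          # weighted aggregation of detector outputs
mu, sigma = fused.mean(), fused.std()

# One-sided Chebyshev (Cantelli) bound: P(S >= mu + k*sigma) <= 1 / (1 + k^2),
# so choosing k from the target rate alpha caps the false-alarm probability.
alpha = 0.01                             # target bound on the false-alarm rate
k = np.sqrt(1.0 / alpha - 1.0)
threshold = mu + k * sigma

print(f"threshold = {threshold:.3f}")
print(f"empirical false-alarm rate: {(fused >= threshold).mean():.4f} (bound: {alpha})")
```

Because the bound is distribution-free, the empirical false-alarm rate typically falls well below the target alpha; that conservatism is the price of making no assumption about the fused-score distribution.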

Relevance: 20.00%

Abstract:

Environmental changes have put great pressure on biological systems, leading to a rapid decline in biodiversity. To monitor this change and protect biodiversity, animal vocalizations have been widely explored with the aid of acoustic sensors deployed in the field, and large volumes of acoustic data are consequently collected. Traditional manual methods, which require ecologists to physically visit sites to collect biodiversity data, are both costly and time consuming, so it is essential to develop semi-automated and automated methods to identify species in audio recordings. In this study, a novel feature extraction method based on wavelet packet decomposition is proposed for frog call classification. After syllable segmentation, the advertisement call of each frog syllable is represented by a spectral peak track, from which track duration, dominant frequency and oscillation rate are calculated. A k-means clustering algorithm is then applied to the dominant frequencies, and the centroids of the clustering results are used to generate the frequency scale for wavelet packet decomposition (WPD). Next, a new feature set, named adaptive frequency scaled wavelet packet decomposition sub-band cepstral coefficients, is extracted by performing WPD on the windowed frog calls. The statistics of all feature vectors over each windowed signal are then calculated to produce the final feature set. Finally, two well-known classifiers, a k-nearest neighbour classifier and a support vector machine classifier, are used for classification. In our experiments, we use two datasets from Queensland, Australia: 18 frog species from commercial recordings and 8 frog species from James Cook University field recordings. The weighted classification accuracy with the proposed method is 99.5% for the 18 species and 97.4% for the 8 species, outperforming all other comparable methods.
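The sketch below illustrates the core of this feature pipeline in Python: k-means on the dominant frequencies, followed by sub-band cepstral coefficients from a wavelet packet decomposition. Note that pywt's dyadic WPD stands in here for the paper's adaptive frequency scale, and the synthetic "syllables" and all parameter choices are assumptions for illustration.

```python
import numpy as np
import pywt
from scipy.fftpack import dct
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fs = 16_000                                   # sample rate (Hz), assumed

# Synthetic stand-ins for segmented syllables: short tones at various dominant frequencies.
dom_freqs = rng.uniform(500, 4000, 50)
syllables = [np.sin(2 * np.pi * f * np.arange(0, 0.1, 1 / fs)) for f in dom_freqs]

# Step 1: cluster the dominant frequencies; in the paper the centroids define
# the adaptive frequency scale for the WPD (here they are just reported).
centroids = KMeans(n_clusters=4, n_init=10, random_state=0).fit(
    dom_freqs.reshape(-1, 1)).cluster_centers_.ravel()
print("frequency-scale centroids (Hz):", np.sort(centroids).round(0))

# Step 2: sub-band cepstral coefficients from a (dyadic) wavelet packet
# decomposition: log sub-band energies followed by a DCT, MFCC-style.
def wpd_cepstra(x, wavelet="db4", level=4, n_ceps=12):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    bands = wp.get_level(level, order="freq")          # sub-bands ordered by frequency
    energy = np.array([np.sum(node.data ** 2) for node in bands])
    return dct(np.log(energy + 1e-12), norm="ortho")[:n_ceps]

features = np.array([wpd_cepstra(s) for s in syllables])
print("feature matrix:", features.shape)               # one 12-dim vector per syllable
```

The resulting feature matrix would then be fed to a k-nearest neighbour or SVM classifier, as in the study.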

Relevance: 20.00%

Abstract:

A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements on host rock to be used for disposal were studied in detail and the significance of the various rock mass properties was examined. The HRC-system considers both the long-term safety of the repository and the constructability of the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated as suitable at the site scale. Using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability, and the layout and location of the repository. The parameters proposed for classification at the repository scale are fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index; at the tunnel scale, hydraulic conductivity, Q′ and fracture zones; and at the canister scale, hydraulic conductivity, Q′, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values are used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thus one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto. Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
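The abstract names the canister-scale parameters but not the decision rules or threshold values, which are defined in the thesis itself. The Python sketch below only illustrates the general shape of such a parameter-to-class mapping; the class names, every threshold, and the helper `classify_canister_volume` are hypothetical placeholders, not values from the HRC-system.

```python
from dataclasses import dataclass

@dataclass
class CanisterScaleParams:
    # The canister-scale parameters named in the abstract
    hydraulic_conductivity: float   # m/s
    q_prime: float                  # modified Q value
    in_fracture_zone: bool
    fracture_width_mm: float        # aperture + filling
    fracture_trace_length_m: float

def classify_canister_volume(p: CanisterScaleParams) -> str:
    """Map parameter values to one of three suitability classes.
    All thresholds are hypothetical, for illustration only."""
    if p.in_fracture_zone or p.hydraulic_conductivity > 1e-7:
        return "unsuitable"
    if p.q_prime < 10 or p.fracture_width_mm > 5 or p.fracture_trace_length_m > 10:
        return "suitable with reservations"
    return "well suited"

print(classify_canister_volume(CanisterScaleParams(1e-10, 40, False, 0.5, 2.0)))
```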

Relevance: 20.00%

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces; speed depends on the hardware and on the computational effort required to use those features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it becomes necessary to ask how much we are willing to pay for the increased accuracy. For example, if increased computational effort brings quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost; trade-offs between accuracy and effort must be found. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold: first, the framework is presented, subjected to experiments and shown to be satisfactory; second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, in trade-off optimization we address the problem of computational bottlenecks that limit the range of trade-offs, and ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
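As a minimal illustration of a per-input accuracy-effort trade-off, the Python sketch below builds a two-stage cascade in which a cheap classifier delegates low-confidence inputs to a costlier one. The models, data, and confidence rule are assumptions rather than the thesis's framework, but the threshold `tau` shows how such a trade-off can be controlled after training.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

cheap = LogisticRegression(max_iter=1000).fit(Xtr, ytr)          # fast first stage
costly = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

def cascade_predict(X, tau):
    """Delegate to the costly stage only when the cheap stage's confidence < tau."""
    proba = cheap.predict_proba(X)
    conf = proba.max(axis=1)
    pred = proba.argmax(axis=1)
    hard = conf < tau                       # uncertain inputs get delegated
    if hard.any():
        pred[hard] = costly.predict(X[hard])
    return pred, hard.mean()                # predictions and fraction delegated

for tau in (0.6, 0.8, 0.95):                # tau is set after training
    pred, frac = cascade_predict(Xte, tau)
    print(f"tau={tau}: accuracy={(pred == yte).mean():.3f}, delegated={frac:.2%}")
```

Raising `tau` delegates more inputs to the expensive stage, buying accuracy with effort; lowering it does the reverse, so the operating point can be moved without retraining either classifier.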

Relevance: 20.00%

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks used as motivating examples are: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; this leads to the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms; the contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
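As a concrete reference point for task (iii), here is a Python sketch of the classical centralised greedy approximation for minimum dominating set, run on a random geometric graph of the spatially local kind the thesis considers. This is the non-distributed baseline, not one of the thesis's local algorithms, and the graph parameters are arbitrary.

```python
import networkx as nx

def greedy_dominating_set(G):
    """Classical greedy O(log n)-approximation: repeatedly pick the node
    that dominates the largest number of not-yet-dominated nodes."""
    undominated = set(G.nodes)
    chosen = set()
    while undominated:
        v = max(G.nodes, key=lambda u: len(({u} | set(G[u])) & undominated))
        chosen.add(v)
        undominated -= {v} | set(G[v])
    return chosen

# A random geometric graph is 'local': edges only join spatially nearby nodes.
G = nx.random_geometric_graph(60, 0.25, seed=1)
D = greedy_dominating_set(G)
print(f"{len(D)} dominating nodes out of {G.number_of_nodes()}")

# Sanity check: every node is in D or adjacent to a node in D.
assert all(v in D or any(u in D for u in G[v]) for v in G.nodes)
```

For locating an intruder rather than merely detecting one, the identifying-code and locating-dominating-code formulations mentioned above additionally require distinct neighbourhood signatures, which this simple greedy does not provide.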
