937 results for Threshold cryptographic schemes and algorithms


Relevance:

100.00%

Publisher:

Abstract:

Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important for enabling practical dynamic spectrum access in noisy and interference-uncertain environments. In addition, it is desirable to minimize the sensing time while meeting the stringent cognitive radio application requirements. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and are thus costly in hardware implementation and power consumption. In this thesis the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution exploits the sparsity structure in the two-dimensional cyclic spectrum domain, which differs from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for simultaneous detection of multiple signal sources. After the cyclic spectrum recovery, two methods are proposed to make spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector which works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce the sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
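
As a purely illustrative aside, the band-by-band thresholding decision described above can be sketched roughly as follows; the array layout, function name, and threshold value are assumptions made for this example, not the detector implemented in the thesis.

```python
import numpy as np

def band_occupied(cyclic_spectrum, alpha_axis, band_slice, threshold):
    """Decide occupancy of one band from a recovered 2-D cyclic spectrum.

    cyclic_spectrum: magnitudes |S_x^alpha(f)|, shape (n_alpha, n_freq_bins)
    alpha_axis:      cyclic-frequency values for the rows
    band_slice:      frequency-bin slice covering the band under test
    threshold:       decision level (in practice calibrated from the noise floor)
    """
    # Stationary noise concentrates on the alpha = 0 plane, so only
    # non-zero cyclic frequencies are inspected for a cyclic feature.
    nonzero_alpha = np.abs(alpha_axis) > 0
    peak = cyclic_spectrum[nonzero_alpha, band_slice].max()
    return peak > threshold

# Synthetic example: a strong cyclic feature planted in one band.
alpha_axis = np.linspace(-1.0, 1.0, 65)
spectrum = 0.1 * np.random.rand(65, 256)
spectrum[48, 100:110] = 1.0
print(band_occupied(spectrum, alpha_axis, slice(96, 128), threshold=0.5))  # True
```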

Relevance:

100.00%

Publisher:

Abstract:

Carbon nanotubes (CNTs) are interesting materials with extraordinary properties for various applications. Here, vertically-aligned multiwalled CNTs (VA-MWCNTs) were grown by our dual radio frequency plasma enhanced chemical vapor deposition (PECVD). After optimizing the synthesis processes, these VA-MWCNTs were fabricated into a series of devices for applications in vacuum electronics, glucose biosensors, glucose biofuel cells, and supercapacitors. In particular, we have created the so-called PMMA-CNT matrices (opened-tip CNTs embedded in poly-methyl methacrylate), which are promising components in a novel energy sensing, generation and storage (SGS) system that integrates glucose biosensors, biofuel cells, and supercapacitors. The content of this thesis work is described as follows: 1. We first optimized the synthesis of VA-MWCNTs by our PECVD technique. The effects of CH4 flow rate and growth duration on the lengths of these CNTs were studied. 2. We characterized these VA-MWCNTs for electron field emission. We noticed that as-grown CNTs suffer from a high emission threshold, poor emission density and poor long-term stability. We carried out a series of experiments to understand how to overcome these problems. First, we decreased the screening effects on VA-MWCNTs by creating arrays of self-assembled CNT bundles that are catalyst-free and have opened tips. These bundles were found to enhance the field emission stability and emission density. Subsequently, we created PMMA-CNT matrices that are excellent electron field emitters, with an emission threshold field more than two-fold lower than that of the as-grown sample. Furthermore, no significant emission degradation was observed after a continuous emission test of 40 hours (versus much shorter tests in the reported literature). Based on the understanding gained from the PMMA-CNT matrices, we further created PMMA-STO-CNT matrices by embedding opened-tip VA-MWCNTs coated with strontium titanate (SrTiO3) in PMMA. We found that the PMMA-STO-CNT matrices have all the desired properties of the PMMA-CNT matrices. Furthermore, PMMA-STO-CNT matrices offer a much lower emission threshold field, about five-fold lower than that of as-grown VA-MWCNTs. These new insights are important for practical application of VA-MWCNTs in field emission devices. 3. Subsequently, we functionalized PMMA-CNT matrices for glucose biosensing. Our biosensor was developed by immobilizing glucose oxidase (GOx) on the opened-tip CNTs exposed on the matrices. The durability, stability and sensitivity of the biosensor were studied. In order to understand the performance of miniaturized glucose biosensors, we then investigated the effect of the working electrode area on the sensitivity and current level of our biosensors. 4. Next, functionalized PMMA-CNT matrices were utilized for energy generation and storage. We found that PMMA-CNT matrices are a promising component in glucose/O2 biofuel cells (BFCs) for energy generation. The construction of these BFCs and the effect of the electrode area on their power density were investigated. We then attempted to use PMMA-CNT matrices as supercapacitors for energy storage. The performance of these supercapacitors and ways to enhance it are discussed. 5. Finally, we further evaluated the concept of an energy SGS system that integrates glucose biosensors, biofuel cells, and supercapacitors. Such an SGS system may be implantable, to monitor and control the blood glucose level in the body.

Relevance:

100.00%

Publisher:

Abstract:

In developed countries, the transition from school to work has radically changed over the past two decades. It has become prolonged, complicated and individualized (Bynner et al., 1997; Walther et al., 2004). Young people used to move directly from school to stable employment, or after only a very short period of unemployment. In many European countries this situation has been changing since the eighties: overall youth unemployment has increased, and many young people experience long periods of unemployment, government training schemes and part-time or temporary jobs. In Japan, this change appeared a decade later, becoming prevalent by the late nineties (Inui, 2003). The transition process has become not only precarious for young people, but also difficult for society to understand precisely in terms of its risks and problems. Traditionally, we have been able to recognize young people's situation through simple categories: in education, employed, in training or unemployed. However, these categories no longer accurately represent young people's circumstances. In Japan, most young people used to move from school directly to full-time employment through the new graduate recruitment system (Inui, 1993). Therefore, in official statistics such as the School Basic Survey, 'employed' includes only those who are in regular employment, while those who are in part-time or temporary work are covered by the categories 'jobless' and 'others'. However, with the increase in non-full-time jobs in the nineties, these categories have become less useful for describing the actual employment conditions of young people. Indeed, this is why, in the late nineties, the Japanese Ministry of Education changed the category name from 'jobless' to 'others'.

Relevance:

100.00%

Publisher:

Abstract:

This study aimed to assess the performance of the International Caries Detection and Assessment System (ICDAS), radiographic examination, and fluorescence-based methods for detecting occlusal caries in primary teeth. One occlusal site on each of 79 primary molars was assessed twice by two examiners using ICDAS, bitewing radiography (BW), DIAGNOdent 2095 (LF), DIAGNOdent 2190 (LFpen), and the VistaProof fluorescence camera (FC). The teeth were histologically prepared and assessed for caries extent. Optimal cutoff limits were calculated for LF, LFpen, and FC. At the D1 threshold (enamel and dentin lesions), ICDAS and FC presented higher sensitivity values (0.75 and 0.73, respectively), while BW showed higher specificity (1.00). At the D2 threshold (inner enamel and dentin lesions), ICDAS presented higher sensitivity (0.83) and statistically significantly lower specificity (0.70). At the D3 threshold (dentin lesions), LFpen and FC showed higher sensitivity (1.00 and 0.91, respectively), while higher specificity was presented by FC (0.95), ICDAS (0.94), BW (0.94), and LF (0.92). The area under the receiver operating characteristic (ROC) curve (Az) varied from 0.780 (BW) to 0.941 (LF). Spearman correlation coefficients with histology were 0.72 (ICDAS), 0.64 (BW), 0.71 (LF), 0.65 (LFpen), and 0.74 (FC). Inter- and intraexaminer intraclass correlation values varied from 0.772 to 0.963, and unweighted kappa values ranged from 0.462 to 0.750. In conclusion, ICDAS and FC exhibited better accuracy in detecting enamel and dentin caries lesions, whereas ICDAS, LF, LFpen, and FC were more appropriate for detecting dentin lesions on occlusal surfaces in primary teeth, with no statistically significant difference among them. All methods presented good to excellent reproducibility.
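
To make the reported figures concrete, the sketch below shows how sensitivity and specificity at a given cutoff are computed against a histological gold standard; the readings and cutoff are invented for illustration and are not data from this study.

```python
import numpy as np

def sensitivity_specificity(readings, histology, cutoff):
    """Sensitivity and specificity of a continuous reading at a given cutoff,
    scored against a binary histological gold standard (1 = lesion present)."""
    predicted = readings >= cutoff
    tp = np.sum(predicted & (histology == 1))   # lesions correctly flagged
    fn = np.sum(~predicted & (histology == 1))  # lesions missed
    tn = np.sum(~predicted & (histology == 0))  # sound sites correctly cleared
    fp = np.sum(predicted & (histology == 0))   # sound sites wrongly flagged
    return float(tp / (tp + fn)), float(tn / (tn + fp))

# Invented readings for six sites (not the study data):
lf_readings = np.array([4, 9, 15, 30, 55, 70])
histology   = np.array([0, 0, 0, 1, 1, 1])
print(sensitivity_specificity(lf_readings, histology, cutoff=25))  # (1.0, 1.0)
```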

Relevance:

100.00%

Publisher:

Abstract:

The ActiGraph accelerometer is commonly used to measure physical activity in children. Count cut-off points are needed when using accelerometer data to determine the time a person spent in moderate or vigorous physical activity. For the GT3X accelerometer, no cut-off points for young children have been published yet. The aim of the current study was thus to develop and validate count cut-off points for young children. Thirty-two children aged 5 to 9 years performed four locomotor and four play activities. Activity classification into the light-, moderate- or vigorous-intensity category was based on energy expenditure measurements with indirect calorimetry. Vertical axis as well as vector magnitude cut-off points were determined through receiver operating characteristic curve analyses with the data of two thirds of the study group and validated with the data of the remaining third. The vertical axis cut-off points were 133 counts per 5 sec for moderate to vigorous physical activity (MVPA), 193 counts for vigorous activity (VPA) corresponding to a metabolic threshold of 5 MET, and 233 counts for VPA corresponding to 6 MET. The vector magnitude cut-off points were 246 counts per 5 sec for MVPA, 316 counts for VPA at 5 MET and 381 counts for VPA at 6 MET. When validated, the current cut-off points generally showed high recognition rates for each category, high sensitivity and specificity values and moderate agreement in terms of the kappa statistic. These results were similar for vertical axis and vector magnitude cut-off points. The current cut-off points adequately reflect MVPA and VPA in young children. Cut-off points based on vector magnitude counts did not appear to reflect the intensity categories better than cut-off points based on vertical axis counts alone.
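
As an illustration of how the vertical-axis cut-off points above would be applied, here is a minimal sketch that classifies 5-sec epochs; treating a count equal to the cut-off as reaching the higher category is an assumption made for this example.

```python
def classify_epoch(counts_per_5s, vpa_met_definition=5):
    """Classify one 5-sec vertical-axis epoch with the cut-off points given
    above: 133 counts/5 sec for MVPA, 193 counts/5 sec for VPA at 5 MET and
    233 counts/5 sec for VPA at 6 MET."""
    vpa_cutoff = 193 if vpa_met_definition == 5 else 233
    if counts_per_5s >= vpa_cutoff:
        return "vigorous"
    if counts_per_5s >= 133:
        return "moderate"
    return "light"

print([classify_epoch(c) for c in (40, 150, 210, 260)])
# ['light', 'moderate', 'vigorous', 'vigorous']
```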

Relevance:

100.00%

Publisher:

Abstract:

We study electroweak Sudakov effects in single W, Z and γ production at large transverse momentum using soft collinear effective theory. We present a factorized form of the cross section near the partonic threshold with both QCD and electroweak effects included and compute the electroweak corrections arising at different scales. We analyze their size relative to the QCD corrections as well as the impact of strong-electroweak mixing terms. Numerical results for the vector-boson cross sections at the Large Hadron Collider are presented.

Relevance:

100.00%

Publisher:

Abstract:

Quantification of the volumes of sediment removed by rock-slope failure and debris flows, and identification of their coupling and controls, are pertinent to understanding mountain basin sediment yield and landscape evolution. This study captures a multi-decadal period of hillslope erosion and channel change following an extreme rock avalanche in 1961 in the Illgraben, a catchment prone to debris flows in the Swiss Alps. We analyzed photogrammetrically-derived datasets of hillslope and channel erosion and deposition along with climatic and seismic variables for a 43-year period from 1963 to 2005. Based on these analyses we identify and discuss (1) patterns of hillslope production, channel transfer and catchment sediment yield, (2) their dominant interactions with climatic and seismic variables, and (3) the nature of hillslope–channel coupling and its implications for sediment yield and landscape evolution in this mountain basin. Our results show an increase in the mean hillslope erosion rate in the 1980s from 0.24 ± 0.01 m yr⁻¹ to 0.42 ± 0.03 m yr⁻¹ that coincided with a significant increase in air temperature and a decrease in snow cover depth and duration, which we presume led to an increase in the exposure of the slopes to thermal weathering processes. The combination of highly fractured slopes close to the threshold angle for failure, and multiple potential triggering mechanisms, means that it is difficult to identify an individual control on slope failure. On the other hand, the rate of channel change was strongly related to variables influencing runoff. A period of particularly high channel erosion rate of 0.74 ± 0.02 m yr⁻¹ (1992–1998) coincided with an increase in the frequency and magnitude of intense rainfall events. Hillslope erosion exceeded channel erosion on average, indicative of a downslope-directed coupling relationship between hillslope and channel, and demonstrating the first-order control of rock-slope failure on catchment sediment yield and landscape evolution.

Relevance:

100.00%

Publisher:

Abstract:

High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and relationships between them. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been provided to effectively annotate protein functions on a large scale. However, due to the inherent noise in high-throughput data generation, new methods and algorithms should be developed to increase the reliability of functional annotations. Previous work in a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins, and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on detecting functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparisons of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10⁻⁵⁰), which is important in cell proliferation and tumorigenesis. Analysis of four other subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common-neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
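
The common-neighbor idea the study builds on can be sketched as follows; this toy version only counts shared interaction partners and omits the significance estimation and hub correction that the actual algorithm adds, and the protein names are hypothetical.

```python
from itertools import combinations

def shared_neighbor_scores(edges):
    """Score protein pairs by the number of interaction partners they share.

    Bare-bones illustration of the common-neighbor scheme only: pairs that
    share many neighbors are candidates for a functional association."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    scores = {}
    for p, q in combinations(sorted(neighbors), 2):
        common = neighbors[p] & neighbors[q]
        if common:
            scores[(p, q)] = len(common)
    return scores

# Toy interaction list (hypothetical proteins):
edges = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Y"), ("C", "X")]
print(shared_neighbor_scores(edges))
# {('A', 'B'): 2, ('A', 'C'): 1, ('B', 'C'): 1, ('X', 'Y'): 2}
```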

Relevance:

100.00%

Publisher:

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming the n-cycle raw data into (n-1)-cycle difference data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, max R2, and max slope, were employed to select the target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimate of the initial DNA amount and a reasonable estimate of the PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
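
A minimal sketch of the taking-difference strategy described above, assuming a single run with a constant background and ideal exponential amplification; the exponential-phase selection (e.g. by the threshold-identification criterion) is omitted, and this is not the authors' implementation.

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Consecutive-cycle differences cancel a constant background: during
    exponential amplification D_c = F_{c+1} - F_c = F0 * (E - 1) * E**c,
    so ln(D_c) is linear in the cycle number c with slope ln(E)."""
    diffs = np.diff(np.asarray(fluorescence, dtype=float))
    cycles = np.arange(1, len(diffs) + 1)
    # In practice only exponential-phase cycles would be kept.
    slope, intercept = np.polyfit(cycles, np.log(diffs), 1)
    efficiency = np.exp(slope)                   # E, ideally close to 2
    f0 = np.exp(intercept) / (efficiency - 1)    # initial fluorescence estimate
    return float(efficiency), float(f0)

# Synthetic run: constant background 50, true E = 1.9, F0 = 1e-3.
c = np.arange(1, 16)
raw = 50 + 1e-3 * 1.9 ** c
print(taking_difference_fit(raw))   # approximately (1.9, 0.001)
```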

Relevance:

100.00%

Publisher:

Abstract:

The age of the subducting Nazca Plate off Chile increases northwards from 0 Ma at the Chile Triple Junction (46°S) to 37 Ma at the latitude of Valparaíso (32°S). Age-related variations in the thermal state of the subducting plate impact on (a) the water influx to the subduction zone, as well as on (b) the volumes of water that are released under the continental forearc or, alternatively, carried beyond the arc. Southern Central Chile is an ideal setting to study this effect, because other factors for the subduction zone water budget appear constant. We determine the water influx by calculating the crustal water uptake and by modeling the upper mantle serpentinization at the outer rise of the Chile Trench. The water release under forearc and arc is determined by coupling FEM thermal models of the subducting plate with stability fields of water-releasing mineral reactions for upper and lower crust and hydrated mantle. Results show that both the influx of water stored in, and the outflux of water released from upper crust, lower crust and mantle vary drastically over segment boundaries. In particular, the oldest and coldest segments carry roughly twice as much water into the subduction zone as the youngest and hottest segments, but their release flux to the forearc is only about one fourth of the latter. This high variability over a subduction zone of < 1500 km length shows that it is insufficient to consider subduction zones as uniform entities in global estimates of subduction zone fluxes.

Relevance:

100.00%

Publisher:

Abstract:

I tested the hypothesis that high pCO₂ (76.6 Pa and 87.2 Pa vs. 42.9 Pa) has no effect on the metabolism of juvenile massive Porites spp. after 11 days at 28 °C and 545 µmol quanta m⁻² s⁻¹. The response was assessed as aerobic dark respiration, skeletal weight (i.e., calcification), biomass, and chlorophyll fluorescence. Corals were collected from the shallow (3-4 m) back reef of Moorea, French Polynesia (17°28.614'S, 149°48.917'W), and experiments were conducted during April and May 2011. An increase in pCO₂ to 76.6 Pa had no effect on any dependent variable, but 87.2 Pa pCO₂ reduced area-normalized (but not biomass-normalized) respiration by 36%, as well as the maximum photochemical efficiency (Fv/Fm) of open RCIIs and the effective photochemical efficiency of RCIIs in actinic light (ΔF/F'm); neither biomass, calcification, nor the energy expenditure coincident with calcification (J/g) was affected. These results do not support the hypothesis that high pCO₂ reduces coral calcification through increased metabolic costs and, instead, suggest that high pCO₂ causes metabolic depression and photochemical impairment similar to that associated with bleaching. Evidence of a pCO₂ threshold between 76.6 and 87.2 Pa for inhibitory effects on respiration and photochemistry deserves further attention, as it might signal the presence of unpredictable effects of rising pCO₂.

Relevance:

100.00%

Publisher:

Abstract:

The severity of the impact of elevated atmospheric pCO₂ on coral reef ecosystems depends, in part, on how seawater pCO₂ affects the balance between calcification and dissolution of carbonate sediments. Presently, there are insufficient published data relating concentrations of pCO₂ and CO₃²⁻ to in situ rates of reef calcification in natural settings to accurately predict the impact of elevated atmospheric pCO₂ on calcification and dissolution processes. Rates of net calcification and dissolution, CO₃²⁻ concentrations, and pCO₂ were measured, in situ, on patch reefs, bare sand, and coral rubble on the Molokai reef flat in Hawaii. Rates of calcification ranged from 0.03 to 2.30 mmol CaCO₃ m⁻² h⁻¹ and dissolution ranged from -0.05 to -3.3 mmol CaCO₃ m⁻² h⁻¹. Calcification and dissolution varied diurnally, with net calcification primarily occurring during the day and net dissolution occurring at night. These data were used to calculate threshold values for pCO₂ and CO₃²⁻ at which rates of calcification and dissolution are equivalent. Results indicate that calcification and dissolution are linearly correlated with both CO₃²⁻ and pCO₂. Threshold pCO₂ and CO₃²⁻ values for individual substrate types showed considerable variation. The average pCO₂ threshold value for all substrate types was 654 ± 195 µatm and ranged from 467 to 1003 µatm. The average CO₃²⁻ threshold value was 152 ± 24 µmol/kg, ranging from 113 to 184 µmol/kg. Ambient seawater measurements of pCO₂ and CO₃²⁻ indicate that the CO₃²⁻ and pCO₂ threshold values for all substrate types were both exceeded, simultaneously, 13% of the time at present-day atmospheric pCO₂ concentrations. It is predicted that atmospheric pCO₂ will exceed the average pCO₂ threshold value for calcification and dissolution on the Molokai reef flat by the year 2100.
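
A minimal sketch of the kind of threshold calculation described above: fit the net rate linearly against pCO₂ and solve for the zero crossing. The numbers are invented for illustration and are not the Molokai measurements.

```python
import numpy as np

def pco2_threshold(pco2_uatm, net_rate):
    """Fit net calcification (positive) / dissolution (negative) linearly
    against pCO2 and return the pCO2 at which the net rate crosses zero,
    i.e. where calcification and dissolution are equivalent."""
    slope, intercept = np.polyfit(pco2_uatm, net_rate, 1)
    return float(-intercept / slope)

# Invented numbers only:
pco2 = np.array([350.0, 450.0, 550.0, 650.0, 750.0, 850.0])       # µatm
net  = np.array([1.8, 1.2, 0.6, 0.1, -0.6, -1.2])                 # mmol CaCO3 m^-2 h^-1
print(pco2_threshold(pco2, net))   # about 653 µatm for these values
```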

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research was to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated both with the image itself and with its textual description. For this purpose, an image classifier was developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the textual-based experiments, several runs using different semantic expansion techniques were performed. For the visual-based retrieval, the different runs are defined by the corpus used in the retrieval process and the strategy for obtaining the class and/or subclass. The best results are achieved by runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies were submitted, none of them proved able to provide results at least comparable to those achieved by textual retrieval alone. We believe that we have been unable to find a suitable metric for assessing the relevance of the results provided by the visual and textual processes.
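
For context, a mixed run of the kind referred to above is often built as a late fusion of the two score lists; the sketch below is a generic, hypothetical example of such a fusion, not the DAEDALUS submission.

```python
def late_fusion(text_scores, visual_scores, alpha=0.7):
    """Generic linear late fusion of textual and visual retrieval scores:
    min-max normalise each score list and take a weighted sum, with alpha
    weighting the textual side."""
    def normalise(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    t, v = normalise(text_scores), normalise(visual_scores)
    docs = set(t) | set(v)
    return sorted(((alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0), d)
                   for d in docs), reverse=True)

# Invented scores for four images:
text   = {"img1": 12.0, "img2": 7.5, "img3": 3.1}
visual = {"img2": 0.9, "img3": 0.6, "img4": 0.2}
print(late_fusion(text, visual))
```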

Relevance:

100.00%

Publisher:

Abstract:

In previous papers we proposed an optical communications system based on a digital chaotic signal, in which chaos synchronization was the main objective. In this paper we extend that work. The main objective here is a way to add the digital data signal to be transmitted onto the chaotic signal, and to receive it correctly. We report some methods to study the main characteristics of the resulting signal. The main problem in any real system is the delay between the time the signal is generated at the emitter and the time it is received. Any system that uses chaotic signals for encryption needs the emitter and receiver to have the same characteristics, which is why this timing control is needed. A method to control the chaotic signals in real time is reported.
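
As a rough, non-optical illustration of the masking idea, the toy sketch below hides data bits with a deterministic chaotic bit sequence and recovers them with an identically seeded generator; the logistic map, its parameters, and the XOR masking are stand-ins chosen for the example, not the system proposed in the paper.

```python
def chaotic_bits(n, x0=0.4, r=3.99):
    """Deterministic binary sequence from a logistic map: a toy stand-in for
    the digital chaotic signal (the real system is an optical emitter/receiver
    pair, not this map)."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def mask(bits, key_bits):
    # XOR masking: applying the same key twice recovers the original bits.
    return [b ^ k for b, k in zip(bits, key_bits)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
tx = mask(data, chaotic_bits(len(data)))      # emitter adds the chaotic signal
# The receiver must run an identical generator *and* be aligned in time; an
# unknown emitter-receiver delay breaks recovery, hence the need for the
# real-time timing control discussed above.
rx = mask(tx, chaotic_bits(len(tx)))          # same x0 and r recover the data
print(rx == data)                              # True
```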