945 results for Sampling time
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, offering versatile, efficient, and high-performance security services. However, the vulnerability of these implementations to side-channel attacks (SCA) remains significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing-analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state of the art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
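The constant-time property claimed above comes from always scanning the full cumulative distribution table (CDT) rather than stopping at the first match, so the number of comparisons never depends on the secret output. A minimal software sketch of that idea follows; it is illustrative only: the paper's designs are FPGA hardware, the table values here are invented, and Python gives no real constant-time guarantees.

```python
import secrets

# Hypothetical cumulative distribution table (CDT) for the magnitude of a
# small discrete Gaussian; real schemes use much larger, precision-matched tables.
CDT = [980, 2900, 4700, 6100, 7000, 7500, 7780, 7900, 7960, 7990, 8000]
RANGE = 8000  # total probability mass represented by the table

def sample_constant_time():
    """Return a sample by scanning the ENTIRE table on every call, so the
    number of table comparisons does not depend on the sampled value."""
    r = secrets.randbelow(RANGE)
    index = 0
    for threshold in CDT:
        # Branch-free accumulation: add 1 whenever r is not yet covered
        # by the cumulative probability mass.
        index += int(r >= threshold)
    return index
```

Because every call performs the same number of table comparisons, the response time carries no information about the sampled value, which is the property that defeats the timing attack described in the abstract.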
Comparison of emission rate values for odour and odorous chemicals derived from two sampling devices
Abstract:
Field and laboratory measurements identified a complex relationship between odour emission rates provided by the US EPA dynamic emission chamber and the University of New South Wales wind tunnel. Using a range of model compounds in an aqueous odour source, we demonstrate that emission rates derived from the wind tunnel and flux chamber are a function of the solubility of the materials being emitted, the concentrations of the materials within the liquid, and the aerodynamic conditions within the device: either velocity in the wind tunnel or flushing rate for the flux chamber. The ratio of wind tunnel to flux chamber odour emission rates (OU m-2 s-1) ranged from about 60:1 to 112:1. The corresponding ratios for the model odorants varied from about 40:1 to over 600:1. These results may provide, for the first time, a basis for the development of a model allowing an odour emission rate derived from either device to be used for odour dispersion modelling.
Abstract:
Bag sampling techniques can be used to temporarily store an aerosol and therefore provide sufficient time to utilize sensitive but slow instrumental techniques for recording detailed particle size distributions. A laboratory-based assessment of the method was conducted to examine size-dependent deposition loss coefficients for aerosols held in Velostat™ bags conforming to a horizontal cylindrical geometry. Deposition losses of NaCl particles in the range of 10 nm to 160 nm were analysed in relation to the bag size, storage time, and sampling flow rate. Results of this study suggest that the bag sampling method is most useful for moderately short sampling periods of about 5 minutes.
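For context, a deposition loss coefficient of the kind examined above is commonly estimated from the decay of particle concentration in the bag over storage time, assuming approximately first-order losses. The sketch below reflects that assumption only; the numbers and function name are invented, not taken from the paper.

```python
import numpy as np

def loss_coefficient(times_min, concentrations):
    """Estimate a first-order deposition loss coefficient beta (1/min)
    from C(t) = C0 * exp(-beta * t) via a log-linear least-squares fit."""
    t = np.asarray(times_min, dtype=float)
    log_c = np.log(np.asarray(concentrations, dtype=float))
    slope, _intercept = np.polyfit(t, log_c, 1)
    return -slope

# Illustrative concentrations for one particle size bin during bag storage
beta = loss_coefficient([0, 2, 5, 10], [1000, 950, 880, 760])
print(f"estimated loss coefficient: {beta:.3f} per minute")
```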
Abstract:
The broad objective of the study was to better understand anxiety among adolescents in Kolkata, India. Specifically, the study compared anxiety across gender, school type, socio-economic background and mothers' employment status. The study also examined adolescents' perceptions of quality time with their parents. A group of 460 adolescents (220 boys and 240 girls), aged 13-17 years, were recruited to participate in the study via a multi-stage sampling technique. The data were collected using a self-report semi-structured questionnaire and a standardized psychological test, the State-Trait Anxiety Inventory. Results show that anxiety was prevalent in the sample, with 20.1% of boys and 17.9% of girls found to be suffering from high anxiety. More boys were anxious than girls (p<0.01). Adolescents from Bengali-medium schools were more anxious than adolescents from English-medium schools (p<0.01). Adolescents belonging to the middle socio-economic group suffered more anxiety than those from both high and low socio-economic groups (p<0.01). Adolescents with working mothers were found to be more anxious (p<0.01). Results also show that a substantial proportion of the adolescents perceived that they did not receive quality time from their fathers (32.1%) and mothers (21.3%). A large number also did not feel comfortable sharing their personal issues with their parents (60.0% for fathers and 40.0% for mothers).
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used to evaluate models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of a single leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs, of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared with the use of risk factors alone, is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
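The imbalance-handling and evaluation strategy described above (majority-class under-sampling, then scoring with the Kappa statistic and AUC alongside the misclassification rate) can be sketched as follows. The thesis works in Weka; the scikit-learn code below is only an illustrative stand-in with synthetic placeholder data, not the author's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

def undersample_majority(X, y, rng=np.random.default_rng(0)):
    """Randomly drop majority-class rows until both classes are equal in size."""
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    keep = rng.choice(major, size=len(minor), replace=False)
    idx = np.concatenate([minor, keep])
    return X[idx], y[idx]

# Placeholders for feature-reduced time-series summaries plus risk factors (X)
# and the CVD label (y); the real data come from anaesthesia records.
X = np.random.default_rng(1).normal(size=(500, 12))
y = (np.random.default_rng(2).random(500) < 0.15).astype(int)  # ~15% positives

Xb, yb = undersample_majority(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("Kappa:", cohen_kappa_score(y_te, pred))
print("AUC:  ", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("MR:   ", np.mean(pred != y_te))  # misclassification rate
```

Reporting Kappa and AUC alongside MR on the balanced subset mirrors the point made above: MR alone is sensitive to class distribution, whereas the other two measures are not distorted in the same way.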
Abstract:
Aim: This paper is a report of a study of variations in the pattern of nurse practitioner work in a range of service fields and geographical locations, across direct patient care, indirect patient care and service-related activities. Background: The nurse practitioner role has been implemented internationally as a service reform model to improve the access and timeliness of health care. There is a substantial body of research into the nurse practitioner role and service outcomes, but scant information on the pattern of nurse practitioner work and how this is influenced by different service models. Methods: We used work sampling methods. Data were collected between July 2008 and January 2009. Observations were recorded from a random sample of 30 nurse practitioners at 10-minute intervals in 2-hour blocks randomly generated to cover two weeks of work time from a sampling frame of six weeks. Results: A total of 12,189 individual observations were conducted with nurse practitioners across Australia. Thirty individual activities were identified as describing nurse practitioner work, and these were distributed across three categories. Direct care accounted for 36.1% of how nurse practitioners spend their time, indirect care accounted for 32.2% and service-related activities made up 31.9%. Conclusion: These findings provide useful baseline data for evaluation of nurse practitioner positions and the service effect of these positions. However, the study also raises questions about the best use of nurse practitioner time and the influences of barriers to and facilitators of this model of service innovation.
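The work-sampling design in the Methods (randomly generated 2-hour observation blocks with observations at 10-minute intervals, drawn from a six-week sampling frame) can be reproduced as a simple schedule generator. The sketch below uses the parameters quoted in the abstract where they are stated; the block count, start date granularity and seed are illustrative assumptions.

```python
import random
from datetime import datetime, timedelta

def observation_schedule(frame_start, frame_weeks=6, n_blocks=40, seed=42):
    """Randomly place 2-hour observation blocks within the sampling frame and
    return the 10-minute observation instants inside each block.
    n_blocks is illustrative: e.g. 40 two-hour blocks ~ two weeks of
    ~40-hour work time; the abstract does not state the exact count."""
    rng = random.Random(seed)
    frame_hours = frame_weeks * 7 * 24
    schedule = []
    for _ in range(n_blocks):
        start = frame_start + timedelta(hours=rng.randrange(frame_hours - 2))
        schedule.append([start + timedelta(minutes=10 * k) for k in range(12)])
    return schedule

blocks = observation_schedule(datetime(2008, 7, 1))
print(len(blocks), "blocks;", len(blocks[0]), "observations in the first block")
```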
Abstract:
Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path-following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximizing the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
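A cost function that blends the two competing factors described above might take the following shape. This is a simplified stand-in, not the authors' formulation: the weights alpha and beta, the information-value map and the current-induced deviation model are all illustrative assumptions.

```python
import math

def path_cost(waypoints, info_value, deviation_risk, alpha=1.0, beta=0.5):
    """Score a candidate closed-circuit path: reward accumulated information
    value, penalise expected deviation from the path due to ocean currents.
    Lower cost is better."""
    info = sum(info_value(p) for p in waypoints)
    drift = sum(deviation_risk(a, b)
                for a, b in zip(waypoints, waypoints[1:] + waypoints[:1]))
    return -alpha * info + beta * drift

# Illustrative stand-ins for a science-value map (Gaussian hotspot at (2, 2))
# and a current model that penalises longer legs.
info_value = lambda p: math.exp(-((p[0] - 2) ** 2 + (p[1] - 2) ** 2))
deviation_risk = lambda a, b: 0.1 * math.dist(a, b)

candidates = [
    [(0, 0), (4, 0), (4, 4), (0, 4)],   # large loop around the region
    [(1, 1), (3, 1), (3, 3), (1, 3)],   # tighter loop centred on the hotspot
]
best = min(candidates, key=lambda wp: path_cost(wp, info_value, deviation_risk))
print("preferred circuit:", best)
```

With these placeholder scores, the tighter loop wins because the extra information near the hotspot outweighs the drift penalty saved by the larger circuit, which is the kind of trade-off the abstract describes.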
Abstract:
This paper reports the feasibility and methodological considerations of using the Short Message System Experience Sampling (SMS-ES) Method, an experience sampling research method developed to assist researchers to collect repeated measures of consumers' affective experiences. The method combines SMS with web-based technology in a simple yet effective way. It is described using a practical implementation study that collected consumers' emotions in response to using mobile phones in everyday situations. The method is further evaluated in terms of the quality of data collected in the study, as well as against the methodological considerations for experience sampling studies. These two evaluations suggest that the SMS-ES Method is both a valid and reliable approach for collecting consumers' affective experiences. Moreover, the method can be applied across a range of for-profit and not-for-profit contexts where researchers want to capture repeated measures of consumers' affective experiences occurring over a period of time. The benefits of the method are discussed to assist researchers who wish to apply the SMS-ES Method in their own research designs.
Abstract:
Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provide a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 to synchronise sampling in a digital process bus is evaluated, with preliminary results indicating that steady state performance of low cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure any corrections are sufficiently small that time synchronising performance is not degraded.
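For background, a PTPv2 slave estimates its offset from the grandmaster using the standard IEEE 1588 two-way exchange, and the sub-microsecond figures quoted above are bounds on the error of exactly that estimate. A minimal sketch of the calculation follows; the timestamps are illustrative, not the paper's measurements, and a symmetric network path is assumed.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 two-way time transfer:
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0        # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Timestamps in nanoseconds (illustrative numbers only)
offset, delay = ptp_offset_and_delay(1_000_000, 1_000_850, 1_002_000, 1_002_750)
print(f"offset ~ {offset:.0f} ns, path delay ~ {delay:.0f} ns")
```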
Abstract:
This paper presents the hardware development and testing of a new concept for air sampling via the integration of a prototype spore trap onboard an unmanned aerial system (UAS). We propose the integration of a prototype spore trap onboard a UAS to allow multiple captures of spores of pathogens in single remote locations at high or low altitude, otherwise not possible with stationary sampling devices. We also demonstrate the capability of this system for the capture of multiple time-stamped samples during a single mission. Wind tunnel testing was followed by simulation, and flight testing was conducted to measure and quantify the spread during simulated airborne air sampling operations. During autonomous operations, the onboard autopilot commands the servo to rotate the sampling device to a new indexed location once the UAS reaches the predefined waypoint or set of waypoints (which represents the region of interest). Time-stamped UAS data are continuously logged during the flight to assist with analysis of the particles collected. Testing and validation of the autopilot and spore trap integration, functionality, and performance are described. These tools may enhance the ability to detect new incursions of spores.
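The waypoint-triggered indexing described above (rotate the spore-trap sampler to the next slot when a predefined waypoint is reached, and log a time-stamped record) can be sketched as a simple control loop. All names, tolerances and slot counts below are hypothetical, since the real integration is autopilot- and hardware-specific.

```python
import math

def near(pos, waypoint, tol=5.0):
    """True when the vehicle is within `tol` metres of the waypoint (2-D)."""
    return math.dist(pos, waypoint) <= tol

def run_sampling_mission(positions, waypoints, n_slots=8):
    """Advance the spore-trap carousel to the next indexed slot each time a
    predefined waypoint is reached, and keep a time-stamped record per slot."""
    slot, log = 0, []
    pending = list(waypoints)
    for t, pos in enumerate(positions):
        if pending and near(pos, pending[0]) and slot < n_slots:
            slot += 1  # command the servo to the next indexed slot
            log.append({"time_step": t, "slot": slot, "waypoint": pending.pop(0)})
    return log

# Illustrative straight-line flight past two sampling waypoints
track = [(x, 0.0) for x in range(0, 200, 10)]
print(run_sampling_mission(track, [(50.0, 0.0), (150.0, 0.0)]))
```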
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. This method detected, on average, 62% of total species after 120 one-minute samples were analysed, compared to 34% of total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means for analysing large volumes of acoustic sensor data efficiently and accurately.
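The preferred strategy identified above (120 one-minute samples chosen at random from the three hours after dawn, across the survey days) amounts to a straightforward selection procedure. A sketch follows; the dawn time, day count and seed are illustrative assumptions.

```python
import random

def dawn_sample_minutes(dawn_minute, n_samples=120, window_minutes=180,
                        days=5, seed=0):
    """Pick `n_samples` one-minute recording slots, uniformly at random,
    from the `window_minutes` immediately following dawn across `days` days."""
    rng = random.Random(seed)
    pool = [(day, dawn_minute + m)
            for day in range(days) for m in range(window_minutes)]
    return sorted(rng.sample(pool, n_samples))

# Dawn at 05:30 -> minute 330 of the day; 120 samples over five days of recordings
samples = dawn_sample_minutes(dawn_minute=330)
print(len(samples), "one-minute samples; first (day, minute):", samples[0])
```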
Abstract:
Time series regression models were used to examine the influence of environmental factors (soil water content and soil temperature) on the emissions of nitrous oxide (N2O) from subtropical soils, by taking into account temporally lagged environmental factors, autoregressive processes, and seasonality for three horticultural crops in a subtropical region of Australia. Fluxes of N2O, soil water content, and soil temperature were determined simultaneously on a weekly basis over a 12-month period in South East Queensland. Annual N2O emissions for soils under mango, pineapple, and custard apple were 1590, 1156, and 2038 g N2O-N/ha, respectively, with most emissions attributed to nitrification. The N2O-N emitted from the pineapple and custard apple crops was equivalent to 0.26 and 2.22%, respectively, of the applied mineral N. The change in soil water content was the key variable for describing N2O emissions at the weekly time-scale, with soil temperature at a lag of 1 month having a significant influence on N2O emissions averaged at the monthly time-scale across the three crops. After accounting for soil temperature and soil water content, both the weekly and monthly time series regression models exhibited significant autocorrelation at lags of 1-2 weeks and 1-2 months, and significant seasonality for weekly N2O emissions for the mango crop and for monthly N2O emissions for the mango and custard apple crops in this location over this time-frame. Time series regression models can explain a higher percentage of the temporal variation of N2O emission compared with simple regression models using soil temperature and soil water content as drivers. Taking into account seasonal variability and temporal persistence in N2O emissions associated with soil water content and soil temperature may lead to a reduction in the uncertainty surrounding estimates of N2O emissions based on limited sampling effort.
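A regression of the general form described above (lagged drivers, an autoregressive term and a seasonal component) can be sketched with ordinary least squares. The data below are synthetic placeholders and the exact lag structure is only indicative of the approach, not the fitted models from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = 52
df = pd.DataFrame({
    "n2o": rng.gamma(2.0, 15.0, weeks),                       # weekly N2O flux (placeholder)
    "soil_water": rng.uniform(0.1, 0.4, weeks),               # volumetric soil water content
    "soil_temp": 20 + 8 * np.sin(np.arange(weeks) * 2 * np.pi / 52),
})

# Lagged drivers, a first-order autoregressive term and a seasonal component
df["d_soil_water"] = df["soil_water"].diff()                  # change in soil water content
df["soil_temp_lag4"] = df["soil_temp"].shift(4)               # ~1-month lag at weekly scale
df["n2o_lag1"] = df["n2o"].shift(1)                           # autoregressive term
df["season"] = np.sin(np.arange(weeks) * 2 * np.pi / 52)      # simple annual cycle

model = smf.ols("n2o ~ d_soil_water + soil_temp_lag4 + n2o_lag1 + season",
                data=df.dropna()).fit()
print(model.summary().tables[1])
```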
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling, and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective data analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
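The first contribution above corresponds to a standard two-sample Kolmogorov-Smirnov test between the collectively sampled data and the single-server data. A sketch follows, using synthetic stand-ins for the chatroom timing data rather than the study's datasets.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for message inter-arrival times sampled collectively from several
# servers versus from a single server (the real data are chatroom logs).
collective = rng.exponential(scale=1.0, size=5000)
single_server = rng.exponential(scale=1.0, size=1200)

stat, p_value = ks_2samp(collective, single_server)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
# A large p-value means the two empirical distributions are statistically
# indistinguishable, i.e. collective sampling introduces no detectable bias.
```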
Abstract:
Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected from traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means for analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data.