932 results for VERIFICATION
Abstract:
In this article, the machining conditions required to achieve nanometric surface roughness in finish-cut microelectrodischarge milling were investigated. For a constant gap voltage, the effect of feed rate and capacitance on average surface roughness (Ra) and maximum peak-to-valley roughness height (Ry) was studied. Statistical models were developed using a three-level, two-factor experimental design. The developed models were then used to minimize Ra and Ry via the desirability function approach. Maximum desirability was found to be more than 98%. The minimum values of Ra and Ry were 23 and 173 nm, respectively, for a feed rate of 1.00 μm s−1 and a capacitance of 0.01 nF. Verification experiments were conducted to check the accuracy of the models, and the responses were found to be very close to the predicted values. Thus, the developed models can be used to generate a nanometric-level surface finish, which is useful for many applications in microelectromechanical systems.
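To illustrate the kind of desirability-function optimization described above, the following minimal sketch combines two "smaller is better" desirabilities for Ra and Ry into a composite score and searches a feed-rate/capacitance grid. The response models (ra_model, ry_model), acceptance limits, and weights are hypothetical placeholders, not the paper's fitted statistical models.

```python
# Minimal desirability-function sketch. The response models and limits below
# are illustrative assumptions, not the paper's regression fits.
import numpy as np

def d_minimize(y, low, high, weight=1.0):
    """Derringer 'smaller is better' desirability in [0, 1]."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - low)) ** weight

def ra_model(feed, cap):      # hypothetical Ra response, nm
    return 20.0 + 8.0 * feed + 60.0 * cap + 2.0 * feed * cap

def ry_model(feed, cap):      # hypothetical Ry response, nm
    return 150.0 + 40.0 * feed + 400.0 * cap + 10.0 * feed * cap

def composite(feed, cap):
    d_ra = d_minimize(ra_model(feed, cap), low=20.0, high=200.0)
    d_ry = d_minimize(ry_model(feed, cap), low=150.0, high=1500.0)
    return (d_ra * d_ry) ** 0.5          # geometric mean of the two desirabilities

feeds = np.linspace(1.0, 10.0, 50)       # feed rate, um/s
caps = np.linspace(0.01, 1.0, 50)        # capacitance, nF
best = max(((f, c) for f in feeds for c in caps), key=lambda fc: composite(*fc))
print("best (feed, capacitance):", best, "composite desirability:", composite(*best))
```

With these placeholder responses, the grid search favours the lowest feed rate and capacitance, qualitatively mirroring the trend reported in the abstract.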
Abstract:
This paper investigated the influence of three microelectrodischarge milling process parameters: feed rate, capacitance, and voltage. The response variables were average surface roughness (Ra), maximum peak-to-valley roughness height (Ry), tool wear ratio (TWR), and material removal rate (MRR). Statistical models of these output responses were developed using a three-level full factorial design of experiments. The developed models were used for multiple-response optimization by the desirability function approach to obtain minimum Ra, Ry, and TWR, and maximum MRR. Maximum desirability was found to be 88%. The optimized values of Ra, Ry, TWR, and MRR were 0.04 μm, 0.34 μm, 0.044, and 0.08 mg min−1, respectively, for a feed rate of 4.79 μm s−1, a capacitance of 0.1 nF, and a voltage of 80 V. The optimized machining parameters were used in verification experiments, where the responses were found to be very close to the predicted values.
Abstract:
Tidal turbines have been tested extensively at many scales in steady-state flow. Testing medium- or full-scale devices in turbulent flow has been less thoroughly examined. Quantifying the differences between turbine performance in these two states is needed for verification of testing methods and validation of numerical models. The work in this paper documents the performance of a 1/10-scale turbine in steady-state pushing tests and tidal moored tests. The overall performance of the device appears to decrease in turbulent flow, though there is increased data scatter and, therefore, increased uncertainty. At maximum power performance, the reduction in mechanical and electrical power from steady to unsteady flow increases as velocity increases. The drive-train conversion efficiency also decreases. This indicates that the performance of this turbine design is affected by the presence of turbulent flow.
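As a minimal illustration of the quantities compared above, the sketch below computes the power coefficient Cp = P / (0.5 ρ A U³) and the drive-train conversion efficiency for a steady and a turbulent test point. The rotor diameter and power figures are made-up placeholders, not measurements from the paper.

```python
# Illustrative comparison of Cp and drive-train efficiency between steady and
# turbulent conditions. All numbers are hypothetical placeholders.
import numpy as np

RHO = 1025.0          # seawater density, kg/m^3
DIAMETER = 1.5        # assumed 1/10-scale rotor diameter, m
AREA = np.pi * (DIAMETER / 2.0) ** 2

def power_coefficient(p_mech_w, flow_speed_ms):
    """Cp = P / (0.5 * rho * A * U^3)."""
    return p_mech_w / (0.5 * RHO * AREA * flow_speed_ms ** 3)

def drivetrain_efficiency(p_elec_w, p_mech_w):
    return p_elec_w / p_mech_w

u = 1.0                                             # flow speed, m/s
steady = {"p_mech": 700.0, "p_elec": 560.0}         # hypothetical steady test point
turbulent = {"p_mech": 640.0, "p_elec": 490.0}      # hypothetical moored test point

for label, rec in [("steady", steady), ("turbulent", turbulent)]:
    cp = power_coefficient(rec["p_mech"], u)
    eta = drivetrain_efficiency(rec["p_elec"], rec["p_mech"])
    print(f"{label}: Cp = {cp:.3f}, drive-train efficiency = {eta:.2f}")
```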
Abstract:
This paper argues that biometric verification evaluations can obscure vulnerabilities that increase the chances that an attacker could be falsely accepted. This can occur because existing evaluations implicitly assume that an imposter claiming a false identity would claim a random identity rather than consciously selecting a target to impersonate. This paper shows how an attacker can select a target with a similar biometric signature in order to increase their chances of false acceptance. It demonstrates this effect using a publicly available iris recognition algorithm. The evaluation shows that the system can be vulnerable to attackers targeting subjects who are enrolled with a smaller section of iris due to occlusion. The evaluation shows how the traditional DET curve analysis conceals this vulnerability. As a result, traditional analysis underestimates the importance of an existing score normalisation method for addressing occlusion. The paper concludes by evaluating how the targeted false acceptance rate increases with the number of available targets. Consistent with a previous investigation of targeted face verification performance, the experiment shows that the false acceptance rate can be modelled using the traditional FAR measure with an additional term that is proportional to the logarithm of the number of available targets.
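A toy simulation of the targeted-impersonation effect described above is sketched below: attacker templates are compared against an enrolled gallery, and the false acceptance rate is measured both for random identity claims and for attackers who claim the closest-matching enrolled identity. The random feature vectors and Euclidean scoring are stand-ins for a real matcher such as the iris algorithm used in the paper; only the qualitative effect (targeted false acceptance growing with the size of the target pool, consistent with the logarithmic model above) is illustrated.

```python
# Toy targeted-impersonation simulation with random feature vectors standing in
# for biometric templates; not the paper's iris recognition algorithm.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
dim, n_enrolled, n_attackers = 16, 500, 1000

gallery = rng.normal(size=(n_enrolled, dim))      # enrolled templates (stand-ins)
attackers = rng.normal(size=(n_attackers, dim))   # attacker templates (stand-ins)

# Score every attacker against every enrolled identity (lower = more similar).
dists = cdist(attackers, gallery)

# Choose the accept threshold so the random-claim FAR is roughly 0.1%.
threshold = np.quantile(dists, 0.001)

random_far = np.mean(dists < threshold)                 # attacker claims a random identity
targeted_far = np.mean(dists.min(axis=1) < threshold)   # attacker claims the closest identity

print(f"random-claim FAR        ~ {random_far:.4f}")
print(f"targeted FAR (N={n_enrolled}) ~ {targeted_far:.4f}")
```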
Abstract:
When applying biometric algorithms to forensic verification, false acceptance and false rejection can mean a failure to identify a criminal, or worse, lead to the prosecution of individuals for crimes they did not commit. It is therefore critical that biometric evaluations be performed as accurately as possible to determine their legitimacy as a forensic tool. This paper argues that, for forensic verification scenarios, traditional performance measures are insufficiently accurate. This inaccuracy occurs because existing verification evaluations implicitly assume that an imposter claiming a false identity would claim a random identity rather than consciously selecting a target to impersonate. In addition to describing this new vulnerability, the paper describes a novel Targeted FAR metric that combines the traditional False Acceptance Rate (FAR) measure with a term that indicates how performance degrades with the number of potential targets. The paper includes an evaluation of the effects of targeted impersonation on an existing academic face verification system. This evaluation reveals that, even with a relatively small number of targets, false acceptance rates can increase significantly, making the analysed biometric systems unreliable.
Abstract:
System efficiency and cost effectiveness are of critical importance for photovoltaic (PV) systems. This paper addresses these two issues by developing a novel three-port DC-DC converter for stand-alone PV systems, based on an improved Flyback-Forward topology. It provides a compact single-unit solution combining optimized maximum power point tracking (MPPT), a high step-up ratio, galvanic isolation and multiple operating modes for domestic and aerospace applications. A theoretical analysis of the operating modes is conducted, followed by simulation and experimental work. The paper focuses on a comprehensive modulation strategy utilizing both PWM and phase-shifted control that satisfies the requirements of PV power systems for MPPT and output voltage regulation. A 250 W converter was designed and prototyped to provide experimental verification in terms of system integration and high conversion efficiency.
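As a sketch of the MPPT function mentioned above, the following perturb-and-observe loop is one common, generic way to track the maximum power point; it is not the paper's PWM/phase-shift modulation strategy, and pv_power is a hypothetical panel model included only so the loop runs end to end.

```python
# Generic perturb-and-observe (P&O) MPPT sketch; pv_power() is a toy PV curve.
def pv_power(duty):
    """Hypothetical PV power vs. converter duty cycle, peaking near duty = 0.55."""
    return max(0.0, 250.0 * (1.0 - ((duty - 0.55) / 0.35) ** 2))

def perturb_and_observe(steps=200, duty=0.40, delta=0.01):
    last_power = pv_power(duty)
    for _ in range(steps):
        duty += delta
        power = pv_power(duty)
        if power < last_power:              # power dropped: reverse the perturbation
            delta = -delta
        last_power = power
        duty = min(max(duty, 0.05), 0.95)   # keep the duty cycle within safe limits
    return duty, last_power

duty, power = perturb_and_observe()
print(f"converged near duty = {duty:.2f}, power = {power:.1f} W")
```

The loop settles into a small oscillation around the maximum power point, which is the expected behaviour of this class of tracker.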
Abstract:
In security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or within large data networks. Whilst Field Programmable Gate Arrays (FPGAs) have been seen as a key technology for enabling this, they typically rely on high-level and/or hardware description language synthesis approaches; this is a major disadvantage in terms of the time needed to design or program them and to verify correct operation, and it considerably reduces the programmability of any technique based on this technology. The work here proposes a different approach: using optimised soft-core processors which can be programmed in software. In particular, the paper proposes a design tool chain for programming such processors that uses the CAL Actor Language as a starting point for describing an image processing algorithm and targets its implementation to these custom-designed, soft-core processors on FPGA. The main purpose is to exploit task and data parallelism in order to achieve the same parallelism as a previous HDL implementation, but avoiding the design time, verification and debugging steps associated with such approaches.
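The following is a minimal Python analogue of the actor/FIFO dataflow abstraction that CAL expresses, shown only to illustrate how an image-processing step can be broken into actors that exchange tokens; it is not the paper's toolchain, its soft-core processors, or the FPGA mapping.

```python
# Minimal actor/FIFO dataflow analogue: each actor consumes tokens from an input
# queue and produces tokens on an output queue, so actors can run as independent
# tasks (task parallelism) over streams of pixels (data parallelism).
from collections import deque

def source_actor(pixels, out_q):
    for p in pixels:
        out_q.append(p)

def threshold_actor(in_q, out_q, level=128):
    while in_q:
        out_q.append(255 if in_q.popleft() >= level else 0)

def sink_actor(in_q):
    return list(in_q)

pixels = [10, 200, 130, 90, 255, 40]
q1, q2 = deque(), deque()
source_actor(pixels, q1)       # in a real dataflow runtime these actors would
threshold_actor(q1, q2)        # fire concurrently as tokens become available
print(sink_actor(q2))          # -> [0, 255, 255, 0, 255, 0]
```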
Abstract:
Increasingly, providers of mental health nurse education are required to demonstrate user involvement in all aspects of their programmes, including student selection, programme design and student assessment. There has been limited analysis of how nursing students perceive user involvement in nurse education programmes. The aim of this study was to explore mental health nursing students' perceptions of involving users in all aspects of a pre-registration mental health nursing programme. Researchers conducted a number of focus group interviews with 12 former mental health nursing students recruited by purposeful sampling. Each focus group interview was recorded and analysed using a series of data reduction, data display and verification methods. The study confirms many of the findings reported in earlier studies of user participation in education. Three main themes related to user involvement were identified: the protection of users, enhanced student learning, and the added-value benefits associated with user involvement.
Abstract:
The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation between these measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated with on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001). Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to optimize both the VMAT planning and QA processes.
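A simplified, hypothetical sketch of two analysis ingredients described above (a 1-D global gamma-index passing rate and a Pearson correlation between two QA metrics) is given below; real VMAT QA operates on 2-D/3-D dose or fluence distributions, and the profiles and per-plan values here are synthetic placeholders.

```python
# Simplified 1-D global gamma index (dose difference / distance to agreement)
# and a Pearson correlation between two synthetic QA metrics.
import numpy as np
from scipy.stats import pearsonr

def gamma_passing_rate(ref, meas, positions, dose_tol=0.02, dta_mm=2.0):
    """Fraction of reference points with gamma <= 1 (global normalisation)."""
    norm_dose = dose_tol * ref.max()
    gammas = []
    for xr, dr in zip(positions, ref):
        candidates = np.sqrt(((positions - xr) / dta_mm) ** 2
                             + ((meas - dr) / norm_dose) ** 2)
        gammas.append(candidates.min())
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                    # position, mm
reference = np.exp(-(x / 20.0) ** 2)             # synthetic planned profile
measured = reference * 1.01 + 0.003 * np.random.default_rng(1).normal(size=x.size)
print(f"2%/2 mm gamma pass rate: {gamma_passing_rate(reference, measured, x):.3f}")

# Correlating two per-plan QA metrics, as in the study (values are synthetic):
log_gamma = np.array([99.1, 99.3, 98.4, 99.0, 98.8])
chamber_gamma = np.array([98.0, 98.5, 96.9, 97.8, 97.6])
r, p = pearsonr(log_gamma, chamber_gamma)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```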
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) codes to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
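The kind of Monte Carlo workload being accelerated can be sketched as the bit-error-rate loop below; the hard-decision stub stands in for an LDPC decoder, and the code is plain Python rather than the OpenCL kernel used in the paper, so only the simulation structure is illustrative.

```python
# Schematic Monte Carlo bit-error-rate loop for decoder design exploration.
# stub_decoder() is a hard-decision placeholder, not an LDPC decoder.
import numpy as np

rng = np.random.default_rng(42)

def stub_decoder(llrs):
    """Placeholder decoder: hard decision on the channel LLRs."""
    return (llrs < 0).astype(np.uint8)

def monte_carlo_ber(n_bits=8000, snr_db=2.0, n_frames=200):
    snr = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * snr))
    bit_errors = 0
    for _ in range(n_frames):
        bits = rng.integers(0, 2, n_bits).astype(np.uint8)
        symbols = 1.0 - 2.0 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1
        received = symbols + sigma * rng.normal(size=n_bits)
        llrs = 2.0 * received / sigma ** 2            # channel log-likelihood ratios
        decoded = stub_decoder(llrs)
        bit_errors += int(np.count_nonzero(decoded != bits))
    return bit_errors / (n_bits * n_frames)

print(f"BER at 2 dB (uncoded stub): {monte_carlo_ber():.4e}")
```

Each design point in the exploration repeats a loop like this many times, which is why offloading it to GPUs or FPGAs pays off.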
Abstract:
1. The prediction and mapping of climate in areas between climate stations is of increasing importance in ecology.
2. Four categories of model (simple interpolation, thin plate splines, multiple linear regression and mixed spline-regression) were tested for their ability to predict the spatial distribution of temperature on the British mainland. The models were tested by external cross-verification.
3. The British distribution of mean daily temperature was predicted with the greatest accuracy by using a mixed model: a thin plate spline fitted to the surface of the country, after correction of the data by a selection from 16 independent topographical variables (such as altitude, distance from the sea, slope and topographic roughness), chosen by multiple regression from a digital terrain model (DTM) of the country (a minimal sketch of this mixed approach follows this list).
4. The next most accurate method was a pure multiple regression model using the DTM. Both regression and thin plate spline models based only on a few variables (latitude, longitude and altitude) were comparatively unsatisfactory, but some rather simple methods of surface interpolation (such as bilinear interpolation after correction to sea level) gave moderately satisfactory results. Differences between the methods seemed to depend largely on their ability to model the effect of the sea on land temperatures.
5. Prediction of temperature by the best methods was greater than 95% accurate in all months of the year, as shown by the correlation between the predicted and actual values. The predicted temperatures were calculated at real altitudes, not subject to sea-level correction.
6. A minimum of just over 30 temperature recording stations would generate a satisfactory surface, provided the stations were well spaced.
7. Maps of mean daily temperature, using the best overall methods, are provided; further important variables, such as continentality and length of growing season, were also mapped. Many of these are believed to be the first detailed representations at real altitude.
8. The interpolated monthly temperature surfaces are available on disk.
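The sketch below illustrates the mixed regression-plus-thin-plate-spline idea from item 3 under stated assumptions: the station coordinates, topographic covariates and temperatures are synthetic, and scipy's RBFInterpolator with a thin-plate-spline kernel stands in for the spline surface.

```python
# Mixed model sketch: multiple regression on topographic covariates, then a
# thin plate spline over space fitted to the regression residuals. All data
# below are synthetic placeholders, not the UK station data used in the paper.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
n_stations = 40

coords = rng.uniform(0, 700, size=(n_stations, 2))        # easting/northing, km
covariates = np.column_stack([
    rng.uniform(0, 900, n_stations),                       # altitude, m
    rng.uniform(0, 120, n_stations),                       # distance from sea, km
])
# Synthetic "observed" mean temperature: lapse rate + coastal effect + noise.
temps = (10.0 - 0.006 * covariates[:, 0] - 0.01 * covariates[:, 1]
         + 0.3 * rng.normal(size=n_stations))

# 1) Multiple linear regression on the topographic covariates.
design = np.column_stack([np.ones(n_stations), covariates])
coef, *_ = np.linalg.lstsq(design, temps, rcond=None)
residuals = temps - design @ coef

# 2) Thin plate spline over space, fitted to the regression residuals.
spline = RBFInterpolator(coords, residuals, kernel="thin_plate_spline")

def predict(xy, covs):
    trend = np.column_stack([np.ones(len(xy)), covs]) @ coef
    return trend + spline(xy)

new_xy = np.array([[350.0, 350.0]])          # prediction location, km
new_cov = np.array([[250.0, 60.0]])          # altitude, distance from sea
print("predicted mean temperature:", predict(new_xy, new_cov))
```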
Abstract:
It is acknowledged that one of the consequences of the ageing process is cognitive decline, which leads to an increase in the incidence of illnesses such as dementia. This has become ever more relevant due to the projected increase in the ageing demographic. Dementia affects visuo-spatial perception, causing difficulty with wayfinding, even during the early stages of the disease. The literature widely recognises the physical environment's role in alleviating symptoms of dementia and improving quality of life for residents. It also identifies the lack of available housing options for older people with dementia; consequently, the current stock is ill-equipped to provide adequate support.
Recent statistics indicate that 80% of those residing in nursing or residential care homes have some form of dementia or severe memory problems. The shift towards institutional care settings and the need for specialist support and care place greater impetus on a person-centred approach to tackling issues related to wayfinding and dementia.
This thesis therefore aims to improve design for dementia in nursing and residential care settings in the context of Northern Ireland. It seeks to provide a better understanding of how people with dementia experience the physical environment and to highlight design features that assist with wayfinding. Current guidelines on design for dementia are limited, and many are theoretical, anecdotal and not definitive; greater verification is therefore required to address the less recognised design issues. This is ultimately intended to improve the quality of life, wellbeing and independence of people with dementia living in nursing or residential care homes, and to uphold their dignity.
The research design uses a mixed-methods approach. Thorough preparation and consideration of ethical issues informed the methodology. The various facets were also trialled and piloted to identify any ethical, technological, methodological, data collection and analysis issues. The protocol was then amended to improve or resolve any of the aforementioned issues. Initially, a questionnaire based on leading design recommendations was administered to home managers. Semi-structured interviews were developed from this and conducted with staff and residents' next of kin. An evidence-based approach was used to design a study which used ethnographic methods, including a wayfinding task. This followed a repeated-measures design used to actively engage residents with dementia in the research. Complementary to the wayfinding task, conversational and semi-structured interviews were used to promote dialogue and direct responses with the person with dementia. In addition, Space Syntax methodologies were used to examine the physical properties of the architectural layout. This was then cross-referenced with interview responses and data from the wayfinding tasks.
A number of plan typologies were identified and found to correspond to the types of decision points encountered during the walks. The empirical work enabled the synthesis of environmental features which support wayfinding.
Results indicate that particular environmental features are associated with improved performance on the wayfinding tasks. By identifying these attributes and enhancing design for dementia, challenges with wayfinding may be overcome and the physical environment can be used to promote wellbeing.
The implication of this work is that the environmental features highlighted by the project can be used to inform guidelines, thus adding to existing knowledge. Future work would involve disseminating this information and potentially developing it into design standards or regulations which champion design for dementia. These would increase awareness among designers and stakeholders undertaking new projects, extensions or refurbishments.
A person-centred, evidence-based design was emphasised throughout the project, ensuring an in-depth study. There were limitations due to the available resources, time and funding. Future research would involve testing the identified environmental features within a specific environment to enable measured observation of improvements.
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
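As a small illustration of two ingredients of the method (the Ornstein-Uhlenbeck model for AGN-like variability and the corrected Akaike information criterion used to score fits), the following sketch simulates an OU difference-flux light curve and a Gaussian burst on irregular sampling; all parameters are arbitrary, and the fitting, cross-validation and K-means clustering steps of the paper are not reproduced.

```python
# Illustrative sketch: exact-discretisation Ornstein-Uhlenbeck light curve,
# a Gaussian burst, and the corrected AIC formula. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(7)

def ou_light_curve(times, tau=50.0, sigma=0.2, mean=0.0):
    """Exact OU recursion: x_{i+1} = m + (x_i - m) * exp(-dt/tau) + noise."""
    flux = np.empty(times.size)
    flux[0] = mean
    for i in range(1, times.size):
        dt = times[i] - times[i - 1]
        decay = np.exp(-dt / tau)
        sd = sigma * np.sqrt((1.0 - decay ** 2) * tau / 2.0)   # conditional std dev
        flux[i] = mean + (flux[i - 1] - mean) * decay + sd * rng.normal()
    return flux

def gaussian_burst(times, amp=1.0, t0=120.0, width=15.0):
    return amp * np.exp(-0.5 * ((times - t0) / width) ** 2)

def aicc(log_likelihood, n_params, n_points):
    """Corrected Akaike information criterion."""
    aic = 2.0 * n_params - 2.0 * log_likelihood
    return aic + 2.0 * n_params * (n_params + 1) / (n_points - n_params - 1)

t = np.sort(rng.uniform(0.0, 365.0, 80))       # irregular sampling epochs, days
agn_like = ou_light_curve(t)                   # stochastic (SV-like) light curve
sn_like = gaussian_burst(t)                    # burst-like (BL-like) light curve
print("AICc of a hypothetical 3-parameter fit:", aicc(-42.0, 3, t.size))
```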
Abstract:
In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen data modeling. On the Extended Yale B and AR face databases for face identification, our method using only a single training image per person outperforms other methods using a single training image, and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
Abstract:
In this paper, novel closed-form expressions for the level crossing rate and average fade duration of κ − μ shadowed fading channels are derived. The new equations provide the capability of modeling the correlation between the time derivative of the shadowed dominant and multipath components of the κ − μ shadowed fading envelope. Verification of the new equations is performed by reduction to a number of known special cases. It is shown that as the shadowing of the resultant dominant component decreases, the signal crosses lower threshold levels at a reduced rate. Furthermore, the impact of increasing correlation between the slope of the shadowed dominant and multipath components similarly acts to reduce crossings at lower signal levels. The new expressions for the second-order statistics are also compared with field measurements obtained for cellular device-to-device and body-centric communication channels, which are known to be susceptible to shadowed fading.
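For reference, the general definitions that such closed-form expressions specialise (Rice's level crossing rate and the average fade duration at a threshold r) can be written as follows; the κ-μ shadowed forms themselves are derived in the paper and are not reproduced here.

```latex
% General second-order statistics: level crossing rate N_R(r) and average fade
% duration T_R(r) at threshold r, with p_{R,\dot R} the joint density of the
% envelope and its time derivative, and F_R the envelope CDF.
\begin{align}
  N_R(r) &= \int_{0}^{\infty} \dot{r}\, p_{R,\dot{R}}(r,\dot{r})\, \mathrm{d}\dot{r}, \\
  T_R(r) &= \frac{\Pr\{R \le r\}}{N_R(r)} = \frac{F_R(r)}{N_R(r)}.
\end{align}
```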