878 results for Observational techniques and algorithms
Abstract:
Purpose: To provide a comprehensive overview of research examining the impact of astigmatism on clinical and functional measures of vision, the short and longer term adaptations to astigmatism that occur in the visual system, and the currently available clinical options for the management of patients with astigmatism. Recent findings: The presence of astigmatism can lead to substantial reductions in visual performance in a variety of clinical vision measures and functional visual tasks. Recent evidence demonstrates that astigmatic blur results in short-term adaptations in the visual system that appear to reduce the perceived impact of astigmatism on vision. In the longer term, uncorrected astigmatism in childhood can also significantly impact on visual development, resulting in amblyopia. Astigmatism is also associated with the development of spherical refractive errors. Although the clinical correction of small magnitudes of astigmatism is relatively straightforward, the precise, reliable correction of astigmatism (particularly high astigmatism) can be challenging. A wide variety of refractive corrections are now available for the patient with astigmatism, including spectacle, contact lens and surgical options. Conclusion: Astigmatism is one of the most common refractive errors managed in clinical ophthalmic practice. The significant visual and functional impacts of astigmatism emphasise the importance of its reliable clinical management. With continued improvements in ocular measurement techniques and developments in a range of different refractive correction technologies, the future promises the potential for more precise and comprehensive correction options for astigmatic patients.
Abstract:
It is well understood that there is variation inherent in all testing techniques, and that all soil and rock materials also contain some degree of natural variability. Less consideration is normally given to variation associated with natural material heterogeneity within a site, or the relative condition of the material at the time of testing. This paper assesses the impact of spatial and temporal variability upon repeated in situ testing of a residual soil and rock profile present within a single residential site over a full calendar year, and thus a range of seasonal conditions. This repeated testing demonstrated that, depending on the selected location and the moisture content of the subsurface at the time of testing, up to a 35% variation within the test results can be expected. The results also demonstrated that the completed in situ test technique has a similarly large measurement and inherent variability error and, for the investigated site, up to a 60% variation in normalised results was observed. From these results, it is recommended that the frequency and timing of in situ tests be considered when deriving geotechnical design parameters from a limited data set.
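As a rough illustration of the kind of normalised-variation figure quoted above, the sketch below computes the spread of repeated readings about their mean. The readings, and the range-about-the-mean definition of "variation", are illustrative assumptions rather than the paper's actual method.

```python
# A small sketch of how a percent-variation figure for repeated in situ tests
# might be computed: normalise the readings by their mean and take the range.
# The readings below are invented for illustration, not data from the study.
import numpy as np

# Repeated in situ test readings (arbitrary units) at one location
# across a year of differing seasonal/moisture conditions (dummy values).
readings = np.array([12.1, 14.8, 9.9, 13.5, 11.2, 15.3])

normalised = readings / readings.mean()
variation_pct = 100.0 * (normalised.max() - normalised.min())
print(f"Variation across repeated tests: {variation_pct:.0f}%")
```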
Abstract:
This paper presents an overview of the strengths and limitations of existing and emerging geophysical tools for landform studies. The objectives are to discuss recent technical developments and to provide a review of relevant recent literature, with a focus on propagating field methods with terrestrial applications. For the various methods in this category, including ground-penetrating radar (GPR), electrical resistivity (ER), seismics, and electromagnetic (EM) induction, the technical backgrounds are introduced, followed by a section on novel developments relevant to landform characterization. For several decades, GPR has been popular for characterization of the shallow subsurface and in particular sedimentary systems. Novel developments in GPR include the use of multi-offset systems to improve signal-to-noise ratios and data collection efficiency, amongst others, and the increased use of 3D data. Multi-electrode ER systems have become popular in recent years as they allow for relatively fast and detailed mapping. Novel developments include time-lapse monitoring of dynamic processes as well as the use of capacitively-coupled systems for fast, non-invasive surveys. EM induction methods are especially popular for fast mapping of spatial variation, but can also be used to obtain information on the vertical variation in subsurface electrical conductivity. In recent years several examples of the use of plane wave EM for characterization of landforms have been published. Seismic methods for landform characterization include seismic reflection and refraction techniques and the use of surface waves. A recent development is the use of passive sensing approaches. The use of multiple geophysical methods, which can benefit from the sensitivity to different subsurface parameters, is becoming more common. Strategies for coupled and joint inversion of complementary datasets will, once more widely available, benefit the geophysical study of landforms. Three case studies are presented on the use of electrical and GPR methods for characterization of landforms in the range of meters to 100s of meters in dimension. In a study of polygonal patterned ground in the Saginaw Lowlands, Michigan, USA, electrical resistivity tomography was used to characterize differences in subsurface texture and water content associated with polygon-swale topography. Also, a sand-filled thermokarst feature was identified using electrical resistivity data. The second example concerns the use of constant spread traversing (CST) for characterization of large-scale glaciotectonic deformation in the Ludington Ridge, Michigan. Multiple CST surveys parallel to an ~60 m high cliff, where broad (~100 m) synclines and narrow clay-rich anticlines are visible, illustrated that at least one of the narrow structures extended inland. A third case study discusses the internal structures of an eolian dune on a coastal spit in New Zealand. Both 35 and 200 MHz GPR data, which clearly identified a paleosol and internal sedimentary structures of the dune, were used to improve understanding of the development of the dune, which may shed light on paleo-wind directions.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
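The Markov transition matrix (MTM) idea described above lends itself to a compact illustration: build single-timestep jump probabilities between lattice nodes, then obtain the restricted diffusion propagator for n timesteps as the n-th matrix power. The 1D slab geometry, lattice size, hop probability and reflecting-boundary handling below are illustrative assumptions, not the authors' implementation.

```python
# Minimal MTM sketch: a random walker confined to a 1D pore (reflecting walls).
# The propagator for n timesteps is the n-th power of the one-step matrix.
import numpy as np

N = 51          # lattice nodes across the pore (assumed)
p_hop = 0.5     # probability of hopping to each neighbour per timestep

# Single-timestep transition matrix: column j holds the probabilities of
# moving from node j to every other node within one timestep.
T = np.zeros((N, N))
for i in range(N):
    if i > 0:
        T[i - 1, i] = p_hop        # hop left
    if i < N - 1:
        T[i + 1, i] = p_hop        # hop right
# Reflecting boundaries: probability that would leave the pore stays put.
T[0, 0] += p_hop
T[N - 1, N - 1] += p_hop

# Propagator for n timesteps: column j is the distribution after n steps
# for a walker that started at node j.
n_steps = 200
P = np.linalg.matrix_power(T, n_steps)

# Example: distribution for a walker released at the pore centre.
rho = P[:, N // 2]
assert np.isclose(rho.sum(), 1.0)   # probability is conserved
```

For a periodic geometry, as the abstract notes, T need only be defined over a single unit cell, which is what makes the matrix-power approach attractive for periodic pore spaces.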
Abstract:
The competition to select a new secure hash function standard, SHA-3, was initiated in response to surprising progress in the cryptanalysis of existing hash function constructions that started in 2004. In this report we survey the design and cryptanalytic results of the 14 candidates that remain in the competition, about 1.5 years after the competition started with the initial submission of the candidates in October 2008. Implementation considerations are not in the scope of this report. The diversity of designs is reflected in the great variety of cryptanalytic techniques and results that were applied and found during this time. This report gives an account of those techniques and results.
Abstract:
Industrial control systems (ICS) have been moving from dedicated communications to switched and routed corporate networks, making it probable that these devices are exposed to the Internet. Many ICS have been designed with few or weak security features, making them vulnerable to attack. Recently, several tools have been developed that can scan the entire Internet, including ZMap, Masscan and Shodan. However, little in-depth analysis has been done to compare these Internet-wide scanning techniques, and few Internet-wide scans have been conducted targeting ICS devices and protocols. In this paper we present a taxonomy of Internet-wide scanning with a comparison of three popular network scanning tools, and a framework for conducting Internet-wide scans.
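To give a concrete flavour of the kind of ICS-targeted query these tools support, here is a minimal sketch using the Shodan Python library (pip install shodan). The API key is a placeholder, and the Modbus/TCP query (port 502) is one illustrative ICS protocol; this is not the paper's scanning framework itself.

```python
# Query Shodan's index of prior Internet-wide scans for hosts responding on
# the Modbus/TCP port commonly used by ICS devices.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

try:
    results = api.search("port:502")
    print(f"Hosts with port 502 exposed: {results['total']}")
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("org") or "n/a")
except shodan.APIError as exc:
    print(f"Shodan query failed: {exc}")
```

Unlike ZMap or Masscan, which actively probe the address space, Shodan queries a pre-built index, which is one of the distinctions such a taxonomy captures.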
Abstract:
There have been substantial advances in small field dosimetry techniques and technologies over the last decade, which have dramatically improved the achievable accuracy of small field dose measurements. This educational note aims to help radiation oncology medical physicists to apply some of these advances in clinical practice. The evaluation of a set of small field output factors (total scatter factors) is used to exemplify a detailed measurement and simulation procedure and as a basis for discussing the possible effects of simplifying that procedure. Field output factors were measured with an unshielded diode and a micro-ionisation chamber, at the centre of a set of square fields defined by a micro-multileaf collimator. Nominal field sizes investigated ranged from 6×6 to 98×98 mm². Diode measurements in fields smaller than 30 mm across were corrected using response factors calculated using Monte Carlo simulations of the full diode geometry, and daisy-chained to match micro-chamber measurements at intermediate field sizes. Diode measurements in fields smaller than 15 mm across were repeated twelve times over three separate measurement sessions, to evaluate the reproducibility of the radiation field size and its correspondence with the nominal field size. The five readings that contributed to each measurement on each day varied by up to 0.26% for the “very small” fields smaller than 15 mm, and by up to 0.18% for the fields larger than 15 mm. The diode response factors calculated for the unshielded diode agreed with previously published results, within 1.6%. The measured dimensions of the very small fields differed by up to 0.3 mm across the different measurement sessions, contributing an uncertainty of up to 1.2% to the very small field output factors. The overall uncertainties in the field output factors were 1.8% for the very small fields and 1.1% for the fields larger than 15 mm across. Recommended steps for acquiring small field output factor measurements for use in radiotherapy treatment planning system beam configuration data are provided.
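The daisy-chaining step described above can be sketched compactly: diode readings in the smallest fields are scaled by Monte Carlo response factors, then renormalised so that diode and micro-chamber agree at an intermediate field size. All readings and correction factors below are illustrative dummy numbers, not measured data from the study.

```python
# Daisy-chained field output factors: diode for the smallest fields,
# micro-chamber for intermediate/reference fields, linked at 30 mm.
intermediate = 30.0   # mm, field where both detectors are trusted (assumed)
reference = 98.0      # mm, reference field for the output factor

# Mean detector readings (arbitrary units), keyed by nominal field size in mm.
chamber = {30.0: 0.812, 98.0: 1.000}                   # micro-ionisation chamber
diode = {6.0: 0.401, 10.0: 0.552, 15.0: 0.660, 30.0: 0.845}
mc_response = {6.0: 0.962, 10.0: 0.978, 15.0: 0.991}   # MC diode corrections

def output_factor(field_mm: float) -> float:
    """Field output factor relative to the reference field, daisy-chained
    through the intermediate field size."""
    chamber_link = chamber[intermediate] / chamber[reference]
    if field_mm >= intermediate:
        return chamber[field_mm] / chamber[reference]
    # Diode ratio to the intermediate field, corrected for the diode's
    # over-response in small fields, then chained to the chamber.
    diode_ratio = diode[field_mm] / diode[intermediate]
    return diode_ratio * mc_response[field_mm] * chamber_link

for f in (6.0, 10.0, 15.0):
    print(f"{f:>4.0f} mm field output factor: {output_factor(f):.3f}")
```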
Abstract:
Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH [9], or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and thus there has been noticeable interest in developing countermeasures to the attacks, but with little success [6]. In [8] Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai’s nearest plane algorithm such that the distribution of a reduced vector modulo a secret parallelepiped depends only on the size of the basis used. Using this algorithm and generating large, close-to-uniform public keys, they managed to get provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to NTRUSign [26] (from a theoretical point of view). In this paper we present an alternative approach to seal the leak of NTRUSign. Instead of modifying the lattices and algorithms used, we compute a classic leaky NTRUSign signature and hide it with Gaussian noise, using techniques present in Lyubashevsky’s signatures. Our main contributions are thus a set of strong NTRUSign parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky NTRU signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the Õ(N^1.5)-Shortest Independent Vector Problem over NTRU lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
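The masking idea can be illustrated numerically: a secret-dependent vector s (the "leaky" part of the signature) is hidden by adding Gaussian noise y, and Lyubashevsky-style rejection sampling makes the output z = s + y follow a fixed zero-centred Gaussian independent of s. The dimension, sigma and rejection constant M below are toy parameters for illustration, far from the paper's concrete NTRUSign parameter sets.

```python
# Toy sketch of hiding a leaky vector with Gaussian noise plus rejection
# sampling, so the published z leaks nothing about s beyond its distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 16            # toy dimension (real schemes use NTRU lattice dimensions)
sigma = 40.0      # noise width; must dominate ||s|| for a usable accept rate
M = 3.0           # rejection constant (chosen from sigma and a bound on ||s||)

def hide(s: np.ndarray) -> np.ndarray:
    """Return z = s + y, with z distributed as a zero-centred Gaussian
    independent of s, via rejection sampling."""
    while True:
        y = rng.normal(0.0, sigma, size=n)
        z = s + y
        # Accept with probability D_sigma(z) / (M * D_{sigma, centre=s}(z)),
        # i.e. resample until z looks like a sample from the target Gaussian.
        ratio = np.exp((np.sum((z - s) ** 2) - np.sum(z ** 2))
                       / (2 * sigma ** 2)) / M
        if rng.random() < min(1.0, ratio):
            return z

s = rng.integers(-5, 6, size=n).astype(float)  # stand-in for a leaky signature
z = hide(s)
print("masked signature sample:", np.round(z[:4], 2), "...")
```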
Abstract:
Biomedical systems involve a large number of entities and intricate interactions between them. Their direct analysis is therefore difficult, and it is often necessary to rely on computational models. These models require significant resources and parallel computing solutions. Such approaches are particularly well suited, given the inherently parallel nature of biomedical systems. Model hybridisation also permits the integration and simultaneous study of multiple aspects and scales of these systems, thus providing an efficient platform for multidisciplinary research.
Abstract:
The mineral coquimbite has been analysed using a range of techniques including SEM with EDX, thermal analytical techniques, and Raman and infrared spectroscopy. The mineral originated from the Javier Ortega mine, Lucanas Province, Peru. The chemical formula was determined as (Fe³⁺₁.₃₇Al₀.₆₃)Σ₂.₀₀(SO₄)₃·9H₂O. Thermal analysis showed a total mass loss of 73.4% on heating to 1000 °C. A mass loss of 30.43% at 641.4 °C is attributed to the loss of SO₃. Observed Raman and infrared bands were assigned to the stretching and bending vibrations of sulphate tetrahedra, aluminium oxide/hydroxide octahedra, water molecules and hydroxyl ions. The Raman spectrum shows well resolved bands at 2994, 3176, 3327, 3422 and 3580 cm⁻¹ attributed to water stretching vibrations. Vibrational spectroscopy combined with thermal analysis provides insight into the structure of coquimbite.
Abstract:
In recent years I have begun to integrate Creative Robotics into my Ecosophically-led art practices – which I have long deployed to investigate, materialise and engage thorny, ecological questions of the Anthropocene – seeking to understand how such forms of practice may promote the cultural conditions required to assure, rather than degrade, our collective futures. Many of us would instinctively conceive of robotics as an industrially driven endeavor, shaped by the pursuit of relentless efficiencies. Instead, I ask through my practices: might the nascent field of Creative Robotics still be able to emerge with radically different frames of intention? Might creative practitioners still be able to shape experiences using robotic media that retain a healthy criticality towards such productivist lineages? Could this nascent form even bring forward fresh techniques and assemblages that better encourage conversations around sustaining a future for the future, and, if so, which of its characteristics presents the greatest opportunities? I therefore ask: when Creative Robotics and Ecosophical Practice combine forces in strategic intervention, what qualities of this hybrid might best further the central aims of Ecosophical Practice – encouraging the cultural conditions required to assure a future for the future?
Abstract:
Urbanisation significantly changes the characteristics of a catchment as natural areas are transformed to impervious surfaces such as roads, roofs and parking lots. The increased fraction of impervious surfaces leads to changes to the stormwater runoff characteristics, whilst a variety of anthropogenic activities common to urban areas generate a range of pollutants such as nutrients, solids and organic matter. These pollutants accumulate on catchment surfaces and are removed and transported by stormwater runoff and thereby contribute pollutant loads to receiving waters. In summary, urbanisation influences the stormwater characteristics of a catchment, including hydrology and water quality. Due to the growing recognition that stormwater pollution is a significant environmental problem, the implementation of mitigation strategies to improve the quality of stormwater runoff is becoming increasingly common in urban areas. A scientifically robust stormwater quality treatment strategy is an essential requirement for effective urban stormwater management. The efficient design of treatment systems is closely dependent on the state of knowledge in relation to the primary factors influencing stormwater quality. In this regard, stormwater modelling outcomes provide designers with important guidance and datasets which significantly underpin the design of effective stormwater treatment systems. Therefore, the accuracy of modelling approaches and the reliability of modelling outcomes are of particular concern. This book discusses the inherent complexity and key characteristics in the areas of urban hydrology and stormwater quality, based on the influence exerted by a range of rainfall and catchment characteristics. A comprehensive field sampling and testing programme in relation to pollutant build-up, an urban catchment monitoring programme in relation to stormwater quality and the outcomes from advanced statistical analyses provided the platform for the knowledge creation. Two case studies and two real-world applications are discussed to illustrate the translation of the knowledge created to practical use in relation to the role of rainfall and catchment characteristics on urban stormwater quality. An innovative rainfall classification based on stormwater quality was developed to support the effective and scientifically robust design of stormwater treatment systems. Underpinned by the rainfall classification methodology, a reliable approach for design rainfall selection is proposed in order to optimise stormwater treatment based on both stormwater quality and quantity. This is a paradigm shift from the common approach where stormwater treatment systems are designed based solely on stormwater quantity data. Additionally, how pollutant build-up and stormwater runoff quality vary with a range of catchment characteristics was also investigated. Based on the study outcomes, it can be concluded that the use of only a limited number of catchment parameters such as land use and impervious surface percentage, as is the case in current modelling approaches, could result in appreciable error in water quality estimation. Influential factors to be incorporated into modelling in relation to catchment characteristics should also include urban form and the distribution of impervious surface area.
The knowledge created through the research investigations discussed in this monograph is expected to make a significant contribution to engineering practice such as hydrologic and stormwater quality modelling, stormwater treatment design and urban planning, as the study outcomes provide practical approaches and recommendations for urban stormwater quality enhancement. Furthermore, this monograph demonstrates how fundamental knowledge of stormwater quality processes can be translated into guidance on engineering practice, illustrates the comprehensive application of multivariate data analysis techniques, and offers a paradigm for the integrative use of computer models and mathematical models to derive practical outcomes.
Abstract:
Research publication is one of the final steps in the research process, which begins with the development of a research idea. The process moves through bringing together collaborators, designing the study protocol, securing grant or study funding, obtaining ethics approval to conduct the research, and implementing the research, with analysis and the drawing of conclusions from the data leading to publication of the study results. Although this final step entails dissemination of the results, many studies go unreported or are improperly reported. Indeed, reviewers have suggested that many randomized controlled trials, observational studies, and qualitative studies lack crucial methodological features or details that lend credibility to study results (Simera et al., 2010).
Abstract:
Purpose: Following the perspective of frustration theory, customer frustration incidents lead to frustration behavior such as protest (negative word‐of‐mouth). On the internet, customers can express their emotions verbally and non‐verbally on numerous web‐based review platforms. The purpose of this study is to investigate online dysfunctional customer behavior, in particular negative “word‐of‐web” (WOW) in online feedback forums, among customers who participate in frequent‐flier programs in the airline industry. Design/methodology/approach: The study employs a variation of the critical incident technique (CIT) referred to as the critical internet feedback technique (CIFT). Qualitative data from customer reviews of 13 different frequent‐flier programs posted on the internet were collected and analyzed with regard to frustration incidents, verbal and non‐verbal emotional effects and types of dysfunctional word‐of‐web customer behavior. The sample includes 141 negative customer reviews based on non‐recommendations and low program ratings. Findings: Problems with loyalty programs evoke negative emotions that are expressed in a spectrum of verbal and non‐verbal negative electronic word‐of‐mouth. Online dysfunctional behavior can vary widely, from low ratings and non‐recommendations, to voicing switching intentions, to even stronger forms such as manipulation of others and revenge intentions. Research limitations/implications: Results have to be viewed carefully due to methodological challenges with regard to the measurement of emotions, in particular the accuracy of self‐report techniques and the quality of online data. Generalization of the results is limited because the study utilizes data from only one industry. Further research is needed with regard to the exact differentiation of frustration from related constructs. In addition, large‐scale quantitative studies are necessary to specify and test the relationships between frustration incidents and subsequent dysfunctional customer behavior expressed in negative word‐of‐web. Practical implications: The study yields important implications for the monitoring of the perceived quality of loyalty programs. Management can obtain valuable information about program‐related and/or relationship‐related frustration incidents that lead to online dysfunctional customer behavior. A proactive response strategy should be developed to deal with severe cases, such as sabotage plans. Originality/value: This study contributes to the limited research on online dysfunctional customer behavior and on frustration incidents in loyalty programs. The article also presents a theoretical “customer frustration‐defection” framework that describes different levels of online dysfunctional behavior in relation to the level of frustration that customers have experienced. The framework extends the existing “customer satisfaction‐loyalty” framework developed by Heskett et al.