950 results for Radon transforms
Abstract:
This brochure discusses these topics concerning radon: What Is Radon?, Health Effects Of Radon, Radon Action Level, How Much Radon In A Home Is Safe?, What Do I Do If I Have A Radon Problem?, Major Radon Entry Routes, How Do I Know If I Have A Radon Problem?, Where Should I Test?, Retesting For Radon and Radon in Water.
Abstract:
This chart gives the long term effects of radon on smokers and non-smokers.
Abstract:
This is a list of some basic installation requirements and recommendations that your contractor should meet when installing a radon reduction system in your home.
Abstract:
In the rapidly growing knowledge economy, the talent and creativity of those around us will be increasingly decisive in shaping economic opportunity. Creativity can be described as the ability to produce new and original ideas and things. In other words, it is any act, idea, or product that changes an existing domain or transforms an existing domain into a new one. From an economic perspective, creativity can be considered the generation of new ideas that is the major source of innovation and new economic activities. As urban regions have become the localities of key knowledge precincts and knowledge clusters across the globe, the link between a range of new technologies and the development of ‘creative urban regions’ (CURs) has come to the fore. In this sense, creativity has become a buzz concept in knowledge-economy research and policy circles. It has spawned ‘creative milieus,’ ‘creative industries,’ ‘creative cities,’ the ‘creative class,’ and ‘creative capital.’ Hence, creativity has become a key concept on the agenda of city managers, development agents, and planners as they search for new forms of urban and economic development. CURs provide vast opportunities for knowledge production and spillover, which lead to the formation of knowledge cities. Urban information and communication technology (ICT) developments support the transformation of cities into knowledge cities. This book, which is a companion volume to Knowledge-Based Urban Development: Planning and Applications in the Information Era (also published by IGI Global), focuses on some of these developments. The Foreword and Afterword are written by senior respected academic researchers Robert Stimson of the University of Queensland, Australia, and Zorica Nedovic-Budic of the University of Illinois at Urbana-Champaign, USA. The book is divided into four sections, each one dealing with selected aspects of information and communication technologies and creative urban regions.
Abstract:
Alternative sports are fast becoming the physical activity of choice. Participation rates are even outstripping more traditional activities such as golf. At their most extreme there is no second chance; the most likely outcome of a mismanaged error or accident is death. At this level participants enjoy activities such as B.A.S.E. (Buildings, Antennae, Space, Earth) jumping, big wave surfing, waterfall kayaking, extreme skiing, rope-free climbing and extreme mountaineering. Probably the most common explanation for participation in extreme sports is the notion that participation is just a matter of some people's need to take unnecessary risks. This study reports on findings that indicate a more positive experience. A phenomenological method was used via unstructured interviews with 15 extreme sports participants (ages 30–72 years) and other firsthand accounts. Extreme sport participants directly related their experience to personal transformations that spill over to life in general. Athletes report feelings of deep psychological wellbeing and meaningfulness. The extreme sport experience enables a participant to break through personal barriers and develop an understanding of their own resourcefulness and emotional, cognitive, physical and spiritual capabilities. Furthermore, such a breakthrough also seems to trigger a change in personal philosophy or view on life. The extreme sport experience transforms a participant, though not in terms of working towards an external (social or cultural) perception of identity or towards some constructed perception of an ideal self, but by touching something within.
Abstract:
This thesis is a study of naturally occurring radioactive materials (NORM) activity concentration, gamma dose rate and radon (222Rn) exhalation from the waste streams of large-scale onshore petroleum operations. Types of activities covered included: sludge recovery from separation tanks, sludge farming, NORM storage, scaling in oil tubulars, scaling in gas production and sedimentation in produced water evaporation ponds. Field work was conducted in the arid desert terrain of an operational oil exploration and production region in the Sultanate of Oman. The main radionuclides found were 226Ra and 210Pb (238U series), 228Ra and 228Th (232Th series), and 227Ac (235U series), along with 40K. All activity concentrations were higher than the ambient soil level and varied over several orders of magnitude. Gamma dose rates at a 1 m height above ground for the farm-treated sludge ranged from 0.06 to 0.43 µSv h⁻¹, with an average close to the ambient soil mean of 0.086 ± 0.014 µSv h⁻¹, whereas the untreated sludge gamma dose rates ranged from 0.07 to 1.78 µSv h⁻¹, with a mean of 0.456 ± 0.303 µSv h⁻¹. The geometric mean of the ambient soil 222Rn exhalation rate for the area surrounding the sludge was mBq m⁻² s⁻¹. Radon exhalation rates reported in oil waste products were all higher than the ambient soil value and varied over three orders of magnitude. This study resulted in some unique findings, including: (i) the detection of radiotoxic 227Ac in the oil scales and sludge, (ii) the need for a new empirical relation between petroleum sludge activity concentrations and gamma dose rates, and (iii) an assessment of the exhalation of 222Rn from oil sludge. Additionally, the study investigated a method to determine oil scale and sludge age using the inherent behaviour of radionuclides, namely the 228Ra:226Ra and 228Th:228Ra activity ratios.
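As a rough, hedged illustration of how such an activity ratio can carry age information (assuming the scale or sludge incorporates radium but essentially no thorium at deposition and receives no later radium input; the thesis's own model may differ), 228Ra (half-life about 5.75 y) then decays unsupported while 226Ra (half-life about 1600 y) stays effectively constant, so

$$R(t) \equiv \frac{A_{228\mathrm{Ra}}(t)}{A_{226\mathrm{Ra}}(t)} \approx R_0\, e^{-\lambda_{228} t}, \qquad t \approx \frac{1}{\lambda_{228}} \ln\frac{R_0}{R(t)}, \qquad \lambda_{228} = \frac{\ln 2}{5.75\ \mathrm{y}}.$$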
Abstract:
"what was silent will speak, what is closed will open and will take on a voice" (Paul Virilio). The fundamental problem in dealing with the digital is that we are forced to contend with a fundamental deconstruction of form: a deconstruction that renders our content and practice into a single state that can be openly and easily manipulated, reimagined and mashed together in rapid time to create completely unique artefacts and potentially unwranglable jumbles of data. Once our work is essentially broken down into this series of number sequences (or bytes) – our sound, images, movies and documents, our memory files – we are left with nothing but choice… and this is the key concern. This absence of form transforms our work into new collections and poses unique challenges for the artist seeking opportunities to exploit the potential of digital deconstruction. It is through this struggle with the absent form that we are able to thoroughly explore the latent potential of content, exploit modern abstractions of time and devise approaches within our practice that actively deal with the digital as an essential matter of course.
Abstract:
In a much anticipated judgment, the Federal Circuit has sought to clarify the standards applicable in determining whether a claimed method constitutes patent-eligible subject matter. In Bilski, the Federal Circuit identified a test to determine whether a patentee has made claims that pre-empt the use of a fundamental principle or an abstract idea or whether those claims cover only a particular application of a fundamental principle or abstract idea. It held that the sole test for determining subject matter eligibility for a claimed process under § 101 is that: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing. The court termed this the “machine-or-transformation test.” In so doing, it overruled its earlier State Street decision to the extent that it deemed its “useful, tangible and concrete result” test inadequate to determine whether an alleged invention recites patent-eligible subject matter.
Abstract:
Fracture behavior of Cu-Ni laminate composites has been investigated by tensile testing. It was found that as the individual layer thickness decreases from 100 to 20 nm, the resultant fracture angle of the Cu-Ni laminate changes from 72 degrees to 50 degrees. Cross-sectional observations reveal that the fracture of the Ni layers transforms from opening to shear mode as the layer thickness decreases, while that of the Cu layers remains in shear mode. Competition mechanisms were proposed to understand the variation in fracture mode of the metallic laminate composites associated with length scale.
Abstract:
This paper presents a simple and intuitive approach to determining the kinematic parameters of a serial-link robot in Denavit–Hartenberg (DH) notation. Once a manipulator’s kinematics is parameterized in this form, a large body of standard algorithms and code implementations for kinematics, dynamics, motion planning, and simulation are available. The proposed method has two parts. The first is the “walk through,” a simple procedure that creates a string of elementary translations and rotations, from the user-defined base coordinate to the end-effector. The second step is an algebraic procedure to manipulate this string into a form that can be factorized as link transforms, which can be represented in standard or modified DH notation. The method allows for an arbitrary base and end-effector coordinate system as well as an arbitrary zero joint angle pose. The algebraic procedure is amenable to computer algebra manipulation and a Java program is available as supplementary downloadable material.
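The paper's own contribution is the symbolic factorization of the elementary-transform string; as a rough illustration of the standard DH link transform that such a string reduces to, here is a minimal numeric sketch. The function name and the two-link DH table are hypothetical and not taken from the paper or its Java program.

```python
import numpy as np

def dh_link(theta, d, a, alpha):
    """Standard DH link transform: Rz(theta) @ Tz(d) @ Tx(a) @ Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical two-link planar arm: chain the per-joint link transforms
# to obtain the base-to-end-effector pose (forward kinematics).
dh_table = [  # (theta, d, a, alpha) per joint
    (np.deg2rad(30), 0.0, 1.0, 0.0),
    (np.deg2rad(45), 0.0, 0.8, 0.0),
]
T = np.eye(4)
for params in dh_table:
    T = T @ dh_link(*params)
print(T[:3, 3])  # end-effector position in the base frame
```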
Abstract:
Agriculture accounts for a significant portion of the GDP in most developed countries. However, managing farms, particularly large-scale extensive farming systems, is hindered by a lack of data and an increasing shortage of labour. We have deployed a large heterogeneous sensor network on a working farm to explore sensor network applications that can address some of the issues identified above. Our network is solar powered and has been running for over 6 months. The current deployment consists of over 40 moisture sensors that provide soil moisture profiles at varying depths, weight sensors to compute the amount of food and water consumed by animals, electronic tag readers, up to 40 sensors that can be used to track animal movement (consisting of GPS, compass and accelerometers), and 20 sensor/actuators that can be used to apply different stimuli (audio, vibration and mild electric shock) to the animal. The static part of the network is designed for 24/7 operation and is linked to the Internet via a dedicated high-gain radio link, also solar powered. The initial goals of the deployment are to provide a testbed for sensor network research in programmability and data handling while also being a vital tool for scientists to study animal behavior. Our longer term aim is to create a management system that completely transforms the way farms are managed.
Abstract:
Today’s evolving networks are experiencing a large number of different attacks ranging from system break-ins, infection from automatic attack tools such as worms, viruses, Trojan horses and denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the collected traffic on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic, in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
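As a hedged illustration of the general CUSUM idea only (not the dissertation's dynamic sliding-window algorithm or its adaptive thresholds), a bare-bones one-sided change detector over a traffic-rate series might look like the sketch below; the baseline window size, drift and threshold values are arbitrary assumptions.

```python
import numpy as np

def cusum_changes(x, drift=0.5, threshold=8.0):
    """One-sided CUSUM on standardized samples (simplified illustration)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x[:50].mean(), x[:50].std() + 1e-9  # baseline from an initial window
    s, alarms = 0.0, []
    for i, v in enumerate(x):
        z = (v - mu) / sigma
        s = max(0.0, s + z - drift)   # accumulate positive deviations only
        if s > threshold:
            alarms.append(i)          # change point flagged
            s = 0.0                   # reset after an alarm
    return alarms

# Synthetic example: the traffic rate jumps at sample 200.
rng = np.random.default_rng(0)
rate = np.concatenate([rng.poisson(20, 200), rng.poisson(60, 100)])
print(cusum_changes(rate))
```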
Abstract:
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false-match and false-non-match likelihoods using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan’s Fourier–Mellin based hashing method, showing at least a 1% EER improvement under noise, scaling and sharpening.
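To make the pipeline concrete, here is a minimal, hedged sketch of a Radon-projection-based keyed hash. It substitutes simple per-angle projection energies for the paper's Higher Order Spectral features, and the image size, key use and bit length are illustrative assumptions; scikit-image's radon supplies the projections.

```python
import numpy as np
from skimage.data import camera
from skimage.transform import radon, resize

def radon_hash(image, key=0, n_angles=64, n_bits=128):
    """Toy keyed hash from Radon projections (illustrative only)."""
    img = resize(image, (128, 128), anti_aliasing=True)        # normalize size
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=theta, circle=False)            # one projection per angle
    feats = (sinogram ** 2).sum(axis=0)                          # per-angle projection energy
    rng = np.random.default_rng(key)                             # key-dependent permutation
    feats = rng.permutation(np.repeat(feats, -(-n_bits // n_angles)))[:n_bits]
    return (feats > np.median(feats)).astype(np.uint8)           # binarize around the median

h = radon_hash(camera(), key=42)
print(h[:16])
```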
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
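As a loose sketch of the subband-coding idea only, a plain dyadic DWT with a uniform scalar quantizer can stand in for the thesis's fixed wavelet-packet structure and lattice vector quantizer; the wavelet, decomposition level and quantization step below are arbitrary assumptions, and PyWavelets supplies the transform.

```python
import numpy as np
import pywt

def compress_decompress(img, wavelet="bior4.4", level=3, step=8.0):
    """Toy subband coder: DWT, uniform scalar quantization, inverse DWT."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    quantized = [np.round(coeffs[0] / step) * step]              # approximation subband
    for cH, cV, cD in coeffs[1:]:                                 # detail subbands
        quantized.append(tuple(np.round(c / step) * step for c in (cH, cV, cD)))
    rec = pywt.waverec2(quantized, wavelet)
    return rec[: img.shape[0], : img.shape[1]]                    # crop any padding

# Hypothetical usage on a random stand-in image in 8-bit range.
img = np.random.default_rng(1).random((256, 256)) * 255.0
rec = compress_decompress(img)
mse = np.mean((img - rec) ** 2)
print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.1f} dB")
```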