998 results for Number projection


Relevance:

20.00%

Publisher:

Abstract:

A newly developed computational approach is proposed for the analysis of multiple crack problems based on the eigen crack opening displacement (COD) boundary integral equations. The eigen COD refers to the opening of a crack in an infinite domain under a fictitious traction acting on the crack surface. With this concept, problems involving large numbers of cracks can be solved using the conventional displacement discontinuity boundary integral equations in an iterative fashion, with a small system matrix, determining all the unknown CODs step by step. To handle the interactions among cracks, all cracks in the problem are divided into two groups according to their distance from the crack currently under consideration: an adjacent group, containing cracks at relatively small distances with strong effects on the current crack, and a far-field group, comprising the remaining cracks at relatively large distances. Correspondingly, the eigen COD of the current crack is computed in two parts. The first part is computed from the fictitious tractions of the adjacent cracks via a local Eshelby matrix derived from the traction boundary integral equations in discretized form, while the second part is computed from those of the far-field cracks, so that high computational efficiency is achieved. The numerical results of the proposed approach are compared not only with those obtained using the dual boundary integral equations (D-BIE) and the BIE with numerical Green's functions (NGF), but also with analytical solutions from the literature. The effectiveness and efficiency of the proposed approach are verified. Numerical examples are provided for the stress intensity factors of cracks, up to several thousand in number, in both finite and infinite plates.
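The adjacent/far-field iteration described above can be sketched, in structure only, as a block-iterative linear solve: each crack's unknowns are obtained from a small local system (the role the local Eshelby matrix plays for the adjacent group), while contributions from all remaining blocks are moved to the right-hand side. The decaying-interaction matrix below is a generic stand-in, not an actual crack-interaction kernel.

```python
import numpy as np

def block_iterative_solve(A, b, block, n_iter=300):
    """Schematic block Gauss-Seidel: each block of unknowns is solved with
    a small local matrix while the contributions of all other (far-field)
    blocks are moved to the right-hand side, mirroring the adjacent/far
    split described in the abstract."""
    n = len(b)
    x = np.zeros(n)
    blocks = [np.arange(i, min(i + block, n)) for i in range(0, n, block)]
    for _ in range(n_iter):
        for idx in blocks:
            far = np.setdiff1d(np.arange(n), idx)
            rhs = b[idx] - A[np.ix_(idx, far)] @ x[far]       # far-field on RHS
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)  # small local solve
    return x

# symmetric positive definite test system whose off-diagonal entries decay
# with distance, loosely analogous to crack-interaction coefficients
n = 12
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n)))) + 3 * np.eye(n)
b = np.ones(n)
x = block_iterative_solve(A, b, block=3)
```

The appeal of such schemes is exactly the one claimed in the abstract: only small local matrices are ever factorised, regardless of the total number of unknowns.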


The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of these WSNs to demanding SHM applications such as modal analysis and damage identification. This paper first presents a brief review of the most significant uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied over the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques, together with a preferred combination of OMA techniques, is recommended to cope with SHM-WSN uncertainties.
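The core of the FDD family mentioned above is a singular value decomposition of the cross-spectral density matrix at each frequency line; natural frequencies appear as peaks of the first singular value. The sketch below illustrates that idea on a synthetic two-channel signal with a single 5 Hz mode; the segment-averaged periodogram estimator and the toy signal are illustrative choices, not the paper's setup.

```python
import numpy as np

def fdd_first_singular_values(data, fs, nseg=8):
    """Minimal FDD sketch: estimate the cross-spectral density matrix by
    segment-averaged windowed periodograms, then take the first singular
    value of that matrix at every frequency line."""
    n_ch, n = data.shape
    seg = n // nseg
    freqs = np.fft.rfftfreq(seg, 1 / fs)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for s in range(nseg):
        X = np.fft.rfft(data[:, s * seg:(s + 1) * seg] * np.hanning(seg), axis=1)
        G += np.einsum('if,jf->fij', X, X.conj()) / nseg   # G[f] = X(f) X(f)^H
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return freqs, s1

# two-channel synthetic response dominated by a 5 Hz mode
fs = 256
t = np.arange(0, 32, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 5 * t)
data = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
                  0.8 * mode + 0.1 * rng.standard_normal(t.size)])
freqs, s1 = fdd_first_singular_values(data, fs)
peak = freqs[np.argmax(s1)]   # frequency of the dominant mode
```

Data pollutants of the kind studied in the paper (noise, loss, desynchronisation) would be applied to `data` before this step, which is what makes the location and sharpness of the singular-value peak a useful robustness probe.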


This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
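A dynamic program that scales linearly in the number of frames, as described above, has the classic Viterbi structure: a per-frame cost over a discrete set of candidate states plus a pairwise transition cost between consecutive frames. The toy below uses two states per frame (think of a depth-sign ambiguity per joint) with a smoothness penalty on switching; the costs are invented for illustration.

```python
import numpy as np

def viterbi(unary, pairwise):
    """Dynamic program over frames: best cost of state s at frame t is its
    unary cost plus the cheapest transition from frame t-1.  Runtime is
    O(frames * states^2), i.e. linear in the number of frames."""
    T, S = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = cost[:, None] + pairwise            # (prev_state, cur_state)
        back[t] = np.argmin(trans, axis=0)          # best predecessor per state
        cost = trans[back[t], np.arange(S)] + unary[t]
    path = [int(np.argmin(cost))]                   # backtrack the optimum
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy: 2 candidate states per frame, smoothness penalises switching
unary = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 1.0], [1.0, 0.0]])
pairwise = np.array([[0.0, 0.5], [0.5, 0.0]])
path = viterbi(unary, pairwise)
```

The contrast with the branch-and-bound approaches mentioned above is that this recursion exploits purely local (frame-to-frame) interactions, which is exactly the property the compact high-pass filters provide.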


It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have focused only on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the effect of the number of monitoring sites on exposure estimation, to identify which parameters have the largest influence on spatial variation, and to characterise the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to each school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of the CV associated with instrument uncertainty was found to be 0.3, and the CV was therefore corrected so that only non-instrument uncertainty was analysed. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude.
The study also tested the association of spatial variation with wind speed, wind direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore on spatial variation. Wind speed reduced spatial variation once it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. The effect of traffic density on spatial variation increased up to a density of 70 vehicles per five minutes, at which point it plateaued and did not increase further with increasing traffic density.
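The abstract does not state how the instrument-uncertainty share of 0.3 was removed from the CV. One common convention, shown here purely as a hypothetical illustration, is subtraction in quadrature: if instrument noise is independent of true spatial variation, their variances add, so the instrument component is removed as follows.

```python
import math

def corrected_cv(cv_observed, cv_instrument=0.3):
    """Hypothetical correction (not necessarily the study's formula):
    remove the instrument share of the coefficient of variation in
    quadrature, flooring at zero when observed CV is below the
    instrument level."""
    return math.sqrt(max(cv_observed ** 2 - cv_instrument ** 2, 0.0))

cvc = corrected_cv(0.46)   # falls inside the 0-0.35 CVc range reported above
```

Under this convention, any school whose observed CV is at or below 0.3 would have CVc = 0, i.e. no detectable spatial variation beyond instrument noise.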


This mathematics education research provides significant insights into the teaching of decimals to children. Decimals are well known to be one of the most difficult topics to learn and to teach. Annette's research is unique in that it focuses not only on the cognitive, but also on the affective and conative aspects of learning and teaching decimals. The study is innovative in including the students as co-constructors and co-researchers. The findings open new ways for educators to think about how students cognitively process decimal knowledge, as well as how students might develop a sense of self as a learner, teacher and researcher in mathematics.


Robust hashing is an emerging field addressing data types and applications for which traditional cryptographic hashing methods are unsuitable. Traditional hash functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords, for which a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, in certain applications the input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data in which such changes exist but do not alter the meaning or appearance of the input. For these data types, cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing, designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security properties required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without quantizer training, is more easily encoded, and displays improved hashing performance compared with existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
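The pipeline stages named above (feature extraction, key-driven randomization, threshold quantization, binary encoding) can be made concrete with a toy example. The block-mean features and Gaussian random projection below are illustrative stand-ins, not the dissertation's HOS/Radon method, and the median threshold stands in for the learnt thresholds discussed above.

```python
import numpy as np

def robust_hash(image, key, n_bits=32, thresholds=None):
    """Toy robust-hash pipeline: block-mean feature extraction, a
    secret-key-seeded linear random projection (the randomization stage),
    then threshold quantization to a binary hash.  Illustrative only."""
    h, w = image.shape
    # feature extraction: means of 8x8 pixel blocks
    feats = image.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3)).ravel()
    # randomization: linear projection drawn from the secret key
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, feats.size)) @ feats
    # quantization: thresholds would normally be learnt from training
    # data; here we default to the median of this projection
    if thresholds is None:
        thresholds = np.median(proj)
    return (proj > thresholds).astype(int)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
h1 = robust_hash(img, key=42)
# a minor perturbation of the input should barely change the hash
h2 = robust_hash(img + 0.005 * rng.standard_normal((64, 64)), key=42)
```

The Hamming distance between `h1` and `h2` stays near zero despite the perturbation, which is the robustness property that makes cryptographic hashes unusable here; the linearity of the projection is also what the text flags as the security weakness.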


Multiple sclerosis (MS) is a common chronic inflammatory disease of the central nervous system. Susceptibility to the disease is affected by both environmental and genetic factors. Genetic factors include haplotypes in the major histocompatibility complex (MHC) and over 50 non-MHC loci reported by genome-wide association studies. Amongst these, we previously reported polymorphisms in chromosome 12q13-14 with a protective effect in individuals of European descent. This locus spans 288 kb and contains 17 genes, including several candidate genes with potentially significant pathogenic and therapeutic implications. In this study, we aimed to fine-map this locus. We implemented a two-phase study: a variant discovery phase, in which we used next-generation sequencing and two target-enrichment strategies [long-range polymerase chain reaction (PCR) and Nimblegen's solution-phase hybridization capture] in pools of 25 samples; and a genotyping phase, in which we genotyped 712 variants in 3577 healthy controls and 3269 MS patients. The study confirmed the association (rs2069502, P = 9.9 × 10⁻¹¹, OR = 0.787) and narrowed the locus of association down to an 86.5 kb region. Although the study was unable to pinpoint the key associated variant, we identified a haplotype block of 42 (genotyped and imputed) single-nucleotide polymorphisms likely to harbour the causal variant. No evidence of association was observed at previously reported low-frequency variants in CYP27B1. As part of the study, we compared variant discovery performance between the two target-enrichment strategies. We concluded that the pools enriched with Nimblegen's solution-phase hybridization capture had better sensitivity for detecting true variants than the pools enriched with long-range PCR, whilst specificity was better in the long-range PCR-enriched pools; this result has important implications for the design of future fine-mapping studies.


This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
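The penalised-spline component of such models can be illustrated in miniature. The sketch below fits a P-spline-style smoother to a simulated daily PNC cycle: a piecewise-linear "tent" basis with a second-difference penalty on the coefficients, solved as ridge-type least squares. This is a one-covariate toy, not the thesis's multivariate models with random walks and autoregressive errors.

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    """Minimal penalised-spline smoother: tent-function basis on equally
    spaced knots, second-difference penalty on the coefficients, solved
    in closed form as penalised least squares."""
    knots = np.linspace(x.min(), x.max(), n_knots)
    # tent basis: linear-interpolation weight of each point to each knot
    B = np.maximum(1 - np.abs((x[:, None] - knots) / (knots[1] - knots[0])), 0)
    D = np.diff(np.eye(n_knots), n=2, axis=0)        # second-difference matrix
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 24, 400))                 # hour of day
y = np.sin(2 * np.pi * x / 24) + 0.3 * rng.standard_normal(400)  # noisy daily cycle
fit = pspline_fit(x, y)
```

The penalty weight `lam` trades smoothness against fidelity, which is what lets such models separate a smooth daily trend from measurement noise.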


Individual variability in the acquisition, consolidation and extinction of conditioned fear potentially contributes to the development of fear pathology, including posttraumatic stress disorder (PTSD). Pavlovian fear conditioning is a key tool for the study of fundamental aspects of fear learning. Here, we used selected mouse lines of High and Low Pavlovian conditioned fear, created from an advanced intercross line (AIL), to begin to identify the cellular basis of phenotypic divergence in Pavlovian fear conditioning. We investigated whether phosphorylated MAPK (p44/42 ERK/MAPK), a protein kinase required in the amygdala for the acquisition and consolidation of Pavlovian fear memory, is differentially expressed following Pavlovian fear learning in the High and Low fear lines. We found that following Pavlovian auditory fear conditioning, High and Low line mice differ in the number of pMAPK-expressing neurons in the dorsal subnucleus of the lateral amygdala (LAd). In contrast, this difference was not detected in the ventromedial (LAvm) or ventrolateral (LAvl) amygdala subnuclei, or in control animals. We propose that this apparent increase in plasticity at a known locus of fear memory acquisition and consolidation relates to intrinsic differences between the two fear phenotypes. These data provide important insights into the micronetwork mechanisms encoding phenotypic differences in fear. Understanding the circuit-level cellular and molecular mechanisms that underlie individual variability in fear learning is critical for the development of effective treatments for fear-related illnesses such as PTSD.


Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
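The projection-geometry idea can be sketched concretely: on an organised depth image, a metric search radius around a point at depth z projects to a pixel window of half-width roughly radius × f / z, so candidate neighbours are read straight off the image grid instead of queried from a KD-tree. The code below is an illustrative sketch, not the paper's algorithm; the Kinect-style intrinsics are assumed values.

```python
import numpy as np

def neighbours(depth, u, v, radius, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Projection-geometry neighbour lookup on an organised depth image:
    the metric radius around pixel (u, v) maps to a pixel window of
    half-width ~ radius * f / z, so the candidate set is a fixed-size
    image patch, independent of the total number of points."""
    z0 = depth[v, u]
    du = int(np.ceil(radius * fx / z0))
    dv = int(np.ceil(radius * fy / z0))
    vs, us = np.mgrid[max(v - dv, 0):min(v + dv + 1, depth.shape[0]),
                      max(u - du, 0):min(u + du + 1, depth.shape[1])]
    # back-project the candidates and keep those inside the metric radius
    z = depth[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    p0 = np.array([(u - cx) * z0 / fx, (v - cy) * z0 / fy, z0])
    d = np.sqrt((x - p0[0]) ** 2 + (y - p0[1]) ** 2 + (z - p0[2]) ** 2)
    return np.column_stack([us[d <= radius], vs[d <= radius]])

depth = np.full((480, 640), 2.0)                 # flat wall 2 m from the camera
nb = neighbours(depth, 320, 240, radius=0.05)    # 5 cm neighbourhood
```

Because the candidate window depends only on depth and the search radius, the per-query cost is constant, which is the source of the speed advantage over tree-based searches such as FLANN.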


The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of these WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first identifies Data Synchronization Error (DSE) as the most critical inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied over the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added to account for the greater presence of noise in SHM-oriented WSNs. From this source, a large number of simulations were run to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, a combination of preferred OMA techniques, together with channel projection for the time-domain OMA technique, is recommended to cope with DSE.
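Generating DSE-corrupted datasets of the kind described above amounts to mis-aligning channels in time. The snippet below shows one simple way to do this, delaying a single channel by an integer number of samples; the study's exact corruption scheme is not detailed in the abstract, so this is an illustrative stand-in.

```python
import numpy as np

def apply_dse(data, channel, shift_samples):
    """Simulate Data Synchronization Error for one sensor setup by
    delaying a single channel by an integer number of samples; the
    leading samples that have no valid past are zero-filled."""
    out = data.copy()
    out[channel] = np.roll(data[channel], shift_samples)
    out[channel, :shift_samples] = 0.0
    return out

fs = 100                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
clean = np.vstack([np.sin(2 * np.pi * 2 * t),
                   np.sin(2 * np.pi * 2 * t)])
dse = apply_dse(clean, channel=1, shift_samples=5)   # 50 ms offset
```

Repeating this over many random offsets and channels produces the ensembles of corrupted datasets on which statistics such as modal-frequency and mode-shape errors can be collected.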


A newspaper numbers game based on simple arithmetic relationships is discussed. Its potential to give students of elementary algebra practice in semi-ad hoc reasoning and to build general arithmetic reasoning skills is explored.


Number theory has in recent decades assumed great practical importance, due primarily to its application to cryptography. This chapter discusses how elementary concepts of number theory may be illuminated and made accessible to upper secondary school students via appropriate spreadsheet models. In such environments, students can observe patterns, gain structural insight, form and test conjectures, and solve problems. The chapter begins by reviewing literature on the use of spreadsheets in general and on the use of spreadsheets in number theory in particular. Two sample applications are then discussed. The first, factoring factorials, is presented with instructions for constructing a model in Excel 2007. The second application, the RSA cryptosystem, is included because of its importance to Science, Technology, Engineering, and Mathematics (STEM) students. Number-theoretic concepts relevant to RSA are discussed, and an outline of RSA is given, with an example. The chapter ends with instructions on how to construct a simple spreadsheet illustrating RSA.
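An RSA outline of the kind the chapter describes fits in a few lines. The toy key below uses the classroom-sized primes 61 and 53 (chosen here for illustration, not taken from the chapter's own worked example); the same arithmetic is what a spreadsheet model would carry out cell by cell.

```python
# Toy RSA with textbook-sized numbers (real keys are vastly larger).
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: e*d ≡ 1 (mod phi)

m = 65                      # message, must be less than n
c = pow(m, e, n)            # encryption: c = m^e mod n
recovered = pow(c, d, n)    # decryption: m = c^d mod n
```

The modular inverse `pow(e, -1, phi)` requires Python 3.8+; in a spreadsheet the same value is found by the extended Euclidean algorithm, which is itself a natural classroom exercise.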


AIMS: The aims of the study are to characterize changes in JK-1 (FAM134B) at the DNA level in colorectal adenocarcinoma and adenoma, and to explore possible correlations with clinical and pathological features. METHOD: JK-1 (FAM134B) gene DNA copy number changes were studied in 211 colorectal carcinomas, 32 colorectal adenomas and 20 non-cancerous colorectal tissue samples by real-time quantitative polymerase chain reaction. The results were correlated with clinical and pathological parameters. RESULTS: With regard to JK-1 (FAM134B) DNA copy number change, colorectal adenomas were more likely to show amplification than deletion. The copy number level of JK-1 (FAM134B) DNA in colorectal adenocarcinomas was significantly lower than in colorectal adenomas. Changes in JK-1 (FAM134B) DNA copy number were associated with histological subtype and cancer stage. Lower copy numbers were associated with higher tumor stage, lymph node stage and overall pathological stage, whereas higher DNA copy numbers were detected more often in mucinous adenocarcinoma. CONCLUSIONS: This is the first study showing significant correlations of JK-1 (FAM134B) gene copy number alterations with clinical and pathological features in a large cohort of pre-invasive and invasive colorectal malignancies. The changes in DNA copy number associated with the progression of colorectal malignancies suggest that the JK-1 (FAM134B) gene could play a role in controlling some steps in the development of invasive phenotypes.
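The abstract does not state which quantification model was used for the real-time qPCR copy number estimates. One widely used convention, shown here purely as a hypothetical illustration, is the comparative-Ct (2^-ΔΔCt) method: the target's cycle threshold is normalised to a reference gene, then compared against a calibrator such as normal tissue.

```python
def relative_copy_number(ct_target, ct_reference, ct_target_cal, ct_ref_cal):
    """Hypothetical 2^-ddCt calculation for relative DNA copy number from
    qPCR cycle thresholds (comparative-Ct convention; not necessarily
    the model used in the study above)."""
    d_ct = ct_target - ct_reference           # normalise to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal     # same for the calibrator sample
    return 2 ** -(d_ct - d_ct_cal)

# a sample whose target amplifies one cycle earlier (relative to the
# calibrator) suggests roughly a two-fold copy-number gain
gain = relative_copy_number(24.0, 20.0, 25.0, 20.0)
```

Under this convention, values above 1 indicate amplification and values below 1 indicate deletion, matching the amplification/deletion language of the abstract.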