469 results for Probable Number Technique


Relevance:

20.00%

Publisher:

Abstract:

Number lines are part of our everyday life (e.g., thermometers, kitchen scales) and are frequently used in primary mathematics as instructional aids, in texts and for assessment purposes on mathematics tests. There are two major types of number lines: structured number lines, which are the focus of this paper, and empty number lines. Structured number lines represent mathematical information by the placement of marks on a horizontal or vertical line which has been marked into proportional segments (Figure 1). Empty number lines are blank lines which students can use for calculations (Figure 2) and are not discussed further here (see van den Heuvel-Panhuizen, 2008, on the role of empty number lines). In this article, we will focus on how students’ knowledge of the structured number line develops and how they become successful users of this mathematical tool.

Relevance:

20.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 μm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to an application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 μm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 μm for accurate and reliable pattern storage.
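
The non-contact temperature-measurement idea described above (counting intensity oscillations of a probe beam as the crystal birefringence changes with temperature) can be illustrated with a short sketch. The fringe-per-degree figure, wavelength and function names below are illustrative assumptions rather than values from the thesis; the underlying principle is simply that each full intensity oscillation corresponds to an additional 2π of accumulated phase difference between the ordinary and extraordinary beam components.

```python
import numpy as np

def temperature_change_from_intensity(intensity, wavelength, crystal_length, dbirefringence_dT):
    """Estimate temperature change from the transmitted-intensity trace of a beam
    passing through a birefringent crystal between crossed polarisers.

    Each full oscillation of the intensity corresponds to a 2*pi change in the
    phase difference, i.e. a temperature step of
        dT_per_fringe = wavelength / (crystal_length * dbirefringence_dT).
    All parameter values used here are illustrative assumptions.
    """
    # Count oscillations by counting zero crossings of the mean-subtracted signal;
    # two crossings correspond to one full fringe.
    centred = intensity - intensity.mean()
    crossings = np.sum(np.diff(np.sign(centred)) != 0)
    n_fringes = crossings / 2.0

    dT_per_fringe = wavelength / (crystal_length * dbirefringence_dT)
    return n_fringes * dT_per_fringe

# Example with synthetic data: 8 full oscillations of the probe intensity.
t = np.linspace(0.0, 1.0, 2000)
intensity = 0.5 * (1.0 + np.cos(2.0 * np.pi * 8 * t))
dT = temperature_change_from_intensity(
    intensity,
    wavelength=633e-9,        # HeNe probe wavelength, assumed
    crystal_length=10e-3,     # 10 mm crystal, assumed
    dbirefringence_dT=4e-5,   # d(delta n)/dT in 1/K, illustrative
)
print(f"estimated temperature change: {dT:.2f} K")
```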

Relevance:

20.00%

Publisher:

Abstract:

Acoustic emission (AE) is the phenomenon where high frequency stress waves are generated by the rapid release of energy within a material by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequent analysis of the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Some of its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. But several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can then mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the more advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
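
As a hedged illustration of the frequency-domain analysis mentioned above, the sketch below extracts two simple spectral features (peak frequency and spectral centroid) from an AE waveform; such features are typical inputs to pattern recognition algorithms for source discrimination, though the paper itself does not prescribe these particular features, and the sampling rate and burst model are assumptions.

```python
import numpy as np

def spectral_features(signal, sample_rate):
    """Compute simple frequency-domain features of an AE waveform.

    Returns the peak frequency and the spectral centroid (both in Hz); these are
    common inputs to pattern-recognition classifiers for source discrimination.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    peak_freq = freqs[np.argmax(spectrum)]
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return peak_freq, centroid

# Synthetic example: a burst-type AE signal sampled at 1 MHz (assumed rate).
fs = 1_000_000
t = np.arange(0, 1e-3, 1.0 / fs)
burst = np.exp(-t / 1e-4) * np.sin(2 * np.pi * 150e3 * t)   # decaying 150 kHz burst
print(spectral_features(burst, fs))
```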

Relevance:

20.00%

Publisher:

Abstract:

Prostate cancer is the second most common cause of cancer-related deaths in Western males. Current diagnostic, prognostic and treatment approaches are not ideal and advanced metastatic prostate cancer is incurable. There is an urgent need for improved adjunctive therapies and markers for this disease. G protein-coupled receptors (GPCRs) are likely to play a significant role in the initiation and progression of prostate cancer. Over the last decade, it has emerged that GPCRs are likely to function as homodimers and heterodimers. Heterodimerisation between GPCRs can result in the formation of novel pharmacological receptors with altered functional outcomes, and a number of GPCR heterodimers have been implicated in the pathogenesis of human disease. Importantly, novel GPCR heterodimers represent potential new targets for the development of more specific therapeutic drugs. Ghrelin is a 28 amino acid peptide hormone which has a unique n-octanoic acid post-translational modification. Ghrelin has a number of important physiological roles, including roles in appetite regulation and the stimulation of growth hormone release. The ghrelin receptor is the growth hormone secretagogue receptor type 1a, GHS-R1a, a seven transmembrane domain GPCR, and GHS-R1b is a C-terminally truncated isoform of the ghrelin receptor, consisting of five transmembrane domains. Growing evidence suggests that ghrelin and the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, may have a role in the progression of a number of cancers, including prostate cancer. Previous studies by our research group have shown that the truncated ghrelin receptor isoform, GHS-R1b, is not expressed in normal prostate; however, it is expressed in prostate cancer. The altered expression of this truncated isoform may reflect a difference between a normal and cancerous state. A number of mutant GPCRs have been shown to regulate the function of their corresponding wild-type receptors. Therefore, we investigated the potential role of interactions between GHS-R1a and GHS-R1b, which are co-expressed in prostate cancer, and aimed to investigate the function of this potentially new pharmacological receptor. In 2005, obestatin, a 23 amino acid C-terminally amidated peptide derived from preproghrelin, was identified and was described as opposing the stimulating effects of ghrelin on appetite and food intake. GPR39, an orphan GPCR which is closely related to the ghrelin receptor, was identified as the endogenous receptor for obestatin. Recently, however, the ability of obestatin to oppose the effects of ghrelin on appetite and food intake has been questioned, and furthermore, it appears that GPR39 may in fact not be the obestatin receptor. The role of GPR39 in the prostate is of interest, however, as it is a zinc receptor. Zinc has a unique role in the biology of the prostate, where it is normally accumulated at high levels, and zinc accumulation is altered in the development of prostate malignancy. Ghrelin and zinc have important roles in prostate cancer and dimerisation of their receptors may have novel roles in malignant prostate cells. The aim of the current study, therefore, was to demonstrate the formation of GHS-R1a/GHS-R1b and GHS-R1a/GPR39 heterodimers and to investigate potential functions of these heterodimers in prostate cancer cell lines. To demonstrate dimerisation, we first employed a classical co-immunoprecipitation technique.
Using cells co-overexpressing FLAG- and Myc-tagged GHS-R1a, GHS-R1b and GPR39, we were able to co-immunoprecipitate these receptors. Significantly, however, the receptors formed high molecular weight aggregates. A number of questions have been raised over the propensity of GPCRs to aggregate during co-immunoprecipitation as a result of their hydrophobic nature, and this may be misinterpreted as receptor dimerisation. As we observed significant receptor aggregation in this study, we used additional methods to confirm the specificity of these putative GPCR interactions. We used two different resonance energy transfer (RET) methods, bioluminescence resonance energy transfer (BRET) and fluorescence resonance energy transfer (FRET), to investigate interactions between the ghrelin receptor isoforms and GPR39. RET is the transfer of energy from a donor fluorophore to an acceptor fluorophore when they are in close proximity, and RET methods are, therefore, applicable to the observation of specific protein-protein interactions. Extensive studies using the second generation bioluminescence resonance energy transfer (BRET2) technology were performed; however, a number of technical limitations were observed. The substrate used during BRET2 studies, coelenterazine 400a, has a low quantum yield and rapid signal decay. This study highlighted the requirement for the expression of donor- and acceptor-tagged receptors at high levels so that a BRET ratio can be determined. After performing a number of BRET2 experimental controls, our BRET2 data did not fit the predicted results for a specific interaction between these receptors. The interactions that we observed may in fact represent ‘bystander BRET’ resulting from high levels of expression, forcing the donor and acceptor into close proximity. Our FRET studies employed two different FRET techniques, acceptor photobleaching FRET and sensitised emission FRET measured by flow cytometry. We were unable to observe any significant FRET, or FRET values that were likely to result from specific receptor dimerisation between GHS-R1a, GHS-R1b and GPR39. While we were unable to conclusively demonstrate direct dimerisation between GHS-R1a, GHS-R1b and GPR39 using several methods, our findings do not exclude the possibility that these receptors interact. We aimed to investigate whether co-expression of combinations of these receptors had functional effects in prostate cancer cells. It has previously been demonstrated that ghrelin stimulates cell proliferation in prostate cancer cell lines, through ERK1/2 activation, and that GPR39 can stimulate ERK1/2 signalling in response to zinc treatments. Additionally, both GHS-R1a and GPR39 display a high level of constitutive signalling, and these constitutively active receptors can attenuate apoptosis when overexpressed individually in some cell types. We, therefore, investigated ERK1/2 and AKT signalling and cell survival in prostate cancer cells, and the potential modulation of these functions by dimerisation between GHS-R1a, GHS-R1b and GPR39. Expression of these receptors in the PC-3 prostate cancer cell line, either alone or in combination, did not alter constitutive ERK1/2 or AKT signalling, basal apoptosis or tunicamycin-stimulated apoptosis, compared to controls. In summary, the potential interactions between the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, and the related zinc receptor, GPR39, and the potential for functional outcomes in prostate cancer were investigated using a number of independent methods.
We did not definitively demonstrate the formation of these dimers using a number of state-of-the-art methods for directly detecting receptor-receptor interactions. We investigated a number of potential functions of GPR39 and GHS-R1a in the prostate and did not observe altered function in response to co-expression of these receptors. The technical questions raised by this study highlight the requirement for the application of extensive controls when using current methods for the demonstration of GPCR dimerisation. Similar findings in this field reflect the current controversy surrounding the investigation of GPCR dimerisation. Although GHS-R1a/GHS-R1b or GHS-R1a/GPR39 heterodimerisation was not clearly demonstrated, this study provides a basis for future investigations of these receptors in prostate cancer. Additionally, the results presented in this study and growing evidence in the literature highlight the requirement for an extensive understanding of the experimental method and the performance of a range of controls to avoid the spurious interpretation of data gained from artificial expression systems. The future development of more robust techniques for investigating GPCR dimerisation is clearly required and will enable us to elucidate whether GHS-R1a, GHS-R1b and GPR39 form physiologically relevant dimers.
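
The acceptor-photobleaching FRET measurements mentioned above rest on a standard relation: bleaching the acceptor removes the energy-transfer pathway, so the donor fluorescence increases, and the apparent FRET efficiency is E = 1 − (donor intensity before bleach / donor intensity after bleach). The sketch below applies this relation per region of interest; the intensity values and function name are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def apparent_fret_efficiency(donor_pre_bleach, donor_post_bleach):
    """Apparent FRET efficiency from acceptor-photobleaching measurements.

    donor_pre_bleach / donor_post_bleach: background-subtracted donor intensities
    for each region of interest, before and after bleaching the acceptor.
    E = 1 - I_pre / I_post; values near zero indicate no detectable energy
    transfer (i.e. no evidence of dimerisation).
    """
    pre = np.asarray(donor_pre_bleach, dtype=float)
    post = np.asarray(donor_post_bleach, dtype=float)
    return 1.0 - pre / post

# Illustrative intensities for five regions of interest (arbitrary units).
pre = [102.0, 98.5, 110.2, 95.0, 101.3]
post = [104.1, 99.0, 112.0, 96.2, 101.0]
print(apparent_fret_efficiency(pre, post))   # values close to 0 => no specific FRET
```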

Relevance:

20.00%

Publisher:

Abstract:

Computational Fluid Dynamics (CFD) has become an important tool in optimisation and has been applied successfully in many real-world applications. Most important among these is the optimisation of aerodynamic surfaces, which has become Multi-Objective (MO) and Multidisciplinary (MDO) in nature. Most of these optimisations have been carried out for a given set of input parameters, such as free stream Mach number and angle of attack. One cannot ignore the fact that in aerospace engineering one frequently deals with situations where the design input parameters and flight/flow conditions have some amount of uncertainty attached to them. When the optimisation is carried out for fixed values of the design variables and parameters, however, one arrives at an optimised solution that performs well at the design condition but exhibits poor drag or lift-to-drag ratio at slightly off-design conditions. The challenge remains to develop a robust design methodology that accounts for uncertainty in aerospace applications. In this paper this issue is taken up and an attempt is made to prevent the fluctuation of objective performance by using a robust design technique that treats uncertainty explicitly.
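
A minimal sketch of the robust-design idea described above: instead of evaluating a candidate design only at the nominal Mach number and angle of attack, the objective is averaged (and penalised for spread) over samples drawn from assumed uncertainty distributions. The analytic surrogate objective, distribution parameters and function names below are illustrative placeholders for a real CFD evaluation, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def drag_objective(design, mach, alpha):
    """Placeholder for a CFD evaluation of drag (illustrative analytic surrogate)."""
    return (design - 0.5) ** 2 + 0.1 * (mach - 0.8) ** 2 + 0.05 * (alpha - 2.0) ** 2

def robust_objective(design, n_samples=64, weight=1.0):
    """Mean-plus-spread objective over uncertain flight conditions.

    Mach number and angle of attack are sampled around their nominal values
    (assumed standard deviations); minimising mean + weight * std favours designs
    whose performance degrades little at off-design conditions.
    """
    mach = rng.normal(0.8, 0.02, n_samples)    # assumed uncertainty in Mach number
    alpha = rng.normal(2.0, 0.25, n_samples)   # assumed uncertainty in AoA (degrees)
    values = drag_objective(design, mach, alpha)
    return values.mean() + weight * values.std()

# Compare a deterministic evaluation with the robust one for a candidate design.
print(drag_objective(0.55, 0.8, 2.0), robust_objective(0.55))
```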

Relevance:

20.00%

Publisher:

Abstract:

For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: firstly, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS, by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework are proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing for a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD by improving the utility of such systems in a wide range of applications.
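
The approximate phone-sequence matching at the heart of the DMLS approach can be pictured as a cost-weighted edit distance between the query's phone sequence and phone sequences stored in the index. The sketch below is a generic dynamic-programming version with an assumed substitution-cost table; it is not the thesis's actual cost model, which is derived from phone recogniser error statistics.

```python
def weighted_phone_edit_distance(query, indexed, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Minimum-cost alignment of two phone sequences.

    sub_cost(a, b) returns the cost of substituting phone a with phone b
    (0 for a match); in a DMLS-style system this would be derived from phone
    recognition confusion statistics rather than the fixed values used here.
    """
    m, n = len(query), len(indexed)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * del_cost
    for j in range(1, n + 1):
        d[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost,                           # delete query phone
                d[i][j - 1] + ins_cost,                           # insert indexed phone
                d[i - 1][j - 1] + sub_cost(query[i - 1], indexed[j - 1]),
            )
    return d[m][n]

# Illustrative cost model: confusable phone pairs are cheaper to substitute.
CONFUSABLE = {("p", "b"), ("t", "d"), ("s", "z")}
def sub_cost(a, b):
    if a == b:
        return 0.0
    return 0.3 if (a, b) in CONFUSABLE or (b, a) in CONFUSABLE else 1.0

print(weighted_phone_edit_distance(["k", "a", "t"], ["k", "a", "d"], sub_cost))  # 0.3
```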

Relevance:

20.00%

Publisher:

Abstract:

Rice tungro bacilliform virus (RTBV) is one of the two viruses that cause tungro disease. Four RTBV strains maintained in the greenhouse for 4 years, G1, G2, Ic, and L, were differentiated by restriction fragment length polymorphism (RFLP) analysis of the native viral DNA. Although strains G1 and Ic had identical restriction patterns when cleaved with PstI, BamHI, EcoRI, and EcoRV, they could be differentiated from strains G2 and L by EcoRI and EcoRV digestion. These same endonucleases also differentiate strain G2 from strain L. When total DNA extracts from infected plants were used instead of viral DNA, and digested with EcoRV, identical restriction patterns for each strain (G2 and L) were obtained from roots, leaves, and leaf sheaths of infected plants. The restriction patterns were consistent from plant to plant, in different varieties, and at different times after inoculation. This technique can be used to differentiate RTBV strains and determine the variability of a large number of field samples.

Relevance:

20.00%

Publisher:

Abstract:

This paper formulates an analytically tractable problem for the wake generated by a long flat bottom ship by considering the steady free surface flow of an inviscid, incompressible fluid emerging from beneath a semi-infinite rigid plate. The flow is considered to be irrotational and two-dimensional so that classical potential flow methods can be exploited. In addition, it is assumed that the draft of the plate is small compared to the depth of the channel. The linearised problem is solved exactly using a Fourier transform and the Wiener-Hopf technique, and it is shown that there is a family of subcritical solutions characterised by a train of sinusoidal waves on the downstream free surface. The amplitude of these waves decreases as the Froude number increases. Supercritical solutions are also obtained, but, in general, these have infinite vertical velocities at the trailing edge of the plate. Consideration of further terms in the expansions suggests a way of canceling the singularity for certain values of the Froude number.
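
For orientation, the wavenumber of the downstream wavetrain in problems of this type follows from the standard linear dispersion relation for steady waves on a uniform stream; this is a textbook relation, not a result quoted from the paper. With upstream speed $U$, channel depth $H$ and Froude number $F = U/\sqrt{gH}$, a sinusoidal wavetrain of wavenumber $k$ that is stationary relative to the plate must satisfy

$$F^2 = \frac{\tanh(kH)}{kH},$$

which admits a real positive root $kH$ only in the subcritical regime $F < 1$, consistent with the family of subcritical wave-bearing solutions described above.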

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a multiscale study using the coupled Meshless technique/Molecular Dynamics (M2) method for exploring the deformation mechanism of mono-crystalline metal (with a focus on copper) under uniaxial tension. In M2, an advanced transition algorithm using transition particles is employed to ensure the compatibility of both displacements and their gradients, and an effective local quasi-continuum approach is also applied to obtain the equivalent continuum strain energy density based on the atomistic potentials and the Cauchy-Born rule. The key parameters used in M2 are first investigated using a benchmark problem. M2 is then applied to the multiscale simulation of a mono-crystalline copper bar. It is found that mono-crystalline copper has very good elongation properties, and that its ultimate strength and Young's modulus are much higher than those obtained at the macro-scale.
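
A minimal sketch of the Cauchy-Born idea used in the local quasi-continuum step: the continuum strain energy density at a point is obtained by deforming the bond vectors of a representative atom with the local deformation gradient F and summing the atomistic bond energies over the atomic volume. The Lennard-Jones pair potential and lattice constant below are illustrative stand-ins; a realistic copper model would typically use an EAM-type potential, and the function names are assumptions.

```python
import numpy as np

def fcc_nearest_neighbours(a):
    """Bond vectors to the 12 nearest neighbours of an FCC atom (lattice constant a)."""
    h = a / 2.0
    vecs = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:          # the two non-zero Cartesian components
        for si in (+1.0, -1.0):
            for sj in (+1.0, -1.0):
                v = [0.0, 0.0, 0.0]
                v[i], v[j] = si * h, sj * h
                vecs.append(v)
    return np.array(vecs)

def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential (illustrative stand-in, not a fitted Cu potential)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def cauchy_born_energy_density(F, a=1.587):
    """Strain energy density W(F) via the Cauchy-Born rule for an FCC crystal.

    Reference bond vectors are mapped to F @ r; bond energies are halved (each
    bond is shared by two atoms) and normalised by the atomic volume a**3 / 4.
    The default lattice constant puts the neighbour spacing near the LJ minimum.
    """
    bonds = fcc_nearest_neighbours(a)
    deformed = bonds @ F.T
    w = 0.5 * sum(lj(np.linalg.norm(r)) for r in deformed)
    w0 = 0.5 * sum(lj(np.linalg.norm(r)) for r in bonds)
    return (w - w0) / (a ** 3 / 4.0)

# Uniaxial stretch of 1% along x, as in a tension simulation.
F = np.diag([1.01, 1.0, 1.0])
print(cauchy_born_energy_density(F))
```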

Relevance:

20.00%

Publisher:

Abstract:

Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined that the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents. Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t199=10.77, p=0.000), while the differences between male and female residents were not significant (t414=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. It is therefore recommended that the ResCareQA be adopted for wider use.
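
The inter-rater and test-retest reliability figures quoted above are kappa statistics. As a brief, hedged illustration of the underlying calculation (the ratings below are invented, not the study's data), Cohen's kappa compares the agreement observed between two raters with the agreement expected by chance alone.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical ratings to the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(counts_a) | set(counts_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Illustrative ratings (e.g. presence/absence of a clinical indicator) from two assessors.
rater_1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "yes"]
rater_2 = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "no"]
print(round(cohens_kappa(rater_1, rater_2), 2))   # 0.6 for this toy example
```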

Relevance:

20.00%

Publisher:

Abstract:

Personalised social matching systems can be seen as recommender systems that recommend people to others in social networks. However, with the rapid growth of users in social networks and the amount of information that a social matching system requires about its users, recommender system techniques alone have become insufficient for matching users in social networks. This paper presents a hybrid social matching system that takes advantage of both collaborative and content-based concepts of recommendation. A clustering technique is used to reduce the number of users that the matching system needs to consider and to overcome other problems from which social matching systems suffer, such as the cold start problem caused by the absence of implicit information about a new user. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process is increased by using both user information (explicit data) and user behavior (implicit data).
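
As a hedged sketch of the cluster-then-match idea described above, the code below groups users with k-means on their profile feature vectors and then ranks candidate matches only within the target user's cluster. The feature encoding, library choice (scikit-learn is assumed available) and similarity measure are illustrative, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def rank_matches(features, target_index, n_clusters=3):
    """Rank candidate matches for one user, restricted to that user's cluster.

    features: (n_users, n_features) array combining explicit profile data and
    implicit behavioural data (the encoding is an assumption for illustration).
    Clustering shrinks the candidate pool the matcher has to score.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    labels = km.labels_
    cluster = labels[target_index]
    candidates = [i for i in np.where(labels == cluster)[0] if i != target_index]

    target = features[target_index]
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    return sorted(candidates, key=lambda i: cosine(target, features[i]), reverse=True)

# Toy example: 8 users described by 4 normalised profile/behaviour features.
rng = np.random.default_rng(1)
users = rng.random((8, 4))
print(rank_matches(users, target_index=0))
```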

Relevance:

20.00%

Publisher:

Abstract:

One of the research focuses in the integer least squares problem is the decorrelation technique, which reduces the number of integer parameter search candidates and improves the efficiency of the integer parameter search method. It remains a challenging issue in determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, there are three main decorrelation techniques in use: the integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proved and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be directly utilized as a performance measure, as it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always represent a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting system constraints. The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint respectively. The real data scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly reduce the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single-constellation and dual-constellation cases.
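
To make the condition-number criterion mentioned above concrete, the short sketch below applies a single integer (unimodular) decorrelation step to a toy 2x2 ambiguity covariance matrix and compares the condition numbers before and after. The matrix and transform are illustrative and far simpler than the LAMBDA/MIICD procedures evaluated in the paper.

```python
import numpy as np

# Toy ambiguity covariance matrix with strong correlation between the two ambiguities.
Q = np.array([[4.0, 3.8],
              [3.8, 4.0]])

# One integer Gaussian decorrelation step: subtract round(q12/q22) = 1 times the
# second ambiguity from the first. Z is unimodular, so integer ambiguities stay integer.
mu = int(round(Q[0, 1] / Q[1, 1]))
Z = np.array([[1, 0],
              [-mu, 1]])

Q_decorrelated = Z.T @ Q @ Z

print("condition number before:", np.linalg.cond(Q))               # ~39
print("condition number after: ", np.linalg.cond(Q_decorrelated))  # ~10
```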

Relevance:

20.00%

Publisher:

Abstract:

Bridges are valuable assets of every nation. They deteriorate with age and are often subjected to additional loads or different load patterns than originally designed for. These changes in loads can cause localized distress and may result in bridge failure if not corrected in time. Early detection of damage and appropriate retrofitting will aid in preventing bridge failures. Large amounts of money are spent on bridge maintenance all around the world. A need exists for a reliable technology capable of monitoring the structural health of bridges, thereby ensuring they operate safely and efficiently throughout their intended lives. Monitoring of bridges has traditionally been done by means of visual inspection. Visual inspection alone is not capable of locating and identifying all signs of damage; hence a variety of structural health monitoring (SHM) techniques are now used regularly to monitor performance and to assess the condition of bridges for early damage detection. Acoustic emission (AE) is one technique that is finding increasing use in SHM applications for bridges all around the world. The chapter starts with a brief introduction to structural health monitoring and the techniques commonly used for monitoring purposes. The acoustic emission technique, the wave nature of the AE phenomenon, previous applications, and the limitations and challenges of its use as an SHM technique are also discussed. The scope of the project and the work carried out are explained, followed by some recommendations for work planned in the future.

Relevance:

20.00%

Publisher:

Abstract:

As order dependencies between process tasks can get complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
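
A minimal, generic sketch of the simulated-annealing loop underlying the technique described above: candidate models are perturbed by small random edits and accepted with a temperature-dependent probability, so the search can escape local minima while driving the number of detected behavioral errors towards zero. The edit operations and the error-counting function are placeholders, not the paper's actual implementation; in the paper's setting the error count would presumably come from an automated soundness check.

```python
import math
import random

def simulated_annealing(initial_model, count_errors, random_edit,
                        initial_temp=1.0, cooling=0.95, steps=1000):
    """Search for an error-free variant of a process model.

    count_errors(model) -> number of behavioral errors (e.g. from a soundness check).
    random_edit(model)  -> a new candidate model with one small modification applied.
    Both are placeholders for the domain-specific operations used in the paper.
    """
    current, current_cost = initial_model, count_errors(initial_model)
    best, best_cost = current, current_cost
    temp = initial_temp
    for _ in range(steps):
        candidate = random_edit(current)
        cost = count_errors(candidate)
        # Always accept improvements; accept worse candidates with probability
        # exp(-delta / temp), which shrinks as the temperature cools.
        if cost <= current_cost or random.random() < math.exp((current_cost - cost) / temp):
            current, current_cost = candidate, cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        if best_cost == 0:            # a sound alternative has been found
            break
        temp *= cooling
    return best, best_cost
```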