905 results for "Two-stage stochastic model"


Relevance:

100.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The areal density (AD) of magnetic hard disk drives has increased by a factor of roughly 100 million over the past 50 years, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits; the main limitation relates to the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher-density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit by bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one whose refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in such materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). First, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. The ionic grating is insensitive to the readout beam, so the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, so by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to situations where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously; this technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The recovery is possible because the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest-quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature-size limit of 150 µm for accurate and reliable pattern storage.
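The core of the FD-BPM described above can be illustrated with a short numerical sketch: a Crank-Nicolson finite-difference step for the paraxial wave equation propagates a scalar field through a medium carrying a transverse refractive index perturbation. All parameter values (wavelength, grid, beam width) are illustrative assumptions, not the thesis' settings, and the photorefractive index change dn is left as a static placeholder.

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative parameters (assumed, not the thesis' values)
wl = 532e-9                    # write-beam wavelength [m]
n0 = 2.2                       # approximate background index of LiNbO3
k0 = 2 * np.pi / wl
Nx, dx = 512, 0.5e-6           # transverse samples and spacing [m]
dz, Nz = 1.0e-6, 2000          # longitudinal step [m] and step count

x = (np.arange(Nx) - Nx // 2) * dx
E = np.exp(-(x / 20e-6) ** 2).astype(complex)  # Gaussian input beam
dn = np.zeros(Nx)              # photorefractive index change (placeholder)

# Crank-Nicolson FD-BPM for dE/dz = (i/(2 k0 n0)) d2E/dx2 + i k0 dn E
alpha = 1j * dz / (4 * k0 * n0 * dx ** 2)      # half-step diffraction weight
beta = 0.5j * k0 * dn * dz                     # half-step index weight

ab = np.zeros((3, Nx), complex)                # banded LHS matrix (I - dz/2 L)
ab[0, 1:] = -alpha                             # superdiagonal
ab[1, :] = 1 + 2 * alpha - beta                # main diagonal
ab[2, :-1] = -alpha                            # subdiagonal

for _ in range(Nz):
    Ep = np.concatenate((E[1:], [0.0]))        # E_{j+1}, zero boundary
    Em = np.concatenate(([0.0], E[:-1]))       # E_{j-1}, zero boundary
    rhs = alpha * (Ep + Em) + (1 - 2 * alpha + beta) * E
    E = solve_banded((1, 1), ab, rhs)          # implicit half of the step
```

In a full simulation, dn would be updated from the local write-beam intensity after each step; it is that feedback, with the write beam scattering off the structures it has itself induced, that produces the stripe-splitting degradation described above.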

Relevance:

100.00%

Publisher:

Abstract:

As a result of a broad invitation extended by Professor Martin Betts, Executive Dean of the Faculty of Built Environment and Engineering, to the community of interest at QUT, a cross-disciplinary collaborative workshop was conducted to contribute ideas about responding to the Government of India's urgent requirement to implement a program to re-house slum dwellers. This is a complex problem facing the Indian Ministry of Housing. Not only does the government aspire to eradicate existing slum conditions and to achieve tangible results within five years, but it must also ensure that slums do not form in the future. The workshop focused on technological innovation in construction to deliver transformation from the current unsanitary and overcrowded informal urban settlements to places that provide the economically weaker sections of Indian society with healthy, environmentally sustainable, economically viable mass housing that supports successful urban living. The workshop was conducted as a two-part process. Initially, QUT academics from diverse fields shared current research and provided technical background to contextualise the challenge at a pre-workshop briefing session. This was followed by a one-day workshop during which participants worked intensively in multi-disciplinary groups through a series of exercises to develop innovative approaches to the complex problem of slum redevelopment. Dynamic, compressed work sessions, interspersed with cross-functional review and feedback by the whole group, took place throughout the day. Reviews emphasised testing the concepts for their level of complexity and likelihood of success. The two-stage workshop process achieved several objectives:
- Inspired a sense of shared purpose amongst a diverse group of academics
- Built participants' knowledge of each other's capacity
- Engaged a multi-disciplinary team in an innovative design research process
- Built participants' confidence in the collaborative process
- Demonstrated that collaborative problem solving can create solutions that represent transformative change
- Developed a framework for how workable solutions might be developed for the program through follow-up workshops and charrettes of a similar nature involving stakeholders drawn from the context of the slum housing program management.

Relevance:

100.00%

Publisher:

Abstract:

We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference. ABC methods are useful for posterior inference in the presence of an intractable likelihood function. In the indirect inference approach to ABC, the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning. This methodological development was motivated by an application involving data on macroparasite population evolution, modelled by a trivariate stochastic process for which there is no tractable likelihood function. The auxiliary model here is based on a beta-binomial distribution. The main objective of the analysis is to determine which parameters of the stochastic model are estimable from the observed data on mature parasite worms.
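As a rough sketch of the indirect inference idea, the snippet below uses method-of-moments estimates of a beta-binomial auxiliary model as summary statistics inside a plain rejection-ABC loop. The simulator, prior, and tolerance are stand-ins invented for illustration (the real model is the trivariate macroparasite process), and the paper embeds these summaries in an adaptive sequential Monte Carlo sampler rather than rejection sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_auxiliary(counts, n_trials):
    """Method-of-moments fit of a beta-binomial auxiliary model;
    the fitted (a, b) act as the ABC summary statistics."""
    p = counts / n_trials
    m, v = p.mean(), p.var()
    ab_sum = m * (1 - m) / max(v, 1e-12) - 1   # estimate of a + b
    return np.array([m * ab_sum, (1 - m) * ab_sum])

def simulator(theta, n_hosts=200, n_trials=50):
    """Stand-in simulator so the sketch runs end to end; the paper's
    model is a trivariate stochastic process with no tractable likelihood."""
    a, b = np.exp(theta)                        # positivity via log scale
    return rng.binomial(n_trials, rng.beta(a, b, n_hosts)), n_trials

obs_counts, nt = simulator(np.log([2.0, 5.0]))  # synthetic "observed" data
s_obs = fit_auxiliary(obs_counts, nt)

accepted = []
for _ in range(20000):
    theta = rng.normal(0.0, 2.0, size=2)        # prior on log(a), log(b)
    s_sim = fit_auxiliary(*simulator(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.5:     # tolerance epsilon
        accepted.append(theta)
posterior_draws = np.array(accepted)            # approximate posterior sample
```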

Relevance:

100.00%

Publisher:

Abstract:

We describe a novel two-stage approach to object localization and tracking using a network of wireless cameras and a mobile robot. In the first stage, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this information, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to track the objects. We present results with a nine-node indoor camera network to demonstrate that this approach is feasible and offers an acceptable level of accuracy in terms of object locations.
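One standard way to realise the image-plane-to-world mapping each camera computes is a planar homography estimated from robot correspondences via the direct linear transform, assuming tracked objects move on the ground plane. The paper does not state this exact formulation, so the sketch below is an illustrative reconstruction.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image points to ground-plane
    world points (DLT). img_pts, world_pts: (N, 2) arrays, N >= 4, built
    from the robot's broadcast positions and its detected image locations."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)        # null vector = flattened H

def image_to_world(H, u, v):
    """Map an image-plane detection to global coordinates."""
    w = H @ np.array([u, v, 1.0])
    return w[:2] / w[2]                # dehomogenise
```

With H in hand, each camera can place its detections in the global frame, where they can be checked against the robot's occupancy map as described above.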

Relevance:

100.00%

Publisher:

Abstract:

Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify pests that are harmful and those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The value of our approach is illustrated in a case study comparing several scoring systems. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that the multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
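A minimal version of the simulation idea is sketched below: virtual pests are generated from an assumed latent model, items are scored on an ordinal scale, and sensitivity and specificity are estimated for sum-based and multiplication-based score combinations. The generating model and threshold rule are invented for illustration and are much simpler than the paper's pest introduction model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_assessments(n=10000, n_items=5, scale_pts=5, p_harmful=0.3):
    """Simulate virtual pests; harmful ones tend to score higher on
    each questionnaire item (assumed generating model)."""
    harmful = rng.random(n) < p_harmful
    mu = np.where(harmful, 0.65, 0.35)[:, None]        # latent propensity
    latent = rng.beta(mu * 6, (1 - mu) * 6, (n, n_items))
    scores = np.ceil(latent * scale_pts).astype(int)   # ordinal 1..scale_pts
    return scores, harmful

def sens_spec(decision, harmful):
    """Sensitivity and specificity of a binary 'risky' decision."""
    return decision[harmful].mean(), (~decision[~harmful]).mean()

scores, harmful = simulate_assessments()
for name, combined in [("sum", scores.sum(1)), ("product", scores.prod(1))]:
    decision = combined > np.median(combined)          # simple threshold
    print(name, "sens/spec: %.2f / %.2f" % sens_spec(decision, harmful))
```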

Relevance:

100.00%

Publisher:

Abstract:

Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or 'burn criterion'. The burn criterion in use at the onset of this project was arbitrarily selected; it did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event and, in some cases, required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C, and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independently of the igniter and should be considered flammable. The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The extent of HAZ predicted varied significantly, ranging from ~2-27 mm depending on the test sample thermal properties and test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling the conduct of reduced gravity metallic material flammability testing in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e.
Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications will typically use normal gravity information in order to qualify systems and/or components for reduced gravity use; this is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity. The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in both terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal and reduced gravity. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample, the fuel) and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50%, from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity, for the Inconel® 718, and by 25%, from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity, for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications of these results for fire safety and engineering applications are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
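To make the HAZ calculation concrete, the sketch below solves a 1-D transient conduction problem for a rod heated at one end by a constant igniter flux and reports the distance over which the temperature rise reaches the 250 °C threshold. This is a deliberately simplified stand-in for the thesis' two-dimensional model, and every material and igniter value is an assumed, illustrative number.

```python
import numpy as np

# Explicit 1-D finite-difference conduction along a test rod
L, N = 0.05, 500                 # rod length [m] and node count
dx = L / N
alpha = 4.0e-6                   # thermal diffusivity [m^2/s] (assumed)
k = 15.0                         # conductivity [W/m/K] (assumed)
q = 2.0e7                        # igniter heat flux [W/m^2] (assumed)
t_ign = 5.0                      # ignition event duration [s] (assumed)
dt = 0.4 * dx ** 2 / alpha       # below the explicit stability limit

T = np.zeros(N)                  # temperature rise above ambient [K]
t = 0.0
while t < t_ign:
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q * dx / k   # constant-flux boundary at igniter end
    T, t = Tn, t + dt

haz = dx * np.argmax(T < 250.0)  # first node below the 250 K rise
print(f"HAZ extent ~ {haz * 1e3:.1f} mm")
```

With these assumed values the predicted extent falls in the low tens of millimetres, the same order as the ~2-27 mm range reported above; in the full model the extent shifts with thermal diffusivity and with ignition time (and hence pressure), as described.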

Relevance:

100.00%

Publisher:

Abstract:

Travel in passenger cars is a ubiquitous aspect of the daily activities of many people. During the 2009 influenza A (H1N1) pandemic, a case of probable transmission during car travel was reported in Australia, to which spread via the airborne route may have contributed. However, there are no data to indicate the likely risks of such events, and how they may vary and be mitigated. To address this knowledge gap, we estimated the risk of airborne influenza transmission in two cars (a 1989 model and a 2005 model) by employing ventilation measurements and a variation of the Wells-Riley model. Results suggested that infection risk can be reduced by not recirculating air; however, estimated risk ranged from 59% to 99.9% for a 90 min trip when air was recirculated in the newer vehicle. These results have implications for interrupting in-car transmission of other illnesses spread by the airborne route.
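The underlying Wells-Riley relation is P = 1 − exp(−Iqpt/Q), with I infectors, quanta generation rate q, susceptible breathing rate p, exposure time t, and outdoor-air ventilation rate Q. The sketch below evaluates it for a 90-minute trip; the quanta and ventilation values are illustrative assumptions, not the ventilation rates measured in the study.

```python
import numpy as np

def wells_riley_risk(q, p, t, Q, I=1):
    """Wells-Riley probability of infection: P = 1 - exp(-I q p t / Q).
    q: quanta/h, p: breathing rate m^3/h, t: hours, Q: outdoor air m^3/h."""
    return 1.0 - np.exp(-I * q * p * t / Q)

q = 67.0    # quanta generation rate for influenza (assumed estimate)
p = 0.5     # breathing rate of a susceptible occupant [m^3/h]
t = 1.5     # 90-minute trip
for label, Q in [("recirculated air", 1.0), ("fresh-air intake", 60.0)]:
    print(f"{label}: risk = {100 * wells_riley_risk(q, p, t, Q):.1f}%")
```

With recirculation, the effective outdoor-air exchange is very small, which is what drives the near-certain infection estimates for long trips reported above.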

Relevance:

100.00%

Publisher:

Abstract:

It is accepted that the efficiency of sugar cane clarification is closely linked with sugar juice composition (including suspended or insoluble impurities), the inorganic phosphate content, the liming condition and type, and the interactions between the juice components. These interactions are not well understood, particularly those between calcium, phosphate, and sucrose in sugar cane juice. Studies have been conducted on calcium oxide (CaO)/phosphate/sucrose systems in both synthetic and factory juices to provide further information on the defecation process (i.e., simple liming to effect impurity removal) and to identify an effective clarification process that would result in reduced scaling of sugar factory evaporators, pans, and centrifugals. Results have shown that a two-stage process involving the addition of lime saccharate to a set juice pH followed by the addition of sodium hydroxide to a final juice pH, or a similar two-stage process in which the order of addition of the alkalis is reversed, carried out prior to clarification, reduces the impurity loading of the clarified juice compared to that of the clarified juice obtained by the conventional defecation process. The treatment process showed reductions in CaO (27% to 50%) and MgO (up to 20%) in clarified juices, with no apparent loss in juice clarity or increase in residence time of the mud particles compared to those in the conventional process. There was also a reduction in the SiO2 content. However, the disadvantage of this process is a significant increase in the Na2O content.

Relevance:

100.00%

Publisher:

Abstract:

A multiple reaction monitoring (MRM) mass spectrometric assay for the quantification of PYY in human plasma has been developed. A two-stage sample preparation protocol was employed in which plasma containing the full-length neuropeptide was first digested using trypsin, followed by solid-phase extraction to isolate the digested peptide from the complex plasma matrix. The peptide extracts were analysed by LC-MS using multiple reaction monitoring to detect and quantify PYY. The method has been validated for plasma samples, yielding linear responses over the range 5–1,000 ng mL⁻¹. The method is rapid, robust and specific for plasma PYY detection.
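Quantification from such an assay typically proceeds via a least-squares calibration line over the validated range; the sketch below illustrates this with invented peak-area ratios (the study's actual calibration data are not reproduced here).

```python
import numpy as np

# Calibration standards across the validated 5-1,000 ng/mL range
conc = np.array([5, 10, 50, 100, 500, 1000], float)      # ng/mL
area = np.array([0.021, 0.044, 0.22, 0.43, 2.18, 4.35])  # invented ratios

slope, intercept = np.polyfit(conc, area, 1)             # least squares
r2 = np.corrcoef(conc, area)[0, 1] ** 2

def quantify(sample_area):
    """Back-calculate a plasma PYY concentration from a measured ratio."""
    return (sample_area - intercept) / slope

print(f"r^2 = {r2:.4f}; unknown ~ {quantify(1.10):.0f} ng/mL")
```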

Relevance:

100.00%

Publisher:

Abstract:

Several track-before-detect approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm which involves a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images, or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics when compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false alarm characteristics after both processing stages, illustrate that system performance is very data-dependent.
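The spatial filter stage can be sketched with greyscale morphology: the close-minus-open (CMO) operation mentioned above is the difference between a grey closing and a grey opening, and one simple reinforcement is to fuse the raw frame with its CMO response. The pixel-wise product fusion and structuring-element size below are assumptions for illustration; the paper evaluates several fusion variants.

```python
import numpy as np
from scipy import ndimage

def close_minus_open(img, size=5):
    """Grey-scale close-minus-open filter: emphasises small bright and
    dark features while suppressing smooth background structure."""
    closed = ndimage.grey_closing(img, size=(size, size))
    opened = ndimage.grey_opening(img, size=(size, size))
    return closed - opened

def reinforced(img, size=5):
    """One possible morphological reinforcement: pixel-wise fusion of
    the raw image with its CMO response (assumed fusion rule)."""
    return img.astype(float) * close_minus_open(img, size)

# Usage on a synthetic frame containing a dim point target
frame = np.random.default_rng(0).normal(100.0, 5.0, (240, 320))
frame[120, 160] += 25.0                  # small target above background
response = reinforced(frame)             # feed this to the track filter stage
```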

Relevance:

100.00%

Publisher:

Abstract:

Genome-wide association studies (GWAS) have identified more than 30 prostate cancer (PrCa) susceptibility loci. One of these (rs2735839) is located close to a plausible candidate susceptibility gene, KLK3, which encodes prostate-specific antigen (PSA). PSA is widely used as a biomarker for PrCa detection and disease monitoring. To refine the association between PrCa and variants in this region, we used genotyping data from a two-stage GWAS using samples from the UK and Australia, and from the Cancer Genetic Markers of Susceptibility (CGEMS) study. Genotypes were imputed for 197 and 312 single nucleotide polymorphisms (SNPs) from HapMap2 and the 1000 Genomes Project, respectively. The most significant association with PrCa was with a previously unidentified SNP, rs17632542 (combined P = 3.9 × 10⁻²²). This association was confirmed by direct genotyping in three stages of the UK/Australian GWAS, involving 10,405 cases and 10,681 controls (combined P = 1.9 × 10⁻³⁴). rs17632542 is also associated with PSA levels and is a non-synonymous coding SNP (Ile179Thr) in KLK3. Using molecular dynamics simulation, we showed evidence that this variant has the potential to introduce alterations in the protein or affect RNA splicing. We propose that rs17632542 may directly influence PrCa risk.
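For a single genotyped SNP, the basic association signal can be checked with an allelic 2x2 test, sketched below with invented allele counts scaled to the study's sample sizes; the actual analysis used imputation and combined tests across the GWAS stages, so this is only a schematic of the per-SNP step.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Allele counts (invented for illustration); rows: cases / controls,
# columns: minor / major allele. 10,405 cases -> 20,810 alleles, etc.
table = np.array([[4200, 16610],
                  [3600, 17762]])

chi2, p, _, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"allelic OR = {odds_ratio:.2f}, P = {p:.1e}")
```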

Relevance:

100.00%

Publisher:

Abstract:

Experiments on atmospheric two-stage fluidized bed drying of bovine intestines with a heat pump were carried out. The investigation covers innovative fluidized bed heat pump drying of bovine intestines, in which the two-stage drying consists of atmospheric moisture sublimation immediately followed by evaporation. Studies were conducted to establish the influence of the drying conditions on the drying characteristics and product quality of bovine intestines, focusing on kinetics, diffusion, and color. The drying characteristics were investigated during moisture removal by evaporation and by combined sublimation and evaporation. The effect of drying temperature on the drying constants was determined by fitting the experimental data using regression analysis techniques. The investigation revealed that the drying kinetics are most significantly affected by temperature. Correlations expressing the dependence of the drying constants and effective moisture diffusivity on the drying conditions are reported.
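Fitting the drying constant can be illustrated with the Lewis thin-layer model MR = exp(−kt), followed by an effective-diffusivity estimate from the first term of the Fickian slab solution. The moisture-ratio data and sample half-thickness below are invented for illustration; the study fitted its own measured curves at each drying temperature.

```python
import numpy as np
from scipy.optimize import curve_fit

def lewis(t, k):
    """Lewis (single-exponential) thin-layer drying model."""
    return np.exp(-k * t)

t = np.array([0, 30, 60, 90, 120, 180, 240], float)            # min
MR = np.array([1.0, 0.74, 0.55, 0.41, 0.30, 0.17, 0.09])       # invented

(k_fit,), _ = curve_fit(lewis, t, MR, p0=[0.01])
print(f"drying constant k = {k_fit:.4f} 1/min")

# First-term slab solution: MR ~ (8/pi^2) exp(-pi^2 Deff t / (4 L^2)),
# so Deff = k * 4 L^2 / pi^2 with L the sample half-thickness.
L_half = 2e-3                                                   # m (assumed)
Deff = (k_fit / 60.0) * 4 * L_half ** 2 / np.pi ** 2            # m^2/s
print(f"effective moisture diffusivity ~ {Deff:.2e} m^2/s")
```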

Relevance:

100.00%

Publisher:

Abstract:

Although the drivers of innovation have been studied extensively in construction, greater attention is required on how innovation diffusion can be effectively assessed within this complex and interdependent project-based industry. The authors draw on a highly cited innovation diffusion model by Rogers (2006) and develop a tailored conceptual framework to guide future empirical work aimed at assessing innovation diffusion in construction. The conceptual framework developed and discussed in this paper supports a five-stage process model of innovation diffusion, namely: 1) knowledge and idea generation, 2) persuasion and evaluation, 3) decision to adopt, 4) integration and implementation, and 5) confirmation. As its theoretical contribution, this paper proposes three critical measurement constructs which can be used to assess the effectiveness of the diffusion process. These constructs comprise: 1) the nature and introduction of an innovative idea, 2) organizational capacity to acquire, assimilate, transform and exploit an innovation, and 3) rates of innovation facilitation and adoption. The constructs are interpreted in the project-based context of the construction industry, extending the contribution of general management theorists. Research planned by the authors will test the validity and reliability of the constructs developed in this paper.

Relevance:

100.00%

Publisher:

Abstract:

This study compared the performance of a local optimality criterion and three robust optimality criteria, in terms of the standard error, for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared under misspecification of the prior parameter distribution. The impact of different correlations between the parameters on the optimal design was examined for the two-parameter model. The designs and standard errors were solved analytically whenever possible and numerically otherwise.
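The comparison can be illustrated with a one-parameter exponential model y = exp(−θx) + ε, for which the Fisher information from a single design point is x²e^(−2θx) and the locally optimal point is x = 1/θ. The sketch below contrasts a local design at a prior guess with a robust design maximising expected log-information over a prior, then compares asymptotic standard errors when the prior guess is wrong; the model, prior, and noise level are illustrative assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def info(x, theta):
    """Single-point Fisher information for y = exp(-theta x) + noise."""
    return x ** 2 * np.exp(-2 * theta * x)

theta0 = 1.0                                        # prior point guess
prior = rng.lognormal(np.log(theta0), 0.5, 5000)    # parameter uncertainty

xs = np.linspace(0.01, 5.0, 500)
local_x = xs[np.argmax(info(xs, theta0))]           # locally optimal: 1/theta0
robust_x = xs[np.argmax([np.mean(np.log(info(x, prior))) for x in xs])]

def std_err(x, theta_true, sigma=0.1, n=10):
    """Asymptotic standard error of the estimate at the true parameter."""
    return sigma / np.sqrt(n * info(x, theta_true))

theta_true = 1.8                                    # misspecified-prior case
print(f"local  x = {local_x:.2f}, SE = {std_err(local_x, theta_true):.3f}")
print(f"robust x = {robust_x:.2f}, SE = {std_err(robust_x, theta_true):.3f}")
```

Under this misspecification the robust design loses little at the guessed value but degrades more gracefully at the true one, which is the kind of trade-off such comparisons quantify.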