419 results for Application specific instruction-set processor


Relevance:

30.00%

Publisher:

Abstract:

Experts in injection molding often refer to previous solutions to find a mold design similar to the current mold and use previous successful molding process parameters, with intuitive adjustment and modification, as a start for the new molding application. This approach saves a substantial amount of time and cost in the experiment-based corrective actions that are otherwise required to reach optimum molding conditions. A Case-Based Reasoning (CBR) system can perform the same task by retrieving a similar case from the case library and using modification rules to adapt its solution to the new case. A CBR system can therefore simulate human expertise in injection molding process design. This research is aimed at developing an interactive Hybrid Expert System to reduce the dependency on experts on the production floor. The Hybrid Expert System (HES) comprises CBR, flow analysis, post-processor and troubleshooting subsystems. The HES can provide the first set of operating parameters needed to achieve moldability and to produce moldings free of stress cracks and warpage. In this work the C++ programming language is used to implement the expert system. The Case-Based Reasoning subsystem is constructed to derive the optimum magnitude of the process parameters in the cavity. Toward this end, the Flow Analysis subsystem is employed to calculate the pressure drop and temperature difference in the feed system in order to determine the required magnitude of the parameters at the nozzle. The Post-Processor is implemented to convert the molding parameters to machine setting parameters. The parameters designed by the HES are implemented on the injection molding machine. In the presence of any molding defect, a troubleshooting subsystem can determine which combination of process parameters must be changed during the process to deal with possible variations. Constraints on the application of this HES are as follows: flow length (L): 40 mm < L < 100 mm; flow thickness (Th): 1 mm < Th < 4 mm; flow type: unidirectional flow; material types: High Impact Polystyrene (HIPS) and Acrylic. In order to test the HES, experiments were conducted and satisfactory results were obtained.
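
As an illustration of the retrieve-and-adapt cycle described in this abstract, the following Python sketch implements a minimal case-based reasoning step over an injection-molding case library. The attribute names, similarity weights, library entries and the pressure-scaling adaptation rule are all illustrative assumptions, not the thesis's actual C++ implementation.

```python
# Minimal sketch of the CBR retrieval/adaptation idea described above.
# Attribute names, weights and the adaptation rule are illustrative assumptions,
# not the thesis's actual C++ implementation.
from dataclasses import dataclass

@dataclass
class Case:
    flow_length_mm: float          # 40 mm < L < 100 mm in the HES
    thickness_mm: float            # 1 mm < Th < 4 mm in the HES
    material: str                  # "HIPS" or "Acrylic"
    melt_temp_C: float             # stored solution: process parameters
    injection_pressure_MPa: float

CASE_LIBRARY = [
    Case(60, 2.0, "HIPS", 220, 80),
    Case(90, 3.5, "Acrylic", 240, 110),
    Case(45, 1.5, "HIPS", 215, 70),
]

def similarity(query: Case, case: Case) -> float:
    """Weighted similarity over normalised geometry plus a material-match term."""
    d_len = abs(query.flow_length_mm - case.flow_length_mm) / 60.0   # range ~40-100 mm
    d_th = abs(query.thickness_mm - case.thickness_mm) / 3.0         # range ~1-4 mm
    d_mat = 0.0 if query.material == case.material else 1.0
    return 1.0 - (0.4 * d_len + 0.4 * d_th + 0.2 * d_mat)

def retrieve_and_adapt(query: Case) -> Case:
    best = max(CASE_LIBRARY, key=lambda c: similarity(query, c))
    # Simple adaptation rule: scale injection pressure with the flow-length ratio.
    adapted_pressure = best.injection_pressure_MPa * query.flow_length_mm / best.flow_length_mm
    return Case(query.flow_length_mm, query.thickness_mm, query.material,
                best.melt_temp_C, round(adapted_pressure, 1))

if __name__ == "__main__":
    new_mold = Case(70, 2.5, "HIPS", 0, 0)   # solution fields unknown for the query
    print(retrieve_and_adapt(new_mold))
```

In the full HES, the retrieved cavity parameters would then be passed to the flow-analysis and post-processor subsystems to obtain machine setting parameters.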

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project these onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
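
The quantizer design above depends on fitting a generalized Gaussian model to each subband. The Python sketch below shows one standard way to estimate the shape and scale parameters from subband coefficients via a moment-ratio equation solved numerically; the thesis formulates a least-squares estimator on a nonlinear function of the shape parameter, so this stands in only as an illustration of the fitting step.

```python
# Sketch of fitting a generalized Gaussian model to wavelet subband coefficients.
# The moment-ratio estimator below is a standard formulation and stands in for
# the thesis's least-squares estimator.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_generalized_gaussian(coeffs):
    """Return (shape nu, scale alpha) for p(x) proportional to exp(-(|x|/alpha)**nu)."""
    x = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(x))
    m2 = np.mean(x ** 2)
    ratio = m1 ** 2 / m2       # equals Gamma(2/nu)^2 / (Gamma(1/nu) * Gamma(3/nu))
    f = lambda nu: gamma(2.0 / nu) ** 2 / (gamma(1.0 / nu) * gamma(3.0 / nu)) - ratio
    nu = brentq(f, 0.1, 10.0)  # solve the nonlinear moment equation for the shape
    alpha = m1 * gamma(1.0 / nu) / gamma(2.0 / nu)
    return nu, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.laplace(scale=2.0, size=50_000)   # Laplacian = GGD with nu = 1
    print(fit_generalized_gaussian(sample))        # expect nu close to 1
```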

Relevance:

30.00%

Publisher:

Abstract:

This research investigated students' construction of knowledge about the topics of magnetism and electricity emergent from a visit to an interactive science centre and subsequent classroom-based activities linked to the science centre exhibits. The significance of this study is that it critically analyses an aspect of school visits to informal learning centres that has been neglected by researchers in the past, namely the influence of post-visit activities in the classroom on subsequent learning and knowledge construction. Employing an interpretive methodology, the study focused on three areas of endeavour. Firstly, the establishment of a set of principles for the development of post-visit activities, from a constructivist framework, to facilitate students' learning of science. Secondly, the description and interpretation of students' scientific understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities that were related to their museum experiences. Finally, the description and interpretation of the ways in which students constructed their understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities directly related to their museum experiences. The study was designed and implemented in three stages: 1) identification and establishment of the principles for design and evaluation of post-visit activities; 2) a pilot study of specific post-visit activities and data gathering strategies related to student construction of knowledge; and 3) interpretation of students' construction of knowledge from a visit to a science museum and subsequent completion of post-visit activities, which constituted the main study. Twelve students were selected from a Year 7 class to participate in the study. This study provides evidence that the series of post-visit activities, related to the museum experiences, resulted in students constructing and reconstructing their personal knowledge of science concepts and principles represented in the science museum exhibits, sometimes towards the accepted scientific understanding and sometimes in different and surprising ways. Findings demonstrate the interrelationships between learning that occurs at school, at home and in informal learning settings. The study also underscores for teachers and staff of science museums and similar centres the importance of planning pre- and post-visit activities, not only to support the development of scientific conceptions, but also to detect and respond to alternative conceptions that may be produced or strengthened during a visit to an informal learning centre. Consistent with contemporary views of constructivism, the study strongly supports the views that: 1) knowledge is uniquely structured by the individual; 2) the processes of knowledge construction are gradual, incremental, and assimilative in nature; 3) changes in conceptual understanding can be interpreted in the light of prior knowledge and understanding; and 4) knowledge and understanding develop idiosyncratically, progressing and sometimes appearing to regress when compared with contemporary science. This study has implications for teachers, students, museum educators, and the science education community, given the lack of research into the processes of knowledge construction in informal contexts and the roles that post-visit activities play in the overall process of learning.

Relevance:

30.00%

Publisher:

Abstract:

This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, and the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, and to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that ‘says what is’: base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?" The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. The Analytic Theory attributes associated with the ‘rigor’ of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom received the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory ‘relevance’ attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making).
In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study also aims to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?" The study employs a longitudinal design entailing three surveys over 4 years (the 1st involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the 2nd survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and also thereby motivating continuing and further research on the validity of IS-Impact and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.

Relevance:

30.00%

Publisher:

Abstract:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998, Larsen, Huff et al., 1990, Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and the common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989, Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the water content. A specific binding model confirmed that the EWC was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their EWC. However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons for materials that contained exchangeable polymer protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel with water that, on average, behaves more like bulk water, or is less strongly 'bound', compared with a lower EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
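
To make the proton-exchange picture above concrete, the sketch below evaluates the fast-exchange limit, in which the observed relaxation rate is a population-weighted average of the rates of free water protons and exchangeable polymer protons. All numerical values (relaxation times and proton densities) are assumptions for illustration; the thesis fits a fuller exchange model from the literature.

```python
# Toy illustration of the fast-exchange picture: observed 1/T2 is the
# population-weighted average of the water-proton and exchangeable-proton rates.
# All numbers are assumed for illustration only.
def observed_T2(water_fraction, T2_water=2.0, T2_exchangeable=0.005,
                exchangeable_per_polymer_gram=0.1, water_protons_per_gram=0.111):
    """Return the observed T2 (s) for a gel with the given water mass fraction."""
    polymer_fraction = 1.0 - water_fraction
    n_w = water_fraction * water_protons_per_gram        # mol water protons / g gel (approx.)
    n_x = polymer_fraction * exchangeable_per_polymer_gram
    p_w = n_w / (n_w + n_x)                              # proton population fractions
    p_x = 1.0 - p_w
    rate = p_w / T2_water + p_x / T2_exchangeable        # fast-exchange average of rates
    return 1.0 / rate

for wc in (0.38, 0.55, 0.70):
    print(f"water content {wc:.0%}: observed T2 ~ {observed_T2(wc) * 1000:.0f} ms")
```

As in the abstract, the higher the water content, the closer the observed relaxation behaviour moves towards that of bulk water.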

Relevance:

30.00%

Publisher:

Abstract:

Patterns of connectivity among local populations influence the dynamics of regional systems, but most ecological models have concentrated on explaining the effect of connectivity on local population structure using dynamic processes covering short spatial and temporal scales. In this study, a model was developed in an extended spatial system to examine the hypothesis that long term connectivity levels among local populations are influenced by the spatial distribution of resources and other habitat factors. The habitat heterogeneity model was applied to local wild rabbit populations in the semi-arid Mitchell region of southern central Queensland (the Eastern system). Species-specific population parameters appropriate for the rabbit in this region were used. The model predicted a wide range of long term connectivity levels among sites, ranging from the extreme isolation of some sites to relatively high interaction probabilities for others. The validity of the model assumptions was assessed by regressing model output against independent population genetic data; the model explained over 80% of the variation in the highly structured genetic data set. Furthermore, the model was robust, explaining a significant proportion of the variation in the genetic data over a wide range of parameters. The performance of the habitat heterogeneity model was further assessed by simulating the widely reported recent range expansion of the wild rabbit into the Mitchell region from the adjacent, panmictic Western rabbit population system. The model explained well the independently determined genetic characteristics of the Eastern system at different hierarchic levels, from site-specific differences (for example, fixation of a single allele in the population at one site) to differences between population systems (absence of an allele in the Eastern system which is present in all Western system sites). The model therefore explained the past and long term processes which have led to the formation and maintenance of the highly structured Eastern rabbit population system. Most animals exhibit sex-biased dispersal, which may influence long term connectivity levels among local populations, and thus the dynamics of regional systems. When appropriate sex-specific dispersal characteristics were used, the habitat heterogeneity model predicted substantially different interaction patterns between female-only and combined male and female dispersal scenarios. In the latter case, model output was validated using data from a bi-parentally inherited genetic marker. Again, the model explained over 80% of the variation in the genetic data. The fact that such a large proportion of variability is explained in two genetic data sets provides very good evidence that habitat heterogeneity influences long term connectivity levels among local rabbit populations in the Mitchell region for both males and females. The habitat heterogeneity model thus provides a powerful approach for understanding the large scale processes that shape regional population systems in general. The model therefore has the potential to be a useful tool to aid in the management of those systems, whether for pest management or conservation purposes.
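
The validation step described above, regressing model-predicted connectivity against independent genetic data, can be sketched as follows. The connectivity and differentiation values here are synthetic placeholders; only the regression-and-R² logic reflects the procedure described.

```python
# Sketch of the validation step: model-predicted long-term connectivity between
# site pairs is regressed against an independent genetic differentiation measure,
# and R^2 indicates how much variation the model explains. The arrays below are
# synthetic placeholders, not the study's data.
import numpy as np

def r_squared(x, y):
    """Ordinary least-squares R^2 for y regressed on x (with intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coeffs
    return 1.0 - resid.var() / y.var()

# Hypothetical pairwise values for six site pairs.
predicted_connectivity = [0.02, 0.10, 0.25, 0.40, 0.55, 0.80]
genetic_differentiation = [0.31, 0.27, 0.21, 0.15, 0.11, 0.04]   # e.g. pairwise F_ST

print(f"R^2 = {r_squared(predicted_connectivity, genetic_differentiation):.2f}")
```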

Relevance:

30.00%

Publisher:

Abstract:

Prostate cancer is an important male health issue. The strategies used to diagnose and treat prostate cancer underscore the cell and molecular interactions that promote disease progression. Prostate cancer is histologically defined by increasingly undifferentiated tumour cells and therapeutically targeted by androgen ablation. Even as the normal glandular architecture of the adult prostate is lost, prostate cancer cells remain dependent on the androgen receptor (AR) for growth and survival. This project focused on androgen-regulated gene expression, altered cellular differentiation, and the nexus between these two concepts. The AR controls prostate development, homeostasis and cancer progression by regulating the expression of downstream genes. Kallikrein-related serine peptidases are prominent transcriptional targets of the AR in the adult prostate. Kallikrein 3 (KLK3), which is commonly referred to as prostate-specific antigen, is the current serum biomarker for prostate cancer. Other kallikreins are potential adjunct biomarkers. As secreted proteases, kallikreins act through enzyme cascades that may modulate the prostate cancer microenvironment. Both as a panel of biomarkers and as a cascade of proteases, the roles of kallikreins are interconnected. Yet the expression and regulation of different kallikreins in prostate cancer have not been compared. In this study, a spectrum of prostate cell lines was used to evaluate the expression profile of all 15 members of the kallikrein family. A cluster of genes was co-ordinately expressed in androgen-responsive cell lines. This group of kallikreins included KLK2, 3, 4 and 15, which are located adjacent to one another at the centromeric end of the kallikrein locus. KLK14 was also of interest, because it was ubiquitously expressed among the prostate cell lines. Immunohistochemistry showed that these 5 kallikreins are co-expressed in benign and malignant prostate tissue. The androgen-regulated expression of KLK2 and KLK3 is well-characterised, but has not been compared with other kallikreins. Therefore, KLK2, 3, 4, 14 and 15 expression were all measured in time course and dose response experiments with androgens, AR-antagonist treatments, hormone deprivation experiments and cells transfected with AR siRNA. Collectively, these experiments demonstrated that prostatic kallikreins are specifically and directly regulated by the AR. The data also revealed that kallikrein genes are differentially regulated by androgens; KLK2 and KLK3 were strongly up-regulated, KLK4 and KLK15 were modestly up-regulated, and KLK14 was repressed. Notably, KLK14 is located at the telomeric end of the kallikrein locus, far away from the centromeric cluster of kallikreins that are stimulated by androgens. These results show that the expression of KLK2, 3, 4, 14 and 15 is maintained in prostate cancer, but that these genes exhibit different responses to androgens. This makes the kallikrein locus an ideal model to investigate AR signalling. The increasingly dedifferentiated phenotype of aggressive prostate cancer cells is accompanied by the re-expression of signalling molecules that are usually expressed during embryogenesis and foetal tissue development. The Wnt pathway is one developmental cascade that is reactivated in prostate cancer. The canonical Wnt cascade regulates the intracellular levels of β-catenin, a potent transcriptional co-activator of T-cell factor (TCF) transcription factors.
Notably, β-catenin can also bind to the AR and synergistically stimulate androgen-mediated gene expression. This comes at the expense of typical Wnt/TCF target genes, because the AR:β-catenin and TCF:β-catenin interactions are mutually exclusive. The effect of β-catenin on kallikrein expression was examined to further investigate the role of β-catenin in prostate cancer. Stable knockdown of β-catenin in LNCaP prostate cancer cells attenuated the androgen-regulated expression of KLK2, 3, 4 and 15, but not KLK14. To test whether KLK14 is instead a TCF:β-catenin target gene, the endogenous levels of β-catenin were increased by inhibiting its degradation. Although KLK14 expression was up-regulated by these treatments, siRNA knockdown of β-catenin demonstrated that this effect was independent of β-catenin. These results show that β-catenin is required for maximal expression of KLK2, 3, 4 and 15, but not KLK14. Developmental cells and tumour cells express a similar repertoire of signalling molecules, which means that these different cell types are responsive to one another. Previous reports have shown that stem cells and foetal tissues can reprogram aggressive cancer cells to less aggressive phenotypes by restoring the balance to developmental signalling pathways that are highly dysregulated in cancer. To investigate this phenomenon in prostate cancer, DU145 and PC-3 prostate cancer cells were cultured on matrices pre-conditioned with human embryonic stem cells (hESCs). Soft agar assays showed that prostate cancer cells exposed to hESC-conditioned matrices had reduced clonogenicity compared with cells harvested from control matrices. A recent study demonstrated that this effect was partially due to hESC-derived Lefty, an antagonist of Nodal. A member of the transforming growth factor β (TGFβ) superfamily, Nodal regulates embryogenesis and is re-expressed in cancer. The role of Nodal in prostate cancer has not previously been reported. Therefore, the expression and function of the Nodal signalling pathway in prostate cancer were investigated. Western blots confirmed that Nodal is expressed in DU145 and PC-3 cells. Immunohistochemistry revealed greater expression of Nodal in malignant versus benign glands. Notably, the Nodal inhibitor, Lefty, was not expressed at the mRNA level in any prostate cell line tested. The Nodal signalling pathway is functionally active in prostate cancer cells. Recombinant Nodal treatments triggered downstream phosphorylation of Smad2 in DU145 and LNCaP cells, and stably transfected Nodal increased the clonogenicity of LNCaP cells. Nodal was also found to modulate AR signalling. Nodal reduced the activity of an androgen-regulated KLK3 promoter construct in luciferase assays and attenuated the endogenous expression of AR target genes, including prostatic kallikreins. These results demonstrate that Nodal is a novel example of a developmental signalling molecule that is re-expressed in prostate cancer and may have a functional role in prostate cancer progression. In summary, this project clarifies the role of androgens and changing cellular differentiation in prostate cancer by characterising the expression and function of the downstream genes encoding kallikrein-related serine proteases and Nodal. Furthermore, this study emphasises the similarities between prostate cancer and early development, and the crosstalk between developmental signalling pathways and the AR axis.
The outcomes of this project also affirm the utility of the kallikrein locus as a model system to monitor tumour progression and the phenotype of prostate cancer cells.

Relevance:

30.00%

Publisher:

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: (1) demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; (2) demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique; (3) demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral; and (4) provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
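
The two-component decomposition framework referred to above can be illustrated with a small worked example: two log-attenuation measurements at different energies give two linear equations in the areal densities of bone mineral and soft tissue. The mass attenuation coefficients used below are assumed round numbers, not values from the thesis.

```python
# Worked sketch of two-component dual-energy decomposition: solve two linear
# equations (one per beam energy) for the areal densities of bone mineral and
# soft tissue. The mass attenuation coefficients are assumed for illustration.
import numpy as np

# Mass attenuation coefficients (cm^2/g), rows = [low energy, high energy],
# columns = [bone mineral, soft tissue] -- illustrative values only.
MU = np.array([[0.60, 0.25],
               [0.30, 0.20]])

def areal_densities(log_attenuation_low, log_attenuation_high):
    """Solve MU @ [rho_bone, rho_soft] = [ln(I0/I)_low, ln(I0/I)_high]."""
    b = np.array([log_attenuation_low, log_attenuation_high])
    return np.linalg.solve(MU, b)          # g/cm^2 of bone mineral and soft tissue

# Forward-simulate a "measurement" for 1.0 g/cm^2 bone and 15.0 g/cm^2 soft tissue,
# then recover the areal densities from it.
true = np.array([1.0, 15.0])
measured = MU @ true
print(areal_densities(*measured))          # -> [1.0, 15.0]
```

Adding fat as a third unknown leaves this two-equation system underdetermined, which is why the DPA(+) technique introduces the ray-path length as an additional measurement.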

Relevance:

30.00%

Publisher:

Abstract:

When the supply voltages are balanced and sinusoidal, load compensation can give source currents with both unity power factor (UPF) and perfect harmonic cancellation (PHC). Under distorted supply voltages, however, achieving both UPF and PHC currents is not possible, as the two goals contradict each other. Hence an optimal compromise between these two important compensation goals must be sought. This paper presents an optimal control algorithm for load compensation under unbalanced and distorted supply voltages. In this algorithm, the source currents are compensated for reactive and imbalance components, with harmonic distortion constrained to set limits. By satisfying the harmonic distortion limits and the power balance, the algorithm yields the source currents that provide the maximum achievable power factor. Detailed simulation results using MATLAB are presented to support the performance of the proposed optimal control algorithm.
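
A simplified single-phase version of the optimisation described in this abstract is sketched below: source-current harmonic magnitudes (each taken in phase with the corresponding voltage harmonic) are chosen to satisfy the load power balance and a current-THD limit while minimising the source RMS current, which maximises the power factor. The supply harmonics, load power, THD limit and solver choice are assumptions for illustration and do not reproduce the paper's three-phase algorithm.

```python
# Simplified single-phase sketch: minimise source RMS current subject to a power
# balance and a current-THD limit. All numerical values are assumed examples.
import numpy as np
from scipy.optimize import minimize

V = np.array([230.0, 15.0, 10.0])     # RMS volts at harmonics 1, 5, 7 (distorted supply)
P_LOAD = 2000.0                        # W to be drawn from the source
THD_LIMIT = 0.05                       # 5% current THD allowed

def irms(i):                           # objective: source RMS current
    return np.sqrt(np.sum(np.asarray(i) ** 2))

constraints = [
    {"type": "eq",   "fun": lambda i: V @ i - P_LOAD},                   # power balance
    {"type": "ineq", "fun": lambda i: THD_LIMIT * i[0] - irms(i[1:])},   # THD <= limit
]
res = minimize(irms, x0=[P_LOAD / V[0], 0.0, 0.0], bounds=[(0, None)] * 3,
               constraints=constraints, method="SLSQP")

i_opt = res.x
pf = P_LOAD / (np.sqrt(np.sum(V ** 2)) * irms(i_opt))   # includes distortion effects
print("harmonic currents (A):", np.round(i_opt, 3), " power factor:", round(pf, 4))
```

With the numbers above, the unconstrained power-factor optimum (current proportional to the distorted voltage) would violate the THD limit, so the solution settles at the limit, illustrating the UPF-versus-PHC trade-off the abstract describes.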

Relevance:

30.00%

Publisher:

Abstract:

Despite a central role in angiosperm reproduction, few gametophyte-specific genes and promoters have been isolated, particularly for the inaccessible female gametophyte (embryo sac). Using the Ds-based enhancer-detector line ET253, we have cloned an egg apparatus-specific enhancer (EASE) from Arabidopsis (Arabidopsis thaliana). The genomic region flanking the Ds insertion site was further analyzed by examining its capability to control gusA and GFP reporter gene expression in the embryo sac in a transgenic context. Through analysis of a 5' and 3' deletion series in transgenic Arabidopsis, the sequence responsible for egg apparatus-specific expression was delineated to 77 bp. Our data showed that this enhancer is unique in the Arabidopsis genome, is conserved among different accessions, and shows an unusual pattern of sequence variation. This EASE works independently of position and orientation in Arabidopsis but is probably not associated with any nearby gene, suggesting either that it acts over a large distance or that a cryptic element was detected. Embryo-specific ablation in Arabidopsis was achieved by transactivation of a diphtheria toxin gene under the control of the EASE. The potential application of the EASE element and similar control elements as part of an open-source biotechnology toolkit for apomixis is discussed.

Relevance:

30.00%

Publisher:

Abstract:

In this study, the host-specificity and host-sensitivity of human- and bovine-specific adenoviruses (HS-AVs and BS-AVs) were evaluated by testing wastewater/fecal samples from various animal species in Southeast Queensland, Australia. The overall specificity and sensitivity of the HS-AVs marker were 1.0 and 0.78, respectively. These figures for the BS-AVs marker were 1.0 and 0.73, respectively. Twenty environmental water samples were collected during wet conditions and 20 samples were collected during dry conditions from the Maroochy Coastal River and tested for the presence of fecal indicator bacteria (FIB), host-specific viral markers, and zoonotic bacterial and protozoan pathogens using PCR/qPCR. The concentrations of FIB in water samples collected after wet conditions were generally higher compared to dry conditions. HS-AVs were detected in 20% of water samples collected during wet conditions, whereas BS-AVs were detected in both wet (i.e., 10%) and dry (i.e., 10%) conditions. The C. jejuni mapA and Salmonella invA genes were each detected in 10% of samples collected during dry conditions. The concentrations of Salmonella invA ranged between 3.5 × 10² and 4.3 × 10² genomic copies per 500 ml of water. The G. lamblia β-giardin gene was detected in only one sample (5%), collected during the dry conditions. Weak or significant correlations were observed between FIB and the viral markers and zoonotic pathogens; however, during dry conditions, no significant correlations were observed between FIB concentrations and the viral markers and zoonotic pathogens. The prevalence of HS-AVs in samples collected from the study river suggests that the quality of water is affected by human fecal pollution as well as bovine fecal pollution. The results suggest that HS-AVs and BS-AVs detection using PCR could be a useful tool for the identification of human-sourced fecal pollution in coastal waters.
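
As a worked example of the host-specificity and host-sensitivity figures quoted above, the sketch below computes them from confusion-matrix counts. The sample numbers are hypothetical, chosen only so that the arithmetic reproduces values close to those reported (18/23 ≈ 0.78); the study's actual sample counts are not given here.

```python
# Worked example of marker specificity/sensitivity arithmetic.
# The sample counts below are hypothetical illustrations only.
def sensitivity(true_pos, false_neg):
    """Fraction of host (e.g. human) samples in which the marker is detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-host samples in which the marker is (correctly) not detected."""
    return true_neg / (true_neg + false_pos)

# Hypothetical HS-AVs results: detected in 18 of 23 human wastewater samples,
# never detected in 30 animal wastewater/fecal samples.
print("sensitivity:", round(sensitivity(18, 5), 2))    # -> 0.78
print("specificity:", round(specificity(30, 0), 2))    # -> 1.0
```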

Relevance:

30.00%

Publisher:

Abstract:

This work is focussed on developing a commissioning procedure so that a Monte Carlo model, which uses BEAMnrc's standard VARMLC component module, can be adapted to match a specific BrainLAB m3 micro-multileaf collimator (μMLC). A set of measurements is recommended for use as a reference against which the model can be tested and optimised. These include radiochromic film measurements of dose from small and offset fields, as well as measurements of μMLC transmission and interleaf leakage. Simulations and measurements to obtain μMLC scatter factors are shown to be insensitive to the relevant model parameters and are therefore not recommended, unless the output of the linear accelerator model is in doubt. Ultimately, this note provides detailed instructions for those intending to optimise a VARMLC model to match the dose delivered by their local BrainLAB m3 μMLC device.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the outcomes of a research project which focused on developing a set of surrogate parameters to evaluate urban stormwater quality using simulated rainfall. Use of surrogate parameters has the potential to enhance the rapid generation of urban stormwater quality data based on on-site measurements and thereby reduce resource-intensive laboratory analysis. The samples collected from rainfall simulations were tested for a range of physico-chemical parameters which are key indicators of nutrients, solids and organic matter. The analysis identified total dissolved solids (TDS) and dissolved organic carbon (DOC); total solids (TS) and total organic carbon (TOC); turbidity (TTU); electrical conductivity (EC); and TTU and EC as appropriate surrogate parameters for dissolved total nitrogen (DTN), total phosphorus (TP), total suspended solids (TSS), TDS and TS, respectively. The relationships obtained for DTN-TDS, DTN-DOC, and TP-TS demonstrated good portability potential. The portability of the relationship developed for TP and TOC was found to be unsatisfactory. The relationships developed for TDS-EC and TS-EC also demonstrated poor portability.
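
The surrogate-parameter approach described above amounts to fitting a regression between an easily measured parameter and a target analyte, then checking how well that relationship transfers ("portability") to data it was not fitted on. The sketch below illustrates this for a hypothetical TDS-EC relationship; all numbers are synthetic placeholders.

```python
# Sketch of building and testing a surrogate relationship: fit a linear TDS-EC
# relationship on one catchment's samples, then assess its portability on
# another catchment's samples. All numbers are synthetic placeholders.
import numpy as np

def fit_line(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def r_squared(y_obs, y_pred):
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Calibration site: EC (uS/cm) vs TDS (mg/L) from simulated-rainfall runoff.
ec_cal = [55, 80, 120, 160, 210, 260]
tds_cal = [35, 52, 78, 101, 135, 166]
a, b = fit_line(ec_cal, tds_cal)

# Validation site: apply the fitted relationship and assess portability.
ec_val = [60, 95, 150, 230]
tds_val = [40, 63, 97, 150]
tds_pred = [a * x + b for x in ec_val]
print(f"TDS = {a:.2f}*EC + {b:.1f},  validation R^2 = {r_squared(tds_val, tds_pred):.2f}")
```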