877 results for Real applications


Relevance: 30.00%

Abstract:

A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world onto a 2D image. Solution of an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible and theoretically well-founded. In this thesis, a probabilistic, nonlinear, supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted on the task of estimating articulated body pose from image silhouettes. Accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
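To make the inference step concrete, here is a minimal sketch, assuming a learned set of specialized mapping functions and a feedback (rendering) function; the names and the simple nearest-match selection rule are illustrative only, not the thesis's actual probabilistic algorithm.

```python
import numpy as np

def sma_inference(x, specialized_maps, feedback_fn):
    """Score each specialist's output hypothesis by rendering it back to
    feature space with the feedback function and comparing to the input;
    return the best-verified hypothesis (illustrative selection rule)."""
    hypotheses = [phi(x) for phi in specialized_maps]
    errors = [np.linalg.norm(np.asarray(x) - np.asarray(feedback_fn(y)))
              for y in hypotheses]
    return hypotheses[int(np.argmin(errors))]
```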

Relevance: 30.00%

Abstract:

This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge real-world graphs found in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, so heuristic solutions are employed. Given a memory budget for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach of selecting landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
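A minimal sketch of the landmark-index idea for unweighted, connected graphs, using BFS for the offline embedding and the standard triangle-inequality upper bound as the combination rule (the thesis's selection strategies and exact combination rule may differ):

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source BFS distances in an unweighted graph (dict: node -> neighbors)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def embed(adj, landmarks):
    """Offline step: each node's embedding is its distance to every landmark."""
    return [bfs_distances(adj, l) for l in landmarks]

def estimate_distance(embeddings, u, v):
    """Online step: triangle-inequality upper bound, minimized over landmarks."""
    return min(d[u] + d[v] for d in embeddings)
```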

Relevance: 30.00%

Abstract:

Research on the construction of logical overlay networks has gained significance in recent times. This is partly due to work on peer-to-peer (P2P) systems for locating and retrieving distributed data objects, and also to scalable content distribution using end-system multicast techniques. However, emerging applications require the real-time transport of data from various sources to potentially many thousands of subscribers, each with its own quality-of-service (QoS) constraints. This paper focuses primarily on the properties of two popular topologies found in interconnection networks, namely k-ary n-cubes and de Bruijn graphs. The regular structure of these graph topologies makes it easier to analyze them and to determine possible routes for real-time data than with complete or irregular graphs. We show how these overlay topologies compare in their ability to deliver data according to the QoS constraints of many subscribers, each receiving data from specific publishing hosts. Comparisons are drawn on the ability of each topology to route data in the presence of dynamic system effects due to end-hosts joining and departing the system. Finally, experimental results show the service guarantees and physical link stress resulting from efficient multicast trees constructed over both kinds of overlay networks.
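To illustrate why the regular structure helps, here is a sketch of the classical shift-based routing rule on a binary de Bruijn overlay; it reaches any destination in at most n hops, though it omits the well-known shortcut that skips hops when a suffix of the source already matches a prefix of the destination:

```python
def de_bruijn_route(src, dst):
    """Shift-based route in a de Bruijn overlay: nodes are length-n binary
    strings; each hop drops the first symbol and appends the next symbol
    of the destination, so any node is reached in at most n hops."""
    path, node = [src], src
    for symbol in dst:
        if node == dst:
            break
        node = node[1:] + symbol   # follow the de Bruijn edge
        path.append(node)
    return path

# e.g. de_bruijn_route("000", "110") -> ['000', '001', '011', '110']
```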

Relevance: 30.00%

Abstract:

Personal communication devices are increasingly equipped with sensors that can collect and locally store information from their environs. The mobility of users carrying such devices, and hence the mobility of sensor readings in space and time, opens new horizons for interesting applications. In particular, we envision a system in which the collective sensing, storage, and communication resources, as well as the mobility of these devices, could be leveraged to query the state of (possibly remote) neighborhoods. Such queries would have spatio-temporal constraints that must be met for the query answers to be useful. Using a simplified mobility model, we analytically quantify the benefits of cooperation (in terms of the system's ability to satisfy spatio-temporal constraints), which we show to go beyond simple space-time tradeoffs. In managing the limited storage resources of such cooperative systems, the goal should be to minimize the number of unsatisfiable spatio-temporal constraints. We show that Data Centric Storage (DCS), or "directed placement", is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose "amorphous placement", in which sensory samples are cached locally and shuffling of cached samples is used to diffuse the sensory data throughout the whole network. We evaluate the conditions under which directed versus amorphous placement strategies are more efficient. These results lead us to propose a hybrid placement strategy, in which the spatio-temporal constraints associated with a sensory data type determine the most appropriate placement strategy for that data type. We perform an extensive simulation study to evaluate the performance of directed, amorphous, and hybrid placement protocols when applied to queries that are subject to timing constraints. Our results show that directed placement is better for queries with moderately tight deadlines, whereas amorphous placement is better for queries with looser deadlines, and that under most operational conditions the hybrid technique gives the best compromise.
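The hybrid strategy's decision rule can be caricatured as follows; the threshold values are illustrative placeholders, not values from the paper:

```python
def choose_placement(query_deadline_s, network_connectivity,
                     deadline_threshold_s=30.0, connectivity_threshold=0.8):
    """Hybrid rule sketched from the paper's findings: directed (DCS-style)
    placement pays off for moderately tight deadlines on a well-connected
    network; amorphous placement (local caching plus sample shuffling) wins
    for looser deadlines or poor connectivity. Thresholds are placeholders."""
    if (network_connectivity >= connectivity_threshold
            and query_deadline_s <= deadline_threshold_s):
        return "directed"
    return "amorphous"
```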

Relevance: 30.00%

Abstract:

Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) onto an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that the NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services can handle.
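The general shape of such an embedding search is a backtracking enumeration in which an infeasible candidate prunes a whole subtree without discarding any valid embedding. The sketch below reflects only that general idea; NETEMBED's compact candidate representation and its careful insertion/removal ordering are not reproduced here.

```python
def find_embedding(query_nodes, hosting_nodes, feasible):
    """Backtracking search for an injective mapping of query nodes onto
    hosting nodes. `feasible(q, h, partial)` checks resource and link
    constraints against the partial mapping; a False result prunes the
    entire subtree rooted at that candidate assignment."""
    def backtrack(partial, remaining):
        if not remaining:
            return dict(partial)                 # complete valid embedding
        q = remaining[0]
        for h in hosting_nodes:
            if h not in partial.values() and feasible(q, h, partial):
                partial[q] = h
                result = backtrack(partial, remaining[1:])
                if result:
                    return result
                del partial[q]                   # undo and try next candidate
        return None
    return backtrack({}, list(query_nodes))
```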

Relevance: 30.00%

Abstract:

Working memory neural networks are characterized that encode the invariant temporal order of sequential events, which may be presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures that self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described that is based on the model of Seibert and Waxman [1].
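As a toy illustration only (a caricature of primacy-gradient working memories, not the model analyzed here), a normalized activity gradient already shows how a rank order of activities can encode temporal order independently of presentation speed:

```python
def primacy_gradient(items, w=0.8):
    """Toy temporal-order code: earlier items keep larger activity, so the
    rank order of activities encodes presentation order regardless of the
    speed or duration at which items arrived."""
    raw = [(item, w ** t) for t, item in enumerate(items)]
    total = sum(a for _, a in raw)
    return [(item, a / total) for item, a in raw]   # normalized activities

# primacy_gradient(["A", "B", "C"]) -> A > B > C whatever the input rate was
```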

Relevance: 30.00%

Abstract:

The power consumption of wireless sensor network (WSN) modules is an important practical concern in building energy management (BEM) system deployments. A set of metrics is created to assess the power profiles of WSNs under real-world conditions. The aim of this work is to understand, and eventually eliminate, the uncertainties in WSN power consumption during long-term deployments, and to assess compatibility with existing and emerging energy harvesting technologies. This paper investigates the key metrics in data processing, wireless data transmission, data sensing, and duty-cycle parameters to understand the system power profile from a practical deployment perspective. Based on the proposed analysis, the impact of each metric on power consumption in a typical BEM application is presented and the corresponding low-power solutions are investigated.
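Among these metrics, the duty cycle is usually the dominant lever. A back-of-the-envelope sketch, with illustrative current figures rather than measured ones:

```python
def average_current_ma(duty_cycle, active_ma, sleep_ma):
    """Average current draw of a duty-cycled WSN node (illustrative numbers)."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

# e.g. 1% duty cycle, 20 mA active (radio on), 0.005 mA asleep:
# 0.01 * 20 + 0.99 * 0.005 ~= 0.205 mA average draw, which bounds the
# harvester capacity needed for energy-neutral operation.
```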

Relevance: 30.00%

Abstract:

Bioluminescence is the production of light by living organisms as a result of a number of reactions catalysed by enzymes termed luciferases. The lux genes responsible for the emission of light can be cloned from a bioluminescent microorganism into one that is not bioluminescent. The light emitted can be monitored and quantified, and provides information on the metabolic activity, quantity, and location of cells in a particular environment, in real time. The primary aim of this thesis was to investigate and identify several food-industry applications of lux-tagged microorganisms. The first aim was to monitor a lux-tagged Cronobacter sakazakii in reconstituted infant milk formula, in real time. The second aim was to investigate a bioluminescence-based early warning system for starter culture disruption by bacteriophages and antibiotic residues. The third aim of this thesis was to examine the use of a bioluminescence-based assay to test the activity of the bioengineered Nisin derivatives M21V and S29A against foodborne pathogens in laboratory media and selected foods.

Relevance: 30.00%

Abstract:

The aging population in many countries brings into focus rising healthcare costs and pressure on conventional healthcare services. Pervasive healthcare has emerged as a viable solution, capable of providing a technology-driven approach that alleviates such problems by allowing healthcare to move from hospital-centred care to self-care, mobile care, and at-home care. The state-of-the-art studies in this field, however, lack a systematic approach for providing comprehensive pervasive healthcare solutions from data collection to data interpretation and from data analysis to data delivery. In this thesis we introduce a Context-aware Real-time Assistant (CARA) architecture that integrates novel approaches with state-of-the-art technology solutions to provide a full-scale pervasive healthcare solution, with the emphasis on context awareness, to help maintain the well-being of elderly people. CARA collects information about and around the individual in a home environment, and enables accurate recognition and continuous monitoring of activities of daily living. It employs an innovative reasoning engine to provide accurate real-time interpretation of the context and assessment of the current situation. Mindful of the use of the system for sensitive personal applications, CARA includes several mechanisms to make the sophisticated intelligent components as transparent and accountable as possible; it also includes a novel cloud-based component for more effective data analysis. To deliver automated real-time services, CARA supports interactive video and remote consultation based on medical sensors. Our proposal has been validated in three application domains that are rich in pervasive contexts and real-time scenarios: (i) Mobile-based Activity Recognition, (ii) Intelligent Healthcare Decision Support Systems, and (iii) Home-based Remote Monitoring Systems.

Relevance: 30.00%

Abstract:

In the last two decades, semiconductor nanocrystals have been the focus of intense research due to their size-dependent optical and electrical properties. Much is now known about how to control their size, shape, composition, and surface chemistry, allowing fine control of their photophysical and electronic properties. However, genuine concerns have been raised regarding the heavy metal content of these materials, which is toxic even at relatively low concentrations and may limit their wide-scale use. These concerns have driven the development of heavy-metal-free alternatives. In recent years, germanium nanocrystals (Ge NCs) have emerged as environmentally friendlier alternatives to II-VI and IV-VI semiconductor materials, as they are nontoxic, biocompatible, and electrochemically stable. This thesis reports the synthesis and characterisation of Ge NCs and their application as fluorescent probes for the detection of metal ions. A room-temperature method for the synthesis of size-monodisperse Ge NCs within inverse micelles is reported, with well-defined core diameters that may be tuned from 3.5 to 4.5 nm. The Ge NCs are chemically passivated with amine ligands, minimising surface oxidation while rendering the NCs dispersible in a range of polar solvents. Regulation of the Ge NC size is achieved by variation of the ammonium salts used to form the micelles. A maximum quantum yield of 20% is shown for the nanocrystals, and a transition from primarily blue to green emission is observed as the NC diameter increases from 3.5 to 4.5 nm. A polydisperse sample with a mixed emission profile is prepared and separated by centrifugation into individually sized NCs, each of which showed blue or green emission only, with total suppression of other emission colours. A new, efficient one-step synthesis of Ge NCs with in situ passivation and straightforward purification steps is also reported. Ge NCs are formed by co-reduction of a mixture of GeCl4 and n-butyltrichlorogermane; the latter is used both as a capping ligand and as a germanium source. The surface-bound layer of butyl chains both chemically passivates and stabilises the Ge NCs. Optical spectroscopy confirmed that these NCs are in the strong quantum confinement regime, with significant involvement of surface species in exciton recombination processes. The PL QY is determined to be 37%, one of the highest values reported for organically terminated Ge NCs. A synthetic method is developed to produce size-monodisperse Ge NCs with modified surface chemistries bearing carboxylic acid, acetate, amine, and epoxy functional groups. The effect of these different surface terminations on the optical properties of the NCs is also studied. Comparison of the emission properties of these Ge NCs showed that the wavelength position of the PL maxima could be moved from the UV to the blue/green by choice of the appropriate surface group. We also report the application of water-soluble Ge NCs as a fluorescent sensing platform for the fast, highly selective, and sensitive detection of Fe3+ ions. The luminescence quenching mechanism is confirmed by lifetime and absorbance spectroscopies, while the applicability of this assay for the detection of Fe3+ in real water samples is investigated and found to satisfy the US Environmental Protection Agency requirements for Fe3+ levels in drinking water supplies.

Relevance: 30.00%

Abstract:

This thesis covered the fabrication and characterisation of impedance sensors for biological applications, aiming in particular at the cytotoxicity monitoring of cultured cells exposed to different kinds of chemical compounds and drugs, and at the identification of different types of biological tissue (fat, muscle, nerve) during peripheral nerve block procedures, using a sensor fabricated on the tip of a commercially available needle. Gold impedance electrodes were successfully fabricated for impedance measurement on cells cultured on the electrode surface, which was modified with the fabrication of gold nanopillars. These nanostructures have a height of 60 nm or 100 nm and a highly ordered layout, as they are fabricated through the e-beam technique. The fabrication of the three-dimensional structures on the interdigitated electrodes was intended to improve the sensitivity of the ECIS (electric cell-substrate impedance sensing) measurement while monitoring the cytotoxicity effects of two different drugs (Antrodia camphorata extract and nicotine) on three different cell lines (HeLa, A549, and BALB/c 3T3) cultured on the impedance devices, and to change the morphology of the cells growing on the nanostructured electrodes. The fabrication of the nanostructures was achieved by combining UV lithography, metal lift-off, evaporation, and e-beam lithography. The electrodes were packaged using a pressure-sensitive, medical-grade adhesive double-sided tape. The electrodes were then characterised with the aid of AFM and SEM imaging, which confirmed the success of the fabrication processes, showing nanopillars with the intended layout and dimensions. The introduction of nanopillars on the impedance electrodes, however, did not substantially improve the sensitivity of the assay, with the exception of tests carried out with nicotine. HeLa and A549 cells appeared to grow differently on the two surfaces, while no differences were noticed for the BALB/c 3T3 cells. Impedance measurements obtained with dead cells on the negative control electrodes, or on the test electrodes with the drugs, can be compared to those done on electrodes containing just media in the tested volume (as no cells are attached to cover the electrode surface). The impedance figures recorded using these electrodes were between 1.5 kΩ and 2.5 kΩ, while the figures recorded on confluent cell layers ranged between 4 kΩ and 5.5 kΩ, with peaks of almost 7 kΩ where there was more than one layer of cells growing on top of one another. There was thus a very clear separation, of almost 2.5-3 kΩ, between the values for living cells and those for dead ones, making it very easy to determine whether or not a drug affected the cells' normal life cycle. However, little or no difference was noticed between the impedance analyses carried out on the two different kinds of electrodes using cultured cells. An increase in sensitivity was noticed only in a couple of experiments carried out on A549 cells growing on the nanostructured electrodes and exposed to different concentrations of a solution containing nicotine. More experiments will be needed to establish these findings with statistical confidence. The smart needle project aimed to reduce the limitations of the electrical nerve stimulation (ENS) and ultrasound-guided peripheral nerve block techniques by giving clinicians an additional tool for correctly performing the peripheral nerve block.
Bioimpedance, as measured at the needle tip, provides additional information on needle tip location, thereby facilitating detection of intraneural needle placement. Using the needle as a precision instrument and guidance tool may provide additional information as to needle tip location and enhance safety in regional anaesthesia. In the time analysis, with the frequency fixed at 10 kHz and the samples kept at 12°C, the approximate bioimpedance range for muscle was 203-616 Ω, the range for fat was 5.02-17.8 kΩ, and the range for connective tissue was 790 Ω-1.55 kΩ. When the samples were heated to 37°C and measured again at 10 kHz, the approximate bioimpedance range for muscle was 100-175 Ω, the range for fat was 627 Ω-3.2 kΩ, and the range for connective tissue was 221-540 Ω. In the experiments done on a freshly slaughtered lamb carcass, replicating a scenario close to the real application, the impedance values recorded for fat were around 17 kΩ, for muscle and lean tissue around 1.3 kΩ, while the nervous structures had an impedance value of 2.9 kΩ. From the data collected during this research, it was possible to conclude that measurements of bioimpedance at the needle tip can give valuable information to clinicians performing a peripheral nerve block procedure, as the separation (in terms of impedance figures) between the different types of tissue was very marked. It is therefore feasible to use an impedance electrode fabricated on the needle tip to differentiate other tissues from nerve tissue. Currently, several different methods are being studied to fabricate an impedance electrode on the surface of a commercially available needle used for the peripheral nerve block procedure.

Relevance: 30.00%

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
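The incremental aspect can be sketched as follows; the fitness function and GA operators shown are generic placeholders, not the specific IGA of this thesis:

```python
import random

def iga_schedule(orders, fitness, prior_population=None,
                 pop_size=30, generations=50):
    """Sketch of an incremental genetic algorithm for order dispatching.
    `fitness` scores a dispatch sequence (lower is better, e.g. a mix of
    lateness and resource-utilization imbalance). The 'incremental' idea:
    when new orders arrive, reuse the previous population as seeds instead
    of restarting the search from scratch."""
    def random_perm():
        seq = list(orders)
        random.shuffle(seq)
        return seq

    # Seed from the prior population, dropping finished orders and
    # appending newly arrived ones; pad with random permutations.
    pop = [[o for o in p if o in orders] + [o for o in orders if o not in p]
           for p in (prior_population or [])]
    pop += [random_perm() for _ in range(max(0, pop_size - len(pop)))]

    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(orders))
            child = a[:cut] + [o for o in b if o not in a[:cut]]  # order crossover
            if random.random() < 0.2:                             # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```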

We next discuss the analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
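A minimal sketch of the decompose-predict-aggregate strategy for a single periodic component plus residual; the correlation-aware multivariate models are not reproduced here, and `forecast_component` stands in for any single-series predictor:

```python
import numpy as np

def decompose_predict_aggregate(series, period, forecast_component):
    """Split a series into a periodic (seasonal) component and a residual,
    forecast each separately, and sum the component forecasts to obtain
    the next value of the original series."""
    series = np.asarray(series, dtype=float)
    seasonal = np.array([series[i::period].mean() for i in range(period)])
    seasonal_full = np.tile(seasonal, len(series) // period + 1)[:len(series)]
    residual = series - seasonal_full
    next_seasonal = seasonal[len(series) % period]   # phase of the next point
    next_residual = forecast_component(residual)     # any univariate predictor
    return next_seasonal + next_residual

# e.g. decompose_predict_aggregate(daily_load, 7, lambda r: r[-3:].mean())
```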

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.

Relevance: 30.00%

Abstract:

A shearing quotient (SQ) is a way of quantitatively representing the Phase I shearing edges on a molar tooth. Ordinary or phylogenetic least squares regression is fit to data on log molar length (independent variable) and log sum of measured shearing crests (dependent variable). The derived linear equation is used to generate an 'expected' shearing crest length from molar length of included individuals or taxa. Following conversion of all variables to real space, the expected value is subtracted from the observed value for each individual or taxon. The result is then divided by the expected value and multiplied by 100. SQs have long been the metric of choice for assessing dietary adaptations in fossil primates. Not all studies using SQ have used the same tooth position or crests, nor have all computed regression equations using the same approach. Here we focus on re-analyzing the data of one recent study to investigate the magnitude of effects of variation in 1) shearing crest inclusion, and 2) details of the regression setup. We assess the significance of these effects by the degree to which they improve or degrade the association between computed SQs and diet categories. Though altering regression parameters for SQ calculation has a visible effect on plots, numerous iterations of statistical analyses vary surprisingly little in the success of the resulting variables for assigning taxa to dietary preference. This is promising for the comparability of patterns (if not casewise values) in SQ between studies. We suggest that differences in apparent dietary fidelity of recent studies are attributable principally to tooth position examined.
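Written out, with L the molar length, S_obs the measured sum of shearing crest lengths, and a, b the fitted regression coefficients (assuming natural logs; with base-10 logs the back-transform uses 10^a), the computation described above is:

```latex
\log S_{\mathrm{exp}} = a + b\,\log L
\;\Longrightarrow\;
S_{\mathrm{exp}} = e^{a}\,L^{b},
\qquad
\mathrm{SQ} = 100 \times \frac{S_{\mathrm{obs}} - S_{\mathrm{exp}}}{S_{\mathrm{exp}}}
```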

Relevance: 30.00%

Abstract:

The outcomes of both (i) radiation therapy and (ii) preclinical small-animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution, in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high-dose-rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, considering that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.

In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.

Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber-optic detector (NanoFOD) system. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the measured-to-TPS dose ratio of 0.08 (0.94 to 1.02). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; however, a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.

For preclinical measurements, the dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4x4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy at the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half maximum lateral dimension of the MRT system were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences have been explained mostly as an artifact of the geometry used and of volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide-based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small-animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without the need for replacement or recalibration over its lifetime.

For small-animal irradiator systems, researchers can deliver a given dose to a target organ by controlling the exposure time. Currently, researchers calculate this exposure time by dividing the total dose they wish to deliver by a single provided dose rate value. This method is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256x256x800 voxels of size 0.145x0.145x0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, utilizing organ-specific and x-ray tube-filter-specific dose rates for all irradiations.
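The proposed prescription method amounts to replacing the single machine-wide dose rate in the exposure-time calculation with an organ-specific (and filter-specific) one:

```python
def exposure_time_s(target_dose_gy, organ_dose_rate_gy_per_s):
    """Exposure time from an organ-specific dose rate (illustrative helper;
    the Monte Carlo-derived rate depends on the organ and the tube filter)."""
    return target_dose_gy / organ_dose_rate_gy_per_s

# If an organ's true dose rate differs by 25% from the machine-wide value,
# the delivered dose is off by the same factor, which is the scale of the
# 20-30% errors described above.
```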

Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. Results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.

Relevance: 30.00%

Abstract:

A Feller–Reuter–Riley function is a Markov transition function whose corresponding semigroup maps the set of real-valued continuous functions vanishing at infinity into itself. The aim of this paper is to investigate applications of such functions to the dual problem, Markov branching processes, and the Williams-matrix. The remarkable property of a Feller–Reuter–Riley function is that it is a Feller minimal transition function with a stable q-matrix. By using this property we are able to prove that, in the theory of branching processes, the branching property is equivalent to the requirement that the corresponding transition function satisfies the Kolmogorov forward equations associated with a stable q-matrix. It follows that the probabilistic and analytic definitions of Markov branching processes are actually equivalent. Also, by using this property together with the Resolvent Decomposition Theorem, a simple analytical proof of Williams' existence theorem for the Williams-matrix is obtained. The close link between the dual problem and Feller–Reuter–Riley transition functions is revealed. It enables us to prove that a dual transition function must satisfy the Kolmogorov forward equations. A necessary and sufficient condition for a dual transition function to satisfy the Kolmogorov backward equations is also provided.
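For reference, the Kolmogorov equations invoked here take the standard componentwise form for a transition function P(t) = (p_ij(t)) with q-matrix Q = (q_ij):

```latex
\text{forward: } p'_{ij}(t) = \sum_{k} p_{ik}(t)\, q_{kj},
\qquad
\text{backward: } p'_{ij}(t) = \sum_{k} q_{ik}\, p_{kj}(t).
```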