948 results for HETEROGENEOUS ELASTOGRAPHY PHANTOMS
Abstract:
Detracking and heterogeneous groupwork are two educational practices that have shown promise for affording all students the learning opportunities needed to develop mathematical proficiency. However, teachers face significant pedagogical challenges in organizing productive groupwork in these settings. This study offers an analysis of one teacher's role in creating a classroom system that supported student collaboration within groups in a detracked, heterogeneous geometry classroom. The analysis focuses on four categories of the teacher's work that created a set of affordances to support within-group collaborative practices and links the teacher's work with principles of complex systems.
Abstract:
The consumption capital asset pricing model is the standard economic model used to capture stock market behavior. However, empirical tests have pointed to its inability to account quantitatively for the high average rate of return and volatility of stocks over time for plausible parameter values. Recent research has suggested that the consumption of stockholders is more strongly correlated with the performance of the stock market than the consumption of non-stockholders. We model two types of agents: non-stockholders with standard preferences and stockholders with preferences that incorporate elements of the prospect theory developed by Kahneman and Tversky (1979). In addition to consumption, stockholders consider fluctuations in their financial wealth explicitly when making decisions. Data from the Panel Study of Income Dynamics are used to calibrate the labor income processes of the two types of agents. Each agent faces idiosyncratic shocks to his labor income as well as aggregate shocks to the per-share dividend, but markets are incomplete and agents cannot hedge consumption risks completely. In addition, consumers face both borrowing and short-sale constraints. Our results show that in equilibrium, agents hold different portfolios. Our model is able to generate a time-varying risk premium of about 5.5% while maintaining a low risk-free rate, thus suggesting a plausible explanation for the equity premium puzzle reported by Mehra and Prescott (1985).
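To make the stockholder preferences concrete, the following is a minimal illustrative sketch of the Kahneman-Tversky (1979) value function over financial-wealth fluctuations that such preferences build on; the parameter values are the commonly cited prospect-theory estimates, not the calibration used in this study.

```python
# Illustrative sketch (not the paper's calibration): the Kahneman-Tversky (1979)
# prospect-theory value function applied to a change in financial wealth relative
# to a reference point, as stockholder preferences in such models can incorporate.

def prospect_value(delta_wealth, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Value of a gain/loss in financial wealth; losses loom larger than gains."""
    if delta_wealth >= 0:
        return delta_wealth ** alpha
    return -loss_aversion * ((-delta_wealth) ** beta)

# A stockholder weighing a 10% gain versus a 10% loss of the same size:
gain = prospect_value(0.10)
loss = prospect_value(-0.10)
print(f"value of +10%: {gain:.3f}, value of -10%: {loss:.3f}")  # the loss weighs ~2.25x more
```

This asymmetry is what makes stockholders demand extra compensation for bearing stock-market risk, which is the mechanism behind the higher equilibrium risk premium described above.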
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are in qualitative agreement with the DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in the crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations of two fibers crossing at 90° and 45°, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing regions in the 90°, 45° and 60° phantoms resulted in successful detection of angular information, with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6° and 60.03° ± 2.21° respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking from known crossing-fiber regions in normal human subjects were shown, and an in-house MATLAB software package that streamlines DSI data reconstruction and post-processing, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
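For readers unfamiliar with how crossing-angle accuracy such as 86.93° ± 2.65° is quantified, the following is a minimal, hypothetical sketch (the function name and test vectors are illustrative, not the dissertation's code) of measuring the angle between two detected ODF peak orientations, folded to at most 90° because fiber orientations are antipodally symmetric.

```python
import numpy as np

# Minimal sketch (not the dissertation's pipeline): angle between two ODF peak
# orientations. abs() folds antipodal directions together, since v and -v describe
# the same fiber.

def crossing_angle(peak1, peak2):
    v1 = np.asarray(peak1, float) / np.linalg.norm(peak1)
    v2 = np.asarray(peak2, float) / np.linalg.norm(peak2)
    cosang = abs(np.dot(v1, v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Nominal 90-degree and 45-degree phantom configurations:
print(crossing_angle([1, 0, 0], [0, 1, 0]))   # ~90.0
print(crossing_angle([1, 0, 0], [1, 1, 0]))   # ~45.0
```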
Abstract:
The Radiological Physics Center (RPC) uses both on-site and remote reviews to credential institutions for participation in clinical trials. Anthropomorphic quality assurance (QA) phantoms, which include thermoluminescent dosimeters (TLDs) and radiochromic film, are one tool the RPC uses to remotely audit institutions. The RPC desires to switch from TLDs to optically stimulated luminescent dosimeters (OSLDs) as the absolute dosimeter in the phantoms, but a problem lies in the angular dependence exhibited by the OSLD. The purpose of this study was to characterize the angular dependence of OSLD and, if necessary, establish a correction factor to provide accurate dosimetric measurements as a replacement for TLD in the QA phantoms. A 10 cm diameter high-impact polystyrene spherical phantom was designed and constructed to hold an OSLD to study the angular response of the dosimeter under the simplest of circumstances for both coplanar and non-coplanar treatment deliveries. OSLD were irradiated in the spherical phantom, and the responses of the dosimeter from edge-on angles were normalized to the response when irradiated with the beam incident normally on the surface of the dosimeter. The average normalized response was used to establish an angular correction factor for 6 MV and 18 MV coplanar treatments, and for 6 MV non-coplanar treatments specific to CyberKnife. The RPC pelvic phantom dosimetry insert was modified to hold OSLD, in addition to the TLD, adjacent to the planes of film. Treatment plans of increasing angular beam delivery were developed, three in Pinnacle v9.0 (4-field box, IMRT, and VMAT) and one in Accuray's MultiPlan v3.5.3 (CyberKnife). The plans were delivered to the pelvic phantom containing both TLD and OSLD in the target volume. The pelvic phantom was also sent to two institutions to be irradiated as trials, one delivering IMRT and the other a CyberKnife treatment. For the IMRT deliveries and the two institutional trials, the phantom also included film in the sagittal and coronal planes. The doses measured from the TLD and OSLD were calculated for each irradiation, and the angular correction factors established from the spherical phantom irradiations were applied to the OSLD dose. The ratio of the TLD dose to the angular-corrected OSLD dose was calculated for each irradiation. The corrected OSLD dose was found to be within 1% of the TLD-measured dose for all irradiations, with the exception of the in-house CyberKnife deliveries. The films were normalized to both the TLD-measured dose and the corrected OSLD dose. Dose profiles were obtained and gamma analysis was performed using a 7%/4 mm criterion to compare the ability of the OSLD, when corrected for angular dependence, to provide results equivalent to TLD. The results of this study indicate that the OSLD can effectively be used as a replacement for TLD in the RPC's anthropomorphic QA phantoms for coplanar treatment deliveries when a correction is applied for the dosimeter's angular dependence.
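The following sketch illustrates the correction logic described above under stated assumptions: the correction factor is taken as the inverse of the mean edge-on response normalized to normal incidence, and all readings and doses are made-up numbers, not the study's measurements.

```python
import numpy as np

# Hypothetical sketch of the angular-correction workflow: normalize edge-on OSLD
# readings to the normal-incidence reading, average them into a correction factor,
# and apply it to an OSLD-measured dose before comparing with TLD.

normal_reading = 1.000
edge_on_readings = np.array([0.96, 0.95, 0.97, 0.96])   # illustrative edge-on responses

angular_correction = 1.0 / np.mean(edge_on_readings / normal_reading)

osld_dose_raw = 6.45   # Gy, illustrative OSLD dose for an edge-on-heavy delivery
tld_dose = 6.60        # Gy, illustrative TLD dose for the same irradiation

osld_dose_corrected = osld_dose_raw * angular_correction
ratio = tld_dose / osld_dose_corrected
print(f"correction factor: {angular_correction:.3f}, TLD/OSLD ratio: {ratio:.3f}")
```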
Abstract:
The mechanisms underlying cellular response to proteasome inhibitors have not been clearly elucidated in solid tumor models. Evidence suggests that the ability of a cell to manage the amount of proteotoxic stress following proteasome inhibition dictates survival. In this study, using the FDA-approved proteasome inhibitor bortezomib (Velcade®) in solid tumor cells, we demonstrated that perhaps the most critical response to proteasome inhibition is repression of global protein synthesis by phosphorylation of the eukaryotic initiation factor 2-α subunit (eIF2α). In a panel of 10 distinct human pancreatic cancer cell lines, we showed marked heterogeneity in the ability of cancer cells to induce eIF2α phosphorylation (eIF2α-P) upon stress; lack of inducible eIF2α-P led to excessive accumulation of aggregated proteins, reactive oxygen species, and ultimately cell death. In addition, we examined complementary cytoprotective mechanisms involving activation of the heat shock response (HSR) and found that induction of heat shock protein 70 kDa (Hsp72) protected against proteasome inhibitor-induced cell death in human bladder cancer cells. Finally, investigation of a novel histone deacetylase 6 (HDAC6)-selective inhibitor suggested that the cytoprotective role of cytoplasmic HDAC6 in response to proteasome inhibition may have been previously overestimated.
Abstract:
DEVELOPMENT AND IMPLEMENTATION OF A DYNAMIC HETEROGENEOUS PROTON EQUIVALENT ANTHROPOMORPHIC THORAX PHANTOM FOR THE ASSESSMENT OF SCANNED PROTON BEAM THERAPY
By: James Leroy Neihart, B.S. Chair of Advisory Committee: David Followill, Ph.D.
Proton therapy has been gaining ground recently in radiation oncology. To date, the most successful utilization of proton therapy is in head and neck cases as well as prostate cases. These tumor locations do not suffer from the difficulties of treatment delivery that result from respiratory motion. Lung tumors require either breath hold or motion tracking, neither of which has been assessed with an end-to-end phantom for proton treatments. Currently, the RPC does not have a dynamic thoracic phantom for proton therapy procedure assessment. Additionally, such a phantom could be an excellent means of assessing the quality assurance procedures of proton therapy centers wishing to participate in clinical trials. An eventual goal of this phantom is to provide a means of evaluating and auditing institutions for the ability to start clinical trials utilizing proton therapy procedures for lung cancers. Therefore, the hypothesis of this study is that a dynamic anthropomorphic thoracic phantom can be created to evaluate end-to-end proton therapy treatment procedures for lung cancer and assure agreement between the measured and calculated dose within 5% / 5 mm with a reproducibility of 2%. Multiple materials were assessed for thoracic heterogeneity equivalency. The phantom was designed from the materials found to be in greatest agreement.
The phantom was treated in an end-to-end treatment four times, which included simulation, treatment planning and treatment delivery. Each treatment plan was delivered three times to assess reproducibility. The dose measured within the phantom was compared to that of the treatment plan. The hypothesis was fully supported for three of the treatment plans, but failed the reproducibility requirement for the most aggressive treatment plan.
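The following is a simplified sketch of the kind of acceptance check the 5% / 5 mm and 2% criteria imply; the pass logic (a 1-D dose/distance-to-agreement test) and all numbers are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np

# Simplified sketch: a point passes if the measured dose is within 5% of the
# calculated dose there, or if the measured dose level occurs in the calculated
# profile within 5 mm (a crude 1-D distance-to-agreement check). Reproducibility
# is taken as the spread over repeated deliveries of one plan.

def passes_5pct_5mm(measured, positions_mm, calc_profile, point_mm):
    calc_here = np.interp(point_mm, positions_mm, calc_profile)
    if abs(measured - calc_here) <= 0.05 * calc_here:      # 5% dose criterion
        return True
    window = np.abs(positions_mm - point_mm) <= 5.0        # 5 mm search window
    lo, hi = calc_profile[window].min(), calc_profile[window].max()
    return lo <= measured <= hi                            # measured level found within 5 mm

positions = np.linspace(-50, 50, 101)                      # mm
calc = 2.0 * np.exp(-(positions / 30.0) ** 2)              # illustrative calculated profile (Gy)
print(passes_5pct_5mm(measured=1.92, positions_mm=positions, calc_profile=calc, point_mm=0.0))

repeat_doses = np.array([2.01, 1.98, 2.00])                # Gy, three deliveries of one plan
spread = (repeat_doses.max() - repeat_doses.min()) / repeat_doses.mean()
print(f"reproducibility spread: {100 * spread:.1f}%  (requirement: within 2%)")
```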
Abstract:
This paper develops a micro-simulation framework for multinational entry and sales activities across countries. The model is based on Eaton, Kortum, and Kramarz's (2010) quantitative trade model, adapted to multinational production. Using micro data on Japanese manufacturing firms, we first document the empirical regularities of multinational entry and sales activity and estimate the model's structural parameters with the simulated method of moments. We then demonstrate that our adapted model is able to replicate important dimensions of the in-sample moments conditioned on in our estimation strategy. Importantly, it is also able to replicate activity in an economic period with a far different level of FDI barriers than that of our estimation sample. Overall, our research highlights the richness of the simulation framework for performing counterfactual analysis of various FDI policies.
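As a reminder of what estimation by the simulated method of moments involves, here is a generic sketch of an SMM criterion; the function names, moments, and weighting choice are placeholders, not the paper's implementation.

```python
import numpy as np

# Generic simulated-method-of-moments sketch: choose parameters theta so that
# moments simulated from the model match the corresponding data moments.

def smm_objective(theta, data_moments, simulate_moments, weight=None):
    m_sim = simulate_moments(theta)                     # model-implied moments at theta
    g = np.asarray(data_moments) - np.asarray(m_sim)    # moment discrepancies
    W = np.eye(len(g)) if weight is None else weight    # weighting matrix
    return g @ W @ g                                    # quadratic-form SMM criterion

# Usage: hand smm_objective to a numerical minimizer, e.g.
#   scipy.optimize.minimize(smm_objective, theta0, args=(data_moments, simulate_moments))
```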
Abstract:
During the past decade of declining FDI barriers, small domestic firms disproportionately contracted while large multinational firms experienced substantial growth in Japan's manufacturing sector. This paper quantitatively assesses the impact of FDI globalization on intra-industry reallocations and aggregate productivity. We calibrate the firm-heterogeneity model of Eaton, Kortum, and Kramarz (2011) to micro-level data on Japanese multinational firms. Estimating the structural parameters of the model, we demonstrate that it can closely replicate the entry and sales patterns of Japanese multinationals. Counterfactual simulations show that declining FDI barriers lead to a disproportionate expansion of foreign production by more efficient firms relative to less efficient firms. A hypothetical 20% reduction in FDI barriers is found to generate a 30.7% improvement in aggregate productivity through market-share reallocation.
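The reallocation mechanism can be illustrated with a toy calculation (made-up numbers, not the paper's calibration): when market shares shift toward more efficient firms, the share-weighted aggregate productivity rises even though no firm's own productivity changes.

```python
import numpy as np

# Toy illustration of market-share reallocation raising aggregate productivity.
productivity = np.array([1.0, 1.5, 2.5])        # firm efficiency levels (illustrative)
shares_before = np.array([0.4, 0.4, 0.2])       # market shares before liberalization
shares_after = np.array([0.2, 0.4, 0.4])        # shares shift toward the most efficient firm

agg_before = productivity @ shares_before       # share-weighted aggregate productivity
agg_after = productivity @ shares_after
print(f"aggregate productivity gain: {100 * (agg_after / agg_before - 1):.1f}%")
```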
Abstract:
This study extends Melitz's model of heterogeneous firms by introducing shared fixed costs in a marketplace. It aims to explain heterogeneous firms' choice between traditional marketplaces and modern distribution channels on the basis of their productivities. The results reveal that the co-existence of a traditional marketplace and modern distribution channels improves social welfare. In addition, a deregulation policy for firm entry outside a marketplace and the accumulation of human capital are factors that contribute to improving social welfare.
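The productivity-based channel choice can be sketched in Melitz-style notation; the profit form, parameter names, and numbers below are assumptions for illustration, not the study's model: a firm compares operating profit phi**(sigma - 1) * B - f across channels, where the shared fixed cost makes the traditional marketplace cheap to enter and the modern channel pairs a larger demand shifter with a larger fixed cost.

```python
# Illustrative Melitz-style channel choice (assumed notation, not the paper's).
def channel_choice(phi, sigma=4.0,
                   B_marketplace=1.0, f_marketplace=0.5,   # shared fixed cost -> low f
                   B_modern=2.0, f_modern=2.0):            # modern channel -> higher B and f
    profit_mkt = phi ** (sigma - 1) * B_marketplace - f_marketplace
    profit_mod = phi ** (sigma - 1) * B_modern - f_modern
    if max(profit_mkt, profit_mod) < 0:
        return "exit"
    return "marketplace" if profit_mkt >= profit_mod else "modern channel"

# Low-productivity firms exit, middling firms use the marketplace, the most
# productive firms select the modern channel:
for phi in (0.6, 1.0, 1.4):
    print(phi, channel_choice(phi))
```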
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model addresses the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at execution time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to the support of distributed configuration scenarios.
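As a generic illustration of the monitor-and-report pattern such an agent infrastructure relies on (not the article's implementation; the class, probe, and attribute names are hypothetical), an agent can poll a resource described by a simple information model and forward only the attributes that changed.

```python
# Generic sketch of a monitoring agent: poll a managed resource and report changes.
class MonitoringAgent:
    def __init__(self, resource_name, probe):
        self.resource_name = resource_name
        self.probe = probe            # callable returning {attribute: value}
        self.last_state = {}

    def poll_once(self):
        state = self.probe()
        changes = {k: v for k, v in state.items() if self.last_state.get(k) != v}
        self.last_state = state
        return changes                # e.g. forwarded to the management layer

# Hypothetical usage with a fake probe standing in for an application-server sensor:
agent = MonitoringAgent("app-server-1", probe=lambda: {"heap_used_mb": 512, "threads": 40})
print(agent.poll_once())   # first poll reports everything as a change
print(agent.poll_once())   # nothing changed -> {}
```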
Abstract:
This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are gathered during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. This approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
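One simple way to realize "choose the best action to perform" from dynamic influence data is a greedy influence-per-cost rule; the sketch below is an illustration of that idea under stated assumptions, not the article's algorithm, and the action names and scores are invented.

```python
# Illustrative greedy action selection: pick the observation/action with the best
# trade-off between its estimated influence on current hypotheses and its cost.

def choose_next_action(candidate_actions):
    """candidate_actions: list of dicts with 'name', 'influence' (0..1), 'cost' (>0)."""
    return max(candidate_actions, key=lambda a: a["influence"] / a["cost"])

actions = [
    {"name": "ping_peer",      "influence": 0.30, "cost": 1.0},
    {"name": "query_tracker",  "influence": 0.70, "cost": 2.0},
    {"name": "inspect_buffer", "influence": 0.55, "cost": 1.0},
]
print(choose_next_action(actions)["name"])   # -> inspect_buffer (0.55 per unit cost)
```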
Abstract:
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid” which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox” equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.
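To make the demand-side management idea concrete, here is a simplified sketch of one rule such collaborative sensing could enforce; the thresholds and the decision logic are hypothetical, not MagicBox's actual controller.

```python
# Hypothetical demand-side rule: run a deferrable appliance when PV surplus covers
# it, allow partial surplus if the battery holds enough reserve, otherwise defer.

def should_run_appliance(pv_kw, load_kw, battery_soc, appliance_kw, soc_reserve=0.3):
    surplus = pv_kw - load_kw
    if surplus >= appliance_kw:
        return True                                    # solar surplus fully covers the appliance
    if battery_soc > soc_reserve and surplus >= 0.5 * appliance_kw:
        return True                                    # partial surplus, battery covers the rest
    return False

print(should_run_appliance(pv_kw=3.2, load_kw=1.0, battery_soc=0.8, appliance_kw=2.0))   # True
print(should_run_appliance(pv_kw=0.5, load_kw=1.2, battery_soc=0.4, appliance_kw=2.0))   # False
```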
Abstract:
The wetting front is the zone where water invades and advances into an initially dry porous material, and it plays a crucial role in solute transport through the unsaturated zone. Water is an essential part of the physiological processes of all plants: through water, necessary minerals are moved from the roots to the parts of the plant that require them, chemicals are moved from one part of the plant to another, and water is also required for photosynthesis, metabolism and transpiration. The leaching of chemicals by wetting fronts is influenced by two major factors, namely the irregularity of the fronts and the heterogeneity in the distribution of chemicals, both of which have been described using fractal techniques. Soil structure can significantly modify infiltration rates and flow pathways in soils, and relations between features of soil structure and features of infiltration could be elucidated from the velocities and the structure of wetting fronts. When rainwater falls onto soil, it doesn't just pool on the surface. Water, or another fluid, acts differently on porous surfaces: if the surface is permeable (porous), the fluid seeps down through layers of soil, filling each layer to capacity before moving down into the next. In sandy soil, water moves quickly, while it moves much more slowly through clay soil. The advancing boundary of water moving through the soil layers is the wetting front. Our research concerns the motion of a liquid into an initially dry porous medium. Our work presents a theoretical framework for studying the physical interplay between a stationary wetting front of fractal dimension D and different porous materials. The aim was to model the mass-geometry interplay by using the fractal dimension D of a stationary wetting front: the plane corresponding to the image is divided into squares (the smallest corresponding to the pixel size) of side length ε, as sketched below.
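The box-counting procedure just described can be sketched directly; the implementation below is a minimal illustration (the straight-line test image and box sizes are assumptions), not the study's code, and it estimates D as the slope of log N(ε) versus log(1/ε).

```python
import numpy as np

# Minimal box-counting sketch: cover a binarized image of the wetting front with
# squares of side eps (in pixels), count occupied boxes N(eps), and estimate the
# fractal dimension D as the slope of log N(eps) vs log(1/eps).

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16)):
    counts = []
    for eps in box_sizes:
        n = 0
        for i in range(0, mask.shape[0], eps):
            for j in range(0, mask.shape[1], eps):
                if mask[i:i + eps, j:j + eps].any():   # box contains part of the front
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, float)), np.log(counts), 1)
    return slope

# Sanity check: a straight (non-fractal) front should give D close to 1.
mask = np.zeros((64, 64), dtype=bool)
mask[32, :] = True
print(f"estimated D: {box_counting_dimension(mask):.2f}")
```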
Abstract:
In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result, there has been substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed. In addition, the results of each approach are evaluated by introducing noise in the number of pending loads, in order to simulate the robots' error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are the following; a sketch of the underlying threshold rule follows the list.
Threshold models: experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was also introduced in the number of pending loads, and dynamic tasks were generated over time.
Learning automata methods: experiments to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: experiments to test the ant colony optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
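The sketch below illustrates the standard response-threshold rule with adaptive thresholds that such division-of-labor models build on (the common formulation from the literature, with illustrative parameters, not necessarily the thesis' exact equations): a robot performs a task with probability s²/(s² + θ²), where s is its locally estimated stimulus (pending load) and θ its threshold, and thresholds drop for tasks the robot keeps performing and rise otherwise.

```python
import random

# Sketch of a response-threshold rule with adaptive thresholds (illustrative values).

def response_probability(stimulus, threshold):
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def update_threshold(threshold, performed, learning=0.1, forgetting=0.05,
                     t_min=0.1, t_max=10.0):
    theta = threshold - learning if performed else threshold + forgetting
    return min(max(theta, t_min), t_max)

theta = 5.0
for step in range(3):
    stimulus = 4.0 + random.gauss(0.0, 0.5)        # noisy local estimate of pending load
    act = random.random() < response_probability(stimulus, theta)
    theta = update_threshold(theta, act)
    print(f"step {step}: stimulus={stimulus:.2f} act={act} theta={theta:.2f}")
```

The added noise on the stimulus mirrors the experiments above, where noise in the number of pending loads simulates each robot's error in estimating the true number of pending tasks.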