895 results for Heterogeneous multiprocessors
Abstract:
The consumption capital asset pricing model is the standard economic model used to capture stock market behavior. However, empirical tests have pointed to its inability to account quantitatively for the high average rate of return and volatility of stocks over time for plausible parameter values. Recent research has suggested that the consumption of stockholders is more strongly correlated with the performance of the stock market than the consumption of non-stockholders. We model two types of agents: non-stockholders with standard preferences and stockholders with preferences that incorporate elements of the prospect theory developed by Kahneman and Tversky (1979). In addition to consumption, stockholders consider fluctuations in their financial wealth explicitly when making decisions. Data from the Panel Study of Income Dynamics are used to calibrate the labor income processes of the two types of agents. Each agent faces idiosyncratic shocks to his labor income as well as aggregate shocks to the per-share dividend, but markets are incomplete and agents cannot hedge consumption risks completely. In addition, consumers face both borrowing and short-sale constraints. Our results show that, in equilibrium, agents hold different portfolios. Our model is able to generate a time-varying risk premium of about 5.5% while maintaining a low risk-free rate, thus suggesting a plausible explanation for the equity premium puzzle reported by Mehra and Prescott (1985).
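The prospect-theory element referenced above is, in Kahneman and Tversky's formulation, a value function defined over gains and losses relative to a reference point: concave over gains, convex and steeper over losses. A minimal sketch follows; the curvature and loss-aversion constants are the commonly cited Tversky–Kahneman calibration defaults, not parameters taken from this paper:

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference point:
    concave over gains (x >= 0), convex and steeper over losses, so that a
    loss hurts more than an equal-sized gain pleases (loss aversion lam > 1)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)
```

With these defaults, the disutility of a $100 loss exceeds the utility of a $100 gain, which is the feature that lets wealth fluctuations matter so much for stockholders.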
Abstract:
The mechanisms underlying cellular response to proteasome inhibitors have not been clearly elucidated in solid tumor models. Evidence suggests that the ability of a cell to manage the amount of proteotoxic stress following proteasome inhibition dictates survival. In this study using the FDA-approved proteasome inhibitor bortezomib (Velcade®) in solid tumor cells, we demonstrated that perhaps the most critical response to proteasome inhibition is repression of global protein synthesis by phosphorylation of the eukaryotic initiation factor 2-α subunit (eIF2α). In a panel of 10 distinct human pancreatic cancer cells, we showed marked heterogeneity in the ability of cancer cells to induce eIF2α phosphorylation upon stress (eIF2α-P); lack of inducible eIF2α-P led to excessive accumulation of aggregated proteins, reactive oxygen species, and ultimately cell death. In addition, we examined complementary cytoprotective mechanisms involving the activation of the heat shock response (HSR), and found that induction of heat shock protein 70 kDa (Hsp72) protected against proteasome inhibitor-induced cell death in human bladder cancer cells. Finally, investigation of a novel histone deacetylase 6 (HDAC6)-selective inhibitor suggested that the cytoprotective role of the cytoplasmic histone deacetylase 6 (HDAC6) in response to proteasome inhibition may have been previously overestimated.
Abstract:
DEVELOPMENT AND IMPLEMENTATION OF A DYNAMIC HETEROGENEOUS PROTON EQUIVALENT ANTHROPOMORPHIC THORAX PHANTOM FOR THE ASSESSMENT OF SCANNED PROTON BEAM THERAPY. A thesis by James Leroy Neihart, B.S., presented to the Faculty of The University of Texas Health Science Center at Houston and The University of Texas MD Anderson Cancer Center Graduate School of Biomedical Sciences in partial fulfillment of the requirements for the degree of Master of Science. Houston, Texas. Date of graduation: August 2013. Approved by the advisory committee: David Followill, Ph.D. (chair), Peter Balter, Ph.D., Narayan Sahoo, Ph.D., Kenneth Hess, Ph.D., and Paige Summers, M.S. Acknowledgments: I would like to acknowledge my advisory committee members for their time and effort contributed to this project. I would additionally like to thank the faculty and staff at the PTC-H and the RPC who assisted in many aspects of this project: Falk Pönisch, Ph.D., for his breath-hold proton therapy treatment expertise; Matt Palmer and Jaques Bluett for proton dosimetry assistance; Matt Kerr for verification plan assistance; and Carrie Amador, Nadia Hernandez, Trang Nguyen, Andrea Molineu, and Lynda McDonald for TLD and film dosimetry assistance. Finally, I would like to thank my wife and family for their support and encouragement during my research and studies.
Development and implementation of a dynamic heterogeneous proton equivalent anthropomorphic thorax phantom for the assessment of scanned proton beam therapy. By: James Leroy Neihart, B.S. Chair of Advisory Committee: David Followill, Ph.D. Proton therapy has been gaining ground recently in radiation oncology. To date, the most successful applications of proton therapy are head and neck cases as well as prostate cases; these tumor locations do not suffer from the treatment delivery difficulties caused by respiratory motion. Lung tumors require either breath hold or motion tracking, neither of which has been assessed with an end-to-end phantom for proton treatments. Currently, the RPC does not have a dynamic thoracic phantom for assessing proton therapy procedures. Such a phantom could also be an excellent means of assessing the quality assurance procedures of proton therapy centers wishing to participate in clinical trials; an eventual goal of this phantom is to provide a means of evaluating and auditing institutions for the ability to start clinical trials utilizing proton therapy procedures for lung cancers. The hypothesis of this study is therefore that a dynamic anthropomorphic thoracic phantom can be created to evaluate end-to-end proton therapy treatment procedures for lung cancer and assure agreement between the measured and calculated dose within 5% / 5 mm with a reproducibility of 2%. Multiple materials were assessed for thoracic heterogeneity equivalency, and the phantom was designed from the materials found to be in greatest agreement. The phantom was taken through an end-to-end treatment four times, including simulation, treatment planning, and treatment delivery. Each treatment plan was delivered three times to assess reproducibility, and the dose measured within the phantom was compared to that of the treatment plan.
The hypothesis was fully supported for three of the treatment plans, but failed the reproducibility requirement for the most aggressive treatment plan.
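The 5% / 5 mm agreement criterion above combines a dose tolerance with a distance-to-agreement search. A deliberately simplified 1-D sketch of such a check is shown below; this is not the RPC's actual analysis (which uses composite/gamma evaluation on 2-D film and TLD data), and the function name, normalization, and point spacing are illustrative:

```python
def passes_dose_dta(measured, calculated, positions, dose_tol=0.05, dist_tol=5.0):
    """Simplified 1-D dose / distance-to-agreement check: a measured point
    passes if some calculated point within dist_tol (mm) matches its dose
    within dose_tol (fraction of the calculated maximum). The whole profile
    passes only if every measured point passes."""
    ref_max = max(calculated)
    for xm, dm in zip(positions, measured):
        ok = any(abs(xc - xm) <= dist_tol and abs(dc - dm) <= dose_tol * ref_max
                 for xc, dc in zip(positions, calculated))
        if not ok:
            return False
    return True
```

The distance tolerance is what lets a spatially shifted but otherwise correct profile (e.g. at a steep penumbra) still pass, while a uniform dose error fails everywhere.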
Abstract:
This paper develops a micro-simulation framework for multinational entry and sales activities across countries. The model is based on Eaton, Kortum, and Kramarz's (2010) quantitative trade model, adapted to multinational production. Using micro data on Japanese manufacturing firms, we first document the empirical regularities of multinational entry and sales activity and estimate the model's structural parameters with the simulated method of moments. We then demonstrate that our adapted model is able to replicate important dimensions of the in-sample moments conditioned on in our estimation strategy. Importantly, it is able to replicate activity in an economic period with a far different level of FDI barriers than the one conditioned on in our estimation sample. Overall, our research highlights the richness of the simulation framework for performing counterfactual analysis of various FDI policies.
Abstract:
During the past decade of declining FDI barriers, small domestic firms disproportionately contracted while large multinational firms experienced substantial growth in Japan's manufacturing sector. This paper quantitatively assesses the impact of FDI globalization on intra-industry reallocations and aggregate productivity. We calibrate the firm-heterogeneity model of Eaton, Kortum, and Kramarz (2011) to micro-level data on Japanese multinational firms. Estimating the structural parameters of the model, we demonstrate that the model closely replicates the entry and sales patterns of Japanese multinationals. Counterfactual simulations show that declining FDI barriers lead to a disproportionate expansion of foreign production by more efficient firms relative to less efficient firms. A hypothetical 20% reduction in FDI barriers is found to generate a 30.7% improvement in aggregate productivity through market-share reallocation.
Abstract:
This study extends Melitz's model of heterogeneous firms by introducing shared fixed costs in a marketplace. It aims to explain heterogeneous firms' choice between traditional marketplaces and modern distribution channels on the basis of their productivities. The results reveal that the co-existence of a traditional marketplace and modern distribution channels improves social welfare. In addition, a deregulation policy for firm entry outside a marketplace and the accumulation of human capital are factors that contribute to improving social welfare.
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at runtime. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
Abstract:
This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are gathered during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. This approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
Abstract:
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid", which is the future of large-scale energy management. The system presented in this paper has been developed in a self-sufficient solar house called "MagicBox", equipped with a grid connection, PV generation, lead-acid batteries, controllable appliances, and smart metering. There is therefore a large number of energy variables to be monitored, which allows us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, obtained in a real house, demonstrate the feasibility of the proposed collaborative system for reducing electrical power consumption and increasing energy efficiency.
Abstract:
The wetting front is the zone where water invades and advances into an initially dry porous material, and it plays a crucial role in solute transport through the unsaturated zone. Water is an essential part of the physiological process of all plants. Through water, necessary minerals are moved from the roots to the parts of the plant that require them; water moves chemicals from one part of the plant to another, and it is also required for photosynthesis, metabolism, and transpiration. The leaching of chemicals by wetting fronts is influenced by two major factors, namely the irregularity of the fronts and the heterogeneity in the distribution of chemicals, both of which have been described using fractal techniques. Soil structure can significantly modify infiltration rates and flow pathways in soils, and relations between features of soil structure and features of infiltration can be elucidated from the velocities and the structure of wetting fronts. When rainwater falls onto soil, it does not simply pool on the surface: if the surface is permeable (porous), the water (or another fluid) seeps down through layers of soil, filling each layer to capacity before moving down into the next. In sandy soil water moves quickly, while it moves much more slowly through clay soil. The boundary of this movement of water through soil layers is called the wetting front. Our research concerns the motion of a liquid into an initially dry porous medium. This work presents a theoretical framework for studying the physical interplay between a stationary wetting front of fractal dimension D and different porous materials. The aim was to model the mass-geometry interplay by using the fractal dimension D of a stationary wetting front: the plane corresponding to the image is divided into squares (the minimum corresponding to the pixel size) of side length ε. We acknowledge the help of Prof. M. García Velarde and the facilities offered by the Pluri-Disciplinary Institute of the Complutense University of Madrid. We also acknowledge the help of the European Community under the project "Multi-scale complex fluid flows and interfacial phenomena" (PITN-GA-2008-214919). Thanks are also due to ERCOFTAC (PELNoT, SIG 14).
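The square-counting procedure described above is the standard box-counting estimate of a fractal dimension D: count the boxes N(ε) of side ε that intersect the front, then fit log N(ε) against log ε, since N(ε) ∝ ε^(−D). A minimal sketch for a binary image of the front (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension D of a binary image (True = front
    pixels) by counting occupied boxes N(eps) at several box sizes eps and
    fitting log N(eps) = -D log eps + c."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # trim so an s-by-s grid tiles exactly, then view the image as blocks
        H, W = (h // s) * s, (w // s) * s
        blocks = mask[:H, :W].reshape(H // s, s, W // s, s)
        # a box is occupied if any pixel inside it belongs to the front
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a filled region yields D ≈ 2 and a straight line yields D ≈ 1; an irregular wetting front falls in between.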
Abstract:
In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result there has been a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, for a single robot to accomplish. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, which means that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. It is also of interest to evaluate the results of each approach under noise, comparing the results obtained when noise is introduced in the number of pending loads in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches, and the generation of dynamic tasks are presented and discussed. The particular issues studied are the following. Threshold models: experiments conducted to test the response threshold model, analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced in the number of pending loads and dynamic tasks were generated over time. Learning automata methods: experiments to test the learning-automata-based probabilistic algorithms, evaluating the system performance index with additive noise and with dynamic task generation for the same problem of the distribution of heterogeneous multi-tasks. Ant colony optimization: experiments to test the ant-colony-optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems, evaluating the system performance index under additive noise and dynamic task generation over time.
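The response-threshold mechanism described above is commonly modeled, following the division-of-labor literature on social insects (e.g. Bonabeau et al.), with an engagement probability that rises with the task stimulus and falls with the robot's threshold, plus an adaptive threshold update that produces specialization. A minimal sketch under those assumptions; all function names and constants below are illustrative, not taken from the thesis:

```python
import random

def response_probability(stimulus, threshold, n=2):
    """Response-threshold rule: engagement probability for a task with
    stimulus s, given the robot's threshold theta: P = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def select_task(stimuli, thresholds, rng=random.random):
    """Each robot decides locally: it takes the first task whose response
    test succeeds, or returns None (idle) if no test succeeds."""
    for task, (s, theta) in enumerate(zip(stimuli, thresholds)):
        if rng() < response_probability(s, theta):
            return task
    return None

def update_threshold(theta, performed, xi=0.1, phi=0.05, lo=0.01, hi=100.0):
    """Adaptive specialization: performing a task lowers the robot's
    threshold for it (more likely to respond next time); not performing
    it raises the threshold."""
    theta = theta - xi if performed else theta + phi
    return min(max(theta, lo), hi)
```

Because each robot only needs its local estimate of the stimuli (e.g. the number of pending loads), the rule is fully decentralized, which is exactly the property the thesis exploits.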
Abstract:
Network mobility (NEMO) is proposed to support mobility management when users move as a whole. In the IP Multimedia Subsystem (IMS), individual Quality of Service (QoS) control for NEMO results in excessive signaling cost. On the other hand, current QoS schemes have two drawbacks: unawareness of the heterogeneous wireless environment and inefficient utilization of the reserved bandwidth. To solve these problems, we present a novel heterogeneous bandwidth sharing (HBS) scheme for QoS provision under IMS-based NEMO (IMS-NEMO). The HBS scheme selects the most suitable access network for each session and enables newly arriving non-real-time sessions to share bandwidth with Variable Bit Rate (VBR) coded media flows. Modeling and simulation results demonstrate that HBS can satisfy users' QoS requirements and make more efficient use of the scarce wireless bandwidth.
Abstract:
We present ARGoS, a novel open source multi-robot simulator. The main design focus of ARGoS is the real-time simulation of large heterogeneous swarms of robots. Existing robot simulators obtain scalability by imposing limitations on their extensibility and on the accuracy of the robot models. By contrast, in ARGoS we pursue a deeply modular approach that allows the user both to easily add custom features and to allocate computational resources where needed by the experiment. A unique feature of ARGoS is the possibility to use multiple physics engines of different types and to assign them to different parts of the environment. Robots can migrate from one engine to another transparently. This feature enables entirely novel classes of optimizations to improve scalability and paves the way for a new approach to parallelism in robotics simulation. Results show that ARGoS can simulate about 10,000 simple wheeled robots 40% faster than real-time.
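The multiple-physics-engine feature can be pictured as binding each engine to a region of the environment and reassigning robots as they cross region boundaries. The toy sketch below illustrates only that idea; it is not ARGoS's actual API (ARGoS configures engines in C++ via an XML experiment file), and the class, engine names, and 1-D regions are purely illustrative:

```python
class Engine:
    """Toy stand-in for a physics engine bound to a 1-D region of space."""
    def __init__(self, name, region):
        self.name = name
        self.region = region          # (x_min, x_max); owns x_min <= x < x_max
        self.robots = set()

    def contains(self, x):
        lo, hi = self.region
        return lo <= x < hi

def migrate(engines, robot, x):
    """Reassign a robot to whichever engine owns its current position.
    Returns the name of the new engine, or None if x is outside all regions."""
    for e in engines:
        e.robots.discard(robot)
    for e in engines:
        if e.contains(x):
            e.robots.add(robot)
            return e.name
    return None
```

The scalability payoff is that each region can use the cheapest engine adequate for it (e.g. a fast 2-D kinematics engine on open ground, a full 3-D dynamics engine where contacts matter), and regions can be simulated in parallel.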
Abstract:
This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-election of heterogeneous specialized tasks by autonomous robots. We focus on a strictly distributed or decentralized approach, as we are particularly interested in decentralized solutions where the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose solutions using two different approaches: Ant Colony Optimization-based deterministic algorithms and Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically through time. The paper ends with a critical discussion of the experimental results.
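A standard Learning Automata update of the kind the abstract mentions is the linear reward-inaction (L_R-I) scheme: when the chosen action is rewarded, the action-probability vector shifts toward it; on a penalty, the vector is left unchanged. A minimal sketch (the learning rate and list representation are illustrative, not the paper's implementation):

```python
def lri_update(probs, action, rewarded, a=0.1):
    """Linear reward-inaction (L_R-I) update of an action-probability
    vector: on reward, move mass a*(1-p) onto the chosen action and scale
    the others by (1-a); on penalty, leave the vector unchanged."""
    if not rewarded:
        return list(probs)
    return [p + a * (1.0 - p) if i == action else (1.0 - a) * p
            for i, p in enumerate(probs)]
```

The update preserves the total probability of 1 by construction, and repeated rewards for the same task drive its selection probability toward 1, which is how a robot's task choice converges in this family of algorithms.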