961 results for Maximal topologies
Abstract:
The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed), or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine in a distributed and scalable manner the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information is captured implicitly for the remaining (remote) clients outside these neighborhoods, by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through an iterative reoptimization of the location and the number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves performance under various synthetic and real Internet topologies that is comparable to that of optimal, centralized approaches requiring full topology and demand information.
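The following is a minimal illustrative sketch (in Python, not the paper's implementation) of one re-optimization step of the kind described above: collect the radius-r ball around an open facility, fold the demand of remote clients onto the border nodes through which they reach the ball, and solve a small k-median restricted to the ball. All function and variable names are hypothetical, and hop counts stand in for distances.

```python
from collections import deque
from itertools import combinations

def bfs_dist(graph, src):
    """Hop distances from src to every reachable node; graph is {node: [neighbors]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def reoptimize_ball(graph, demand, center, r, k=1):
    """Re-optimize k facility locations inside the radius-r ball of `center`.
    Demand of clients outside the ball is credited to a border node, so the
    small k-median below only involves nodes of the ball."""
    d_center = bfs_dist(graph, center)
    inside = {v for v, d in d_center.items() if d <= r}
    border = {v for v, d in d_center.items() if d == r} or {center}
    eff = {v: demand.get(v, 0.0) for v in inside}
    for v, w in demand.items():
        if v not in inside:                      # remote client: map its demand
            dv = bfs_dist(graph, v)              # (global BFS here only for brevity)
            eff[min(border, key=lambda b: dv.get(b, float("inf")))] += w
    dists = {u: bfs_dist(graph, u) for u in inside}
    cost, facilities = min(
        (sum(w * min(dists[f][c] for f in facs) for c, w in eff.items()), facs)
        for facs in combinations(sorted(inside), k))
    return cost, facilities
```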
Abstract:
In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
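As a rough illustration of the best-response formulation mentioned above, the sketch below (hypothetical names, brute-force enumeration) chooses the k outgoing links that minimize a node's weighted sum of access costs, treating the rest of the overlay's routing costs as fixed; this is a small k-median instance on an asymmetric distance.

```python
from itertools import combinations

def best_response(u, nodes, link_cost, via_cost, weight, k):
    """
    link_cost[u][v] : cost of the directed overlay link u -> v
    via_cost[v][t]  : v's current cost to reach destination t
    weight[t]       : how much u cares about destination t
    Returns (total cost, frozenset of chosen neighbors).
    """
    candidates = [v for v in nodes if v != u]
    best = None
    for wiring in combinations(candidates, k):
        cost = sum(weight[t] * min(link_cost[u][v] + via_cost[v][t]
                                   for v in wiring)
                   for t in nodes if t != u)
        if best is None or cost < best[0]:
            best = (cost, frozenset(wiring))
    return best
```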
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially-hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than any of the existing techniques or services are able to handle.
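The pruning idea can be illustrated with a small, hypothetical sketch (not NETEMBED's actual code): keep, for each query node, only the host nodes that satisfy its resource constraints, and repeatedly discard hosts that cannot meet the link requirements toward any surviving candidate of a query neighbor. Such an arc-consistency-style filter shrinks the search space without removing any valid embedding.

```python
def prune_candidates(query_nodes, query_links, hosts, node_ok, link_ok):
    """
    query_links : iterable of (qu, qv) pairs of query nodes
    node_ok(q, h)           : host h satisfies query node q's CPU/memory demands
    link_ok(qu, qv, hu, hv) : hosting path hu -> hv meets the bandwidth/delay
                              requirements of the query link (qu, qv)
    Returns {query node: set of surviving host candidates}.
    """
    cand = {q: {h for h in hosts if node_ok(q, h)} for q in query_nodes}
    changed = True
    while changed:
        changed = False
        for qu, qv in query_links:
            for hu in list(cand[qu]):
                if not any(link_ok(qu, qv, hu, hv) for hv in cand[qv]):
                    cand[qu].discard(hu)   # hu cannot serve qu in any embedding
                    changed = True
            for hv in list(cand[qv]):
                if not any(link_ok(qu, qv, hu, hv) for hu in cand[qu]):
                    cand[qv].discard(hv)
                    changed = True
    return cand
```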
Abstract:
This thesis is focused on the design and development of an integrated magnetic (IM) structure for use in high-power, high-current power converters employed in renewable energy applications. These applications require low-cost, high-efficiency, high-power-density magnetic components, and the use of IM structures can help achieve this goal. A novel CCTT-core split-winding integrated magnetic (CCTT IM) is presented in this thesis. This IM is optimized for use in high-power dc-dc converters. The CCTT IM design is an evolution of the traditional EE-core integrated magnetic (EE IM). The CCTT IM structure uses a split-winding configuration allowing for the reduction of external leakage inductance, which is a problem for many traditional IM designs, such as the EE IM. Magnetic poles are incorporated to help shape and contain the leakage flux within the core window. These magnetic poles have the added benefit of minimizing the winding power loss due to the airgap fringing flux as they shape the fringing flux away from the split-windings. A CCTT IM reluctance model is developed which uses fringing equations to accurately predict the most probable regions of fringing flux around the pole and winding sections of the device. This helps in the development of a more accurate model as it predicts the dc and ac inductance of the component. A CCTT IM design algorithm is developed which relies heavily on the reluctance model of the CCTT IM. The design algorithm is implemented using the mathematical software tool Mathematica. This algorithm is modular in structure and allows for the quick and easy design and prototyping of the CCTT IM. The algorithm allows for the investigation of the CCTT IM boxed volume with the variation of input current ripple, for different power ranges, magnetic materials and frequencies. A high-power 72 kW CCTT IM prototype is designed and developed for use in an automotive fuel-cell-based drivetrain. The CCTT IM design algorithm is initially used to design the component, while 3D and 2D finite element analysis (FEA) software is used to optimize the design. Low-cost, low-power-loss ferrite 3C92 is used for its construction, and when combined with a low number of turns results in a very efficient design. A paper analysis is undertaken which compares the performance of the high-power CCTT IM design with that of two discrete inductors used in a two-phase (2L) interleaved converter. The 2L option consists of two discrete inductors constructed from high dc-bias material. Both topologies are designed for the same worst-case phase current ripple conditions, and this ensures a like-for-like comparison. The comparison indicates that the total magnetic component boxed volume of both converters is similar, while the CCTT IM has significantly lower power loss. Experimental results for the 72 kW (155 V dc, 465 A dc input, 420 V dc output) prototype validate the CCTT IM concept, where the component is shown to be 99.7% efficient. The high-power experimental testing was conducted at the General Motors Advanced Technology Center in Torrance, California. Calorific testing was used to determine the power loss in the CCTT IM component. Experimental 3.8 kW results and a 3.8 kW prototype compare and contrast the ferrite CCTT IM and high dc-bias 2L concepts over the typical operating range of a fuel cell under like-for-like conditions. The CCTT IM is shown to perform better than the 2L option over the entire power range. An 8 kW ferrite CCTT IM prototype is developed for use in photovoltaic (PV) applications.
The CCTT IM is used in a boost pre-regulator as part of the PV power stage. The CCTT IM is compared with an industry standard 2L converter consisting of two discrete ferrite toroidal inductors. The magnetic components are compared for the same worst-case phase current ripple and the experimental testing is conducted over the operation of a PV panel. The prototype CCTT IM allows for a 50 % reduction in total boxed volume and mass in comparison to the baseline 2L option, while showing increased efficiency.
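As a rough illustration of the kind of reluctance modelling such a design algorithm rests on, the sketch below computes the inductance of a generic gapped core as L = N^2 / (R_core + R_gap), with a standard McLyman-style fringing correction widening the effective gap. This is a generic sketch with illustrative numbers, not the thesis's CCTT IM model.

```python
from math import pi, sqrt, log

MU0 = 4e-7 * pi                     # permeability of free space (H/m)

def gapped_inductance(turns, core_area, core_len, mu_r, gap_len, window_h):
    """Inductance (H) of a gapped core: L = N^2 / (R_core + R_gap).
    All lengths in metres, areas in m^2. The fringing factor
    F = 1 + (g / sqrt(Ac)) * ln(2G / g) widens the effective gap area."""
    fringing = 1 + (gap_len / sqrt(core_area)) * log(2 * window_h / gap_len)
    r_core = core_len / (MU0 * mu_r * core_area)
    r_gap = gap_len / (MU0 * core_area * fringing)
    return turns ** 2 / (r_core + r_gap)

# Example: 12 turns on a ferrite core (mu_r ~ 2000) with a 1 mm air gap.
print(gapped_inductance(12, 8e-4, 0.25, 2000, 1e-3, 0.05))   # ~1.7e-4 H
```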
Abstract:
The analysis of energy detector systems is a well studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in a closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as in the optimisation of device parameters on low cost hardware. The problem becomes acute in situations where the signal to noise ratio is small and reliable detection is to be ensured or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal to noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi square, noncentral chi square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi square distribution where the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
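For context, the sketch below shows the standard textbook Gaussian (central-limit) approximation for an energy detector over AWGN, the kind of approximation whose error the thesis bounds; the threshold inversion by bisection and the parameter values are illustrative only and are not taken from the thesis.

```python
# Energy detector over N complex samples: the test statistic is (non)central
# chi-square with 2N degrees of freedom; for large N a Gaussian approximation
# gives closed-form false-alarm and detection probabilities.
from math import sqrt, erfc

def q_func(x):                       # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2))

def pfa_approx(threshold, n):        # noise only: mean 2N, variance 4N
    return q_func((threshold - 2 * n) / sqrt(4 * n))

def pd_approx(threshold, n, snr):    # signal present, per-sample SNR
    mean = 2 * n * (1 + snr)
    var = 4 * n * (1 + 2 * snr)
    return q_func((threshold - mean) / sqrt(var))

# Example: choose the threshold for a 1% false-alarm rate, then read off Pd.
n, snr = 1000, 0.05                  # 1000 samples, -13 dB SNR
lo, hi = 2 * n, 4 * n                # bracketing interval for bisection
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if pfa_approx(mid, n) > 0.01 else (lo, mid)
print(pd_approx(lo, n, snr))         # detection probability at that threshold
```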
Abstract:
This thesis is focused on the investigation of magnetic materials for high-power dc-dc converters in hybrid and fuel cell vehicles and the development of an optimized high-power inductor for a multi-phase converter. The thesis introduces the power system architectures for hybrid and fuel cell vehicles. The requirements for power electronic converters are established and the dc-dc converter topologies of interest are introduced. A compact and efficient inductor is critical to reduce the overall cost, weight and volume of the dc-dc converter and optimize vehicle driving range and traction power. Firstly, materials suitable for a gapped CC-core inductor are analyzed and investigated. A novel inductor-design algorithm is developed and automated in order to compare and contrast the various magnetic materials over a range of frequencies and ripple ratios. The algorithm is developed for foil-wound inductors with gapped CC-cores in the low (10 kHz) to medium (30 kHz) frequency range and investigates the materials in a natural-convection-cooled environment. The practical effects of frequency, ripple, air-gap fringing, and thermal configuration are investigated next for the iron-based amorphous metal and 6.5% silicon steel materials. A 2.5 kW converter is built to verify the optimum material selection and thermal configuration over the frequency range and ripple ratios of interest. Inductor size can increase in both of these laminated materials due to increased airgap fringing losses. Distributing the airgap is demonstrated to reduce the inductor losses and size but has practical limitations for iron-based amorphous metal cores. The effects of the manufacturing process are shown to degrade the core loss of the iron-based amorphous metal multi-cut core. The experimental results also suggest that gap loss is not a significant consideration in these experiments. The losses predicted by the equation developed by Reuben Lee and cited by Colonel McLyman are significantly higher than the experimental results suggest. Iron-based amorphous metal has better performance than 6.5% silicon steel when a single cut core and natural-convection cooling are used. Conduction cooling, rather than natural convection, can result in the highest-power-density inductor. The cooling for these laminated materials is very dependent on the direction of the lamination and the component mounting. Experimental results are produced showing the effects of lamination direction on the cooling path. A significant temperature reduction is demonstrated for conduction cooling versus natural-convection cooling. Iron-based amorphous metal and 6.5% silicon steel are competitive materials when conduction cooled. A novel inductor design algorithm is developed for foil-wound inductors with gapped CC-cores for conduction cooling of core and copper. Again, conduction cooling, rather than natural convection, is shown to reduce the size and weight of the inductor. The weight of the 6.5% silicon steel inductor is reduced by around a factor of ten compared to natural-convection cooling due to the high thermal conductivity of the material. The conduction cooling algorithm is used to develop high-power custom inductors for use in a high-power multi-phase boost converter. Finally, a high-power digitally-controlled multi-phase boost converter system is designed and constructed to test the high-power inductors. The performance of the inductors is compared to the predictions used in the design process and very good correlation is achieved.
The thesis results have been documented at IEEE APEC, PESC and IAS conferences in 2007 and at the IEEE EPE conference in 2008.
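A minimal, hypothetical sketch of the core-loss term at the heart of such a material comparison is given below: the Steinmetz equation P_v = k * f^alpha * B_pk^beta evaluated for candidate materials at a given frequency and ripple flux swing. The coefficients are placeholders, not the thesis's measured data, and the resulting units depend entirely on how the coefficients were fitted.

```python
MATERIALS = {
    # name: (k, alpha, beta)  -- hypothetical Steinmetz coefficients
    "iron-based amorphous metal": (6.5, 1.51, 1.74),
    "6.5% silicon steel":         (9.0, 1.40, 1.80),
}

def core_loss_density(material, freq_hz, b_peak_t):
    """Volumetric core loss for a sinusoidal flux swing with peak b_peak_t."""
    k, alpha, beta = MATERIALS[material]
    return k * freq_hz ** alpha * b_peak_t ** beta

for name in MATERIALS:
    # e.g. a 20 kHz ripple component with a 0.1 T peak ac flux swing
    print(name, core_loss_density(name, 20e3, 0.1))
```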
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
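Two of the basic operations such an approach builds on can be sketched compactly (illustrative code, not the thesis's algorithms): filtering a set of utility vectors down to its Pareto-maximal elements, and reducing that set to an ϵ-covering in which every discarded vector is within a factor (1 + ϵ) of a kept one, component-wise (maximization throughout).

```python
def dominates(u, v):
    """u Pareto-dominates v: at least as good everywhere, strictly better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v != u)]

def epsilon_cover(vectors, eps):
    """Greedy cover: keep a vector only if no kept vector already (1+eps)-covers it."""
    kept = []
    for u in sorted(pareto_maximal(vectors), reverse=True):
        if not any(all(kv >= xu / (1 + eps) for kv, xu in zip(v, u)) for v in kept):
            kept.append(u)
    return kept

front = pareto_maximal([(3, 5), (4, 4), (2, 6), (3, 4), (5, 2)])
print(front)                      # [(3, 5), (4, 4), (2, 6), (5, 2)]
print(epsilon_cover(front, 0.5))  # a smaller representative subset
```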
Abstract:
We present a fiber-optic interferometric system for measuring depth-resolved scattering in two angular dimensions using Fourier-domain low-coherence interferometry. The system is a unique hybrid of the Michelson and Sagnac interferometer topologies. The collection arm of the interferometer is scanned in two dimensions to detect angular scattering from the sample, which can then be analyzed to determine the structure of the scatterers. A key feature of the system is the full control of polarization of both the illumination and the collection fields, allowing for polarization-sensitive detection, which is essential for two-dimensional angular measurements. System performance is demonstrated using a double-layer microsphere phantom. Experimental data from samples with different sizes and acquired with different polarizations show excellent agreement with Mie theory, producing structural measurements with subwavelength accuracy.
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values. © 2010 INFORMS.
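The weak-duality idea can be illustrated on a toy optimal-stopping problem (illustrative code only, with the penalty omitted): any feasible heuristic policy yields a lower bound on the optimal value, while the perfect-information relaxation with zero penalty, i.e. stopping at the pathwise maximum, yields an upper bound. The paper's penalties serve to tighten that upper bound.

```python
import random

def simulate_path(T=20, s0=100.0, sigma=0.1):
    """One path of stopping rewards max(S_t - 100, 0) for a random walk S_t."""
    s, path = s0, []
    for _ in range(T):
        s *= 1 + random.gauss(0, sigma)
        path.append(max(s - 100.0, 0.0))
    return path

def bounds(n_paths=5000, threshold=10.0):
    lower = upper = 0.0
    for _ in range(n_paths):
        path = simulate_path()
        # heuristic policy: stop the first time the reward reaches `threshold`,
        # otherwise accept whatever is available at the horizon
        lower += next((r for r in path if r >= threshold), path[-1])
        # perfect-information relaxation, zero penalty: pathwise maximum
        upper += max(path)
    return lower / n_paths, upper / n_paths

random.seed(0)
print(bounds())   # (heuristic lower bound, information-relaxation upper bound)
```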
Abstract:
BACKGROUND: L-arginine infusion improves endothelial function in malaria but its safety profile has not been described in detail. We assessed clinical symptoms, hemodynamic status and biochemical parameters before and after a single L-arginine infusion in adults with moderately severe malaria. METHODOLOGY AND FINDINGS: In an ascending dose study, adjunctive intravenous L-arginine hydrochloride was infused over 30 minutes in doses of 3 g, 6 g and 12 g to three separate groups of 10 adults hospitalized with moderately severe Plasmodium falciparum malaria in addition to standard quinine therapy. Symptoms, vital signs and selected biochemical measurements were assessed before, during, and for 24 hours after infusion. No new or worsening symptoms developed apart from mild discomfort at the intravenous cannula site in two patients. There was a dose-response relationship between increasing mg/kg dose and the maximum decrease in systolic (rho = 0.463; Spearman's, p = 0.02) and diastolic blood pressure (r = 0.42; Pearson's, p = 0.02), and with the maximum increment in blood potassium (r = 0.70, p<0.001) and maximum decrement in bicarbonate concentrations (r = 0.53, p = 0.003) and pH (r = 0.48, p = 0.007). At the highest dose (12 g), changes in blood pressure and electrolytes were not clinically significant, with a mean maximum decrease in mean arterial blood pressure of 6 mmHg (range: 0-11; p<0.001), mean maximal increase in potassium of 0.5 mmol/L (range 0.2-0.7 mmol/L; p<0.001), and mean maximal decrease in bicarbonate of 3 mEq/L (range 1-7; p<0.01) without a significant change in pH. There was no significant dose-response relationship with blood phosphate, lactate, anion gap and glucose concentrations. All patients had an uncomplicated clinical recovery. CONCLUSIONS/SIGNIFICANCE: Infusion of up to 12 g of intravenous L-arginine hydrochloride over 30 minutes is well tolerated in adults with moderately severe malaria, with no clinically important changes in hemodynamic or biochemical status. Trials of adjunctive L-arginine can be extended to phase 2 studies in severe malaria. TRIAL REGISTRATION: ClinicalTrials.gov NCT00147368.
Abstract:
BACKGROUND: Individuals without prior immunity to a vaccine vector may be more sensitive to reactions following injection, but may also show optimal immune responses to vaccine antigens. To assess safety and maximal tolerated dose of an adenoviral vaccine vector in volunteers without prior immunity, we evaluated a recombinant replication-defective adenovirus type 5 (rAd5) vaccine expressing HIV-1 Gag, Pol, and multiclade Env proteins, VRC-HIVADV014-00-VP, in a randomized, double-blind, dose-escalation, multicenter trial (HVTN study 054) in HIV-1-seronegative participants without detectable neutralizing antibodies (nAb) to the vector. As secondary outcomes, we also assessed T-cell and antibody responses. METHODOLOGY/PRINCIPAL FINDINGS: Volunteers received one dose of vaccine at either 10^10 or 10^11 adenovector particle units, or placebo. T-cell responses were measured against pools of global potential T-cell epitope peptides. HIV-1 binding and neutralizing antibodies were assessed. Systemic reactogenicity was greater at the higher dose, but the vaccine was well tolerated at both doses. Although no HIV infections occurred, commercial diagnostic assays were positive in 87% of vaccinees one year after vaccination. More than 85% of vaccinees developed HIV-1-specific T-cell responses detected by IFN-γ ELISpot and ICS assays at day 28. T-cell responses were: CD8-biased; evenly distributed across the three HIV-1 antigens; not substantially increased at the higher dose; and detected at similar frequencies one year following injection. The vaccine induced binding antibodies against at least one HIV-1 Env antigen in all recipients. CONCLUSIONS/SIGNIFICANCE: This vaccine appeared safe and was highly immunogenic following a single dose in human volunteers without prior nAb against the vector. TRIAL REGISTRATION: ClinicalTrials.gov NCT00119873.
Abstract:
We introduce a new concept for stimulated-Brillouin-scattering (SBS)-based slow light in optical fibers that is applicable to broadly tunable frequency-swept sources. It allows slow light to be achieved, in principle, over the entire transparency window of the optical fiber. We demonstrate a slow-light delay of 10 ns at 1.55 μm using a 10-m-long photonic crystal fiber with a source sweep rate of 400 MHz/μs and a pump power of 200 mW. We also show that there exists a maximal delay obtainable by this method, which is set by the SBS threshold, independent of sweep rate. For our fiber with optimum length, this maximum delay is ~38 ns, obtained for a pump power of 760 mW.
Abstract:
To assess the effect of targeted myocardial beta-adrenergic receptor (AR) stimulation on relaxation and phospholamban regulation, we studied the physiological and biochemical alterations associated with overexpression of the human beta2-AR gene in transgenic mice. These mice have an approximately 200-fold increase in beta-AR density and a 2-fold increase in basal adenylyl cyclase activity relative to negative littermate controls. Mice were catheterized with a high-fidelity micromanometer and hemodynamic recordings were obtained in vivo. Overexpression of the beta2-AR altered parameters of relaxation. At baseline, LV dP/dt(min) and the time constant of LV pressure isovolumic decay (Tau) in the transgenic mice were significantly shorter compared with controls, indicating markedly enhanced myocardial relaxation. Isoproterenol stimulation further enhanced relaxation in control mice but not in the transgenic mice, indicating that relaxation was already maximal in these animals. Immunoblotting analysis revealed a selective decrease in the amount of phospholamban protein, without a significant change in the content of either sarcoplasmic reticulum Ca2+ ATPase or calsequestrin, in the transgenic hearts compared with controls. This study indicates that myocardial relaxation is both markedly enhanced and maximal in these mice and that conditions associated with chronic beta-AR stimulation can result in a selective reduction of phospholamban protein.
Abstract:
Human lymphocytes are known to possess a catecholamine-responsive adenylate cyclase with typical beta-adrenergic specificity. To identify directly and to quantitate these beta-adrenergic receptors in human lymphocytes, (-)[3H]alprenolol, a potent beta-adrenergic antagonist, was used to label binding sites in homogenates of human mononuclear leukocytes. Binding of (-)[3H]alprenolol to these sites demonstrated the kinetics, affinity, and stereospecificity expected of binding to adenylate cyclase-coupled beta-adrenergic receptors. Binding was rapid (t1/2 < 30 s) and rapidly reversible (t1/2 < 3 min) at 37°C. Binding was a saturable process with 75 +/- 12 fmol (-)[3H]alprenolol bound/mg protein (mean +/- SEM) at saturation, corresponding to about 2,000 sites/cell. Half-maximal saturation occurred at 10 nM (-)[3H]alprenolol, which provides an estimate of the dissociation constant of (-)[3H]alprenolol for the beta-adrenergic receptor. The beta-adrenergic antagonist (-)propranolol potently competed for the binding sites, causing half-maximal inhibition of binding at 9 nM. Beta-adrenergic agonists also competed for the binding sites. The order of potency was (-)isoproterenol > (-)epinephrine > (-)norepinephrine, which agreed with the order of potency of these agents in stimulating leukocyte adenylate cyclase. Dissociation constants computed from binding experiments were virtually identical to those obtained from adenylate cyclase activation studies. Marked stereospecificity was observed for both binding and activation of adenylate cyclase: (-)stereoisomers of beta-adrenergic agonists and antagonists were 9- to 300-fold more potent than their corresponding (+) stereoisomers. Structurally related compounds devoid of beta-adrenergic activity, such as dopamine, dihydroxymandelic acid, normetanephrine, pyrocatechol, and phentolamine, did not effectively compete for the binding sites. (-)[3H]alprenolol binding to human mononuclear leukocyte preparations was almost entirely accounted for by binding to small lymphocytes, the predominant cell type in the preparations. No binding to human erythrocytes was detectable. These results demonstrate the feasibility of using direct binding methods to study beta-adrenergic receptors in a human tissue. They also provide an experimental approach to the study of states of altered sensitivity to catecholamines at the receptor level in man.
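The numbers quoted above follow the one-site (Langmuir) binding model, sketched below with the abstract's Bmax and Kd values; the ligand concentrations scanned are arbitrary. Half-maximal saturation occurs exactly at [L] = Kd.

```python
def specific_binding(ligand_nm, bmax=75.0, kd_nm=10.0):
    """Bound ligand (fmol/mg protein) at a free-ligand concentration in nM:
    B = Bmax * [L] / (Kd + [L])."""
    return bmax * ligand_nm / (kd_nm + ligand_nm)

for conc in (1, 5, 10, 20, 50, 100):          # nM
    b = specific_binding(conc)
    print(f"{conc:>4} nM -> {b:5.1f} fmol/mg ({100 * b / 75:4.1f}% of Bmax)")
# at 10 nM the output is 37.5 fmol/mg, i.e. 50% of Bmax, as expected
```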
Abstract:
The alpha 1B-adrenergic receptor (alpha 1B-ADR) is a member of the G-protein-coupled family of transmembrane receptors. When transfected into Rat-1 and NIH 3T3 fibroblasts, this receptor induces focus formation in an agonist-dependent manner. Focus-derived, transformed fibroblasts exhibit high levels of functional alpha 1B-ADR expression, demonstrate a catecholamine-induced enhancement in the rate of cellular proliferation, and are tumorigenic when injected into nude mice. Induction of neoplastic transformation by the alpha 1B-ADR, therefore, identifies this normal cellular gene as a protooncogene. Mutational alteration of this receptor can lead to activation of this protooncogene, resulting in an enhanced ability of agonist to induce focus formation with a decreased latency and quantitative increase in transformed foci. In contrast to cells expressing the wild-type alpha 1B-ADR, focus formation in "oncomutant"-expressing cell lines appears constitutively activated with the generation of foci in unstimulated cells. Further, these cell lines exhibit near-maximal rates of proliferation even in the absence of catecholamine supplementation. They also demonstrate an enhanced ability for tumor generation in nude mice with a decreased period of latency compared with cells expressing the wild-type receptor. Thus, the alpha 1B-ADR gene can, when overexpressed and activated, function as an oncogene inducing neoplastic transformation. Mutational alteration of this receptor gene can result in the activation of this protooncogene, enhancing its oncogenic potential. These findings suggest that analogous spontaneously occurring mutations in this class of receptor proteins could play a key role in the induction or progression of neoplastic transformation and atherosclerosis.