23 results for Alternative space

in CaltechTHESIS


Relevance: 20.00%

Publisher:

Abstract:

In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents that choose it, is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples are the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.

Recent work seeking to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs); the proof exhibits a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balanced rules that limits the design to GWSVs. Moreover, since GWSV rules result in (weighted) potential games, guaranteeing equilibrium existence necessarily means working within the class of potential games.

We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result stems from a deeper connection between Shapley values and marginal contributions that our proofs expose: the two are equivalent through a transformation connecting their ground welfare functions. (This connection also leads to novel closed-form expressions for the GWSV potential function.) Since GWMC rules are more tractable than GWSV rules, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
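To make the budget-balanced case concrete, here is a minimal sketch (not code from the thesis; the welfare function and agent names are hypothetical) of how an unweighted Shapley-value distribution rule splits the welfare generated at a single resource among the agents that selected it:

from itertools import combinations
from math import factorial

def shapley_shares(agents, W):
    """Shapley-value share of the welfare W(S) generated at one resource,
    for each agent in 'agents' that chose the resource. W maps a frozenset
    of agents to a real number; budget balance means the shares sum to
    W(the full agent set)."""
    agents = list(agents)
    n = len(agents)
    shares = {}
    for i in agents:
        others = [a for a in agents if a != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (W(S | {i}) - W(S))
        shares[i] = phi
    return shares

# Hypothetical coverage-style welfare: the resource yields welfare 1.0 as
# soon as at least one agent covers it, so each of three agents receives 1/3.
W = lambda S: 1.0 if S else 0.0
print(shapley_shares(["a", "b", "c"], W))

Weighted and generalized weighted variants reweight these marginal-contribution terms according to agent-specific weights (and, in the generalized case, an ordered partition of the agents).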

Relevance: 20.00%

Publisher:

Abstract:

The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: we introduce an empirical algorithm that corrects for interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their associated temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with the inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry only, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to achieve a tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize the residual stresses that cause the optical figure to deviate from the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate could include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape control applications.

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Relevance: 20.00%

Publisher:

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H − 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.

The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
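In symbols, the two objects defined above are

\[
P_M(x) = \underset{m \in M}{\arg\min}\,\|x - m\|,
\qquad
Q = \sup_{M \subseteq H} \|P_M\| = \sup_{M \subseteq H}\ \sup_{x \neq 0} \frac{\|P_M x\|}{\|x\|},
\qquad 1 \le Q < 2,
\]

with the norm of H used both to define the nearest point and to measure the operator norm of P_M.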

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection on (kernel P_M)^⊥ by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F − G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that we have (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance: 20.00%

Publisher:

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.

Relevance: 20.00%

Publisher:

Abstract:

The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.
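As an illustration of the decentralized architecture described above, the following sketch (hypothetical thresholds and parameter names, not the actual CSN implementation) shows the division of labor between client-side anomaly detection and server-side event detection:

# Client side: flag a sensor reading as anomalous when it deviates strongly
# from the device's learned baseline (a simple mean/std stand-in is used here
# for whatever statistical model the device has learned).
CLIENT_THRESHOLD = 4.0   # hypothetical z-score threshold

def client_pick(sample, baseline_mean, baseline_std):
    return abs(sample - baseline_mean) / baseline_std > CLIENT_THRESHOLD

# Server side: declare an event only when enough independent clients report
# picks within a short time window, which suppresses per-device false alarms.
SERVER_QUORUM = 20       # hypothetical number of concurrent picks
WINDOW_SECONDS = 10.0

def server_event(pick_times, now):
    recent = [t for t in pick_times if 0.0 <= now - t <= WINDOW_SECONDS]
    return len(recent) >= SERVER_QUORUM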

The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN as a case study, this thesis presents the systems and algorithmic techniques needed to design, build, and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.

Relevance: 20.00%

Publisher:

Abstract:

Most space applications require deployable structures due to the limited size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which such structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded to much higher curvatures without showing significant damage, which makes them suitable for deployable structures requiring very high compaction. However, in applications that require carrying loads in compression, fiber microbuckling also dominates the strength of the material. A good understanding of the compressive strength of high-strain composites is therefore needed to determine how suitable they are for this type of application.

The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the compressive behavior of unidirectional carbon-fiber-reinforced silicone (CFRS) rods is studied. Experimental testing of the compressive failure of CFRS rods showed a higher strength in compression than that estimated by analytical models, which is unusual in standard polymer composites. This effect, first observed in the present research, is attributed to the random variation of the carbon fiber angles with respect to the nominal fiber direction. This is an important effect, as it implies that the microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry compressive loads without reaching microbuckling and would therefore be suitable for several space applications.
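For context, the analytical baseline commonly used for the compressive strength of unidirectional composites is Rosen's shear-mode microbuckling estimate (not derived in this abstract, and quoted here only as the standard point of comparison):

\[
\sigma_{cr} \approx G_{12} \approx \frac{G_m}{1 - V_f},
\]

where G_12 is the longitudinal shear modulus of the composite, G_m the matrix shear modulus, and V_f the fiber volume fraction. Any mechanism that raises the effective shear stiffness, such as the fiber-angle variation discussed above, raises this predicted microbuckling strength.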

A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in a second finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber misalignment increased the longitudinal shear stiffness of the material, resulting in a higher strength in compression. The simulations showed a large increase in compressive strength for lower values of the standard deviation of the fiber angle, and a slight decrease in compressive strength for lower values of the mean fiber angle. The strength observed in the experiments was reproduced using the minimum local fiber-angle standard deviation measured in the CFRS rods, whereas the shear stiffness measured in torsion tests was reproduced using the overall fiber angle distribution measured in the rods.

High-strain composites exhibit good bending capabilities, but they tend to be soft out-of-plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites that are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method, presented in this thesis, uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.

Relevance: 20.00%

Publisher:

Abstract:

Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.

Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure to the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.

The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were identified: 1) the reduction in the number of equations that must be solved to model a reacting flow; 2) the reduction in the stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table for multiple simulations.
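The third benefit can be pictured with a minimal sketch (entirely hypothetical property and function names): the constrained-equilibrium state is computed once on a grid of constraint values, and the flow solver then only interpolates in the resulting table.

import numpy as np

def constrained_equilibrium_property(c):
    # Placeholder for the expensive constrained-equilibrium computation
    # (e.g., a Gibbs-energy minimization subject to the RCCE constraint c);
    # the linear form below is purely illustrative.
    return 300.0 + 2500.0 * c

# Done once, before any simulation is run.
constraint_grid = np.linspace(0.0, 1.0, 2001)
property_table = np.array([constrained_equilibrium_property(c)
                           for c in constraint_grid])

def property_lookup(c):
    # Cheap lookup used inside the reacting-flow solver, reusable across runs.
    return np.interp(c, constraint_grid, property_table)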

Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases it creates is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications, to be presented.

Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.

Relevance: 20.00%

Publisher:

Abstract:

Alternative scaffolds are non-antibody proteins that can be engineered to bind new targets. They have found useful niches in the therapeutic space due to their smaller size and the ease with which they can be engineered to be bispecific. We sought a new scaffold that could be used for therapeutic ends and chose the C2 discoidin domain of factor VIII, which is well studied and of human origin. Using yeast surface display, we engineered the C2 domain to bind to αvβ3 integrin with a 16 nM affinity while retaining its thermal stability and monomeric nature. We obtained a crystal structure of the engineered domain at 2.1 Å resolution. We have christened this discoidin domain alternative scaffold the “discobody.”

Relevance: 20.00%

Publisher:

Abstract:

Energy and sustainability have become two of the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT represents one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these efficiency improvements have not necessarily led to reductions in energy consumption because ever more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from better "engineering" rather than better "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
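The "follow the renewables" idea can be illustrated with a deliberately simplified greedy sketch (hypothetical sites and prices; the thesis's actual algorithms are distributed and come with provable guarantees, which this toy routine does not have):

# Route load to the data centers where renewable energy is currently
# available, subject to capacity, and place the remainder where non-renewable
# ("brown") energy is cheapest. All numbers and site names are hypothetical.
def route_load(total_load, sites):
    allocation = {s['name']: 0.0 for s in sites}
    remaining = total_load
    # Phase 1: soak up the available renewable capacity first.
    for s in sorted(sites, key=lambda site: -site['renewable']):
        take = min(remaining, s['capacity'], s['renewable'])
        allocation[s['name']] += take
        remaining -= take
    # Phase 2: place the rest where brown energy is cheapest.
    for s in sorted(sites, key=lambda site: site['brown_price']):
        spare = s['capacity'] - allocation[s['name']]
        take = min(remaining, spare)
        allocation[s['name']] += take
        remaining -= take
    return allocation

sites = [
    {'name': 'west', 'capacity': 100, 'renewable': 60, 'brown_price': 0.09},
    {'name': 'east', 'capacity': 100, 'renewable': 10, 'brown_price': 0.07},
]
print(route_load(120, sites))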

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges in integrating more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand-responsive. The potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on the local control rules of customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society, thereby improving social welfare.

Relevance: 20.00%

Publisher:

Abstract:

Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
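For orientation, the collision term being generalized is the familiar Uehling-Uhlenbeck form, written here schematically with θ = +1 for bosons and θ = −1 for fermions:

\[
\left(\frac{\partial f_1}{\partial t}\right)_{\mathrm{coll}}
= \int w_{12\to 1'2'}\,
\Bigl[f_{1'} f_{2'}\,(1+\theta f_1)(1+\theta f_2)
- f_1 f_2\,(1+\theta f_{1'})(1+\theta f_{2'})\Bigr]\, d\Gamma ,
\]

where w is the Born-approximation transition rate and dΓ denotes integration over the partner and post-collision momenta consistent with energy and momentum conservation; the kernel obtained in the thesis is a wavevector- and frequency-dependent generalization of the linearization of this expression.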

The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second-order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.

Relevance: 20.00%

Publisher:

Abstract:

DC and transient measurements of space-charge-limited currents through alloyed, symmetrical n^+-ν-n^+ structures made of nominally 75 kΩ·cm ν-type silicon are studied before and after the introduction of defects by 14 MeV neutron irradiation. In the transient measurements, the current response to a large turn-on voltage step is analyzed. Immediately after the voltage step is applied, the current transient reaches a value which we shall call the "initial current". At longer times, the transient current decays from this initial value if traps are present.

Before the irradiation, the initial current density-voltage characteristics J(V) agree quantitatively with the theory of trap-free space-charge-limited current in solids. We obtain for the electron mobility a temperature dependence which indicates that scattering due to impurities is weak. This is expected for the high-purity silicon used. The drift velocity-field relationships for electrons at room temperature and 77 K, derived from the initial current density-voltage characteristics, are shown to fit the relationships obtained with other methods by other workers. The transient current response for t > 0 remains practically constant at the initial value, thus indicating negligible trapping.
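For reference, in the constant-mobility limit the trap-free space-charge-limited current theory invoked above reduces to the classical Mott-Gurney relation

\[
J = \frac{9}{8}\,\varepsilon \mu \,\frac{V^{2}}{L^{3}},
\]

with ε the permittivity of the silicon, μ the electron mobility, and L the length of the ν region; the departure of the measured J(V) from this simple quadratic form at high fields is what allows the drift velocity-field relationship to be extracted.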

Measurement of the initial (trap-free) current density-voltage characteristics after the irradiation indicates that the drift velocity-field relationship of electrons in silicon is affected by the radiation only at low temperature in the low field range. The effect is not sufficiently pronounced to be readily analyzed and no formal description of it is offered. In the transient response after irradiation for t > 0, the current decays from its initial value, thus revealing the presence of traps. To study these traps, in addition to transient measurements, the DC current characteristics were measured and shown to follow the theory of trap-dominated space-charge-limited current in solids. This theory was applied to a model consisting of two discrete levels in the forbidden band gap. Calculations and experiments agreed and the capture cross-sections of the trapping levels were obtained. This is the first experimental case known to us through which the flow of space-charge-limited current is so simply representable.

These results demonstrate the sensitivity of space-charge-limited current flow as a tool to detect traps and changes in the drift velocity-field relationship of carriers caused by radiation. They also establish that devices based on the mode of space-charge-limited current flow will be affected considerably by any type of radiation capable of introducing traps. This point has generally been overlooked so far, but is obviously quite significant.

Relevance: 20.00%

Publisher:

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. For choices among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of the type of stimulus. This confirms that value is represented in the abstract, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
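For reference, the model-free family against which the model-based account is compared centers on temporal-difference updates of the general form

\[
V(s) \;\leftarrow\; V(s) + \alpha \bigl[\, r + \gamma V(s') - V(s) \,\bigr],
\]

where α is a learning rate and γ a discount factor; a model-based learner instead maintains an explicit representation of the task's transition and outcome structure and derives values (and quantities such as precision) from that internal model.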

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the environment's abstract decision structure, and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses about the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.

Relevance: 20.00%

Publisher:

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique for solving boundary value problems, and leads to an iterative solution, starting with the known expression for a point source in a half space as the first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-to-depth ratio on the spectra of the displacements.

Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated into it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are tested against actual traverses to check their validity.
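The standard least-squares procedure referred to here is Geiger's method: travel-time residuals are linearized about a trial hypocenter and origin time,

\[
r_i \;=\; t_i^{\mathrm{obs}} - t_i^{\mathrm{calc}}
\;\approx\; \frac{\partial T_i}{\partial x}\,\Delta x
+ \frac{\partial T_i}{\partial y}\,\Delta y
+ \frac{\partial T_i}{\partial z}\,\Delta z
+ \Delta t_0 ,
\]

and the resulting normal equations are solved iteratively for the corrections (Δx, Δy, Δz, Δt_0). The modifications described above concern how the depth correction Δz is obtained from these normal equations and how regionally variable travel times T_i are computed.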

It is shown that several crustal phases provide enough control to obtain good depth solutions for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on the use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions.

In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six.

The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
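As a reminder of the implicit building block, the second-order member of the BDF family applied to a semi-discretized system u' = F(u) reads

\[
\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\,\Delta t} = F\!\left(u^{n+1}\right),
\]

and the ADI factorization makes each such implicit step linear in cost by replacing the multi-dimensional implicit operator with a sequence of one-dimensional solves; the higher-order members of the family (up to order six) are used in the same spirit in the solvers described above.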