Abstract:
Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as the abstraction at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code.

Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), whose models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process.

The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources.

At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain.
This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception, with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to DSK supplied as swappable framework extensions.

This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
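The decoupling described above can be sketched in code. This is a minimal illustration of the idea, not the CVM implementation; every class and method name here is hypothetical. The generic model of execution owns the synthesis loop, while all domain semantics live behind a swappable DSK interface:

```python
# Hypothetical sketch: a generic synthesis engine (the GMoE) that delegates
# all domain-specific knowledge (DSK) to a swappable extension. Names are
# illustrative, not taken from CML/CVM.

class DomainKnowledge:
    """Interface a domain's DSK extension must implement."""
    def decompose(self, model_delta):
        raise NotImplementedError

class CommunicationDSK(DomainKnowledge):
    """DSK for the user-centric communication domain."""
    def decompose(self, model_delta):
        # Map each model change to a domain-level control-script step.
        return [f"comm:{op}" for op in model_delta]

class MicrogridDSK(DomainKnowledge):
    """DSK for demand-side microgrid energy management."""
    def decompose(self, model_delta):
        return [f"energy:{op}" for op in model_delta]

class SynthesisEngine:
    """Generic MoE: the same traversal logic works with any domain's DSK."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def synthesize(self, model_delta):
        # The generic engine orders the steps; the DSK supplies their content.
        return list(self.dsk.decompose(model_delta))

# The same engine class is instantiated for two domains without modification.
comm_engine = SynthesisEngine(CommunicationDSK())
grid_engine = SynthesisEngine(MicrogridDSK())
print(comm_engine.synthesize(["addParticipant"]))  # ['comm:addParticipant']
print(grid_engine.synthesize(["shedLoad"]))        # ['energy:shedLoad']
```

Swapping `CommunicationDSK` for `MicrogridDSK` is the reuse claim in miniature: only the extension changes, never the engine.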
Abstract:
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and they have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, for example by using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand; HLPNs are therefore more useful for modeling complex systems.

There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, while analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool.

For modeling, this framework integrates two formal languages: a type of HLPN called a Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities of this framework.

For analysis, this framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state-explosion problem.
BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques into a software tool to support the formal analysis capabilities of this framework.

The SAMTools suite developed for this framework in this dissertation integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
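The core HLPN idea above, structured tokens gated by transition formulas, can be sketched in a few lines. This is a toy illustration under assumed names, not the PIPE+ implementation:

```python
# Toy sketch of a high-level Petri net step: places hold structured tokens
# (here, (name, priority) tuples), and a transition fires only when its
# guard formula holds for some input token. Names are illustrative.

places = {"ready": [("job1", 3), ("job2", 7)], "done": []}

def fire(guard, src, dst):
    """Fire the transition once: move the first token in `src` whose
    guard evaluates to true into `dst`; return None if not enabled."""
    for token in places[src]:
        if guard(token):
            places[src].remove(token)
            places[dst].append(token)
            return token
    return None  # transition not enabled under any binding

# Guard formula: only jobs with priority above 5 may complete.
fired = fire(lambda t: t[1] > 5, "ready", "done")
print(fired)           # ('job2', 7)
print(places["done"])  # [('job2', 7)]
```

A low-level Petri net would only count tokens; the guard over token data is what the "high-level" extension adds, and it is exactly what makes analysis (simulation, model checking, BMC) harder.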
Abstract:
This study examined links between adolescent depressive symptoms, actual pubertal development, perceived pubertal timing relative to one’s peers, adolescent-maternal relationship satisfaction, and couple sexual behavior. Assessments of these variables were made for each couple member separately, and these variables were then used to predict the sexual activity of the couple. Participants were drawn from the National Longitudinal Study of Adolescent Health (Add Health; Bearman et al., 1997; Udry, 1997) data set (N = 20,088; aged 12-18 years). Dimensions of adolescent romantic experiences were described using the total sample, and a subsample of romantically paired adolescents (n = 1,252) was then used to test a risk-and-protective model for predicting couple sexual behavior using the factors noted above. Relevant measures from Wave 1 of Add Health were used. Most of the items used in Add Health to assess romantic relationship experiences, adolescent depressive symptoms, pubertal development (actual and perceived), adolescent-maternal relationship satisfaction, and couple sexual behavior were drawn from other national surveys or from scales with well-documented psychometric properties. Results demonstrated that romantic relationships are part of most adolescents’ lives and that adolescents’ experiences with these relationships differ markedly by age, sex, and race/ethnicity. Further, each couple member’s pubertal development, perceived pubertal timing, and maternal relationship satisfaction were useful in predicting sexual risk-promoting and risk-reducing behaviors in adolescent romantic couples. Findings in this dissertation represent an initial step toward evaluating explanatory models of adolescent couple sexual behavior.
Abstract:
Ensuring the correctness of software has been a major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received much attention in recent years, and several methods, techniques, and tools have been developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated into a model in the input language of Spin and verified for correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level.
Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
Abstract:
El Niño and the Southern Oscillation (ENSO) is a cycle that is initiated in the equatorial Pacific Ocean and is recognized on interannual timescales by oscillating patterns in tropical Pacific sea surface temperatures (SSTs) and atmospheric circulations. Using correlation and regression analysis of datasets that include SSTs and other interdependent variables, including precipitation, surface winds, and sea level pressure, this research seeks to quantify recent changes in ENSO behavior. Specifically, the amplitude, frequency of occurrence, and spatial characteristics (i.e., events with maximum amplitude in the Central Pacific versus the Eastern Pacific) are investigated. The research is based on the question: “Are the statistics of ENSO changing due to increasing greenhouse gas concentrations?” Our hypothesis is that the present-day changes in the amplitude, frequency, and spatial characteristics of ENSO are determined by the natural variability of the ocean-atmosphere climate system, not by the observed changes in radiative forcing due to changes in the concentrations of greenhouse gases. Statistical analysis, including correlation and regression analysis, is performed on observational ocean and atmospheric datasets available from the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR), and on coupled model simulations from the Coupled Model Intercomparison Project (phase 5, CMIP5). Datasets are analyzed with a particular focus on ENSO over the last thirty years. Understanding the observed changes in the ENSO phenomenon over recent decades has worldwide significance. ENSO is the largest climate signal on timescales of 2-7 years and affects billions of people via atmospheric teleconnections that originate in the tropical Pacific. These teleconnections explain why changes in ENSO can lead to climate variations in areas including North and South America, Asia, and Australia.
For the United States, El Niño events are linked to a decreased number of hurricanes in the Atlantic basin, reduced precipitation in the Pacific Northwest, and increased precipitation throughout the southern United States during winter months. Understanding variability in the amplitude, frequency, and spatial characteristics of ENSO is crucial for decision makers who must adapt where regional ecology and agriculture are affected by ENSO.
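The correlation and regression analysis described above reduces, at its core, to computing a Pearson correlation and a least-squares slope between an SST index and a teleconnected variable. The sketch below uses synthetic numbers, not NOAA or NCAR data, purely to show the arithmetic:

```python
# Illustrative sketch of the correlation/regression step: a toy SST-anomaly
# index and a toy precipitation-anomaly series (synthetic values, not
# observational data). Pure-stdlib Pearson correlation and OLS slope.

import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def regression_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# Synthetic Niño-3.4-style SST anomalies (deg C) and rainfall anomalies (mm).
sst = [0.5, 1.2, -0.3, -1.1, 0.8, 1.9, -0.6, 0.1]
rain = [12.0, 30.0, -5.0, -28.0, 18.0, 45.0, -15.0, 2.0]

r = pearson(sst, rain)
slope = regression_slope(sst, rain)
print(round(r, 3), round(slope, 1))
```

On real data the same two statistics would be computed gridpoint by gridpoint against the SST index to map the teleconnection pattern.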
Abstract:
In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. First, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. Good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during a tsunami. Furthermore, by subtracting the tsunami-free simulation results from those with the tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed. Second, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominated sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated, and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients.
Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived given knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is used to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge, through a fitting process, whether the SSHAs are truly tsunami-induced, which makes it possible to decrease the false-alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity and is believed to be more desirable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept, and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
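The differencing step described above, subtracting a tsunami-free DDM from one observed during the tsunami, is simple to illustrate. The numbers below are synthetic stand-ins, not Z-V model output; a real DDM is a power map over delay and Doppler bins:

```python
# Toy illustration of DDM differencing: subtracting the background
# (tsunami-free) DDM from the DDM observed during the tsunami isolates
# the tsunami-induced variation. Values are synthetic power levels.

background_ddm = [
    [1.0, 2.0, 1.0],
    [2.0, 5.0, 2.0],
    [1.0, 2.0, 1.0],
]
tsunami_ddm = [
    [1.0, 2.1, 1.0],
    [2.2, 5.8, 2.3],
    [1.0, 2.4, 1.1],
]

# Element-wise anomaly map over the delay/Doppler grid.
delta_ddm = [
    [obs - bg for obs, bg in zip(obs_row, bg_row)]
    for obs_row, bg_row in zip(tsunami_ddm, background_ddm)
]

# The anomaly peak marks the delay/Doppler bin most perturbed by the tsunami.
peak = max(max(row) for row in delta_ddm)
print(round(peak, 1))  # 0.8
```

In the thesis workflow the same subtraction, applied to simulated scattering-coefficient fields, is what makes the tsunami signature visible before SSHA retrieval.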
Abstract:
Antarctic krill (Euphausia superba), a key species of Southern Ocean food webs, plays a central role in ecosystem processes and in the community dynamics of apex predators, and is a commercial fishery target. A decline in krill abundance during the late 20th century in the SW Atlantic sector has been linked to a concomitant decrease in sea ice, based on the hypothesis that sea ice acts as a feeding ground for overwintering larvae. However, evidence supporting this hypothesis has been scarce due to the logistical challenges of collecting data in austral winter. Here we report on a winter study that involved diver observations of larval krill in their under-ice environment, ship-based studies of krill, measurements of sea ice physical characteristics, and biophysical model analyses of krill-ocean-ice interactions. We present evidence that complex under-ice topography is vital for larval krill in terms of dispersal and advection into highly productive nursery habitats, rather than because the ice environment provides food. Further, ongoing changes in sea ice will lead to increases in sea-ice regimes favourable for overwintering larval krill, but these regimes will shift southwards. This will result in ice-free conditions in the SW Atlantic, which will be conducive to enhanced food supplies due to sufficient light and iron availability, thus enhancing larval development and growth. However, the associated impact on dispersal and advection may lead to a net shift of krill from the SW Atlantic to regions further east via the eastward-flowing Antarctic Circumpolar Current (ACC) and the northern branch of the Weddell Gyre, with profound consequences for the Southern Ocean pelagic ecosystem.
Abstract:
Limit-periodic (LP) structures exhibit a type of nonperiodic order yet to be found in a natural material. A recent result in tiling theory, however, has shown that LP order can spontaneously emerge in a two-dimensional (2D) lattice model with nearest- and next-nearest-neighbor interactions. In this dissertation, we explore the question of what types of interactions can lead to an LP state and address the issue of whether the formation of an LP structure in experiments is possible. We study the emergence of LP order in three-dimensional (3D) tiling models and bring the subject into the physical realm by investigating systems with realistic Hamiltonians and low-energy LP states. Finally, we present studies of the vibrational modes of a simple LP ball-and-spring model, whose results indicate that LP materials would exhibit novel physical properties.
A 2D lattice model defined on a triangular lattice with nearest- and next-nearest-neighbor interactions based on the Taylor-Socolar (TS) monotile is known to have an LP ground state. The system reaches that state during a slow quench through an infinite sequence of phase transitions. Surprisingly, even when the strength of the next-nearest-neighbor interactions is zero, in which case there is a large degenerate class of both crystalline and LP ground states, a slow quench yields the LP state. The first study in this dissertation introduces 3D models, closely related to the 2D models, that exhibit LP phases. These 3D models were designed such that next-nearest-neighbor interactions of the TS type are implemented using only nearest-neighbor interactions. For one of the 3D models, we show that the phase transitions are first order, with equilibrium structures that can be more complex than in the 2D case.
In the second study, we investigate systems with physical Hamiltonians based on one of the 2D tiling models, with the goal of stimulating attempts to create an LP structure in experiments. We explore physically realizable particle designs while being mindful of features that may make the assembly of an LP structure in an experimental system difficult. Through Monte Carlo (MC) simulations, we have found that one particle design in particular is a promising template for a physical particle: a 2D system of identical disks with embedded dipoles is observed to undergo the series of phase transitions that leads to the LP state.
LP structures are well ordered but nonperiodic, and hence have nontrivial vibrational modes. In the third section of this dissertation, we study a ball-and-spring model with an LP pattern of spring stiffnesses and identify a set of extended modes with arbitrarily low participation ratios, a situation that appears to be unique to LP systems. The balls that oscillate with large amplitude in these modes live on periodic nets with arbitrarily large lattice constants. By studying periodic approximants to the LP structure, we present numerical evidence for the existence of such modes, and we give a heuristic explanation of their structure.
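The participation-ratio diagnostic used above can be made concrete with a standard definition: for a normalized mode with displacements u_i on N balls, P = 1 / (N Σ u_i⁴), so P is near 1 for a uniformly extended mode and near 1/N for a mode concentrated on a few balls. The mode shapes below are hand-made illustrations, not eigenvectors of the LP spring model:

```python
# Sketch of the participation-ratio measure for vibrational modes.
# P = 1 / (N * sum(u_i**4)) for a normalized mode u: ~1 for an extended
# mode, ~1/N for a localized one. Mode shapes here are illustrative only.

import math

def participation_ratio(u):
    norm = math.sqrt(sum(x * x for x in u))
    u = [x / norm for x in u]                   # normalize the mode
    return 1.0 / (len(u) * sum(x ** 4 for x in u))

N = 8
extended = [1.0] * N          # every ball moves with equal amplitude
localized = [0.0] * N
localized[3] = 1.0            # only one ball moves

print(round(participation_ratio(extended), 3))   # 1.0
print(round(participation_ratio(localized), 3))  # 0.125, i.e. 1/N
```

The unusual LP result is a family of *extended* modes (supported on periodic nets of ever larger lattice constant) whose P nevertheless becomes arbitrarily small as N grows.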
Abstract:
Existing studies of mutual recognition agreements (MRAs) are mostly based on the European experience. In this paper, we examine the ongoing attempts to establish a mutual recognition architecture in the Association of Southeast Asian Nations (ASEAN) and seek to explain the region's unique approach to MRAs, which can be classified as a "hub and spoke" model of mutual recognition. On the one hand, ASEAN is attempting to establish a quasi-supranational ASEAN-level mechanism to confer an "ASEAN qualification" effective in the entire ASEAN region. On the other hand, ASEAN MRAs respect members' national sovereignty, and it is national authorities, not ASEAN institutions, who have the ultimate power to approve or disapprove the supply of services by ASEAN qualification holders. Such a mixed approach to mutual recognition is best understood as a centralized mechanism for learning-by-doing, rather than centralized recognition per se.
Abstract:
The majority of benthic marine invertebrates have a complex life cycle, during which the pelagic larvae select a suitable substrate, attach to it, and then metamorphose into benthic adults. Anthropogenic ocean acidification (OA) is postulated to affect larval metamorphic success through an altered protein expression pattern (proteome structure) and post-translational modifications. To test this hypothesis, larvae of an economically and ecologically important barnacle species, Balanus amphitrite, were cultured from the nauplius to the cyprid stage at present-day CO2 concentrations (control) and at the elevated CO2 concentrations projected for the year 2100 (the OA treatment). The cyprid response to OA was analyzed at the total proteome level, as well as at two protein post-translational modification levels (phosphorylation and glycosylation), using a 2-DE-based proteomic approach. The cyprid proteome showed OA-driven changes. Proteins that were differentially up- or down-regulated under OA come from three major groups, namely those related to energy metabolism, respiration, and molecular chaperones, illustrating a potential strategy that barnacle larvae may employ to tolerate OA stress. The differentially expressed proteins were tentatively identified as OA-responsive, effectively creating unique protein expression signatures for the OA scenario of 2100. This study showed the promise of using a sentinel and non-model species to examine the impact of OA at the proteome level.
Abstract:
In this study, we present the wintertime surface energy balance at a polygonal tundra site in northern Siberia, based on independent measurements of the net radiation, the sensible heat flux, and the ground heat flux from two winter seasons. The latent heat flux is inferred from measurements of atmospheric turbulence characteristics and a model approach. The long-wave radiation is found to be the dominant factor in the surface energy balance. The radiative losses are balanced to about 60 % by the ground heat flux and almost 40 % by the sensible heat flux, whereas the contribution of the latent heat flux is small. The main controlling factors of the surface energy budget are the snow cover, the cloudiness, and the soil temperature gradient. Large spatial differences in the surface energy balance are observed between tundra soils and a small pond. The ground heat flux released at a freezing pond is a factor of two higher than at the freezing soil, whereas large differences in net radiation between the pond and the soil are observed only at the end of the winter period. Differences in the surface energy balance between the two winter seasons are found to be related to differences in snow depth and cloud cover, which strongly affect the temperature evolution and the freeze-up at the investigated pond.
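The budget partitioning reported above can be checked with back-of-envelope arithmetic: if the ground heat flux balances about 60 % and the sensible heat flux almost 40 % of the net radiative loss, the latent-heat residual is small. The flux values below are illustrative round numbers (W m⁻², positive toward the surface), not the measured Siberian data:

```python
# Back-of-envelope closure check of a wintertime surface energy balance:
# q_net + ground + sensible + latent = 0. Values are illustrative, chosen
# to match the ~60 % / ~40 % partitioning described in the text.

q_net = -30.0     # net radiation: long-wave loss dominates in winter
ground = 18.0     # ground heat flux released by freezing soil (60 % of 30)
sensible = 11.5   # sensible heat flux (~38 % of 30, "almost 40 %")

latent = -(q_net + ground + sensible)   # residual that closes the budget
print(latent)  # 0.5 -> the latent heat flux contribution is small
```

The same closure equation, with the latent term inferred rather than measured directly, is how the small latent-heat contribution in the study can be understood.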
Abstract:
This chapter addresses the issue of language standardization from two perspectives, bringing together a theoretical perspective offered by the discipline of sociolinguistics with a practical example from international business. We introduce the broad concept of standardization and embed the study of language standardization in the wider discussion of standards as a means of control across society. We analyse the language policy and practice of the Danish multinational, Grundfos, and use it as a “sociolinguistic laboratory” to “test” the theory of language standardization initially elaborated by Einar Haugen to explain the history of modern Norwegian. The table is then turned and a model from International Business by Piekkari, Welch and Welch is used to illuminate recent Norwegian language planning. It is found that the Grundfos case works well with the Haugen model, and the International Business model provides a valuable practical lesson for national language planners, both showing that a “comparative standardology” is a valuable undertaking. More voices “at the table” will allow both theory and practice to be further refined and for the role of standards across society to be better understood.