946 results for General circulation models
Abstract:
This study examines a matrix of synthetic water samples designed to include conditions that favour brominated disinfection by-product (Br-DBP) formation, in order to provide predictive models suitable for high Br-DBP forming waters such as salinity-impacted waters. Br-DBPs are generally known to be more toxic than their chlorinated analogues, and their formation may be favoured by routine water treatment practices such as coagulation/flocculation under specific conditions; therefore, the circumstances surrounding their formation must be understood. The chosen factors were bromide concentration, mineral alkalinity, bromide to dissolved organic carbon (Br/DOC) ratio and Suwannee River natural organic matter concentration. The relationships between these parameters and DBP formation were evaluated by response surface modelling of data generated using a face-centred central composite experimental design. Predictive models for ten brominated and/or chlorinated DBPs are presented, as well as models for total trihalomethanes (tTHMs) and total dihaloacetonitriles (tDHANs), and bromide substitution factors for the THM and DHAN classes. The relationships described revealed that increasing alkalinity and increasing Br/DOC ratio were associated with increasing bromination of THMs and DHANs, suggesting that DOC-lowering treatment methods that do not also remove bromide, such as enhanced coagulation, may create optimal conditions for Br-DBP formation in waters in which bromide is present.
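The face-centred central composite design mentioned above has a simple coded structure that can be generated in a few lines. The mapping of the four factors to columns is an assumption for illustration, and real experiments would replicate the centre point:

```python
from itertools import product

def face_centred_ccd(k):
    """Coded levels (-1, 0, +1) for a face-centred central
    composite design in k factors: 2**k factorial corners,
    2*k axial (face-centre) points, and one centre point."""
    corners = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for i in range(k):
        for level in (-1, 1):
            point = [0] * k
            point[i] = level
            axial.append(point)
    return corners + axial + [[0] * k]

# four factors: bromide concentration, alkalinity, Br/DOC ratio, NOM
design = face_centred_ccd(4)
print(len(design))  # 25 distinct runs before centre-point replication
```

Because the axial points sit on the faces of the factorial cube, every factor needs only three levels, which is what makes this design practical for water-matrix experiments.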
Abstract:
Stability analyses have been widely used to better understand the mechanism of traffic jam formation. In this paper, we consider the impact of cooperative systems (a.k.a. connected vehicles) on traffic dynamics and, more precisely, on flow stability. Cooperative systems are emerging technologies enabling communication between vehicles and/or with the infrastructure. In a distributed communication framework, equipped vehicles are able to send and receive information to/from other equipped vehicles. Here, the effects of cooperative traffic are modeled through a general bilateral multianticipative car-following law that improves cooperative drivers' perception of their surrounding traffic conditions within a given communication range. Linear stability analyses are performed for a broad class of car-following models. They point out different stability conditions in both multianticipative and nonmultianticipative situations. To better understand what happens in unstable conditions, information on the shock wave structure is studied in the weakly nonlinear regime by means of the reductive perturbation method. The shock wave equation is obtained for generic car-following models by deriving the Korteweg-de Vries (KdV) equation. We then derive traffic-state-dependent conditions for the sign of the solitary wave (soliton) amplitude. This analytical result is verified through simulations. Simulation results confirm the validity of the speed estimate. The variation of the soliton amplitude as a function of the communication range is provided. The performed linear and weakly nonlinear analyses help justify the potential benefits of vehicle-integrated communication systems and provide new insights supporting the future implementation of cooperative systems.
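The reductive perturbation step referred to above yields, in the generic textbook form, a KdV equation and a one-soliton solution whose amplitude can change sign with the traffic state. The coefficients g1 and g2 depend on the particular car-following model and its parameters; this is the standard form, not the paper's exact notation:

```latex
% Generic KdV equation from the reductive perturbation expansion
% (g_1, g_2 are model-dependent coefficients):
\partial_T U + g_1\, U\, \partial_X U + g_2\, \partial_X^3 U = 0
% One-soliton solution with amplitude A (whose sign is
% traffic-state dependent) and velocity A g_1 / 3:
U(X,T) = A\, \mathrm{sech}^2\!\left[\sqrt{\tfrac{A g_1}{12 g_2}}\left(X - \tfrac{A g_1}{3}\, T\right)\right]
```

The sign condition on the soliton amplitude follows from requiring the argument of the square root to be positive, which couples A to the signs of g1 and g2.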
Abstract:
Background The Circle of Willis (CoW) is the most important collateral pathway of the cerebral arteries. The present study aims to investigate the collateral capacity of the CoW with anatomical variation when the unilateral internal carotid artery (ICA) is occluded. Methods Based on MRI data, we reconstructed eight 3D models with variations in the posterior circulation of the CoW and set four different degrees of stenosis in the right ICA, namely 24%, 43%, 64% and 79%. In total, 40 models were analysed with computational fluid dynamics simulations. All of the simulations share the same static-pressure boundary condition, and the volume flow rates (VFR) are obtained to evaluate collateral capacity. Results For the middle cerebral artery (MCA) and the anterior cerebral artery (ACA), the transitional-type model possesses the best collateral capacity. For the posterior cerebral artery (PCA), however, unilateral stenosis of the ICA has the weakest influence on the model with a unilateral posterior communicating artery (PCoA) absent. We also find that the full fetal-type posterior circle of Willis is a particularly dangerous variation that warrants close attention. Conclusion The results demonstrate that different models have different collateral capacities in coping with stenosis of the unilateral ICA, and these differences can be reflected at different outlets. The study could serve as a reference for neurosurgeons in choosing the best treatment strategy.
Abstract:
The relations for the inner-layer potential difference (E) in the presence of adsorbed organic molecules are derived for three hierarchical models, in terms of molecular constants like permanent dipole moments, polarizabilities, etc. It is shown how the experimentally observed patterns of the E vs $\theta$ plots (linear in all ranges of $\sigma^M$, non-linear in one or both regions of $\sigma^M$, etc.) can be understood in a semi-quantitative manner from the simplest model in our hierarchy, viz. the two-state site-parity version. Two-state multi-site and three-state (site-parity) models are also analysed and the slope $(\partial E/\partial \theta)_{\sigma^M}$ tabulated for these also. The results for the Esin-Markov effect are derived for all the models and compared with the earlier result of Parsons. A comparison with the GSL phenomenological equation is presented and its molecular basis, as well as its limitations, is analysed. In particular, two-state multi-site and three-state (site-parity) models yield E vs $\sigma^M$ relations that are more general than the "unified" GSL equation. The possibility of viewing the compact layer as a "composite medium" with an "effective dielectric constant" and obtaining novel phenomenological descriptions is also indicated.
Abstract:
The pervasive use of the World Wide Web by the general population has created a cultural shift throughout the world. It has enabled more people to share more information about more events and issues than was possible before its general use. As a consequence, it has transformed traditional news media’s approach to almost every aspect of journalism, with many organisations restructuring their philosophy and practice to include a variety of participatory spaces/forums where people are free to engage in deliberative dialogue about matters of public importance. This paper draws from an international collective case study that showcases various approaches to participatory online news journalism during the period 1997–2011 (Adams, 2013). The research finds differences in the ways in which public service, commercial, and independent news media give voice to the public, and ultimately in their approach to journalism’s role as the Fourth Estate––one of the key institutions of democracy. The work is framed by the notion that journalism in democratic societies has a crucial role in ensuring citizens are informed and engaged with public affairs. An examination of four media models, OhmyNews International, News Corp Australia (formerly News Limited), the Guardian and the British Broadcasting Corporation (BBC), showcases the various approaches to participatory online news journalism and how each provides different avenues for citizen engagement. Semistructured in-depth interviews with some of the key senior journalists and editors provide specific information on comparisons between the distinctive practices in each of their employer organisations.
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. This study has a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that the mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in the models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
We compared daily net radiation (Rn) estimates from 19 methods with the ASCE-EWRI Rn estimates in two climates: Clay Center, Nebraska (sub-humid) and Davis, California (semi-arid) for the calendar year. The performances of all 20 methods, including the ASCE-EWRI Rn method, were then evaluated against Rn data measured over a non-stressed maize canopy during two growing seasons in 2005 and 2006 at Clay Center. Methods differ in terms of inputs, structure, and equation intricacy. Most methods differ in estimating the cloudiness factor, emissivity (e), and calculating net longwave radiation (Rnl). All methods use albedo (a) of 0.23 for a reference grass/alfalfa surface. When comparing the performance of all 20 Rn methods with measured Rn, we hypothesized that the a values for grass/alfalfa and non-stressed maize canopy were similar enough to only cause minor differences in Rn and grass- and alfalfa-reference evapotranspiration (ETo and ETr) estimates. The measured seasonal average a for the maize canopy was 0.19 in both years. Using a = 0.19 instead of a = 0.23 resulted in 6% overestimation of Rn. Using a = 0.19 instead of a = 0.23 for ETo and ETr estimations, the 6% difference in Rn translated to only 4% and 3% differences in ETo and ETr, respectively, supporting the validity of our hypothesis. Most methods had good correlations with the ASCE-EWRI Rn (r2 > 0.95). The root mean square difference (RMSD) was less than 2 MJ m-2 d-1 between 12 methods and the ASCE-EWRI Rn at Clay Center and between 14 methods and the ASCE-EWRI Rn at Davis. The performance of some methods showed variations between the two climates. In general, r2 values were higher for the semi-arid climate than for the sub-humid climate. Methods that use dynamic e as a function of mean air temperature performed better in both climates than those that calculate e using actual vapor pressure. 
The ASCE-EWRI-estimated Rn values had one of the best agreements with the measured Rn (r2 = 0.93, RMSD = 1.44 MJ m-2 d-1), and estimates were within 7% of the measured Rn. The Rn estimates from six methods, including the ASCE-EWRI, were not significantly different from measured Rn. Most methods underestimated measured Rn by 6% to 23%. Some of the differences between measured and estimated Rn were attributed to the poor estimation of Rnl. We conducted sensitivity analyses to evaluate the effect of Rnl on Rn, ETo, and ETr. The Rnl effect on Rn was linear and strong, but its effect on ETo and ETr was subsidiary. Results suggest that the Rn data measured over green vegetation (e.g., irrigated maize canopy) can be an alternative Rn data source for ET estimations when measured Rn data over the reference surface are not available. In the absence of measured Rn, another alternative would be using one of the Rn models that we analyzed when all the input variables are not available to solve the ASCE-EWRI Rn equation. Our results can be used to provide practical information on which method to select based on data availability for reliable estimates of daily Rn in climates similar to Clay Center and Davis.
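The reported albedo sensitivity can be illustrated with a simplified daily balance Rn = (1 - a)Rs - Rnl. The Rs and Rnl values below are assumed round numbers, not the study's measurements, and they yield a shift of the same order as the reported 6%:

```python
# Simplified daily net-radiation balance: Rn = (1 - a) * Rs - Rnl.
# Rs and Rnl are illustrative values in MJ m-2 d-1, not the
# study's measured data.
def net_radiation(albedo, rs=20.0, rnl=4.0):
    return (1.0 - albedo) * rs - rnl

rn_grass = net_radiation(0.23)  # reference-surface albedo
rn_maize = net_radiation(0.19)  # measured maize-canopy albedo
pct = 100.0 * (rn_maize - rn_grass) / rn_grass
print(f"Rn shifts by about {pct:.1f}% when a drops from 0.23 to 0.19")
```

Because Rnl is subtracted after the albedo term, the same 0.04 change in albedo produces a larger relative change in Rn when Rnl is large, which is one reason the Rn-to-ET translation dilutes the difference to 3-4%.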
Abstract:
A general theory is evolved for a class of macrogrowth models which possess two independent growth-rates. Relations connecting growth-rates to growth geometry are established and some new growth forms are shown to result for models with passivation or diffusion-controlled rates. The corresponding potentiostatic responses, their small and large time behaviours and peak characteristics are obtained. Numerical transients are also presented. An empirical equation is derived as a special case and an earlier equation is corrected. An interesting stochastic result pertaining to nucleation events in the successive layers is proved.
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g., Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise.
We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
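One hypothetical way elicited variable-quality scores could feed a selection step is to map them onto prior inclusion probabilities. The 0-10 scale, the variable names, and the linear mapping below are all illustrative assumptions, not the published protocol:

```python
# Hypothetical sketch: map elicited expert quality scores (0-10
# scale assumed) for candidate variables onto prior inclusion
# probabilities usable as selection weights or Bayesian priors.
def inclusion_priors(scores, floor=0.05, ceiling=0.95):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1  # avoid division by zero for equal scores
    return {var: floor + (ceiling - floor) * (s - lo) / span
            for var, s in scores.items()}

priors = inclusion_priors({"annual_rainfall": 9,
                           "soil_ph": 6,
                           "distance_to_roads": 2})
```

The floor and ceiling keep every candidate variable selectable while still letting expert judgement steer the search, which is what makes the same scores usable in classical stepwise selection as well as in Bayesian model priors.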
Abstract:
A general derivation of the coupling constant relations which result on embedding a non-simple group like SU_L(2) ⊗ U(1) in a larger simple group (or graded Lie group) is given. It is shown that such relations depend only on (i) the requirement that the multiplet of vector fields form an irreducible representation of the unifying algebra and (ii) the transformation properties of the fermions under SU_L(2). This point is illustrated in two ways: first by constructing two different unification groups that contain the same fermions and therefore have the same Weinberg angle; second by putting different SU_L(2) structures on the same fermions, which consequently have different Weinberg angles. In particular, the value sin²θ_W = 3/8 is characteristic of the sequential doublet models or models which invoke a large number of additional leptons, like E_6, while the addition of extra charged fermion singlets can reduce the value of sin²θ_W to 1/4. We point out that at the present time the models of grand unification are far from unique.
Abstract:
Purpose: To develop three-surface paraxial schematic eyes with different ages and sexes based on data for 7- and 14-year-old Chinese children from the Anyang Childhood Eye Study. Methods: Six sets of paraxial schematic eyes, including 7-year-old eyes, 7-year-old male eyes, 7-year-old female eyes, 14-year-old eyes, 14-year-old male eyes, and 14-year-old female eyes, were developed. Both refraction-dependent and emmetropic eye models were developed, with the former using linear dependence of ocular parameters on refraction. Results: A total of 2059 grade 1 children (boys 58%) and 1536 grade 8 children (boys 49%) were included, with mean ages of 7.1 ± 0.4 and 13.7 ± 0.5 years, respectively. Changes in these schematic eyes with aging are increased anterior chamber depth, decreased lens thickness, increased vitreous chamber depth, increased axial length, and decreased lens equivalent power. Male schematic eyes have deeper anterior chamber depth, longer vitreous chamber depth, longer axial length, and lower lens equivalent power than female schematic eyes. Changes in the schematic eyes with positive increase in refraction are decreased anterior chamber depth, increased lens thickness, decreased vitreous chamber depth, decreased axial length, increased corneal radius of curvature, and increased lens power. In general, the emmetropic schematic eyes have biometric parameters similar to those arising from regression fits for the refraction-dependent schematic eyes. Conclusions: The paraxial schematic eyes of Chinese children may be useful for myopia research and for facilitating comparison with other children with the same or different racial backgrounds and living in different places.
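The "lens equivalent power" in such schematic eyes comes from standard paraxial reduction of surface powers. A minimal sketch, with illustrative textbook-style radii, indices, and separations rather than the study's fitted values:

```python
# Paraxial bookkeeping behind a three-surface schematic eye.
# All numerical values are illustrative, not the study's fits.
def surface_power(n1, n2, r):
    """Power (dioptres) of a refracting surface; r in metres."""
    return (n2 - n1) / r

def equivalent_power(p1, p2, d, n):
    """Combine two powers separated by distance d (m) in a
    medium of refractive index n (Gullstrand's equation)."""
    return p1 + p2 - (d / n) * p1 * p2

cornea = surface_power(1.000, 1.3333, 0.0078)        # roughly 43 D
lens_front = surface_power(1.3333, 1.416, 0.0102)
lens_back = surface_power(1.416, 1.3333, -0.0060)
lens = equivalent_power(lens_front, lens_back, 0.0036, 1.416)
eye = equivalent_power(cornea, lens, 0.0060, 1.3333)
print(f"equivalent eye power = {eye:.1f} D")
```

With these illustrative values the total lands near 60 D, the usual order of magnitude for a whole eye; the study's age and sex effects enter through exactly these radii, indices, and separations.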
Abstract:
Objective: To systematically review studies reporting the prevalence in general adult inpatient populations of foot disease disorders (foot wounds, foot infections, collective ‘foot disease’) and risk factors (peripheral arterial disease (PAD), peripheral neuropathy (PN), foot deformity). Methods: A systematic review of studies published between 1980 and 2013 was undertaken using electronic databases (MEDLINE, EMBASE and CINAHL). Keywords and synonyms relating to prevalence, inpatients, foot disease disorders and risk factors were used. Studies reporting foot disease or risk factor prevalence data in general inpatient populations were included. Included studies' reference lists and citations were searched and experts consulted to identify additional relevant studies. 2 authors, blinded to each other, assessed the methodological quality of included studies. Applicable data were extracted by 1 author and checked by a second author. Prevalence proportions and SEs were calculated for all included studies. Pooled prevalence estimates were calculated using random-effects models where at least 3 eligible studies were available. Results: Of the 4972 studies initially identified, 78 studies reporting 84 different cohorts (total 60 231 517 participants) were included. Foot disease prevalence included: foot wounds 0.01–13.5% (70 cohorts), foot infections 0.05–6.4% (7 cohorts), collective foot disease 0.2–11.9% (12 cohorts). Risk factor prevalence included: PAD 0.01–36.0% (10 cohorts), PN 0.003–2.8% (6 cohorts); foot deformity was not reported. Pooled prevalence estimates could only be calculated for pressure ulcer-related foot wounds 4.6% (95% CI 3.7% to 5.4%), diabetes-related foot wounds 2.4% (1.5% to 3.4%), diabetes-related foot infections 3.4% (0.2% to 6.5%), and diabetes-related foot disease 4.7% (0.3% to 9.2%). Heterogeneity was high in all pooled estimates (I2=94.2–97.8%, p<0.001).
Conclusions: This review found high heterogeneity, yet suggests foot disease was present in 1 in every 20 inpatients and a major risk factor in 1 in 3 inpatients. These findings are likely an underestimate and more robust studies are required to provide more precise estimates.
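Pooled prevalence estimates of this kind are commonly obtained with a DerSimonian-Laird random-effects model. The following is a generic sketch of that calculation on toy data; the review's actual software and model settings are assumptions, not stated in the abstract:

```python
import math

def dersimonian_laird(props, ses):
    """Random-effects pooled proportion via the DerSimonian-Laird
    estimator, a standard choice for prevalence meta-analysis."""
    w = [1.0 / se ** 2 for se in ses]                    # fixed-effect weights
    fixed = sum(wi * p for wi, p in zip(w, props)) / sum(w)
    q = sum(wi * (p - fixed) ** 2 for wi, p in zip(w, props))  # Cochran's Q
    df = len(props) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * p for wi, p in zip(w_re, props)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0  # heterogeneity (%)
    return pooled, se_pooled, i2

# toy example: three hypothetical cohort prevalences and their SEs
pooled, se, i2 = dersimonian_laird([0.04, 0.05, 0.03], [0.005, 0.008, 0.004])
print(f"pooled = {pooled:.3f} (95% CI {pooled - 1.96 * se:.3f}"
      f" to {pooled + 1.96 * se:.3f}), I^2 = {i2:.0f}%")
```

The I² statistic computed here is the same heterogeneity measure the review reports (94.2-97.8%); high values widen the between-study variance tau² and hence the pooled confidence interval.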
Abstract:
Background and aims. Type 1 diabetes (T1D), an autoimmune disease in which the insulin producing beta cells are gradually destroyed, is preceded by a prodromal phase characterized by appearance of diabetes-associated autoantibodies in circulation. Both the timing of the appearance of autoantibodies and their quality have been used in the prediction of T1D among first-degree relatives of diabetic patients (FDRs). So far, no general strategies for identifying individuals at increased disease risk in the general population have been established, although the majority of new cases originate in this population. The current work aimed at assessing the predictive role of diabetes-associated immunologic and metabolic risk factors in the general population, and comparing these factors with data obtained from studies on FDRs. Subjects and methods. Study subjects in the current work were subcohorts of participants of the Childhood Diabetes in Finland Study (DiMe; n=755), the Cardiovascular Risk in Young Finns Study (LASERI; n=3475), and the Finnish Type 1 Diabetes Prediction and Prevention Study (DIPP) Study subjects (n=7410). These children were observed for signs of beta-cell autoimmunity and progression to T1D, and the results obtained were compared between the FDRs and the general population cohorts. --- Results and conclusions. By combining HLA and autoantibody screening, T1D risks similar to those reported for autoantibody-positive FDRs are observed in the pediatric general population. Progression rate to T1D is high in genetically susceptible children with persistent multipositivity. Measurement of IAA affinity failed in stratifying the risk assessment in young IAA-positive children with HLA-conferred disease susceptibility, among whom affinity of IAA did not increase during the prediabetic period. 
Young age at seroconversion, increased weight-for-height, decreased early insulin response, and increased IAA and IA-2A levels predict T1D in young children with genetic disease susceptibility and signs of advanced beta-cell autoimmunity. Since the incidence of T1D continues to increase, efforts aimed at preventing T1D are important, and reliable disease prediction is needed both for intervention trials and for effective and safe preventive therapies in the future. Our observations confirmed that combined HLA-based screening and regular autoantibody measurements reveal similar disease risks in pediatric general population as those seen in prediabetic FDRs, and that risk assessment can be stratified further by studying glucose metabolism of prediabetic subjects. As these screening efforts are feasible in practice, the knowledge now obtained can be exploited while designing intervention trials aimed at secondary prevention of T1D.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive for producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D-experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
Abstract:
We study quench dynamics and defect production in the Kitaev and the extended Kitaev models. For the Kitaev model in one dimension, we show that in the limit of slow quench rate, the defect density n ∼ 1/√τ, where 1/τ is the quench rate. We also compute the defect correlation function by providing an exact calculation of all independent nonzero spin correlation functions of the model. In two dimensions, where the quench dynamics takes the system across a critical line, we elaborate on the results of earlier work [K. Sengupta, D. Sen, and S. Mondal, Phys. Rev. Lett. 100, 077204 (2008)] to discuss the unconventional scaling of the defect density with the quench rate. In this context, we outline a general proof that for a d-dimensional quantum model, where the quench takes the system through a (d−m)-dimensional gapless (critical) surface characterized by correlation length exponent ν and dynamical critical exponent z, the defect density scales as n ∼ 1/τ^{mν/(zν+1)}. We also discuss the variation of the shape and spatial extent of the defect correlation function with both the rate of quench and the model parameters and compute the entropy generated during such a quenching process. Finally, we study the defect scaling law, entropy generation, and defect correlation function of the two-dimensional extended Kitaev model.
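The general scaling law quoted in the abstract can be checked against the one-dimensional result it also reports; both exponents come directly from the text:

```latex
% General scaling law for the defect density:
n \;\sim\; \tau^{-m\nu/(z\nu+1)}
% One-dimensional Kitaev model: the quench crosses an isolated
% critical point, so m = d = 1, with \nu = z = 1:
n \;\sim\; \tau^{-(1)(1)/\left((1)(1)+1\right)} \;=\; \tau^{-1/2},
% which reproduces the n \sim 1/\sqrt{\tau} result quoted above.
```

The two-dimensional case, where the quench crosses a critical line (m = 1, d = 2), gives a different exponent through the same formula, which is the "unconventional scaling" the abstract refers to.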