885 results for empirical shell model
                                
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance, which in turn depends on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
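A minimal sketch (Python) of how such a weather-dependent correction might be parameterized, assuming a simple form in which a cloud- and snow-dependent radiation term is damped by ventilation; the functional form, constants, and names below are illustrative assumptions, not the paper's actual formulation:

    def screen_correction(cloud_cover, wind_speed, snow_cover,
                          c_offset, c_rad, c_vent,
                          sw_clear=800.0, albedo_snow=0.8, albedo_bare=0.2):
        # Hypothetical daily correction (in K) for a screen change.
        # cloud_cover: fraction 0..1; wind_speed: m/s; snow_cover: 0 or 1.
        # c_offset, c_rad, c_vent are the three free parameters, which could be
        # determined, e.g., from the mean offsets at three observation times.
        albedo = albedo_snow if snow_cover else albedo_bare
        # crude cloud reduction of a clear-sky short-wave flux absorbed near the shield
        sw_absorbed = sw_clear * (1.0 - 0.75 * cloud_cover ** 3) * (1.0 - albedo)
        # radiation-driven error, damped by ventilation (wind)
        return c_offset + c_rad * sw_absorbed / (1.0 + c_vent * wind_speed)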
                                
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
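A minimal Python sketch of the rotation step described above, assuming the design matrix, fixed-effect estimates, and the estimated marginal covariance matrix are already available (the function and variable names are ours, not the paper's); the ECDF of the rotated residuals, or of linear functions of them, is what the global tests are then built on:

    import numpy as np

    def rotated_residuals_and_ecdf(y, X, beta_hat, V_hat):
        # marginal residuals under the fitted mean model
        resid = y - X @ beta_hat
        # Cholesky factor of the inverse of the estimated marginal covariance
        C = np.linalg.cholesky(np.linalg.inv(V_hat))
        r_rot = C.T @ resid          # "rotated" residuals, approx. iid under a correct model
        r_sorted = np.sort(r_rot)

        def ecdf(t):
            # proportion of rotated residuals at or below t
            return np.searchsorted(r_sorted, t, side="right") / r_sorted.size

        return r_rot, ecdf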
                                
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
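Complementing the sketch above, a deliberately naive illustration of a pointwise standard error for an ECDF value; this iid, known-parameter binomial form ignores the effect of parameter estimation, which is broadly what the asymptotic theory cited above (Lange and Ryan 1989; Pierce 1982; Randles 1982) addresses:

    import numpy as np

    def ecdf_pointwise_se(r_rot, t):
        # naive standard error of the ECDF of rotated residuals at point t
        n = r_rot.size
        f_hat = np.mean(r_rot <= t)
        return np.sqrt(f_hat * (1.0 - f_hat) / n)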
                                
Abstract:
An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
                                
Abstract:
The flammability zone boundaries are very important properties for preventing explosions in the process industries. Within the boundaries, a flame or explosion can occur, so it is important to understand these boundaries to prevent fires and explosions. Very little work has been reported in the literature on modelling the flammability zone boundaries. Two boundaries are defined and studied: the upper flammability zone boundary and the lower flammability zone boundary. Three methods are presented to predict the upper and lower flammability zone boundaries: the linear model, the extended linear model, and an empirical model. The linear model is a thermodynamic model that uses the upper flammability limit (UFL) and lower flammability limit (LFL) to calculate two adiabatic flame temperatures. When the proper assumptions are applied, the linear model can be reduced to the well-known equation y_LOC = z·y_LFL for estimation of the limiting oxygen concentration. The extended linear model attempts to account for the changes in the reactions along the UFL boundary. Finally, the empirical method fits the boundaries with linear equations between the UFL or LFL and the intercept with the oxygen axis. Comparison of the models to experimental data of the flammability zone shows that the best model for estimating the flammability zone boundaries is the empirical method. It is shown that it fits the limiting oxygen concentration (LOC), upper oxygen limit (UOL), and lower oxygen limit (LOL) quite well. The regression coefficient values for the fits to the LOC, UOL, and LOL are 0.672, 0.968, and 0.959, respectively. This is better than the fit of the y_LOC = z·y_LFL method for the LOC, for which the regression coefficient's value is 0.416.
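As a worked illustration of the reduced relation y_LOC = z·y_LFL mentioned above (z being the stoichiometric moles of oxygen per mole of fuel), a short Python check using methane, whose LFL of about 5 vol% and z = 2 are standard handbook values; this is the kind of prediction the empirical fits are compared against:

    # Reduced linear-model estimate of the limiting oxygen concentration (LOC):
    #   y_LOC = z * y_LFL
    # Methane: CH4 + 2 O2 -> CO2 + 2 H2O, so z = 2; LFL ~ 5 vol%.
    z = 2.0
    y_lfl = 0.05                 # lower flammability limit (mole fraction)
    y_loc = z * y_lfl            # -> 0.10, i.e. about 10 vol% oxygen
    print(f"Estimated LOC for methane: {y_loc:.0%}")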
                                
Abstract:
Ocean acidification from the uptake of anthropogenic carbon is simulated for the industrial period and IPCC SRES emission scenarios A2 and B1 with a global coupled carbon cycle-climate model. Earlier studies identified seawater saturation state with respect to aragonite, a mineral phase of calcium carbonate, as a key variable governing impacts on corals and other shell-forming organisms. Globally in the A2 scenario, water saturated by more than 300%, considered suitable for coral growth, vanishes by 2070 AD (CO2≈630 ppm), and the ocean volume fraction occupied by saturated water decreases from 42% to 25% over this century. The largest simulated pH changes worldwide occur in Arctic surface waters, where hydrogen ion concentration increases by up to 185% (ΔpH=−0.45). Projected climate change amplifies the decrease in Arctic surface mean saturation and pH by more than 20%, mainly due to freshening and increased carbon uptake in response to sea ice retreat. Modeled saturation compares well with observation-based estimates along an Arctic transect and simulated changes have been corrected for remaining model-data differences in this region. Aragonite undersaturation in Arctic surface waters is projected to occur locally within a decade and to become more widespread as atmospheric CO2 continues to grow. The results imply that surface waters in the Arctic Ocean will become corrosive to aragonite, with potentially large implications for the marine ecosystem, if anthropogenic carbon emissions are not reduced and atmospheric CO2 not kept below 450 ppm.
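A one-line check of the hydrogen-ion figure quoted above: since pH = −log10[H+], a pH decrease of 0.45 corresponds to an [H+] ratio of 10^0.45 ≈ 2.8, i.e. an increase on the order of 180%, consistent with the value reported:

    delta_ph = -0.45
    ratio = 10 ** (-delta_ph)                            # new [H+] relative to old, from pH = -log10([H+])
    print(f"[H+] increase: {(ratio - 1) * 100:.0f}%")    # ~182%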
                                
Abstract:
The IDA model of cognition is a fully integrated artificial cognitive system reaching across the full spectrum of cognition, from low-level perception/action to high-level reasoning. Extensively based on empirical data, it accurately reflects the full range of cognitive processes found in natural cognitive systems. As a source of plausible explanations for very many cognitive processes, the IDA model provides an ideal tool to think with about how minds work. This online tutorial offers a reasonably full account of the IDA conceptual model, including background material. It also provides a high-level account of the underlying computational “mechanisms of mind” that constitute the IDA computational model.
                                
Abstract:
Plant cell expansion is controlled by a fine-tuned balance between intracellular turgor pressure, cell wall loosening and cell wall biosynthesis. To understand these processes, it is important to gain in-depth knowledge of cell wall mechanics. Pollen tubes are tip-growing cells that provide an ideal system to study mechanical properties at the single cell level. With the available approaches it was not easy to measure important mechanical parameters of pollen tubes, such as the elasticity of the cell wall. We used a cellular force microscope (CFM) to measure the apparent stiffness of lily pollen tubes. In combination with a mechanical model based on the finite element method (FEM), this allowed us to calculate turgor pressure and cell wall elasticity, which we found to be around 0.3 MPa and 20–90 MPa, respectively. Furthermore, and in contrast to previous reports, we showed that the difference in stiffness between the pollen tube tip and the shank can be explained solely by the geometry of the pollen tube. CFM, in combination with an FEM-based model, provides a powerful method to evaluate important mechanical parameters of single, growing cells. Our findings indicate that the cell wall of growing pollen tubes has mechanical properties similar to rubber. This suggests that a fully turgid pollen tube is a relatively stiff, yet flexible cell that can react very quickly to obstacles or attractants by adjusting the direction of growth on its way through the female transmitting tissue.
                                
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and thus produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results; however, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
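For orientation, a minimal sketch of a Holmgren-type multiple-flow-direction step on a 3×3 DEM window, in which the proportion of flow routed to each lower neighbour scales with tan(slope)^x; the exponent x and cell size are user choices, and the damping that makes Flow-R's modified version less sensitive to small DEM variations is not reproduced here:

    import numpy as np

    def holmgren_proportions(window, cell_size=10.0, x=4.0):
        # window: 3x3 array of elevations; the centre cell is the flow source
        centre = window[1, 1]
        dist = cell_size * np.array([[2 ** 0.5, 1.0, 2 ** 0.5],
                                     [1.0,      1.0, 1.0],
                                     [2 ** 0.5, 1.0, 2 ** 0.5]])
        tan_beta = (centre - window) / dist       # positive where the neighbour is lower
        tan_beta[1, 1] = 0.0
        pos = np.clip(tan_beta, 0.0, None)        # keep only downslope gradients
        weights = pos ** x
        total = weights.sum()
        return weights / total if total > 0 else weights   # flow proportions over downslope cells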
                                
Abstract:
This article seeks to contribute to the illumination of the so-called 'paradox of voting' using the German Bundestag elections of 1998 as an empirical case. Downs' model of voter participation will be extended to include elements of the theory of subjective expected utility (SEU). This will allow a theoretical and empirical exploration of the crucial mechanisms behind individual voters' decisions to participate, or abstain from voting, in the German general election of 1998. It will be argued that the vanishingly small probability that an individual citizen's vote decides the election outcome will not necessarily reduce the probability of electoral participation. The empirical analysis is largely based on data from the ALLBUS 1998. It confirms the predictions derived from SEU theory. The voters' expected benefits and their subjective expectation of being able to influence government policy by voting are the crucial mechanisms explaining participation. By contrast, the explanatory contribution of perceived information and opportunity costs is low.
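A compact illustration of the decision calculus underlying this argument, in the standard Downsian form R = p·B − C + D, read here with subjectively expected terms as in SEU theory; the numerical values below are purely hypothetical:

    # R = p*B - C + D
    # p: subjective probability of casting the decisive vote, B: benefit from the
    # preferred outcome, C: information/opportunity costs, D: expressive/duty term.
    p, B, C, D = 1e-7, 1000.0, 0.5, 1.0   # hypothetical values
    R = p * B - C + D                     # p*B is negligible; D relative to C drives the decision
    print(R > 0)                          # True: participation despite a near-zero p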
                                
Abstract:
Introduction: Prospective memory (PM), the ability to remember to perform intended activities in the future (Kliegel & Jäger, 2007), is crucial to succeed in everyday life. PM seems to improve gradually over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in young school children in general, and even less is known about factors influencing its development. Currently, a number of studies suggest that executive functions (EF) are potentially influencing processes (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Additionally, metacognitive processes (MC: monitoring and control) are assumed to be involved in optimizing one's performance (Krebs & Roebers, 2010; 2012; Roebers, Schmid, & Roderer, 2009). Yet, the relations between PM, EF and MC remain relatively unspecified. We intend to empirically examine the structural relations between these constructs. Method: A cross-sectional study including 119 2nd graders (M_age = 95.03, SD_age = 4.82) will be presented. Participants (n = 68 girls) completed three EF tasks (Stroop, updating, shifting), a computerised event-based PM task and a MC spelling task. The latent variables PM, EF and MC, represented by manifest variables derived from these tasks, were interrelated by structural equation modelling. Results: Analyses revealed clear associations between the three cognitive constructs PM, EF and MC (r_PM-EF = .45, r_PM-MC = .23, r_EF-MC = .20). A three-factor model, as opposed to one- or two-factor models, appeared to fit the data excellently (χ²(17, N = 119) = 18.86, p = .34, RMSEA = .030, CFI = .990, TLI = .978). Discussion: The results indicate that already in young elementary school children, PM, EF and MC are empirically well distinguishable, but nevertheless substantially interrelated. PM and EF seem to share a substantial amount of variance, while for MC, more unique processes may be assumed.
                                
Abstract:
Given the increasing interest in using social software for company-internal communication and collaboration, this paper examines drivers and inhibitors of micro-blogging adoption at the workplace. While nearly one in two companies is currently planning to introduce social software, there is no empirically validated research on employees’ adoption. In this paper, we build on previous focus group results and test our research model in an empirical study using Structural Equation Modeling. Based on our findings, we derive recommendations on how to foster adoption. We suggest that micro-blogging should be presented to employees as an efficient means of communication, personal brand building, and knowledge management. In order to particularly promote content contribution, privacy concerns should be eased by setting clear rules on who has access to postings and for how long they will be archived.
                                
Abstract:
Unprecedented success of Online Social Networks, such as Facebook, has been recently overshadowed by the privacy risks they imply. Weary of privacy concerns and unable to construct their identity in the desired way, users may restrict or even terminate their platform activities. Even though this means a considerable business risk for these platforms, so far there have been no studies on how to enable social network providers to address these problems. This study fills this gap by adopting a fairness perspective to analyze related measures at the disposal of the provider. In a Structural Equation Model with 237 subjects we find that ensuring interactional and procedural justice are two important strategies to support user participation on the platform.
                                
Abstract:
Empirical data suggest that the rate of calving of grounded glaciers terminating in water is directly proportional to the water depth. Important controls on calving may be the extent to which a calving face tends to become oversteepened by differential flow within the ice and the extent to which bending moments promote extrusion and bottom crevassing at the base of a calving face. Numerical modelling suggests that the tendency to become oversteepened increases roughly linearly with water depth. In addition, extending longitudinal deviatoric stresses at the base of a calving face increase with water depth. These processes provide a possible physical explanation for the observed calving-rate/water-depth relation.
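A minimal sketch of the empirical relation referred to above (calving rate directly proportional to water depth); the proportionality constant used here is purely illustrative and not a value taken from this paper:

    def calving_rate(water_depth_m, k_per_yr=27.0):
        # linear calving law: rate (m/yr) proportional to water depth (m);
        # k_per_yr (1/yr) is a hypothetical constant for illustration only
        return k_per_yr * water_depth_m

    print(calving_rate(100.0))   # 2700.0 m/yr for 100 m of water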
                                
Abstract:
Objective: Integrated behavior therapy approaches are defined by the combination of behavioral and/or cognitive interventions targeting neurocognition with other goal-oriented treatment targets such as social cognition, social skills, or educational issues. The Integrated Psychological Therapy Program (IPT) represents one of the very first behavior therapy approaches combining interventions for neurocognition, social cognition, and social competence. This comprehensive group-based bottom-up and top-down approach consists of five subprograms, each with incremental steps. IPT has been successfully implemented in several countries in Europe, America, Australia and Asia, and it served as a model for some other approaches designed in the USA. IPT has undergone two further developments: based on the social competence part of IPT, three specific therapy programs focusing on residential, occupational, or recreational topics were developed. Recently, the cognitive part of IPT was further expanded into the Integrated Neurocognitive Therapy (INT), designed exclusively for outpatient treatment: INT includes interventions targeting all neurocognitive and social cognitive domains defined by the NIMH-MATRICS initiative. These group-based and partially PC-based exercises are structured into four therapy modules, each starting with exercises on neurocognitive domains followed by social cognitive targets. Efficacy: The evidence for integrated therapy approaches and their advantage over one-track interventions has become a topic of discussion in therapy research as well as in mental health systems. Results of meta-analyses support the superiority of integrated approaches over one-track interventions in more distal outcome areas such as social functioning. These results are in line with the large body of 37 independent IPT studies in 12 countries. Moreover, IPT research indicates the maintenance of therapy effects after the end of therapy and some evidence of generalization effects. Additionally, the international randomized multi-center study on INT with 169 outpatients strongly supports the efficacy of integrated therapy in proximal and distal outcomes, with significant effects on cognition, functioning and negative symptoms. Clinical implication: Therapy research as well as experts' clinical experience recommends integrated therapy approaches such as IPT as successful agents within multimodal psychiatric treatment concepts. Finally, integrated group therapy based on cognitive remediation seems to motivate and stimulate schizophrenia inpatients and outpatients toward a more successful and independent life, as also demanded by the recovery movement.
 
                    