948 results for Single Equation Models
Abstract:
Two main approaches are commonly used to empirically evaluate linear factor pricing models: regression and SDF methods, with centred and uncentred versions of the latter. We show that, unlike standard two-step or iterated GMM procedures, single-step estimators such as continuously updated GMM yield numerically identical values for prices of risk, pricing errors, Jensen's alphas and overidentifying restrictions tests irrespective of the model validity. Therefore, there is arguably a single approach regardless of the factors being traded or not, or the use of excess or gross returns. We illustrate our results by revisiting Lustig and Verdelhan's (2007) empirical analysis of currency returns.
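The single-step estimation referred to above can be illustrated schematically. Below is a minimal Python sketch of a continuously updated GMM criterion for uncentred SDF moment conditions with excess returns, using simulated data; all names and data are illustrative, not the paper's code or dataset.

```python
# Minimal sketch of continuously updated (CU) GMM for a linear SDF model,
# m_t = 1 - b'f_t, with uncentred moments E[(1 - b'f_t) R_t] = 0.
# Names and simulated data are illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, K, N = 500, 2, 5                     # periods, factors, test (excess) returns
f = rng.normal(size=(T, K))             # factor realizations
R = rng.normal(size=(T, N)) + f @ rng.normal(size=(K, N)) * 0.1

def moments(b):
    """Per-period moment contributions u_t = (1 - f_t'b) * R_t, shape (T, N)."""
    m = 1.0 - f @ b                     # SDF realizations
    return R * m[:, None]

def cu_gmm_objective(b):
    """CU-GMM criterion: T * gbar' S(b)^{-1} gbar, with S re-evaluated at b."""
    u = moments(b)
    gbar = u.mean(axis=0)
    S = np.cov(u, rowvar=False)         # moment covariance at the same b
    return T * gbar @ np.linalg.solve(S, gbar)

res = minimize(cu_gmm_objective, x0=np.zeros(K), method="Nelder-Mead")
print("estimated SDF coefficients b:", res.x)
print("overidentification (J) statistic:", res.fun)  # chi2 with N-K dof under the null
```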
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher order models. Because a direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered as a breadth rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM improved the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
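The BSEM idea of "approximate zeros" can be written compactly; the schematic prior below uses a small variance of 0.01 purely for illustration, not the study's actual prior settings.

```latex
% Schematic BSEM prior on cross-loadings (illustrative variance):
\lambda_{jk} \sim \mathcal{N}\!\left(0,\ \sigma_0^{2}\right),
\qquad \sigma_0^{2}\ \text{small (e.g. } 0.01\text{)},
% versus the classical CFA constraint \lambda_{jk} = 0 for every indicator j
% not assigned to factor k: cross-loadings are shrunk toward zero rather than
% forced to equal it exactly.
```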
Abstract:
When dealing with the design of service networks, such as health and EMS services, banking or distributed ticket-selling services, the location of service centers has a strong influence on the congestion at each of them and, consequently, on the quality of service. In this paper, several models are presented to consider service congestion. The first model addresses the location of the least number of single-server centers such that all the population is served within a standard distance, and nobody stands in line for longer than a given time limit, or with more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and is tested on a 30-node network.
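A minimal sketch of the set-covering core of the first model is given below in Python with PuLP, on a random instance with illustrative names; the paper's congestion constraints on waiting time and queue length are deliberately omitted.

```python
# Sketch of the location set-covering core: open the fewest single-server centers
# so that every demand node is within a standard distance. Instance data are random;
# the congestion (waiting-time / queue-length) constraints are omitted for brevity.
import numpy as np
import pulp

rng = np.random.default_rng(1)
n = 30                                   # nodes (candidate sites = demand points)
xy = rng.uniform(0, 100, size=(n, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
S = 25.0                                 # standard (coverage) distance

prob = pulp.LpProblem("set_covering_location", pulp.LpMinimize)
y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(n)]   # open a center at j?
prob += pulp.lpSum(y)                                             # minimize number of centers
for i in range(n):                       # every node i must have an open center nearby
    prob += pulp.lpSum(y[j] for j in range(n) if dist[i, j] <= S) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("centers opened:", [j for j in range(n) if y[j].value() > 0.5])
```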
Abstract:
Eurymetopum is an Andean clerid genus with 22 species. We modeled the ecological niches of 19 species with Maxent and used them as potential distribution maps to identify patterns of richness and endemicity. All modeled species maps were overlaid in a single map to determine richness. We performed an optimality analysis with NDM/VNDM in a grid of 1° latitude-longitude in order to identify endemism. We found a highly rich area, located between 32° and 41° south latitude, where the richest pixels have 16 species. One area of endemism was identified, located in the Maule and Valdivian Forest biogeographic provinces, which extends also to the Santiago province of the Central Chilean subregion, and contains four endemic species (E. parallelum, E. prasinum, E. proteus, and E. viride), as well as 16 non-endemic species. The sympatry of these phylogenetically unrelated species might indicate ancient vicariance processes, followed by episodes of dispersal. Based on our results, we suggest a close relationship between these provinces, with the Maule representing a complex area.
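The richness-mapping step, overlaying the thresholded niche models and counting species per grid cell, amounts to a cell-wise sum. A minimal sketch with synthetic rasters follows; the suitability values and threshold are illustrative stand-ins for the Maxent outputs.

```python
# Sketch of stacking thresholded niche-model maps into a species-richness map.
# 'suitability' would come from Maxent output rasters; here it is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_species, rows, cols = 19, 120, 80
suitability = rng.random((n_species, rows, cols))     # Maxent-like suitability in [0, 1]
threshold = 0.7                                       # presence threshold per species
presence = suitability >= threshold                   # binary potential distribution maps
richness = presence.sum(axis=0)                       # species count per grid cell
print("maximum richness in a cell:", richness.max())
```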
Abstract:
OBJECTIVE: Whole-body vibration (WBV) exercise is progressively adopted as an alternative therapeutic modality for enhancing muscle force and muscle activity via neurogenic potentiation. So far, possible changes in the recruitment patterns of the trunk musculature after WBV remain undetermined. The main objective of this study was to evaluate the short-term effects of a single WBV session on trunk neuromuscular responses in patients with chronic low back pain (cLBP) and healthy participants. METHODS: Twenty patients with cLBP and 21 healthy participants performed 10 trunk flexion-extensions before and after a single WBV session consisting of five 1-minute vibration sets. Surface electromyography (EMG) of erector spinae at L2-L3 and L4-L5 and lumbopelvic kinematic variables were collected during the trials. Data were analyzed using 2-way mixed analysis of variance models. RESULTS: The WBV session led to increased lumbar EMG activity during the flexion and extension phases but yielded no change in the quiet standing and fully flexed phases. Kinematic data showed a decreased contribution to the movement of the lumbar region in the second extension quartile. These effects were not different between patients with cLBP and healthy participants. CONCLUSIONS: Increased lumbar EMG activity after a single WBV session most probably results from potentiation effects of WBV on lumbar muscles reflex responses. Decreased EMG activity in full trunk flexion, usually observed in healthy individuals, was still present after WBV, suggesting that the ability of the spine stabilizing mechanisms to transfer the extension torque from muscles to passive structures was not affected.
Abstract:
Aim: The imperfect detection of species may lead to erroneous conclusions about species-environment relationships. Accuracy in species detection usually requires temporal replication at sampling sites, a time-consuming and costly monitoring scheme. Here, we applied a lower-cost alternative based on a double-sampling approach to incorporate the reliability of species detection into regression-based species distribution modelling. Location: Doñana National Park (south-western Spain). Methods: Using species-specific monthly detection probabilities, we estimated the detection reliability as the probability of having detected the species given the species-specific survey time. Such reliability estimates were used to account explicitly for data uncertainty by weighting each absence. We illustrated how this novel framework can be used to evaluate four competing hypotheses as to what constitutes primary environmental control of amphibian distribution: breeding habitat, aestivating habitat, spatial distribution of surrounding habitats and/or major ecosystems zonation. The study was conducted on six pond-breeding amphibian species during a 4-year period. Results: Non-detections should not be considered equivalent to real absences, as their reliability varied considerably. The occurrence of Hyla meridionalis and Triturus pygmaeus was related to a particular major ecosystem of the study area, where suitable habitat for these species seemed to be widely available. Characteristics of the breeding habitat (area and hydroperiod) were of high importance for the occurrence of Pelobates cultripes and Pleurodeles waltl. Terrestrial characteristics were the most important predictors of the occurrence of Discoglossus galganoi and Lissotriton boscai, along with the spatial distribution of breeding habitats for the last species. Main conclusions: We did not find a single best-supported hypothesis valid for all species, which stresses the importance of multiscale and multifactor approaches. More importantly, this study shows that estimating the reliability of non-detection records, an exercise that had previously been seen as a naïve goal in species distribution modelling, is feasible and could be promoted in future studies, at least in comparable systems.
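The weighting of absences can be sketched as follows: with a monthly detection probability p and s survey-months at a site, the reliability of a non-detection is 1 - (1 - p)^s, which can enter an occurrence model as a weight on the absence records. The sketch below uses synthetic data and is not the authors' implementation.

```python
# Sketch: weight non-detection records by the probability that the species would
# have been detected given the survey effort, then fit a weighted occurrence model.
# Data are synthetic; p_detect and effort would come from the double-sampling design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_sites = 200
X = rng.normal(size=(n_sites, 3))                 # habitat predictors
y = rng.integers(0, 2, size=n_sites)              # 1 = detected, 0 = not detected
p_detect = 0.3                                    # species-specific monthly detection prob.
effort = rng.integers(1, 7, size=n_sites)         # survey months per site

reliability = 1.0 - (1.0 - p_detect) ** effort    # P(detected at least once | present)
w = np.where(y == 1, 1.0, reliability)            # down-weight unreliable absences

model = LogisticRegression().fit(X, y, sample_weight=w)
print("coefficients:", model.coef_.ravel())
```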
Abstract:
Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at a single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features - namely, voltage topographies - as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
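A minimal sketch of the topography-clustering idea using scikit-learn's GaussianMixture on synthetic single-trial data; the actual pipeline (latency-resolved features, specific cross-validation scheme) is richer than this.

```python
# Sketch: model single-trial voltage topographies with a Gaussian Mixture Model,
# then use the posterior "presence" of each template topography as features for
# classifying experimental conditions. Synthetic data; not the authors' pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_electrodes = 300, 64
X = rng.normal(size=(n_trials, n_electrodes))     # one topography per trial (fixed latency)
y = rng.integers(0, 2, size=n_trials)             # condition labels (e.g., left vs right)

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0).fit(X)
features = gmm.predict_proba(X)                   # degree of presence of each template

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, y, cv=5)  # cross-validated decoding accuracy
print("mean CV accuracy:", scores.mean())
```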
Abstract:
Contamination of weather radar echoes by anomalous propagation (anaprop) mechanisms remains a serious issue in quality control of radar precipitation estimates. Although significant progress has been made in identifying clutter due to anaprop, there is no unique method that solves the question of data reliability without removing genuine data. The work described here relates to the development of a software application that uses a numerical weather prediction (NWP) model to obtain the temperature, humidity and pressure fields needed to calculate the three-dimensional structure of the atmospheric refractive index, from which a physically based prediction of the incidence of clutter can be made. This technique can be used in conjunction with existing methods for clutter removal by modifying the parameters of detectors or filters according to the physical evidence for anomalous propagation conditions. The parabolic equation method (PEM) is a well-established technique for solving the equations for beam propagation in a non-uniformly stratified atmosphere, but although intrinsically very efficient, it is not sufficiently fast to be practicable for near real-time modelling of clutter over the entire area observed by a typical weather radar. We demonstrate a fast hybrid PEM technique that is capable of providing acceptable results in conjunction with a high-resolution terrain elevation model, using a standard desktop personal computer. We discuss the performance of the method and approaches for the improvement of the model profiles in the lowest levels of the troposphere.
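The refractive-index fields can be computed from the NWP variables with the standard radio-refractivity expression; a short sketch assuming the usual Bean-Dutton formula and the modified refractivity commonly used to diagnose ducting (values are illustrative).

```python
# Sketch: radio refractivity from NWP temperature, pressure and humidity fields,
# using the standard expression N = 77.6/T * (P + 4810 * e / T)  (P, e in hPa, T in K),
# and modified refractivity M = N + 0.157 * h (h in metres), whose vertical gradient
# flags ducting / anomalous-propagation conditions.
import numpy as np

def refractivity(P_hPa, T_K, e_hPa):
    """Radio refractivity N (N-units)."""
    return 77.6 / T_K * (P_hPa + 4810.0 * e_hPa / T_K)

def modified_refractivity(N, height_m):
    """Modified refractivity M; dM/dz < 0 over a layer indicates a duct."""
    return N + 0.157 * height_m

# Example profile (illustrative values only)
heights = np.array([0.0, 100.0, 200.0, 300.0])
N = refractivity(P_hPa=np.array([1013.0, 1001.0, 989.0, 978.0]),
                 T_K=np.array([288.0, 287.4, 286.8, 286.2]),
                 e_hPa=np.array([12.0, 10.0, 5.0, 4.0]))
M = modified_refractivity(N, heights)
print("dM per 100 m layer:", np.diff(M))
```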
Abstract:
In recent years there has been growing interest in the question of how the particular topology of polymeric chains affects their overall dimensions and physical behavior. The majority of relevant studies are based on numerical simulation methods or analytical treatment; however, both these approaches depend on various assumptions and simplifications. Experimental verification is clearly needed but was hampered by practical difficulties in obtaining preparative amounts of knotted or catenated polymers with predefined topology and precisely set chain length. We introduce here an efficient method of production of various single-stranded DNA knots and catenanes that have the same global chain length. We also characterize electrophoretic migration of the produced single-stranded DNA knots and catenanes with increasing complexity.
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, it is of the utmost importance to develop compact dynamic thermal models that can be used for electrothermal simulation. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer-function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
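The multiexponential transient analysis can be sketched as a constrained nonlinear least-squares fit of a Foster-type sum of exponentials to the thermal step response; the sketch below uses synthetic data and illustrative names, not the paper's extraction tool.

```python
# Sketch: fit a thermal step response Zth(t) = sum_i R_i * (1 - exp(-t / tau_i))
# by constrained nonlinear least squares (non-negative R_i, tau_i), as one option
# for multiexponential transient analysis. Synthetic data, illustrative names.
import numpy as np
from scipy.optimize import least_squares

def zth(params, t, order):
    R, tau = params[:order], params[order:]
    return np.sum(R[:, None] * (1.0 - np.exp(-t[None, :] / tau[:, None])), axis=0)

rng = np.random.default_rng(5)
t = np.logspace(-4, 1, 200)                          # time points [s]
true = np.array([0.5, 1.2, 0.8, 1e-3, 1e-2, 0.3])    # R1..R3, tau1..tau3
y = zth(true, t, 3) + 0.002 * rng.normal(size=t.size)

order = 3
x0 = np.concatenate([np.full(order, 0.5), np.logspace(-3, 0, order)])
fit = least_squares(lambda p: zth(p, t, order) - y, x0,
                    bounds=(0.0, np.inf))            # constrained NLSQ
print("fitted R_i:", fit.x[:order])
print("fitted tau_i:", fit.x[order:])
```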
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to also use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
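A schematic sketch of the workflow described above: cluster realizations by their approximate responses, run the exact model only at one medoid per cluster, and build a global error model from the medoid errors. The kernel distances, flow solver, and MsFV details of the paper are not reproduced.

```python
# Schematic sketch of the cluster/medoid/error-model workflow: realizations are
# clustered by their approximate (cheap) responses, one medoid per cluster is run
# with the exact model, and a simple global error model interpolates the medoid
# errors to correct all approximate responses. Responses are simulated here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_real, n_feat = 200, 10
approx = rng.normal(size=(n_real, n_feat))                     # approximate flow responses
exact = approx + 0.3 + 0.1 * rng.normal(size=approx.shape)     # biased "exact" responses

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(approx)

# Medoid = realization closest to its cluster centre (in approximate-response space)
medoids = np.array([
    np.where(km.labels_ == c)[0][
        np.argmin(np.linalg.norm(approx[km.labels_ == c] - km.cluster_centers_[c], axis=1))
    ]
    for c in range(k)
])

# Global-style error model: linear map from approximate response to medoid error
err_model = LinearRegression().fit(approx[medoids], exact[medoids] - approx[medoids])
corrected = approx + err_model.predict(approx)
print("mean abs bias before/after correction:",
      np.abs(exact - approx).mean(), np.abs(exact - corrected).mean())
```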
Abstract:
We report on experiments aimed at comparing the hysteretic response of a Cu-Zn-Al single crystal undergoing a martensitic transition under strain-driven and stress-driven conditions. Strain-driven experiments were performed using a conventional tensile machine, while a special device was designed to perform stress-driven experiments. Significant differences in the hysteresis loops were found. The strain-driven curves show reentrant behavior (a yield point) that is not observed in the stress-driven case. The dissipated energy in the stress-driven curves is larger than in the strain-driven ones. Results from recently proposed models qualitatively agree with the experiments.
Design and Evaluation of a Single-Span Bridge Using Ultra-High Performance Concrete, September 2009
Abstract:
Research presented herein describes an application of a newly developed material called Ultra-High Performance Concrete (UHPC) to a single-span bridge. The two primary objectives of this research were to develop a shear design procedure for possible code adoption and to provide a performance evaluation to ensure the viability of the first UHPC bridge in the United States. Two secondary objectives were to define material properties and to understand the flexural behavior of a UHPC bridge girder. In order to obtain information in these areas, several tests were carried out, including material testing, large-scale laboratory flexure testing, large-scale laboratory shear testing, large-scale laboratory flexure-shear testing, small-scale laboratory shear testing, and field testing of a UHPC bridge. Experimental and analytical results of the described tests are presented. Analytical models to understand the flexure and shear behavior of UHPC members were developed using iterative computer-based procedures. Previous research is referenced explaining a simplified flexural design procedure and a simplified pure shear design procedure. This work describes a shear design procedure based on the Modified Compression Field Theory (MCFT) that can be used in the design of UHPC members. Conclusions are provided regarding the viability of the UHPC bridge, and recommendations are made for future research.
Abstract:
We deal with the hysteretic behavior of partial cycles in the two-phase region associated with the martensitic transformation of shape-memory alloys. We consider the problem from a thermodynamic point of view and adopt a local equilibrium formalism based on the idea of thermoelastic balance, from which a state equation follows for the material in terms of its temperature T, the external applied stress σ, and the transformed volume fraction x. To describe the striking memory properties exhibited by partial transformation cycles, the state variables (x, σ, T) corresponding to the current state of the system have to be supplemented with the values of these variables at the points where the transformation control parameter (σ and/or T) had reached a maximum or a minimum in the previous thermodynamic history of the system. We restrict our study to simple partial cycles resulting from a single maximum or minimum of the control parameter. Several common features displayed by such partial cycles and repeatedly observed in experiments lead to a set of analytic restrictions, listed explicitly in the paper, to be verified by the dissipative term of the state equation, which is responsible for hysteresis. Finally, using calorimetric data of thermally induced partial cycles through the martensitic transformation in a Cu-Zn-Al alloy, we have fitted a functional form of the dissipative term consistent with the analytic restrictions mentioned above.
Abstract:
We derive nonlinear diffusion equations and equations containing corrections due to fluctuations for a coarse-grained concentration field. To deal with diffusion coefficients with an explicit dependence on the concentration values, we generalize the Van Kampen method of expansion of the master equation to field variables. We apply these results to the derivation of equations of phase-separation dynamics and interfacial growth instabilities.
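For a single extensive variable, the standard van Kampen system-size expansion, which the paper generalizes to field variables, starts from the ansatz below (schematic form only).

```latex
% Standard single-variable van Kampen ansatz (the paper generalizes this to fields):
n = \Omega\,\phi(t) + \Omega^{1/2}\,\xi ,
% Substituting into the master equation for P(n,t) and collecting powers of
% \Omega^{-1/2} yields, at leading order, the deterministic (here, nonlinear
% diffusion) law for \phi and, at the next order, a linear Fokker--Planck
% equation for the fluctuations \xi around that law.
```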