870 results for common factor models
Abstract:
Survival studies are concerned with the time that elapses from the start of the study (diagnosis of the disease, start of treatment, ...) until the event of interest occurs (death, cure, improvement, ...). However, this event is often observed more than once in the same individual during the follow-up period (multivariate survival data). In this case, a methodology different from that used in standard survival analysis is required. The main problem posed by the study of this type of data is that the observations may not be independent. Until now, this problem has been addressed in two different ways depending on the dependent variable. If this variable follows a distribution from the exponential family, generalized linear mixed models (GLMMs) are used; if the variable is time, whose probability distribution does not belong to this family, multivariate survival analysis is used. The aim of this thesis is to unify these two approaches, that is, to model a time-to-event dependent variable with groupings of individuals or observations using a GLMM, in order to introduce new methods for the treatment of this type of data.
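Not from the thesis itself, but a toy illustration of the dependence problem it addresses: when recurrent event times within an individual (or group) share a latent "frailty" multiplying the hazard, observations from the same group are positively correlated, so standard independent-observation survival methods misstate uncertainty. All parameter values below are invented.

```python
import random
import math

def simulate_clustered_times(n_groups=200, per_group=4, frailty_shape=3.0,
                             base_rate=1.0, seed=42):
    """Simulate exponential event times whose rate is multiplied by a
    shared group-level gamma frailty, inducing within-group correlation."""
    rng = random.Random(seed)
    groups = []
    for _ in range(n_groups):
        # Gamma frailty with mean 1 (shape k, scale 1/k)
        z = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)
        groups.append([rng.expovariate(base_rate * z) for _ in range(per_group)])
    return groups

def within_group_correlation(groups):
    """Crude check: correlation across all pairs of times from the same group."""
    xs, ys = [], []
    for g in groups:
        for i in range(len(g)):
            for j in range(i + 1, len(g)):
                xs.append(g[i]); ys.append(g[j])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in ys) / n)
    return cov / (sx * sy)

groups = simulate_clustered_times()
r = within_group_correlation(groups)  # positive: times cluster within groups
```

A GLMM-based analysis models this shared frailty explicitly as a random effect rather than ignoring it.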
Abstract:
Common Loon (Gavia immer) is considered an emblematic and ecologically important example of aquatic-dependent wildlife in North America. The northern breeding range of Common Loon has contracted over the last century as a result of habitat degradation from human disturbance and lakeshore development. We focused on the state of New Hampshire, USA, where a long-term monitoring program conducted by the Loon Preservation Committee has been collecting biological data on Common Loon since 1976. The Common Loon population in New Hampshire is distributed throughout the state across a wide range of lake-specific habitats, water quality conditions, and levels of human disturbance. We used a multiscale approach to evaluate the association of Common Loon and breeding habitat within three natural physiographic ecoregions of New Hampshire. These multiple scales reflect Common Loon-specific extents such as territories, home ranges, and lake-landscape influences. We developed ecoregional multiscale models and compared them to single-scale models to evaluate model performance in distinguishing Common Loon breeding habitat. Based on information-theoretic criteria, there is empirical support for both multiscale and single-scale models across all three ecoregions, warranting a model-averaging approach. Our results suggest that the Common Loon responds to both ecological and anthropogenic factors at multiple scales when selecting breeding sites. These multiscale models can be used to identify and prioritize the conservation of preferred nesting habitat for Common Loon populations.
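The information-theoretic model-averaging step described above can be sketched with Akaike weights. This is a generic illustration, not the authors' code, and the AIC values and estimates are invented.

```python
import math

def akaike_weights(aics):
    """Convert AIC values into Akaike weights (relative model support)."""
    best = min(aics)
    deltas = [a - best for a in aics]
    rel = [math.exp(-0.5 * d) for d in deltas]
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged(estimates, aics):
    """Average a parameter estimate across models, weighted by support."""
    w = akaike_weights(aics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Hypothetical AICs for multiscale vs. single-scale habitat models
aics = [210.3, 211.1, 214.8]
weights = akaike_weights(aics)
avg = model_averaged([0.42, 0.51, 0.60], aics)
```

When no single model clearly dominates (small AIC differences), the averaged estimate borrows from all supported models rather than committing to one.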
Abstract:
This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
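A minimal sketch of the PCA step the abstract describes: building a low-dimensional deformable model (mean shape plus leading modes of variation) from shape parameter vectors. The data here are synthetic; the paper fits its model to real vehicle measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "vehicle shape" vectors: 50 samples of a 20-D parameter vector
# generated from 6 underlying modes plus small noise.
n_modes = 6
modes = rng.normal(size=(n_modes, 20))
coeffs = rng.normal(size=(50, n_modes))
shapes = coeffs @ modes + 0.01 * rng.normal(size=(50, 20))

# PCA via SVD of the mean-centred data: keep the 6 leading eigenvectors
mean_shape = shapes.mean(axis=0)
u, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
basis = vt[:n_modes]                      # the 6-parameter deformable model

def deform(params):
    """Instantiate a shape from 6 deformation parameters."""
    return mean_shape + params @ basis

# Reconstruct a training shape from its 6 projected parameters
p = (shapes[0] - mean_shape) @ basis.T
recon = deform(p)
err = np.linalg.norm(recon - shapes[0]) / np.linalg.norm(shapes[0])
```

Fitting an image hypothesis then amounts to searching over these few parameters instead of the full shape space.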
Abstract:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1.0 km × 0.4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
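The supercritical transitions mentioned above are conventionally diagnosed with the Froude number. A minimal sketch, with illustrative values not taken from the benchmark:

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def froude(velocity, depth):
    """Froude number Fr = v / sqrt(g h) for shallow free-surface flow."""
    return velocity / math.sqrt(G * depth)

def regime(velocity, depth):
    """Classify flow as subcritical (Fr < 1) or supercritical (Fr > 1)."""
    return "supercritical" if froude(velocity, depth) > 1.0 else "subcritical"

# Shallow fast flow down a street vs. deeper slow flow in an open area
street = regime(2.0, 0.1)     # v = 2 m/s, h = 0.1 m
open_area = regime(0.5, 0.5)  # v = 0.5 m/s, h = 0.5 m
```

Urban flood flows often pass through both regimes over short distances, which is why shock-capturing behaviour differs between the benchmarked codes.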
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run.
(5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
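Step (3) above can be sketched as a mechanical rewrite of an existing workflow script. This is purely illustrative: the script contents are hypothetical, and whether `GRexRun` accepts the same arguments as `mpirun` is an assumption, not documented G-Rex behaviour.

```python
def grexify(script_text):
    """Rewrite a model workflow script so that local 'mpirun' invocations
    are launched remotely via GRexRun instead, leaving everything else
    (pre- and post-processing steps) untouched.
    Assumes GRexRun takes the same arguments as the call it replaces."""
    out = []
    for line in script_text.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("mpirun "):
            indent = line[: len(line) - len(stripped)]
            out.append(indent + "GRexRun " + stripped[len("mpirun "):])
        else:
            out.append(line)
    return "\n".join(out)

# Hypothetical NEMO workflow fragment
workflow = """#!/bin/sh
./preprocess_inputs.sh
mpirun -np 40 ./nemo.exe
./postprocess_outputs.sh"""
converted = grexify(workflow)
```

The point of the design is that this one-line substitution is the only change the scientist's existing workflow needs.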
Abstract:
There is increasing concern about soil enrichment with K+ and subsequent potential losses following long-term application of poor quality water to agricultural land. Different models are increasingly being used for predicting or analyzing water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to the potassium (K+) leaching data. The CDE and CLT models produced equivalent goodness of fit. Simulated breakthrough curves for a range of CaCl2 concentrations based on parameters of 15 mmol l⁻¹ CaCl2 were characterised by an early peak position associated with higher K+ concentration as the CaCl2 concentration used in leaching experiments decreased. In another method, the parameters estimated from 15 mmol l⁻¹ CaCl2 solution were used for all other CaCl2 concentrations, and the best value of the retardation factor (R) was optimised for each data set. A better prediction was found. With decreasing CaCl2 concentration the value of R is required to be greater than that measured (except for 10 mmol l⁻¹ CaCl2), if the estimated parameters of 15 mmol l⁻¹ CaCl2 are used. The two models suffer from the fact that they need to be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
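The role of the retardation factor R can be illustrated with the standard first-term Ogata-Banks solution of the CDE. This is a generic textbook sketch, with invented transport parameters rather than the study's fitted values.

```python
import math

def breakthrough(t, x=0.3, v=1.0, D=0.02, R=2.0):
    """Relative concentration C/C0 at depth x (m) and time t for the
    convective-dispersive equation with retardation factor R
    (first term of the Ogata-Banks solution; second term neglected)."""
    if t <= 0:
        return 0.0
    arg = (R * x - v * t) / (2.0 * math.sqrt(D * R * t))
    return 0.5 * math.erfc(arg)

# A larger R (stronger K+ retention on exchange sites) delays the solute front:
early = breakthrough(t=0.3, R=1.0)  # non-retarded front has arrived (C/C0 = 0.5)
late = breakthrough(t=0.3, R=3.0)   # retarded front has barely arrived
```

Optimising R per data set, as the abstract describes, amounts to shifting this curve along the time axis to match each observed breakthrough.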
Abstract:
Numerical simulations of magnetic clouds (MCs) propagating through a structured solar wind suggest that MC-associated magnetic flux ropes are highly distorted by inhomogeneities in the ambient medium. In particular, a solar wind configuration of fast wind from high latitudes and slow wind at low latitudes, common at periods close to solar minimum, should distort the cross section of magnetic clouds into concave-outward structures. This phenomenon has been reported in observations of shock front orientations, but not in the body of magnetic clouds. In this study an analytical magnetic cloud model based upon a kinematically distorted flux rope is modified to simulate propagation through a structured medium. This new model is then used to identify specific time series signatures of the resulting concave-outward flux ropes. In situ observations of three well studied magnetic clouds are examined with comparison to the model, but the expected concave-outward signatures are not present. Indeed, the observations are better described by the convex-outward flux rope model. This may be due to a sharp latitudinal transition from fast to slow wind, resulting in a globally concave-outward flux rope, but with convex-outward signatures on a local scale.
Abstract:
This paper assesses the impact of the 'decoupling' reform of the Common Agricultural Policy on the labour allocation decisions of Irish farmers. The agricultural household decision-making model provides the conceptual and theoretical framework to examine the interaction between government subsidies and farmers' time allocation decisions. The relationship postulated is that 'decoupling' of agricultural support from production would probably result in a decline in the return to farm labour but it would also lead to an increase in household wealth. The effect of these factors on how farmers allocate their time is tested empirically using labour participation and labour supply models. The models developed are sufficiently general for application elsewhere. The main findings for the Irish situation are that the decoupling of direct payments is likely to increase the probability of farmers participating in the off-farm employment market and that the amount of time allocated to off-farm work will increase.
Abstract:
This case study on the Sifnos island, Greece, assesses the main factors controlling vegetation succession following crop abandonment and describes the vegetation dynamics of maquis and phrygana formations in relation to alternative theories of secondary succession. Field survey data were collected and analysed at community as well as species level. The results show that vegetation succession on abandoned crop fields is determined by the combined effects of grazing intensity, soil and geological characteristics and time. The analysis determines the quantitative grazing thresholds that modify the successional pathway. Light grazing leads to dominance by maquis vegetation while overgrazing leads to phryganic vegetation. The proposed model shows that vegetation succession following crop abandonment is a complex multi-factor process where the final or the stable stage of the process is not predefined but depends on the factors affecting succession. An example of the use of succession models and disturbance thresholds as a policy assessment tool is presented by evaluating the likely vegetation impacts of the recent reform of the Common Agricultural Policy on Sifnos island over a 20-30-year time horizon.
Abstract:
We know little about the genomic events that led to the advent of a multicellular grade of organization in animals, one of the most dramatic transitions in evolution. Metazoan multicellularity is correlated with the evolution of embryogenesis, which presumably was underpinned by a gene regulatory network reliant on the differential activation of signaling pathways and transcription factors. Many transcription factor genes that play critical roles in bilaterian development largely appear to have evolved before the divergence of cnidarian and bilaterian lineages. In contrast, sponges seem to have a more limited suite of transcription factors, suggesting that the developmental regulatory gene repertoire changed markedly during early metazoan evolution. Using whole-genome information from the sponge Amphimedon queenslandica, a range of eumetazoans, and the choanoflagellate Monosiga brevicollis, we investigate the genesis and expansion of homeobox, Sox, T-box, and Fox transcription factor genes. Comparative analyses reveal that novel transcription factor domains (such as Paired, POU, and T-box) arose very early in metazoan evolution, prior to the separation of extant metazoan phyla but after the divergence of choanoflagellate and metazoan lineages. Phylogenetic analyses indicate that transcription factor classes then gradually expanded at the base of Metazoa before the bilaterian radiation, with each class following a different evolutionary trajectory. Based on the limited number of transcription factors in the Amphimedon genome, we infer that the genome of the metazoan last common ancestor included fewer gene members in each class than are present in extant eumetazoans. Transcription factor orthologues present in sponge, cnidarian, and bilaterian genomes may represent part of the core metazoan regulatory network underlying the origin of animal development and multicellularity.
Abstract:
We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
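The core of a phylogenetic mixture model is that each alignment site's likelihood is a weighted sum over component models, without assigning sites to components in advance. A toy numeric sketch (the per-site likelihoods and weights below are invented, not from the paper's data):

```python
import math

def site_log_likelihood(site_likelihoods, weights):
    """Mixture likelihood of one alignment site: a weighted sum of its
    likelihoods under each component model, then logged."""
    return math.log(sum(w * l for w, l in zip(weights, site_likelihoods)))

def alignment_log_likelihood(per_site, weights):
    """Sites are independent, so log-likelihoods add across the alignment."""
    return sum(site_log_likelihood(s, weights) for s in per_site)

# Toy per-site likelihoods under two component models of sequence evolution;
# site 1 fits model A better, site 2 fits model B better, site 3 is neutral.
per_site = [(0.10, 0.02), (0.01, 0.08), (0.05, 0.05)]
weights = (0.6, 0.4)
mix_ll = alignment_log_likelihood(per_site, weights)

# Compare with forcing the first model on every site
single_ll = sum(math.log(a) for a, _ in per_site)
```

Because sites that evolve differently each get weight from the component that fits them, the mixture likelihood exceeds the single-model likelihood, which is the mechanism by which these models absorb heterogeneity that would otherwise surface as node-density artifacts.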
Abstract:
Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of synergism, in which the joint action is greater than the actions of the individual drugs, or of antagonism, in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is illustrated using data from a pre-clinical study.
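The two ingredients the abstract names, four-parameter logistic marginals with shared limits and the Bliss independence reference surface, can be sketched briefly. All drug parameters below are invented, and this is the closed-form Bliss surface, not the paper's iterative Loewe fitting.

```python
def four_pl(dose, ec50, hill, lower=0.0, upper=1.0):
    """Four-parameter logistic dose-response; lower/upper are the limits
    shared by both drugs, ec50 and hill differ per drug."""
    if dose == 0:
        return lower
    return lower + (upper - lower) / (1.0 + (ec50 / dose) ** hill)

def bliss_surface(dose_a, dose_b, ec50_a, ec50_b, hill_a, hill_b):
    """Bliss independence reference: E_ab = E_a + E_b - E_a * E_b,
    with effects expressed as fractions of the common 0..1 range.
    Observed effects above this surface suggest Bliss synergism."""
    ea = four_pl(dose_a, ec50_a, hill_a)
    eb = four_pl(dose_b, ec50_b, hill_b)
    return ea + eb - ea * eb

# Hypothetical drugs with non-parallel curves (different Hill slopes)
e_a = four_pl(1.0, ec50=1.0, hill=1.0)   # 0.5
e_b = four_pl(1.0, ec50=2.0, hill=2.0)   # 0.2
e_combo = bliss_surface(1.0, 1.0, 1.0, 2.0, 1.0, 2.0)
```

Loewe additivity, by contrast, is defined through dose equivalence; with non-parallel curves its surface has no closed form, which is why the paper fits it iteratively.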
Abstract:
The endostyle of invertebrate chordates is a pharyngeal organ that is thought to be homologous with the follicular thyroid of vertebrates. Although thyroid-like features such as iodine-concentrating and peroxidase activities are located in the dorsolateral part of both ascidian and amphioxus endostyles, the structural organization and numbers of functional units are different. To estimate phylogenetic relationships of each functional zone with special reference to the evolution of the thyroid, we have investigated, in ascidian and amphioxus, the expression patterns of thyroid-related transcription factors such as TTF-2/MoxE4 and Pax2/5/8, as well as the forkhead transcription factors FoxQ1 and FoxA. Comparative gene expression analyses depicted an overall similarity between ascidian and amphioxus endostyles, while differences in expression patterns of these genes might be specifically related to the addition or elimination of a pair of glandular zones. Expression of Ci-FoxE and BbFoxE4 suggests that the ancestral FoxE class might have been recruited for the formation of a thyroid-like region in a possible common ancestor of chordates. Furthermore, coexpression of FoxE4, Pax2/5/8, and TPO in the dorsolateral part of both ascidian and amphioxus endostyles suggests that the genetic basis of thyroid function was already in place before the vertebrate lineage.
Abstract:
Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the Multiwell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, California, 2006) and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167) data evaluation panels, which represents the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145) and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398) of the bond dissociation energy, D298(BrO-NO2) = 118 kJ mol⁻¹, corresponding to ΔH₀° = 114.3 kJ mol⁻¹ at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy between the calculations and the database of rate constants of a factor of ca. 4 at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways, either by increasing ΔH₀° to an unrealistically high value (149.3 kJ mol⁻¹) or by increasing
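The low- and high-pressure limits discussed above are connected by a falloff expression; the simplest is the Lindemann-Hinshelwood form (the NASA/IUPAC evaluations add an empirical broadening factor on top of it). The rate parameters below are invented for illustration, not the evaluated BrO + NO2 values.

```python
def lindemann(k0, k_inf, m):
    """Lindemann-Hinshelwood falloff: effective bimolecular rate constant
    interpolating between the low-pressure limit k0*[M] and the
    high-pressure limit k_inf as the bath gas density [M] grows."""
    return k0 * m / (1.0 + k0 * m / k_inf)

# Invented illustrative parameters (cm^3 molecule^-1 s^-1 style units implied)
k0, k_inf = 1.0e-30, 1.0e-11
low = lindemann(k0, k_inf, m=1.0e17)   # near the low-pressure limit: ~k0*[M]
high = lindemann(k0, k_inf, m=1.0e22)  # near the high-pressure limit: ~k_inf
```

A factor-of-4 mismatch at the low-pressure limit, as the abstract reports, therefore points at the parameters controlling k0 (energies and energy-transfer assumptions) rather than the high-pressure kinetics.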
Abstract:
The existing literature on lean construction is overwhelmingly prescriptive with little recognition of the social and politicised nature of the diffusion process. The prevailing production-engineering perspective too often assumes that organizations are unitary entities where all parties strive for the common goal of 'improved performance'. An alternative perspective is developed that considers the diffusion of lean construction across contested pluralistic arenas. Different actors mobilize different storylines to suit their own localized political agendas. Multiple storylines of lean construction continuously compete for attention with other management fashions. The conceptualization and enactment of lean construction therefore differs across contexts, often taking on different manifestations from those envisaged. However, such localized enactments of lean construction are patterned and conditioned by pre-existing social and economic structures over which individual managers have limited influence. Taking a broader view, 'leanness' can be conceptualized in terms of a quest for structural flexibility involving restructuring, downsizing and outsourcing. From this perspective, the UK construction industry can be seen to have embarked upon leaner ways of working in the mid-1970s, long before the terminology of lean thinking came into vogue. Semi-structured interviews with construction sector policy-makers provide empirical support for the view that lean construction is a multifaceted concept that defies universal definition.