Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e' in ‘CO2e'. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects'. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon'. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings onto a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings. It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what' a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
Abstract:
The aim of this study was to assess and improve the accuracy of biotransfer models for organic pollutants (PCBs, PCDD/Fs, PBDEs, PFCAs, and pesticides) into cow's milk and beef used in human exposure assessment. The metabolic rate in cattle is known to be a key parameter for this biotransfer; however, few experimental data and no simulation methods are currently available. In this research, the metabolic rate was estimated using existing QSAR biodegradation models for microorganisms (BioWIN) and fish (EPI-HL and IFS-HL). This simulated metabolic rate was then incorporated into the mechanistic cattle biotransfer models (RAIDAR, ACC-HUMAN, OMEGA, and CKow). Goodness-of-fit tests showed that the performance of the RAIDAR, ACC-HUMAN, and OMEGA models was significantly improved using either of the QSARs when comparing the new model outputs to observed data. The CKow model is the only one that separates the processes in the gut and liver. It showed the lowest residual error of all the models tested when the BioWIN model was used to represent the ruminant metabolic process in the gut and the two fish QSARs were used to represent the metabolic process in the liver. Our testing included EUSES and CalTOX, which are KOW-regression models widely used in regulatory assessment. New regressions based on the simulated rates of the two metabolic processes are also proposed as an alternative to KOW-regression models for a screening risk assessment. The modified CKow model is more physiologically realistic, yet is as easy to use as existing KOW-regression models for estimating the cattle biotransfer of organic pollutants.
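As a rough illustration of the KOW-regression screening approach mentioned above, the sketch below fits log-transformed biotransfer factors (BTFs) against log KOW; all data points and fitted coefficients are synthetic stand-ins, not values from the study.

```python
# A minimal sketch of a KOW-regression screening model: fit
# log10(biotransfer factor) as a linear function of log10(KOW).
# Data and coefficients below are hypothetical, purely illustrative.
import numpy as np

log_kow = np.array([3.5, 4.2, 5.0, 5.8, 6.5, 7.1])
log_btf = np.array([-8.2, -7.6, -6.9, -6.4, -6.0, -5.9])  # hypothetical milk BTFs

slope, intercept = np.polyfit(log_kow, log_btf, deg=1)
print(f"log BTF = {slope:.2f} * log KOW + {intercept:.2f}")
print(f"screening estimate at log KOW = 6: {10**(slope * 6.0 + intercept):.2e} day/kg")
```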
Abstract:
Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits, the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models: the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
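To make the OU-versus-Brownian comparison concrete, here is a minimal sketch using a univariate time-series analogue (not the phylogenetic implementation such studies use): simulate an OU path, fit OU and Brownian motion (BM) by maximum likelihood, and form the likelihood ratio test. Parameter names are illustrative.

```python
# Simulate an Ornstein-Uhlenbeck path via its exact Gaussian transition,
# then fit OU and BM by maximum likelihood and compare with an LRT.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
dt, n = 0.1, 200
alpha_true, theta_true, sigma_true = 1.5, 0.0, 1.0

x = np.empty(n)
x[0] = theta_true
for t in range(1, n):
    mean = theta_true + (x[t-1] - theta_true) * np.exp(-alpha_true * dt)
    var = sigma_true**2 * (1 - np.exp(-2 * alpha_true * dt)) / (2 * alpha_true)
    x[t] = mean + np.sqrt(var) * rng.standard_normal()

def ou_negloglik(params):
    alpha, theta, log_sigma = params
    if alpha <= 0:
        return np.inf
    sigma = np.exp(log_sigma)
    mean = theta + (x[:-1] - theta) * np.exp(-alpha * dt)
    var = sigma**2 * (1 - np.exp(-2 * alpha * dt)) / (2 * alpha)
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

def bm_negloglik(params):
    (log_sigma,) = params
    var = np.exp(log_sigma)**2 * dt
    resid = np.diff(x)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

ou_fit = minimize(ou_negloglik, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
bm_fit = minimize(bm_negloglik, x0=[0.0], method="Nelder-Mead")

# LRT: OU nests BM as alpha -> 0, with two extra parameters (alpha, theta)
lrt = 2 * (bm_fit.fun - ou_fit.fun)
print(f"LRT statistic = {lrt:.2f}, p = {chi2.sf(lrt, df=2):.4f}")
```

Note that the Brownian limit sits on the boundary of the OU parameter space (alpha approaching 0), so the chi-square reference distribution is only approximate; this is one reason the test can spuriously favour OU on small or noisy datasets.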
Abstract:
Academic writing has a tendency to be turgid and impenetrable. This is not only anathema to communication between academics, but also a major barrier to advancing construction industry development. Clarity in our communication is a prerequisite to effective collaboration with industry. An exploration of what it means to be an academic in a university is presented in order to provide a context for a discussion of how academics might collaborate with industry to advance development. There are conflicting agendas that pull the academic in different directions: peer-group recognition, institutional success and industry development. None can be achieved without the others, which results in the need for a careful balancing act. While academics search for better understandings and provisional explanations within the context of conceptual models, industry seeks the practical application of new ideas, whether those ideas come from research or experience. Universities have a key role to play in both industry development and economic development.
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data arriving in an online fashion, the performance of every candidate sub-model is monitored on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
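The closed-form combination step can be sketched as follows; this is an illustrative implementation under assumed notation (P holds the window's sub-model predictions, y the targets), not the paper's code. The equality-constrained least-squares problem is solved directly through its KKT system.

```python
# Optimally combine M sub-model predictions over a recent data window by
# minimising the mean square error subject to the sum-to-one constraint.
import numpy as np

def combine_predictions(P, y):
    """P: (n, M) predictions of M sub-models over the window; y: (n,) targets.
    Returns weights w with sum(w) == 1 minimising ||y - P @ w||^2."""
    n, M = P.shape
    # KKT system for: min ||y - P w||^2  s.t.  1' w = 1
    A = np.zeros((M + 1, M + 1))
    A[:M, :M] = 2 * P.T @ P
    A[:M, M] = 1.0   # constraint gradient enters the stationarity rows
    A[M, :M] = 1.0   # the constraint row itself
    b = np.concatenate([2 * P.T @ y, [1.0]])
    return np.linalg.solve(A, b)[:M]

# Toy usage: three noisy linear sub-models predicting the same signal
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 3, 50)) + 0.05 * rng.standard_normal(50)
noise = lambda: 0.05 * rng.standard_normal(50)
P = np.column_stack([0.9 * y + noise(), 1.1 * y + noise(), y + 0.1 + noise()])
w = combine_predictions(P, y)
print(w, w.sum())  # the weights sum to one
```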
Abstract:
Objective: To develop yardsticks for assessment of dental arch relationship in young individuals with repaired complete bilateral cleft lip and palate appropriate to different stages of dental development. Participants: Eleven cleft team orthodontists from five countries worked on the projects for 4 days. A total of 776 sets of standardized plaster models from 411 patients with operated complete bilateral cleft lip and palate were available for the exercise. Statistics: The interexaminer reliability was calculated using weighted kappa statistics. Results: The interrater weighted kappa scores were between .74 and .92, which is in the "good" to "very good" categories. Conclusions: Three bilateral cleft lip and palate yardsticks for different developmental stages of the dentition were made: one for the deciduous dentition (6-year-olds' yardstick), one for early mixed dentition (9-year-olds' yardstick), and one for early permanent dentition (12-year-olds' yardstick).
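For readers unfamiliar with the reliability statistic reported here, the snippet below computes a weighted kappa between two raters' ordinal scores; the ratings are synthetic, not the study's data.

```python
# Weighted kappa between two raters' ordinal yardstick scores
# (synthetic ratings, purely illustrative).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 3, 3, 4, 5, 2, 1, 4, 5]
rater_b = [1, 2, 3, 4, 4, 5, 2, 2, 4, 5]
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")  # .74-.92 was rated 'good' to 'very good'
```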
Abstract:
We present here new results of two-dimensional hydrodynamical simulations of the eruptive events suffered by the massive star eta Carinae (eta Car): the great eruption of the 1840s and the minor eruption of the 1890s. The two bipolar nebulae commonly known as the Homunculus and the little Homunculus (LH) were formed from the interaction of these eruptive events with the underlying stellar wind. We assume here an interacting, non-spherical multiple-phase wind scenario to explain the shape and kinematics of both Homunculi, but adopt a more realistic parametrization of the phases of the wind. During the 1890s eruptive event, the outflow speed decreased for a short period of time. This fact suggests that the LH is formed when the eruption ends, from the impact of the post-outburst eta Car wind (that follows the 1890s event) with the eruptive flow, rather than by the collision of the eruptive flow with the pre-outburst wind, as claimed in previous models (Gonzalez et al.). Our simulations reproduce quite well the shape and the observed expansion speed of the large Homunculus. The LH (which is embedded within the large Homunculus) becomes Rayleigh-Taylor unstable and develops filamentary structures that resemble the spatial features observed in the polar caps. In addition, we find that the interior cavity between the two Homunculi is partially filled by material expelled during the decades following the great eruption. This result may be connected with the observed double-shell structure in the polar lobes of the eta Car nebula. Finally, as in previous work, we find the formation of tenuous, equatorial, high-speed features that seem to be related to the observed equatorial skirt of eta Car.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hamper the follow-up of the components over time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that, even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within a 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
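The optimization engine itself is simple to sketch. Below is a minimal, generic cross-entropy loop for continuous minimisation (a toy objective stands in for the authors' precession model; all tuning constants are assumptions): each iteration samples candidates from a Gaussian, keeps an elite fraction, and refits the sampling distribution to the elite.

```python
# Generic cross-entropy (CE) method for continuous multi-extremal optimisation.
import numpy as np

def cross_entropy_minimise(objective, dim, n_samples=100, elite_frac=0.1,
                           n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0  # broad initial distribution
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        X = rng.normal(mu, sigma, size=(n_samples, dim))      # sample candidates
        scores = np.apply_along_axis(objective, 1, X)
        elite = X[np.argsort(scores)[:n_elite]]               # keep the best
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Toy usage: recover the minimiser of a shifted quadratic
sol = cross_entropy_minimise(lambda p: np.sum((p - np.array([2.0, -3.0]))**2), dim=2)
print(sol)  # approximately [2, -3]
```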
Abstract:
Upper-mantle seismic anisotropy has been extensively used to infer both present and past deformation processes at lithospheric and asthenospheric depths. Analysis of shear-wave splitting (mainly from core-refracted SKS phases) provides information regarding upper-mantle anisotropy. We present average measurements of fast-polarization directions at 21 new sites in poorly sampled regions of intra-plate South America, such as northern and northeastern Brazil. Despite sparse data coverage for the South American stable platform, consistent orientations are observed over hundreds of kilometers. Over most of the continent, the fast-polarization direction tends to be close to the absolute plate motion direction given by the hotspot reference model HS3-NUVEL-1A. A previous global comparison of the SKS fast-polarization directions with flow models of the upper mantle showed relatively poor correlation on the continents, which was interpreted as evidence for a large contribution of "frozen" anisotropy in the lithosphere. For the South American plate, our data indicate that one of the reasons for the poor correlation may have been the relatively coarse model of lithospheric thicknesses. We suggest that improved models of upper-mantle flow that are based on more detailed lithospheric thicknesses in South America may help to explain most of the observed anisotropy patterns.
Abstract:
Early American crania show a different morphological pattern from the one shared by late Native Americans. Although the origin of the diachronic morphological diversity seen on the continents is still debated, the distinct morphology of early Americans is well documented and widely dispersed. This morphology has been described extensively for South America, where larger samples are available. Here we test the hypotheses that the morphology of Early Americans results from retention of the morphological pattern of Late Pleistocene modern humans and that the occupation of the New World precedes the morphological differentiation that gave rise to recent Eurasian and American morphology. We compare Early American samples with European Upper Paleolithic skulls, the East Asian Zhoukoudian Upper Cave specimens and a series of 20 modern human reference crania. Canonical Analysis and Minimum Spanning Tree were used to assess the morphological affinities among the series, while Mantel and Dow-Cheverud tests based on Mahalanobis Squared Distances were used to test different evolutionary scenarios. Our results show strong morphological affinities among the early series irrespective of geographical origin, which together with the matrix analyses results favor the scenario of a late morphological differentiation of modern humans. We conclude that the geographic differentiation of modern human morphology is a late phenomenon that occurred after the initial settlement of the Americas.
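As a pointer to the distance measure underlying the Mantel and Dow-Cheverud tests mentioned above, the following sketch computes a Mahalanobis squared distance between two group mean vectors; the data are synthetic stand-ins for craniometric measurements, not the study's samples.

```python
# Squared Mahalanobis distance between two group means, given a pooled
# covariance (synthetic data, purely illustrative).
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal([180.0, 140.0, 95.0], 5.0, size=(30, 3))  # e.g. cranial measurements
group_b = rng.normal([175.0, 145.0, 98.0], 5.0, size=(30, 3))

pooled_cov = (np.cov(group_a.T) + np.cov(group_b.T)) / 2
diff = group_a.mean(axis=0) - group_b.mean(axis=0)
d2 = diff @ np.linalg.inv(pooled_cov) @ diff  # Mahalanobis squared distance
print(f"D^2 = {d2:.2f}")
```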
Abstract:
Leiopelma hochstetteri is an endangered New Zealand frog now confined to isolated populations scattered across the North Island. A better understanding of its past, current and predicted future environmental suitability will contribute to its conservation, which is in jeopardy due to human activities, feral predators, disease and climate change. Here we use ecological niche modelling with all known occurrence data (N = 1708) and six determinant environmental variables to elucidate the current, pre-human and future environmental suitability of this species. Comparison among independent runs, subfossil records and a clamping method allows validation of the models. Many areas identified as currently suitable do not host any known populations. This apparent discrepancy could be explained by several non-exclusive hypotheses: the areas have not been adequately surveyed and undiscovered populations still remain; the model is over-simplistic; the species' sensitivity to fragmentation and small population size; biotic interactions; historical events. An additional outcome is that apparently suitable but frog-less areas could be targeted for future translocations. Surprisingly, pre-human conditions do not differ markedly, highlighting the possibility that the range of the species was broadly fragmented before human arrival. Nevertheless, some populations, particularly on the west of the North Island, may have disappeared as a result of human-mediated habitat modification. Future conditions are marked by higher temperatures, which are predicted to be favourable to the species. However, such a virtual gain in suitable range will probably not benefit the species, given the highly fragmented nature of the existing habitat and the low dispersal ability of this species.
Abstract:
In this article, we present a generalization of the Bayesian methodology introduced by Cepeda and Gamerman (2001) for modeling variance heterogeneity in normal regression models, in which the mean and variance parameters are orthogonal, to the general case covering both linear and highly non-linear regression models. Under the Bayesian paradigm, we use MCMC methods to simulate samples from the joint posterior distribution. We illustrate the algorithm on a simulated data set and on a real data set related to school attendance rates for children in Colombia. Finally, we present some extensions of the proposed MCMC algorithm.
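A minimal sketch of this kind of sampler is given below: a random-walk Metropolis-within-Gibbs scheme for a normal regression with modelled variance heterogeneity, y_i ~ N(x_i'beta, exp(z_i'gamma)). This is a generic stand-in for illustration, not Cepeda and Gamerman's exact kernel, and the chain length and step sizes are arbitrary.

```python
# Metropolis-within-Gibbs for a heteroscedastic normal regression
# (flat priors; block updates of the mean and variance coefficients).
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])   # mean design
Z = X.copy()                           # variance design
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-1.0, 1.5])
y = X @ beta_true + np.exp(0.5 * (Z @ gamma_true)) * rng.standard_normal(n)

def log_post(beta, gamma):
    var = np.exp(Z @ gamma)            # modelled heterogeneous variance
    return -0.5 * np.sum(np.log(var) + (y - X @ beta)**2 / var)

beta, gamma = np.zeros(2), np.zeros(2)
lp = log_post(beta, gamma)
samples = []
for _ in range(5000):
    for which in ("beta", "gamma"):    # update each block in turn
        prop_b = beta + 0.1 * rng.standard_normal(2) if which == "beta" else beta
        prop_g = gamma + 0.1 * rng.standard_normal(2) if which == "gamma" else gamma
        lp_prop = log_post(prop_b, prop_g)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            beta, gamma, lp = prop_b, prop_g, lp_prop
    samples.append(np.concatenate([beta, gamma]))

print(np.mean(samples[2500:], axis=0))  # compare with beta_true, gamma_true
```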
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found optimal solutions for most instances of the constrained compartmentalized knapsack problem. The heuristics, on the other hand, provide good solutions with low computational effort.
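One plausible compact integer model of the problem just described can be written as follows; the notation is assumed for illustration and is not taken from the paper.

```latex
% Let $x_{ij} = 1$ if item $j$ of class $i$ is loaded, with value $v_{ij}$ and
% weight $w_{ij}$; $y_i = 1$ if a compartment for class $i$ is built, with fixed
% cost $c_i$ and capacity loss $s_i$; $L_i, U_i$ the compartment bounds; and $W$
% the knapsack capacity.
\begin{align*}
\max \quad & \sum_i \sum_j v_{ij} x_{ij} - \sum_i c_i y_i \\
\text{s.t.} \quad & L_i y_i \le \sum_j w_{ij} x_{ij} \le U_i y_i && \forall i \\
& \sum_i \Bigl( \sum_j w_{ij} x_{ij} + s_i y_i \Bigr) \le W \\
& x_{ij} \in \{0,1\}, \quad y_i \in \{0,1\}.
\end{align*}
```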
Abstract:
The estimation of a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types and are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends both the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models, as well as a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data, extending the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
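The basic Box-Cox idea that this class extends can be sketched briefly: estimate the transformation parameter lambda by profile likelihood, which SciPy provides out of the box (the paper's class is broader, carrying this idea into the generalized linear model framework).

```python
# Estimate the Box-Cox transformation parameter by profile likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = np.exp(rng.normal(2.0, 0.3, size=300))  # positive, right-skewed response

y_transformed, lam = stats.boxcox(y)        # lambda maximising the profile likelihood
print(f"estimated lambda = {lam:.3f}")      # near 0 here, i.e. a log transform
```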
Abstract:
For the first time, we introduce a class of transformed symmetric models that extends the Box and Cox models to more general symmetric models. The new class includes all symmetric continuous distributions with a possible non-linear structure for the mean and enables the fitting of a wide range of models to several data types. The proposed methods offer more flexible alternatives to Box-Cox and other existing procedures. We derive a very simple iterative process for fitting these models by maximum likelihood, whereas a direct unconditional maximization would be more difficult. We give simple formulae to estimate the parameter that indexes the transformation of the response variable and the moments of the original dependent variable, which generalize previously published results. We discuss inference on the model parameters. The usefulness of the new class of models is illustrated in an application to a real dataset.